Cloud storage growth is hurting NAS & SAN storage vendors

Strange Clouds by michaelroper (cc) (from Flickr)

My friend Alex Teu (@alexteu), from Oxygen Cloud, wrote a post today about how Cloud Storage is Eating the World Alive. Alex reports that all the major NAS and SAN storage vendors lost revenue year over year, with declines ranging from ~3% to over 20% (Q1-2014 compared to Q1-2013, per IDC).

Although this is an interesting development, it’s hard to say it marks the end of enterprise storage as we know it. I believe a number of factors are impacting enterprise storage revenues, and cloud storage adoption may be only one of them.

Other trends impacting NAS & SAN storage adoption

One thing that has emerged over the last decade or so is the advance of flash storage. Some of it is used in storage controllers and some in servers, in both cases to speed up IO access. But any speedup of IO can reduce the need for high-performance disk drives and allow customers to use higher-capacity, slower disk drives instead, which can definitely reduce the cost of storage systems. A little bit of flash goes a long way toward speeding up IO access.

The other thing is that disk capacity keeps trending upward at exponential rates. Yesterday’s 2TB disk drive is today’s 4TB disk drive, and we are already seeing 6TB drives from Seagate, HGST and others. This too is driving down the cost of NAS and SAN storage.

Nowadays you can configure 1PB of storage with just over 170 drives. Somewhere in there you might want a couple hundred TB of flash to speed up IO access to these slower disks, but flash is also coming down in price ($/GB; see SanDisk’s recent consumer-grade TLC drive at $0.44/GB). The move to MLC flash has also increased the capacity of flash devices, so fewer SSDs/flash cache cards are needed to store and speed up more data.
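As a rough sanity check on those numbers, here’s a back-of-the-envelope sketch in Python (the 6TB drive size, the ~200TB flash tier and the $0.44/GB flash price are just the figures mentioned above; the drive count ignores RAID and spare overhead):

```python
# Back-of-the-envelope: raw drives needed for 1PB plus the cost of a flash tier,
# using the figures mentioned above (6TB drives, ~$0.44/GB consumer TLC flash).
PB_IN_TB = 1000                          # decimal TB per PB, as drive vendors count

drive_tb = 6
drives_for_1pb = PB_IN_TB / drive_tb     # ~167 raw drives, before spares/RAID
flash_tb = 200                           # "a couple hundred TB" of flash
flash_cost = flash_tb * 1000 * 0.44      # TB -> GB at $0.44/GB

print(f"6TB drives for 1PB (raw): ~{drives_for_1pb:.0f}")
print(f"{flash_tb}TB of flash at $0.44/GB: ~${flash_cost:,.0f}")
```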

Finally, another trend that has emerged recently is the movement away from enterprise-class storage to server-based storage. One can see this in VMware’s VSAN, hyperconverged systems such as Nutanix and Scale Computing, as well as a general trend in Windows Server applications (SQL Server, Exchange Server, etc.) to make better use of DAS storage. So some customers are moving their data onto shared DAS storage today, whereas before this was difficult to accomplish effectively, and because of that they previously purchased networked storage.

What about cloud storage?

Yes, as Alex has noted, the price of cloud storage has declined precipitously over the last year or so. Alex’s cloud storage pricing graph shows how the entry of Microsoft and Google has seemingly forced Amazon to match their price reductions. The other thing of note is that they have all come down to about the same basic price of $0.024/GB/month.

It’s interesting that Amazon delayed its first serious S3 price reduction until about 4 months after Azure and Google Cloud Storage dropped theirs, and then within another month after that they were all at price parity.
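To put that $0.024/GB/month figure in more familiar terms, here’s a quick sketch (capacity charges only; it ignores request, egress and tiering fees, which vary by provider):

```python
# Translating $0.024/GB/month into per-TB and per-PB terms
# (capacity charges only; request/egress fees not included).
price_per_gb_month = 0.024

tb_per_month = price_per_gb_month * 1000            # ~$24/TB/month
pb_per_year = price_per_gb_month * 1_000_000 * 12   # ~$288K/PB/year

print(f"1TB stored for a month: ~${tb_per_month:.2f}")
print(f"1PB stored for a year:  ~${pb_per_year:,.0f}")
```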

What’s cloud storage’s real growth?

I reported last August that Microsoft Azure and Amazon S3 were storing 8 trillion and over 2 trillion objects respectively (see my post, Is object storage outpacing structured and unstructured data growth). This year (April 2014), Microsoft mentioned at TechEd that Azure was storing 20 trillion objects and servicing 2 million requests per second.

I could find no update to the Amazon S3 numbers from last year, but the ~2.5X growth in Azure’s object count in ~8 months and the roughly 2X growth in requests per second (last year they were processing ~900K requests/second, a number I didn’t mention in my post) say something interesting is going on in cloud storage.
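The growth multiples implied by those two data points work out as follows (a simple sketch using only the August 2013 and April 2014 figures quoted above):

```python
# Growth multiples from the Azure figures quoted above:
# ~8 trillion objects / ~900K requests/sec in Aug 2013 vs.
# 20 trillion objects / 2M requests/sec in Apr 2014 (~8 months apart).
objects_2013, objects_2014 = 8e12, 20e12
reqs_2013, reqs_2014 = 900_000, 2_000_000

print(f"Object count growth: ~{objects_2014 / objects_2013:.1f}x in ~8 months")
print(f"Request rate growth: ~{reqs_2014 / reqs_2013:.1f}x in ~8 months")
```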

I suppose Google’s cloud storage service is too new to report serious results, and maybe Amazon wants to keep their growth a secret. But considering that Amazon recently matched Azure’s and Google’s pricing, their growth probably wasn’t what they expected.

The other interesting item from the Microsoft discussion of Azure was that they were already hosting 1M SQL databases in Azure and that 57% of the Fortune 500 are currently using Azure.

In the “olden days”, before cloud storage, all these SQL databases and Fortune 500 data sets would more than likely have resided on NAS or SAN storage of some kind. And given traditional storage’s higher cost and greater complexity, some of this data might never have been spun up in the first place. But with cloud storage so cheap, so rapidly configurable and so easy to use, all this new data has been placed in the cloud.

So I must conclude from Microsoft’s growth numbers, and their implication for the rest of the cloud storage industry, that maybe Alex was right: more data is moving to the cloud and this is impacting traditional storage revenues. With IDC’s (2013) data growth at ~43% per year, Microsoft’s cloud storage appears to be growing much more rapidly than worldwide data, roughly 7X faster if you annualize the ~2.5X object growth over those 8 months.
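Here’s how I get to roughly 7X (a rough sketch; it assumes object count is a fair proxy for stored capacity and simply compounds the 8-month growth out to a full year):

```python
# Annualizing Azure's ~2.5x object growth over ~8 months and comparing it
# to IDC's ~43%/year worldwide data growth figure.
growth_8_months = 20e12 / 8e12                    # ~2.5x
annualized_factor = growth_8_months ** (12 / 8)   # compounded to 12 months: ~3.95x
annual_growth_rate = annualized_factor - 1        # ~295%/year

idc_growth_rate = 0.43                            # ~43%/year worldwide data growth
print(f"Azure object growth: ~{annual_growth_rate:.0%}/year")
print(f"Relative to IDC's data growth: ~{annual_growth_rate / idc_growth_rate:.0f}x faster")
```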

On the other hand, if cloud storage were consuming most of the world’s data growth, it would seem to precipitate a collapse of traditional storage revenues, not just a ~3-20% decline. So maybe most new cloud storage applications would never have been implemented at all if they had to use traditional storage, which means only some of this new data would ever have landed on traditional storage in the first place, leading to a relatively smaller decline in revenue.

One question remains: is this a short-term impact or more of a long-running trend that will play out over the next decade or so? From my perspective, new applications spinning up on non-traditional storage is a long-running threat to traditional NAS and SAN storage, which will ultimately see traditional storage relegated to a niche. How big that niche will ultimately be, and how well it can be defended, will have to be the subject of another post.

~~~~

Comments?

“… would consume nearly half the world’s digital storage capacity.”

A National Geographic article on recent brain research (February 2014) said something I find intriguing: “Producing an image of an entire human brain at the same resolution [as a mouse brain] would consume nearly half of the world’s current digital storage capacity.”

They were imaging slices of a mouse brain with an electron microscope, the slices one millimeter square and a micron thick, with each image covering just a thousand cubic microns of tissue. Such a scan of the full mouse brain would require 450,000 TB (0.45 EB; exabyte = 10^18 bytes) of storage for the images.

Getting an equivalent-resolution image of a single human brain would require 1.3 billion TB (or 1.3 ZB; zettabyte = 10^21 bytes). They went on to say that the world’s digital storage was just 2.7 billion TB (or 2.7 ZB), which is where they came up with the “… nearly half the world’s digital storage capacity.”
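A quick check of the article’s arithmetic (the numbers below are just the figures quoted above; the human/mouse ratio is whatever those figures imply, not an independent brain-volume measurement):

```python
# Sanity-checking the National Geographic figures quoted above.
mouse_brain_tb = 450_000        # ~0.45 EB for a full mouse brain scan
human_brain_tb = 1.3e9          # ~1.3 ZB for a human brain at the same resolution
world_storage_tb = 2.7e9        # ~2.7 ZB, the article's world-capacity figure

print(f"Implied human/mouse scan ratio: ~{human_brain_tb / mouse_brain_tb:,.0f}x")
print(f"Human scan vs. world storage:   ~{human_brain_tb / world_storage_tb:.0%}")
```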

So how much digital storage is there in the world today?

Setting aside the need for such a detailed map for the moment, let’s talk about the world’s digital storage.

  • Tape – I don’t have much information about the enterprise tape capacity currently installed on IBM TS1120/TS1130 or Oracle T10000C/B/A, but a relatively recent article indicated that the 225 millionth LTO cartridge was shipped sometime in 3Q13, representing about 90,000 PB (or 90 EB; exabyte = 10^18 bytes) of storage capacity.
  • Disk – Although I couldn’t find a reasonable estimate of installed disk capacity, IDC reported that 2012 disk capacity shipments were 20EB and that 24.3EB had shipped through 3Q13. It’s probably safe to assume 4Q13 shipments were ~8.3EB or more, so ~32.5EB of disk capacity shipped in 2013. IDC also estimates that worldwide disk storage capacity is doubling every two years, which puts installed disk capacity at the end of 4Q13 somewhere on the order of 113.6EB (a rough calculation is sketched below).
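Here’s the rough geometric-series calculation behind that installed-capacity figure (a sketch under the doubling-every-two-years assumption; it sums roughly a decade of shipments and ignores drive retirements):

```python
# If shipped disk capacity doubles every two years, then looking backward each
# prior year's shipments were ~1/sqrt(2) of the following year's. Summing about
# a decade of shipments back from 2013's ~32.5EB gives a ballpark installed base.
ship_2013_eb = 32.5
ratio = 2 ** -0.5      # backward-looking year-over-year factor (~0.707)

installed_eb = sum(ship_2013_eb * ratio**year for year in range(10))
print(f"Estimated installed disk capacity: ~{installed_eb:.0f}EB")
```

That comes out to roughly 107EB, in the same ballpark as the ~113.6EB figure above.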

I won’t delve into optical storage, as that’s even more difficult to get a handle on, but my guess is it’s not quite at the level of LTO, so call it another ~90EB there, for a total of ~0.3ZB of digital storage across disk, LTO tape and optical.

However, back in February of 2010, researchers reported in Science that the world’s information storage capacity was 2.0 ZB. Also, last October IDC reported that the US alone had a digital storage capacity of 2.6 ZB and that the US held somewhere between 24 and 40% of the world’s storage. Using 33% for simplicity’s sake, that would put the world’s digital capacity at around 7.8ZB, according to IDC.
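So we end up with two very different estimates, a bottom-up sum of the media figures above and a top-down IDC-based number (a quick sketch using only the numbers already quoted):

```python
# Two estimates of the world's digital storage capacity:
# a bottom-up sum of the media estimates above vs. a top-down IDC-based figure.
disk_eb, lto_eb, optical_eb = 113.6, 90, 90
bottom_up_zb = (disk_eb + lto_eb + optical_eb) / 1000   # ~0.3 ZB

us_zb, us_share = 2.6, 1 / 3
top_down_zb = us_zb / us_share                          # ~7.8 ZB

print(f"Bottom-up (disk + LTO tape + optical): ~{bottom_up_zb:.1f} ZB")
print(f"Top-down (IDC US figure / 33% share):  ~{top_down_zb:.1f} ZB")
```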

Thankfully, a human brain scan at the resolution above would take only about a sixth of the world’s digital storage, based on the larger IDC-derived estimate.

But we really need to talk about data reduction techniques

I think we need to start discussing some form of data reduction: data compression, fractal compression or even graphical encoding. For example, with appropriate software and compute power, the neural scans could be encoded at appropriate levels of detail into a graphical representation. Hopefully, this would be many orders of magnitude less storage intensive, so maybe only 1/600th to 1/60,000th of the world’s digital storage (the one sixth above, reduced by two to four orders of magnitude).
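As a rough sense of scale (assuming the reduction applies straight to the 1.3 ZB scan and using the ~7.8 ZB world estimate above):

```python
# How two to four orders of magnitude of data reduction would shrink the
# 1.3 ZB human brain scan relative to the ~7.8 ZB world-capacity estimate.
scan_zb, world_zb = 1.3, 7.8
for reduction in (1, 100, 10_000):
    share = scan_zb / (world_zb * reduction)
    print(f"{reduction:>6,}x reduction -> ~1/{1/share:,.0f} of world storage")
```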

Another approach might be a form of fractal compression similar to that used for motion pictures/photographic images. Perhaps I am being naive, but it seems to me that there ought to be some form of fractal encoding of neural branching. Most of nature’s branching structures have an underlying fractal basis, and I see nothing in neural anatomy to suggest it’s any different.

Of course, I am not a neural biologist, but I am a storage expert and there’s got to be a way to reduce this data load somehow.

Comments?

Photo Credit: Microscopic embryonic mouse brain (DAPI, GFP) by Joseph Elsbernd