Flash’s only at 5% of data storage

We have been hearing for years that NAND flash is at price parity with disk. But at this week’s Flash Memory Summit, Darren Thomas, VP of Micron’s Storage BU, said in his keynote that NAND stores only 5% of the bits in the data center.

Darren’s session was all about how to get flash to become more than 5% of data storage, and he called this “crossing the chasm”. I assume the 5% is measured against yearly data storage shipped.

Flash’s adoption rate

Darren said that last year flash climbed from 4% to 5% of data center storage, but he made no mention of whether flash’s adoption was accelerating. According to another of Darren’s charts, flash is expected to ship ~77B Gb of storage in 2015 and should grow to about 240B Gb by 2019.

If the ratio of flash bits shipped to data centers (vs. all flash bits shipped) holds constant, then flash should be ~15% of data storage by 2019. But this assumes data storage doesn’t grow. If we assume a 10% Y/Y CAGR for data storage, then flash would represent ~9% of overall data storage.

Data growth at 10% could be conservative. A 2012 EE Times article said the 2010-2015 data growth CAGR would be 32%, and IDC’s 2012 digital universe report said that between 2012 and 2020 data will double every two years, a ~44% CAGR. But both numbers could be talking about the world’s data growth, not just the data center’s.
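
To make the back-of-envelope arithmetic explicit, here is a minimal sketch (Python, purely illustrative) of flash’s projected share of data center storage in 2019 under different data growth CAGRs. The 5% starting share, the ~77B and ~240B Gb shipment figures and the CAGR values come from the text above; the variable names and the assumption that the data-center share of flash bits holds constant are mine, and the results land near (not exactly on) the ~15% and ~9% figures quoted above, depending on rounding.

```python
# Back-of-envelope: flash share of data center storage in 2019.
# Inputs from Darren Thomas' charts: ~77B Gb of flash shipped in 2015,
# ~240B Gb expected in 2019, flash at ~5% of data center bits today.
flash_2015_gb = 77e9       # Gb of flash shipped in 2015
flash_2019_gb = 240e9      # Gb of flash expected in 2019
flash_share_2015 = 0.05    # flash share of data center storage today

flash_growth = flash_2019_gb / flash_2015_gb   # ~3.1x over 4 years

# Try several data storage growth assumptions (yearly CAGR).
for cagr in (0.0, 0.10, 0.32, 0.44):
    storage_growth = (1 + cagr) ** 4           # total storage growth, 2015-2019
    share_2019 = flash_share_2015 * flash_growth / storage_growth
    print(f"data CAGR {cagr:>4.0%}: flash share in 2019 ~{share_2019:.0%}")

# Output (approximately):
#   data CAGR   0%: flash share in 2019 ~16%
#   data CAGR  10%: flash share in 2019 ~11%
#   data CAGR  32%: flash share in 2019 ~5%
#   data CAGR  44%: flash share in 2019 ~4%
```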

How to cross this chasm?

Geoffrey Moore, author of Crossing the Chasm, came up on stage as Darren discussed what he thought it would take to go beyond early adopters (visionaries) to the early majority (pragmatists) and reach wider flash adoption in data center storage. (See the Wikipedia article for a summary of Crossing the Chasm.)

As one example of crossing the chasm, Darren talked about the electric light bulb. At introduction it competed against candles, oil lamps, gas lamps, etc. But it was the most expensive lighting system at the time.

But when people realized that electric lights let you do things at night other than just go to sleep, adoption took off. At that time the electric bulb’s competitors did provide lighting, it just wasn’t very good; in fact, most people went to bed at nightfall because the light then available was so poor.

However, the electric bulb’s higher performing lighting opened up the night to other activities.

What needs to change in NAND flash marketing?

From Darren’s perspective the problem with flash today is that marketing and sales of flash storage are all about speeds, feeds and relative pricing against disk storage. But what’s needed is to discuss the disruptive benefits of flash/NAND storage that are impossible to achieve with disk today.

What are the disruptive benefits of NAND/flash storage, unrealizable with disk today?

  1. Real time analytics and other RT applications;
  2. More responsive mobile and data center applications;
  3. Greener, quieter, and potentially denser data center;
  4. Storage for mobile, IoT and other ruggedized application environments.

Only the first three above apply to data centers. And none seem as significant as opening up the night, but maybe I am missing a few.

Also, the Wikipedia article cited above states that a Crossing the Chasm approach works best for disruptive or discontinuous innovations, and that more continuous innovations (those that don’t cause significant behavioral change) do better with Everett Rogers’s standard diffusion of innovations approach (see the Wikipedia article for more).

So is NAND flash a disruptive or continuous innovation?  Darren seems firmly in the disruptive camp today.

Comments?

Photo Credit(s): 20-nanometer NAND flash chip, IntelFreePress’ photostream

VMworld first thoughts kickoff session

[Edited for readability. RLL] The drum band was great at the start but we couldn’t tell if it was real or lip-synced. It turned out that each of the big VMWORLD letters had a digital drum pad on it, which meant it was live, in real time.

Paul got a standing ovation as he left the stage, introducing Pat, the new CEO.  With Paul on the stage, there was much discussion of how far VMware has come over the last four years.  But IDC stats probably say it better than most: in 2008 about 25% of Intel x86 apps were virtualized and in 2012 it’s about 60%, and Gartner says that VMware has about 80% of that activity.

Pat got up on stage and it was like nothing had changed. VMware is still going down the path they believe is best for the world: a virtual data center that spans private, on-premises equipment and external cloud service providers’ equipment.

There was much ink on the software-defined data center, which takes the vSphere world view and incorporates networking, more storage, and more infrastructure into the already present virtualized management paradigm.

It’s a bit murky as to what’s changed, what’s acquired functionality and what’s new development but suffice it to say that VMware has been busy once again this year.

A single “monster VM” (it has its own Facebook page) now supports up to 64 vCPUs and 1TB of RAM, and can sustain more than a million IOPS. It seems that this should be enough for most mission critical apps out there today. There was no statement on the latency of those IOPS, but with a million I/Os a second and 64 vCPUs we are probably talking flash somewhere in the storage hierarchy.

Pat mentioned that the vRAM concept is now officially dead. And the pricing model is now based on physical CPUs and sockets. It no longer has a VM or vRAM component to it. Seemed like this got lots of applause.

There are now so many components to the vCloud Suite that it’s almost hard to keep track of them all: vCloud Director, vCloud Orchestrator, vFabric Application Director, vCenter Operations Manager and of course vSphere. And that’s not counting relatively recent acquisitions like DynamicOps (a cloud dashboard) and Nicira (SDN services), and I am probably missing some of them.

In addition to all that, VMware has been working on Serengeti, a layer added to vSphere to virtualize Hadoop clusters. In the demo they spun a Hadoop cluster up and down, with MapReduce running to process log files.  (I want one of these for my home office environment.)

They showed another demo of the vCloud Suite in action, spinning up a cloud data center and deploying applications to it in real time. It literally took ~5 minutes from start-up until they were deploying applications to it.  It was a bit hard to follow, as it went deep into WAN-like networking configuration (load balancing, firewalls and other edge security) and workload characteristics, but it all seemed pretty straightforward and configured an actual cloud in minutes.

I missed the last part about Socialcast, but apparently it builds a social network around VMs?  [Need to listen better next time.]

More to follow…

 

Enterprise data storage defined and why 3PAR?

More SNW hall servers and storage

Recent press reports about a bidding war for 3PAR bring into focus the expanding need for enterprise class data storage subsystems.  What exactly is enterprise storage?

Defining enterprise storage is fraught with problems but I will take a shot.  Enterprise class data storage has:

  • Enhanced reliability, high availability and serviceability – meaning it hardly ever fails, it keeps operating (on redundant components) when it does fail, and repairing the storage when the rare failure occurs can be accomplished without disrupting ongoing storage services
  • Extreme data integrity – goes beyond just RAID storage, meaning that these systems lose data very infrequently, provide the latest data written to a location when read and will tell you when data cannot be accessed.
  • Automated I/O performance – meaning sophisticated caching algorithms that try to keep ahead of sequential I/O streams, buffer actively read data, and buffer write data in non-volatile cache before destaging to disk or other media.
  • Multiple types of storage – meaning the system supports SATA, SAS and/or FC disk drives and SSDs or Flash storage
  • PBs of storage – meaning behind one enterprise class storage (sub-)system one can support over 1PB of storage
  • Sophisticated functionality – meaning the system supports multiple forms of offsite replication, thin provisioning, storage tiering, point-in-time copies, data cloning, administration GUIs/CLIs, etc.
  • Compatibility with all enterprise O/Ss – meaning the storage has been tested and is on hardware compatibility lists for every major operating system in use by the enterprise today.

As for storage protocol, it seems best to leave this off the list.  I wanted to just add block storage, but enterprises today probably have as much if not more external file storage (CIFS or NFS) as they have block storage (FC or iSCSI).  And the proportion in file systems seems to be growing (see IDC report referenced below).

In addition, while I don’t like the non-determinism of iSCSI or file access protocols, this doesn’t seem to stop such storage from putting up pretty impressive performance numbers (see our performance dispatches).  Anything that can crack 100K I/O or file operations per second probably deserves to call itself enterprise storage, as long as it meets the other requirements.  So, maybe I should add high-performance storage to the list above.

Why the sudden interest in enterprise storage?

Enterprise storage has been around arguably since the 2nd half of last century (for mainframe systems) but lately has become even more interesting as applications deploy to the cloud and server virtualization (from VMware, Microsoft Hyper-V and others) takes over the data center.

Cloud storage and cloud computing services are lowering the entry points for storage and processing, enabling application deployments which were heretofore unaffordable.  These new cloud applications consume storage at increasing rates and don’t seem to be slowing down any time soon.  Arguably, some cloud storage is not enterprise storage but as service levels go up for these applications, providers must ultimately turn to enterprise storage.

In addition, server virtualization transforms the enterprise data center from a single application per server to easily 5 or more applications per physical server.  This trend is raising server utilization, driving more I/O, and requiring higher capacity.  Such “multi-application” storage almost always requires high availability, reliability and performance to work well, generating even more demand for enterprise data storage systems.

Despite all the demand, world wide external storage revenues dropped 12% last year according to IDC.  Now the economy had a lot to do with this decline but another factor reducing external storage revenue is the ongoing drop in the price of storage on a $/GB basis.  To this point, that same IDC report stated that external storage capacity increased 33% last year.

Why do Dell & HP want 3PAR storage?

Margins on enterprise storage are good, some would say very good.  While raw disk storage can be had at under $0.50/GB, enterprise class storage is often 10 or more times that price.  Now that has to cover redundant hardware, software/firmware engineering and other characteristics, but this still leaves pretty good margins.

In my mind, Dell would see enterprise storage as a natural extension of their current enterprise server business.  They already sell to and support these customers; including enterprise class storage just adds another product to the mix.  Developing enterprise storage from scratch is probably a 4-7 year journey with the right people; buying 3PAR puts them in the market today with a competitive product.

HP is already in the enterprise storage market today, with their XP and EVA storage subsystems.  However, having their own 3PAR enterprise class storage may get them better margins than their current XP storage OEMed from HDS.  But I think Chuck Hollis’s post on HP’s counter bid for 3PAR may have revealed another side to this discussion: sometimes M&A is as much about constraining your competition as it is about adding new capabilities to a company.

——

What do you think?

3.3 Exabytes-a-day?!

Dans la nuit des images (Grand Palais) by dalbera (cc) (from flickr)

NetworkWorld today reported on an EMC-funded IDC study that said the world will create 1.2 Zettabytes (ZB, 10**21 bytes) of data in 2010. By my calculations this is 3.3 Exabytes (XB, 10**18 bytes) a day, 2.3PB (10**15 bytes) a minute or 38TB (10**12 bytes) a second.  This seems high, and I talked about how we could get here last year in my Exabyte-a-day post.  But what interested me most was a statement that about 35% more information is created than can be stored.  Not sure I understand this claim. (Deduplication perhaps?)
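
The unit conversions above are easy to check; a quick sketch in Python, purely illustrative:

```python
# Convert 1.2 ZB/year into per-day, per-minute and per-second rates.
ZB = 10**21
data_per_year = 1.2 * ZB

per_day = data_per_year / 365          # ~3.3 * 10**18 bytes = 3.3 XB/day
per_minute = per_day / (24 * 60)       # ~2.3 * 10**15 bytes = 2.3 PB/minute
per_second = per_minute / 60           # ~3.8 * 10**13 bytes = 38 TB/second

print(f"{per_day/1e18:.1f} XB/day, {per_minute/1e15:.1f} PB/min, {per_second/1e12:.0f} TB/s")
# -> 3.3 XB/day, 2.3 PB/min, 38 TB/s
```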

Aside from deduplication, what this must mean is that data is being created, sent across the Internet and not stored anywhere except while in flight to be discarded soon after.  I assume this data is associated with something like VOIP phone calls and Video chats/conferences, only some portion of which is ever recorded and stored.   (Although that will soon no longer be true for audio, see my Yottabytes by 2015 post).

But 35% would indicate that ~1 out of every 3 bytes of data is discarded shortly after creation.  IDC also expects this factor to grow, not shrink, “… to over 60% over the next few years.”  So 3 out of 5 bytes of data will only be available in real time, to be discarded thereafter.

Why this portion should be growing more rapidly than data being stored is hard to fathom. Again video and voice over the internet must be a significant part of the reason.

Storing voice data

I don’t know about most people, but I record only a few of my more important calls.  Also, these calls happen to be longer on average than my normal calls.  Does this mean that 35% of my call data volume is not stored? Maybe.  All my business calls are done via the Internet nowadays, so this data is being created and shipped across the net, used while the call is occurring, but never stored other than in flight or by call participants.  So non-recorded calls easily qualify as data created but not stored.  Even so, while I may listen to ~33% of the recorded calls afterwards, I overwrite all of them ultimately, keeping only the ones that fit on the recorder’s flash device.  Hence, in the end even the voice data I do keep is only retained until I need storage to record more.

Not sure how this is treated in the IDC study but it seems to me to be yet another class of data, maybe call this transient data.  I can see similarities of transient data in company backups, log files, database dumps, etc.  Most of this data is stored for a limited time only to be later erased/recorded over in the end.  How IDC classified such data I cannot tell.

But will transient data grow?

As for video, I currently do no video conferencing so have no information on this.  But I am considering moving to another communication platform that supplies video chats and will make it less intrusive to record calls.  While demoing this new capability I rapidly consumed over 200MB of storage for call recordings.  (I need to cap this some way before it gets out of hand.)  In any case, I believe recording convenience should make such data more store-able over time, not less.

So while I may agree that 1 out of 3 bytes of data created today is not stored, I definitely don’t think that over time that ratio will grow and certainly not to 60%.  My only caveat is that there is a limit to the amount of data the world can readily store at any one time and this will ultimately drive all of us to delete data we would rather keep.

But maybe all this just points to a more interesting question: how much data does the world create that is kept for a year, a decade, or a century? But that will need to await another post…

Are SSDs an invasive species?

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)

I was reading about pythons becoming an invasive species in the Florida Everglades and that brought to mind SSDs.  The current ecological niche in data storage has rotating media as the most prolific predator with tape going on the endangered species list in many locales.

So where do SSDs enter the picture?  We have written before on SSD shipments starting to take off, but that was looking at the numbers from another direction. Given recent announcements it appears that in the enterprise, SSDs seem to be taking over the place formerly held by 15Krpm disk devices.  These were formerly the highest performers and most costly storage around.  But today SSDs, as a class of storage, are easily the most costly storage and have the highest performance currently available.

The data

Seagate announced yesterday that they had shipped almost 50M disk drives last quarter, up 8% from the prior quarter, or ~96M drives over the past 6 months.  Now Seagate is not the only enterprise disk provider (Hitachi, Western Digital and others also supply this market) but they probably have the lion’s share.  Nonetheless, Seagate did mention that the last quarter was supply constrained and believed that the total addressable market was 160-165M disk drives.  That puts Seagate’s market share (in unit volume) at ~31%, and at that rate the last 6 months’ total disk drive production should have been ~312M units.

In contrast, IDC reports that SSD shipments last year totaled 11M units. In both the disk and SSD cases we are not just talking enterprise class devices; the numbers include PC storage as well.  If we divide the SSD number in half we have a comparable figure of 5.5M SSDs for the last 6 months, giving SSDs less than a 2% market share (in units).

Back to the ecosystem.  In the enterprise, there are 15Krpm disks, 10Krpm disks and 7.2Krpm rotating media disks.  As speed goes down, capacity goes up.  In Seagate’s last annual report they stated that approximately 10% of the drives they manufactured were shipped to the enterprise.  Given that rate, of the 312M drives, maybe 31M were enterprise class (this probably overstates the number but usable as an upper bound).

As for SSDs, the IDC report cited above mentioned two primary markets for SSD penetration: the PC and the enterprise.  In that same Seagate annual report, they said their desktop and mobile markets were around 80% of disk drives shipped.  If we use that proportion for SSDs, then of the 5.5M units shipped last half year, 4.4M were in the PC space and 1.1M were for the enterprise.  Given that, enterprise class SSDs would represent ~3.4% of the enterprise class disk drives shipped.  This is over 10X more than my prior estimate of SSDs being <0.2% of enterprise disk drives.  Reality probably lies somewhere between these two estimates.
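
Pulling the unit estimates above together, here is the same arithmetic as a small Python sketch. The Seagate unit counts, the ~31% share, the 10% enterprise and 80% PC/mobile splits and IDC’s 11M SSD figure are all taken from the text; everything else is just bookkeeping, and the results differ slightly from the ~312M and ~3.4% quoted above because of rounding.

```python
# Rough estimate of SSD vs. enterprise disk unit share over the last 6 months.
seagate_6mo_drives = 96e6          # Seagate drives shipped in the last 6 months
seagate_unit_share = 50e6 / 160e6  # ~31% of the total addressable market
total_drives = seagate_6mo_drives / seagate_unit_share   # ~310M drives

enterprise_disk = total_drives * 0.10   # ~10% of drives go to the enterprise -> ~31M

ssd_year = 11e6                    # IDC: SSD units shipped last year
ssd_6mo = ssd_year / 2             # ~5.5M in 6 months
enterprise_ssd = ssd_6mo * 0.20    # ~20% enterprise (1 - 80% PC/mobile) -> ~1.1M

print(f"total drives ~{total_drives/1e6:.0f}M, enterprise disks ~{enterprise_disk/1e6:.0f}M")
print(f"enterprise SSDs ~{enterprise_ssd/1e6:.1f}M "
      f"(~{enterprise_ssd/enterprise_disk:.1%} of enterprise disk units)")
# -> total drives ~307M, enterprise disks ~31M
# -> enterprise SSDs ~1.1M (~3.6% of enterprise disk units)
```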

I wrote a research report a while back which predicted that SSDs would never take off in the enterprise; I was certainly wrong then.  If these numbers are correct, capturing this much of the enterprise disk market in a little under 2 years can only mean that high-end, 15Krpm drives are losing ground faster than anticipated.  Which brings up the analogy of the invasive species.  SSDs seem to be winning a significant beachhead in the enterprise market.

In the meantime, drive vendors are fighting back by moving from the 3.5″ to the 2.5″ form factor, offering both 15Krpm and 10Krpm drives.  This probably means that the 15Krpm 3.5″ drive’s days are numbered.

I made another prediction almost a decade ago that 2.5″ drives would take over the enterprise around 2005 – wrong again, but only by about 5 years or so. I’ve got to stop making predictions…

An Exabyte-a-day

snp microarray data by mararie (cc) (from flickr)

At HPTechDay this week Jim Pownell, from the office of the CTO, HP StorageWorks Division, reported on an IDC study that said this year the world is creating about an Exabyte of data each day.  An Exabyte (XB) is 10**18 bytes, or 1000 PB, of data.  That seems a bit high from my perspective.

Data creation by individuals

Population Growth and Income Level Chart by mattlemmon (cc) (from flickr)

The US Census Bureau estimates today’s worldwide population at around 6.8 billion people. Given that estimate, the XB/day number says that the average person is creating about 150MB/day.

Now I don’t know about you, but we probably create that much data during our best week. That being said, our family’s average over the last 3.5 years is more like 30.1MB/day. This average, over the last year, has been closer to 75.1MB/day (darn new digital camera).

If I take our 75.1MB/day as a reasonable approximate average for our family, then with 2 adults in the family, each adult creates ~37.6MB of data per day.

Probably about 50% of today’s worldwide population has no access to create any data whatsoever. Of the remaining 50%, maybe 33% are at an age where data creation is insignificant. All this leaves about 2.3B people actively creating data at around 37.6MB/day, which would account for about 86.5PB of data creation a day.
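
Here is the same per-person estimate written out as a quick Python sketch. The population, access and age adjustments are the same rough guesses as in the text; the result comes out a touch under the 86.5PB/day above, which rounds to 2.3B people and 37.6MB/day first.

```python
# Rough check of individual data creation against the Exabyte-a-day figure.
XB = 10**18
world_per_day = 1 * XB                 # IDC: ~1 Exabyte of data created per day
population = 6.8e9                     # US Census Bureau worldwide estimate

implied_per_person = world_per_day / population
print(f"implied average: ~{implied_per_person/1e6:.0f}MB/person/day")   # ~147MB

# Our family's numbers, as an admittedly unscientific sanity check.
family_per_day = 75.1e6                # ~75.1MB/day over the last year
adults = 2
per_adult = family_per_day / adults    # ~37.6MB/adult/day

creators = population * 0.5 * (1 - 0.33)   # ~2.3B people actively creating data
individual_total = creators * per_adult
print(f"individuals: ~{individual_total/1e15:.1f}PB/day")               # ~85.5PB/day
```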

Naturally, I would consider myself a power data creator, but:

  • We are not doing much with video production, which creates gobs of data.
  • Also, my wife retains camera rights and I only take the occasional photo with my cell phone. So I wouldn’t say we are heavy into photography.

Nonetheless, 37.6MB/day on average seems exceptionally high, even for us.

Data creation by companies

However, that XB a day accounts for corporate data generation as well as individuals’. Hoovers, a US corporate database, lists about 33M companies worldwide. These are probably the biggest 33M and no doubt create lots of data each day.

Given the above, that individuals probably account for 86.5PB/day, that leaves about ~913.5PB/day for the Hoovers DB of 33M companies to create. By my calculations this would say each of these companies is generating about 27.6GB/day. No doubt there are plenty of companies out there doing this each day, but the average company generates 27.6GB a day?? I don’t think so.

OK, my count of companies could be wildly off. Perhaps the 33M companies in the Hoovers DB represent only the top 20% of companies worldwide, which means that maybe there are another 132M smaller companies out there, totaling 165M companies. Now the 913.5PB/day says the average company generates ~5.5GB/day. This still seems high to me, especially considering it is an average across all 165M companies worldwide.
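
And the corporate side of the estimate, under the same assumptions (1XB/day total, ~86.5PB/day from individuals, and either Hoovers’ 33M companies or 165M if those are only the top 20%), as a rough Python sketch:

```python
# What's left for companies after individuals, per the estimates above.
XB, PB, GB = 10**18, 10**15, 10**9

world_per_day = 1 * XB
individuals_per_day = 86.5 * PB
companies_per_day = world_per_day - individuals_per_day    # ~913.5PB/day

for n_companies in (33e6, 165e6):      # Hoovers' count vs. the "top 20%" assumption
    per_company = companies_per_day / n_companies
    print(f"{n_companies/1e6:.0f}M companies -> ~{per_company/GB:.1f}GB/company/day")
# -> 33M companies -> ~27.7GB/company/day
# -> 165M companies -> ~5.5GB/company/day
```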

Most analysts predict data creation is growing by over 100% per year, so that XB/day number for this year will be 2XB/day next year.

Of course I have been looking at a new HD video camera for my birthday…

Sony HDR-TG5V