Flash’s only at 5% of data storage

We have been hearing for years that NAND flash is at price parity with disk. But at this week’s Flash Memory Summit, Darren Thomas, VP of Micron’s Storage Business Unit, said in his keynote that NAND stores only 5% of the bits in a data center.

Darren’s session was all about how to get flash to become more than 5% of data storage, which he called “crossing the chasm”. I assume the 5% is measured against yearly data storage shipped.

Flash’s adoption rate

Darren said that last year flash climbed from 4% to 5% of data center storage, but he made no mention of whether flash’s adoption was accelerating. According to another of Darren’s charts, flash is expected to ship ~77B Gb of storage in 2015 and should grow to about 240B Gb by 2019.

If the ratio of flash bits shipped to data centers (vs. all flash bits shipped) holds constant, then flash should be ~15% of data storage by 2019. But this assumes data storage doesn’t grow. If we assume a 10% Y/Y CAGR for data storage, then flash would represent only ~9% of overall data storage.
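A quick back-of-envelope version of that projection (the shipment figures come from Darren’s charts; the 10% CAGR and the five-year compounding window are this post’s assumptions):

```python
# Sketch of the flash-share projection above. Figures are from Darren's
# charts; the 10% CAGR for total storage is this post's assumption.
flash_2015 = 77e9            # ~77B Gb of flash expected to ship in 2015
flash_2019 = 240e9           # ~240B Gb projected for 2019
flash_share_2015 = 0.05      # flash is ~5% of data center storage today

total_2015 = flash_2015 / flash_share_2015   # implied total storage shipped

# Case 1: total data storage stays flat through 2019
share_flat = flash_2019 / total_2015
print(f"flat storage: flash ~{share_flat:.1%} of storage in 2019")

# Case 2: total storage grows 10% per year over five years
total_2019 = total_2015 * 1.10 ** 5
share_grow = flash_2019 / total_2019
print(f"10% Y/Y CAGR: flash ~{share_grow:.1%} of storage in 2019")
```

This lands near the ~15% and ~9% figures above.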

Data growth at 10% could be conservative. A 2012 EE Times article said the 2010-2015 data growth CAGR would be 32%, and IDC’s 2012 digital universe report said that between 2012 and 2020 data will double every two years, a ~44% CAGR. But both numbers could be talking about the world’s data growth, not just the data center’s.

How to cross this chasm?

Geoffrey Moore, author of Crossing the Chasm, came up on stage as Darren discussed what he thought it would take to go beyond early adopters (visionaries) to the early majority (pragmatists) and reach wider flash adoption in data center storage. (See the Wikipedia article for a summary of Crossing the Chasm.)

As one example of crossing the chasm, Darren talked about the electric light bulb. At its introduction it competed against candles, oil lamps, gas lamps, etc., and it was the most expensive lighting system of the time.

But when people realized that electric lights allowed you to do things at night other than go to sleep, adoption took off. The competing technologies of the day did provide lighting; it just wasn’t very good. In fact, most people went to bed at night because the light then available was so poor.

However, the electric bulb’s higher-performing lighting solution opened up the night to other activities.

What needs to change in NAND flash marketing?

From Darren’s perspective, the problem with flash today is that marketing and sales of flash storage are all about speeds, feeds, and relative pricing against disk storage. But what’s needed is a discussion of the disruptive benefits of flash/NAND storage that are impossible to achieve with disk today.

What are the disruptive benefits of NAND/flash storage, unrealizable with disk today?

  1. Real time analytics and other RT applications;
  2. More responsive mobile and data center applications;
  3. Greener, quieter, and potentially denser data center;
  4. Storage for mobile, IoT and other ruggedized application environments.

Only the first three above apply to data centers. And none seem as significant as opening up the night, but maybe I am missing a few.

Also, the Wikipedia article cited above states that a Crossing the Chasm approach works best for disruptive or discontinuous innovations, and that more continuous innovations (ones that don’t cause significant behavioral change) do better with Everett Rogers’s standard diffusion of innovations approach (see the Wikipedia article for more).

So is NAND flash a disruptive or a continuous innovation? Darren seems firmly in the disruptive camp today.


Photo Credit(s): 20-nanometer NAND flash chip, IntelFreePress’ photostream

Toshiba studies laptop write rates confirming SSD longevity

Toshiba's New 2.5" SSD from SSD.Toshiba.com

Today Toshiba announced a new series of SSDs based on their 32nm MLC NAND technology. The new technology is interesting, but what caught my eye was another part of their website, i.e., their SSD FAQs. We have talked about MLC NAND technology before and have discussed its inherent reliability limitations, but this is the first time I have seen a company discuss its reliability estimates so publicly. This is documented more fully in an IDC white paper on their site, but the summary on the FAQ web page covers most of it.

Toshiba’s answer to the MLC write endurance question revolves around how much data a laptop user writes per day, which their study makes clear. Essentially, Toshiba assumes MLC NAND write endurance is 1,400 write/erase cycles, and for their 64GB drive a user would have to write, on average, 22GB/day for 5 years before exceeding the manufacturer’s warranty on write endurance cycles alone.

Let’s see:

  • 5 years is ~1825 days
  • 22GB/day over 5 years would be over 40,000GB of data written
  • If we divide this by the 1,400 MLC W/E cycle limit given above, we get something like 28.7 NAND pages that could fail while still supporting write reliability.

Not sure what Toshiba’s MLC SSD uses for a page size, but it’s not unusual for SSDs to ship with an additional 20% of capacity to over-provision for write endurance and ECC. Given that 20% of 64GB is ~12.8GB, and it has to sustain at least ~28.7 NAND page failures, this puts Toshiba’s MLC NAND page at something like 512MB or ~4Gb, which makes sense.
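Toshiba’s break-even arithmetic above can be sketched as follows (the 1,400-cycle and 22GB/day figures are from their FAQ; write amplification and wear-leveling overhead are ignored):

```python
# Back-of-envelope check of Toshiba's endurance math (figures from their
# FAQ; no write amplification or wear-leveling overhead is modeled).
we_cycles = 1400             # assumed MLC NAND write/erase endurance
daily_writes_gb = 22         # Toshiba's break-even write rate for 64GB
days = 5 * 365               # 5-year warranty period

total_written_gb = daily_writes_gb * days        # total host writes
worn_gb = total_written_gb / we_cycles           # GB cycled 1,400 times each
print(f"{total_written_gb} GB written, wearing ~{worn_gb:.1f} GB of NAND")
```

That reproduces the ~40,000GB written and the ~28.7 figure above.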

MLC vs. SLC write endurance from SSD.Toshiba.com

The not-so-surprising thing about this analysis is that as drive capacity goes up, write endurance concerns diminish, because the amount of data that needs to be written daily to wear out the drive goes up linearly with the capacity of the SSD. Toshiba’s latest drive announcements offer 64/128/256GB MLC SSDs for the mobile market.

Toshiba studies mobile users write activity

To come at their SSD reliability estimate from another direction, Toshiba’s laptop usage modeling study of over 237 mobile users showed the “typical” laptop user wrote an average of 2.4GB/day (with auto-save & hibernate on) and a “heavy” laptop user wrote 9.2GB/day under similar settings. Now averages are well and good, but to really put this into perspective one needs to know the workload variability. Nonetheless, their published results do put a rational upper bound on how much data typical laptop users write during a year, which can then be used to compute (MLC) SSD drive reliability.
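Combining the study’s write rates with the 1,400-cycle endurance figure gives a rough drive lifetime. This is my own extrapolation, not Toshiba’s, and it assumes perfect wear leveling and no write amplification:

```python
# Rough SSD lifetime at Toshiba's measured laptop write rates. My own
# extrapolation: assumes perfect wear leveling, no write amplification.
capacity_gb = 64
we_cycles = 1400
endurance_gb = capacity_gb * we_cycles           # ~89,600 GB of total writes

for label, gb_per_day in [("typical", 2.4), ("heavy", 9.2)]:
    years = endurance_gb / gb_per_day / 365
    print(f"{label} user at {gb_per_day} GB/day: ~{years:.0f} years")
```

Even the “heavy” user comes out well past the 5-year warranty period under these ideal assumptions.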

I must applaud Toshiba for publishing some of their mobile user study information to help us all better understand SSD reliability for this environment. It would have been better to see the complete study including all the statistics, when it was done, how users were selected, and it would have been really nice to see this study done by a standard’s body (say SNIA) rather than a manufacturer, but these are all personal nits.

Now, I can’t wait to see a study on write activity for the “heavy” enterprise data center environment, …

What’s happening with MRAM?

16Mb MRAM chips from Everspin

At the recent Flash Memory Summit there were a few announcements that show continued development of MRAM, a magnetism-based technology which can substitute for NAND or DRAM and has unlimited write cycles. My interest in MRAM stems from its potential as a substitute for the SLC and MLC NAND flash in today’s SSDs, which have much more limited write cycles.

MRAM has the potential to replace NAND SSD technology because of its write speed (current prototypes write at 400MHz, i.e., a few nanoseconds per write) and its potential to go up to 1GHz. At 400MHz, MRAM is already much, much faster than today’s NAND. And with no write limits, MRAM technology should be very appealing to most SSD vendors.

The problem with MRAM

The only problem is that current MRAM chips use a 150nm chip design process, whereas today’s NAND ICs use a 32nm process. This means current MRAM chips hold about 1/1000th the capacity of today’s NAND chips (16Mb MRAM from Everspin vs. 16Gb NAND from multiple vendors). MRAM has to get on the same (chip) design node as NAND to make a significant play for storage-intensive applications.

It’s encouraging that somebody is at least starting to manufacture MRAM chips rather than this technology remaining a lab prototype. From my perspective, it can only get better from here…