Top 10 storage technologies over the last decade

Aurora's Perception or I Schrive When I See Technology by Wonderlane (cc) (from Flickr)

Some of these technologies were in development prior to 2000, some were available in other domains but not in storage, and some existed in a few subsystems but had yet to become as popular as they are today.  In no particular order, here are my top 10 storage technologies for the decade:

  1. NAND based SSDs – Solid state drives (SSDs) based on DRAM and other technologies were available last century, but over the last decade NAND flash based devices have come to dominate SSD technology and have altered the storage industry forevermore.  Today, it’s nigh impossible to find enterprise class storage that doesn’t support NAND SSDs.
  2. GMR heads – Giant magneto-resistance disk heads have become commonplace over the last decade and have allowed disk drive manufacturers to double data density every 18-24 months.  Now GMR heads are starting to transition over to tape storage and will enable that technology to increase data density dramatically.
  3. Data deduplication – Deduplication technologies emerged over the last decade as a complement to higher density disk drives and a means to back up data more efficiently.  Deduplication technology can be found in many different forms today, ranging from file and block storage systems and backup storage systems to backup software-only solutions.
  4. Thin provisioning – Few would dispute that thin provisioning emerged last century, but it took the last decade for it to really find its place in the storage pantheon.  One can hardly find a data center class storage device that does not support thin provisioning today.
  5. Scale-out storage – Last century if you wanted to get higher IOPS from a storage subsystem you could add cache or disk drives but at some point you hit a subsystem performance wall.  With scale-out storage, one can now add more processing elements to a storage system cluster without having to replace the controller to obtain more IO processing power.  The link reference talks about the use of commodity hardware to provide added performance but scale-out storage can also be done with non-commodity hardware (see Hitachi’s VSP vs. VMAX).
  6. Storage virtualization – Server virtualization has taken off as the dominant data center paradigm over the last decade, and its counterpart in storage has become more viable as well.  Storage virtualization was originally used to migrate data from old subsystems to new storage, but today it can be used to manage and migrate data across PBs of physical storage, dynamically optimizing data placement for cost and/or performance.
  7. LTO tape – When IBM dominated IT in the middle to late last century, the tape format du jour always matched IBM’s tape technology.  As the decade dawned, IBM was no longer the dominant player and tape technology was starting to diverge into a babble of differing formats.  As a result, IBM, Quantum, and HP put their technology together and created a standard tape format, called LTO, which has become the new dominant tape format for the data center.
  8. Cloud storage – It’s unclear just when over the last decade cloud storage emerged, but it seemed to arrive as a supplement to cloud computing, which also appeared this past decade.  Storage service providers existed earlier but, due to bandwidth limitations and storage costs, didn’t survive the dotcom bubble.  Over this past decade both bandwidth and storage costs have come down considerably, and cloud storage has now become a viable technological solution to many data center issues.
  9. iSCSI – SCSI has taken on many forms over the last couple of decades, but iSCSI has altered the dominant block storage paradigm from a single, pure FC based SAN to a plurality of technologies.  Nowadays, SMB shops can have block storage without the cost and complexity of FC SANs, using the LAN networking technology they already have.
  10. FCoE – One could argue that this technology is still maturing today, but once again SCSI has opened up another way to access storage.  FCoE has the potential to offer all the robustness and performance of FC SANs over data center Ethernet hardware, simplifying and unifying data center networking onto one technology.

No doubt others would differ on their top 10 storage technologies over the last decade, but I strove to find technologies that significantly changed data storage between 2000 and today.  These 10 seemed to me to fit the bill better than most.

Comments?

The future of data storage is MRAM

Core Memory by teclasorg

We have been discussing NAND technology for quite a while now, but this month I ran across an article in IEEE Spectrum titled “a SPIN to REMEMBER – Spintronic memories to revolutionize data storage“. The article discussed a form of magneto-resistive random access memory, or MRAM, that uses quantum mechanical spin effects, or spintronics, to record data. We have talked about MRAM technology before and progress has been made since then.

Many in the industry will recall that current GMR (giant magneto-resistance) heads and TMR (tunnel magneto-resistance) next generation disk read heads already make use of spintronics to detect magnetized bit values in disk media. GMR heads detect bit values on the media through a change in the head’s electrical resistance.

Spintronics, however, can also be used to record data as well as read it. These capabilities are being exploited in MRAM technology, which uses a ferromagnetic material to record data in magnetic spin alignment – spin up means 0; spin down means 1 (or vice versa).

The technologists claim that when MRAM reaches its full potential it could conceivably replace DRAM, SRAM, NAND, and hard disk drives, that is, all current electrical and magnetic data storage. Some of MRAM’s advantages include effectively unlimited write cycles, fast reads and writes, and data non-volatility.

MRAM reminds me of old fashioned magnetic core memory (in the photo above), which used magnetic polarity to record non-volatile data bits. Core was a memory mainstay in the early years of computing, before the advent of semiconductor devices like DRAM.

Back to the future – MRAM

However, the problems with MRAM today are that it is low density, power hungry and very expensive. But technologists are working on all of these problems with the view that the future of data storage will be MRAM. In fact, researchers in North Carolina State University’s (NCSU) Electrical Engineering department have had some success with reducing power requirements and increasing density.

As for data density, NCSU researchers now believe they can record data in cells roughly 20nm across, better than current bit-patterned media, the next generation disk recording medium. However, reading data out of such a small cell will prove difficult and may require a separate read head on top of each cell. The fact that all of this is created with normal silicon fabrication methods makes doing so at least feasible, but the added chip costs may be hard to justify.
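For a rough sense of what 20nm cells would mean, here is a back-of-envelope areal density calculation. It is a minimal sketch assuming square, tightly packed one-bit cells, and the ~1 Tbit/in² figure for bit-patterned media is my assumption rather than a number from the article.

```python
# Back-of-envelope areal density for 20 nm MRAM cells (assumed square cells,
# one bit per cell, no spacing overhead -- an optimistic upper bound).
NM_PER_INCH = 25.4e6          # nanometers per inch
cell_pitch_nm = 20.0          # cell size quoted by the NCSU researchers

bits_per_inch = NM_PER_INCH / cell_pitch_nm
areal_density_tbit_per_sq_in = bits_per_inch ** 2 / 1e12

print(f"~{areal_density_tbit_per_sq_in:.1f} Tbit/in^2")   # ~1.6 Tbit/in^2
# For comparison, bit-patterned media demos of the era were targeting roughly
# 1 Tbit/in^2 (assumed figure), so 20 nm cells would indeed beat it.
```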

Regarding the high power, their most recent design records data by electronically controlling the magnetism of a cell. They are using a dilute magnetic semiconductor material doped with gallium manganese, which can hold spin alignment (see the article for more information). They are also using a semiconductor p-n junction on top of the MRAM cell. Apparently, at the p-n junction they can control the magnetization of the MRAM cell by applying -5 volts or removing that voltage. Today the magnetization is temporary, but they are working on solutions for this as well.

NCSU researchers would be the first to admit that none of this is ready for prime time, and they have yet to demonstrate an MRAM memory device with 20nm cells in the lab, but the feeling is that it’s all just a matter of time and lots of research.

Fortunately, NCSU has lots of help. It seems Freescale, Honeywell, IBM, Toshiba and Micron are also looking into MRAM technology and its applications.

—–

Let’s see: using electron spin alignment in a magnetic medium to record data bits, with a read head to read out the spin values – couldn’t something like this be used in some sort of next generation disk drive that uses the ferromagnetic material as a recording medium? Hey, aren’t disks already using ferromagnetic material for recording media? Could MRAM be fabricated or laid down as a form of magnetic disk media? Maybe there’s life in disks yet….

What do you think?

Micron’s new P300 SSD and SSD longevity

Micron P300 (c) 2010 Micron Technology

Micron just announced a new SSD drive based on their 34nm SLC NAND technology with some pretty impressive performance numbers.  They used an independent organization, Calypso SSD testing, to supply the performance numbers:

  • Random Read 44,000 IO/sec
  • Random Writes 16,000 IO/sec
  • Sequential Read 360MB/sec
  • Sequential Write 255MB/sec

Even more impressive considering this performance was generated using SATA 6Gb/s and was measured after reaching “SNIA test specification – steady state” (see my post on SNIA’s new SSD performance test specification).

The new SATA 6Gb/s interface is a bit of a gamble, but one can always use an interposer to support FC or SAS interfaces.  In addition, many storage subsystems today already support SATA drives, so its interface may not even be an issue.  The P300 can easily support 3Gb/s SATA if that’s what’s available; sequential performance will suffer, but random IOPS won’t be much affected by interface speed.

The advantage of SATA 6Gb/s is that it’s a simple interface and costs less to implement than SAS or FC.  The downside is the loss of performance until 6Gb/s SATA takes hold in enterprise storage.
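To make the interface trade-off concrete, here is a minimal sketch of usable bandwidth at each link speed. The 8b/10b encoding overhead is standard for SATA of that era, but the 4KB random transfer size is my assumption, not part of the Calypso results.

```python
# Rough sketch of why a 3Gb/s SATA link hurts the P300's sequential reads
# but barely touches its random IOPS (4 KB transfer size is an assumption).
def sata_usable_mb_per_s(link_gbps):
    """8b/10b encoding means 10 link bits per data byte."""
    return link_gbps * 1e9 / 10 / 1e6   # MB/s, ignoring protocol overhead

for link in (3, 6):
    cap = sata_usable_mb_per_s(link)
    seq_read = min(360, cap)                # drive spec: 360 MB/s sequential read
    rand_read_mb = 44_000 * 4096 / 1e6      # 44K IOPS * 4 KiB ~= 180 MB/s
    print(f"SATA {link}Gb/s: ~{cap:.0f} MB/s usable, "
          f"seq read capped at ~{seq_read:.0f} MB/s, "
          f"4KB random read needs only ~{rand_read_mb:.0f} MB/s")
```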

P300’s SSD longevity

I have done many posts discussing SSDs and their longevity or write endurance, but this is the first time I have heard any vendor describe drive longevity using “total bytes written” to a drive. Presumably this is a new SSD write endurance standard coming out of JEDEC, but I was unable to find any reference to the standard’s definition.

In any case, the P300 comes in 50GB, 100GB and 200GB capacities, and the 200GB drive has a “total bytes written” capability of 3.5PB, with the smaller versions having proportionally lower longevity specs. For the 200GB drive, that’s almost 5 years of 10 complete full-drive writes a day, every day of the year.  From my perspective this seems enough to put any SSD longevity concerns to rest.  Although at 255MB/sec sequential writes, the P300 can actually sustain ~10X that rate per day – assuming you never read any data back.
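Here is the arithmetic behind those figures as I work it out from the vendor numbers, with decimal GB/TB/PB assumed throughout:

```python
# Quick check of the P300 longevity math (vendor figures; decimal units assumed).
capacity_gb   = 200
tbw_pb        = 3.5                     # "total bytes written" spec, PB
seq_write_mbs = 255

total_bytes   = tbw_pb * 1e15
full_drive_writes = total_bytes / (capacity_gb * 1e9)        # ~17,500 fills
years_at_10_fills_per_day = full_drive_writes / 10 / 365     # ~4.8 years

max_bytes_per_day = seq_write_mbs * 1e6 * 86_400             # ~22 TB/day
fills_per_day_flat_out = max_bytes_per_day / (capacity_gb * 1e9)  # ~110 fills

print(full_drive_writes, round(years_at_10_fills_per_day, 1),
      round(fills_per_day_flat_out))
# ~110 drive fills/day at full sequential write speed is roughly 10X the
# 10-fills/day pace behind the "almost 5 years" figure.
```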

I am sure over-provisioning, wear leveling and other techniques were used to attain this longevity. Nonetheless, whatever they did, the SSD market could use more of it.  At this level of SSD longevity the P300 could almost be used in a backup dedupe appliance, if there were a need for the performance.

You may recall that Micron and Intel have a joint venture to produce NAND chips.  But the joint venture doesn’t include applications of their NAND technology.  This is why Intel has their own SSD products and why Micron has started to introduce their own products as well.

—–

So which would you rather see for an SSD longevity specification:

  • Drive MTBF
  • Total bytes written to the drive,
  • Total number of Program/Erase cycles, or
  • Total drive lifetime, based on some (undefined) predicted write rate per day?

Personally I like total bytes written because it defines the drive reliability in terms everyone can readily understand but what do you think?

SPECsfs2008 CIFS ORT performance – chart of the month

(c) 2010 Silverton Consulting Inc., All Rights Reserved

The above chart on SPECsfs(R) 2008 results was discussed in our latest performance dispatch that went out to SCI’s newsletter subscribers last month.  We have described Apple’s great CIFS ORT performance in previous newsletters but here I would like to talk about NetApp’s CIFS ORT results.

NetApp had three new CIFS submissions published this past quarter, all using the FAS3140 system but with varying drive counts/types and Flash Cache installed.  Recall that Flash Cache used to be known as PAM-II and is an onboard system card which holds 256GB of NAND memory used as an extension of system cache.  This differs substantially from using NAND in an SSD as a separate tier of storage, as many other vendors currently do.  The newly benchmarked NetApp systems included:

  • FAS3140 (FCAL disks with Flash Cache) – used 56 15Krpm FC disk drives with 512GB of Flash Cache (2 cards)
  • FAS3140 (SATA disks with Flash Cache) – used 96 7.2Krpm SATA disk drives with 512GB of Flash Cache
  • FAS3140 (FCAL disks) – used 242 15Krpm FC disk drives and had no Flash Cache whatsoever

If I had to guess, the point of this exercise was to show that one can offset a large complement of fast hard disk drives either by using Flash Cache with significantly fewer (~1/4 as many) fast disk drives or by using Flash Cache with somewhat more SATA drives.  In another chart from our newsletter one could see that all three systems produced very similar CIFS throughput results (CIFS ops/sec), but in CIFS ORT (see above) the differences between the three systems are much more pronounced.

Why does Flash help CIFS ORT?

As one can see, the best CIFS ORT performance of the three came from the FAS3140 with FCAL disks and Flash Cache, which managed a response time of ~1.25 msec.  The next best performer was the FAS3140 with SATA disks and Flash Cache, with a CIFS ORT of just under ~1.48 msec.  The worst performer of the bunch was the FAS3140 with only FCAL disks, which came in at ~1.84 msec CIFS ORT.  So why the different ORT performance?

Mostly the better performance is due to the increased cache available in the Flash Cache systems.  If one were to look at the SPECsfs 2008 workload, one would find that less than 30% of it is read and write data activity, and the rest is what one might call meta-data requests (query path info @21.5%, query file info @12.9%, create = close @9.7%, etc.).  While read data may not be very cache friendly, most of the meta-data activity and all of the write activity are cache friendly.  Meta-data activity is more cache friendly primarily because the requests are relatively small, and any write data goes to cache before being destaged to disk.  As such, this more cache friendly workload generates, on average, better response times when one has larger amounts of cache.

For proof one need look no further than the relative ORT performance of the FAS3140 with SATA and Flash Cache vs. the FAS3140 with just FCAL disks.  The Flash Cache/SATA drive system had ~25% better ORT results than the FCAL-only system, even with significantly slower and far fewer disk drives.
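For what it’s worth, the “~25%” depends on which system is treated as the baseline; a quick sketch using the charted ORT values:

```python
# The "~25% better" figure depends on the baseline chosen for the comparison.
ort = {"FCAL + Flash Cache": 1.25, "SATA + Flash Cache": 1.48, "FCAL only": 1.84}

higher = (ort["FCAL only"] - ort["SATA + Flash Cache"]) / ort["SATA + Flash Cache"]
print(f"FCAL-only ORT is ~{higher:.0%} higher than SATA + Flash Cache")   # ~24%

lower = (ort["FCAL only"] - ort["SATA + Flash Cache"]) / ort["FCAL only"]
print(f"...or SATA + Flash Cache ORT is ~{lower:.0%} lower than FCAL-only")  # ~20%
```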

The full SPECsfs 2008 performance report will go up on SCI’s website later this month in our dispatches directory.  However, if you are interested in receiving this report now and future copies when published, just subscribe by email to our free newsletter and we will email the report to you now.

Save the planet – buy fatter disks and flash

PC hard drive capacity over time (from commons.wikimedia.org) (cc)

Well, maybe that overstates the case, but there is no denying that both fatter (higher capacity) drives and flash memory (used as cache or in SSDs) save energy in today’s data center.  The interesting thing is that the trend toward higher capacity drives has been going on for decades now (see chart), but only within the last few years has it been given any credit for energy reduction.  In contrast, flash in SSDs and cache is a relative newcomer but saves energy nonetheless.

I almost can’t recall a time when disk drives weren’t doubling in capacity every 18 to 24 months.  The above chart only shows PC drive capacities over time, but enterprise drives have followed a similar curve.  The coming hard drive capacity wall may slow things down in the future, but just last week IBM announced they were moving from a 300GB to a 600GB 15Krpm enterprise class disk drive in their DS8700 subsystem.  While doubling capacity may not quite halve energy use, the reduction is still significant.   Such energy reductions are even more dramatic with slower, higher density disks. These SATA disks are moving from 1TB to 2TB later this year and should cut energy use per TB considerably.
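To illustrate the point, here is a toy watts-per-TB comparison. The drive wattages are round numbers I am assuming for illustration, not vendor specifications.

```python
# Illustrative only: assumed drive wattages, not vendor specs.
drives = {
    "300GB 15Krpm FC":  {"capacity_tb": 0.3, "watts": 15.0},  # assumed ~15W
    "600GB 15Krpm FC":  {"capacity_tb": 0.6, "watts": 15.5},  # assumed roughly flat power
    "1TB 7.2Krpm SATA": {"capacity_tb": 1.0, "watts": 8.0},   # assumed ~8W
    "2TB 7.2Krpm SATA": {"capacity_tb": 2.0, "watts": 8.5},
}
for name, d in drives.items():
    print(f"{name}: {d['watts'] / d['capacity_tb']:.1f} W/TB")
# Doubling capacity at near-constant power nearly halves the watts per TB stored.
```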

Similarly, the density of the NAND flash used in SSDs is increasing capacity at close to, if not faster than, the rate of disk storage.  ASIC feature size continues to shrink and, as such, more and more flash storage is packed onto the same die size.  Improvements like these are doubling the capacity of SSDs and flash memory.  While SSD power reduction due to density improvements may not be as significant as it is for disk, we hope to see power use per NAND cell flatten out over time.  This flattening of power use is already happening with processing chips, and we see little reason why similar techniques couldn’t apply to NAND.

But the story with flash/SSDs is a bit more complicated:

  • SSDs don’t consume as much energy as a standard disk drive at the same capacity, so a 146GB enterprise class SSD should consume much less energy than a 146GB enterprise class disk drive.
  • SSDs don’t exhibit the significant energy spike that hard disk drives encounter when driven at higher IOPS, as was discussed in SSDs vs. Drives energy use.
  • SSDs can often replace many more disk spindles than pure capacity equivalence would dictate.  Some data centers use more disks than necessary, spreading workload performance over more spindles and wasting storage, power and cooling.  By moving this data to SSDs or adding flash cache to a subsystem, spindle counts can be reduced dramatically, slashing energy use for storage (see the sketch below).
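As a hypothetical sizing example of that last point (all the workload numbers below are my own assumptions, chosen only to show the shape of the calculation):

```python
# Hypothetical sizing example: spindles needed for IOPS vs. for capacity.
required_iops      = 20_000
hdd_iops           = 180      # rough figure for a 15Krpm drive (assumption)
hdd_capacity_gb    = 300
ssd_iops           = 16_000   # P300-class random write IOPS, the worst case
needed_capacity_gb = 3_000    # what the workload actually stores

hdds_for_iops     = -(-required_iops // hdd_iops)              # 112 drives
hdds_for_capacity = -(-needed_capacity_gb // hdd_capacity_gb)  # only 10 drives
ssds_for_iops     = -(-required_iops // ssd_iops)              # 2 drives

print(hdds_for_iops, hdds_for_capacity, ssds_for_iops)
# A 112-spindle layout exists purely for IOPS; a couple of SSDs (or flash cache
# in front of ~10 capacity drives) can eliminate most of those spindles.
```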

All this says that using SSDs or flash in place of disk drives reduces data center power requirements.  So if you’re interested in saving energy, and thus helping to save the planet, buy fat(ter) disks and flash for your data storage needs.

Brought to you on behalf of Planet Earth in honor of Earth Day.

WD’s new SiliconEdge Blue SSD data write spec

Western Digital's SiliconEdge Blue SSD SATA drive (from their website)

Western Digital (WD) announced their first SSD drive for the desktop/laptop market space today.  The drive offers the typical 256, 128, and 64GB capacity points over a SATA interface.  Performance looks OK at 5K random read or write IO/s, with sustained transfers at 250 and 140MB/s for read and write respectively.  But what caught my eye was a new specification I hadn’t seen before, indicating a maximum written per day of 17.5, 35 and 70GB/d for the 64, 128 and 256GB drives respectively, using WD’s Operational Lifespan – LifeEST(tm) definition.

I couldn’t find anything that said which NAND technology is used in the device, but it likely uses MLC NAND.  In a prior posting we discussed a Toshiba study which said that a “typical” laptop user writes about 2.4GB/d and a “heavy” laptop user writes about 9.2GB/d.  That data would indicate that WD’s new 64GB drive can handle almost 2X the defined “heavy” laptop workload, and their other drives would handle it just fine.  A data write rate for desktop work has not, as far as I can tell, been published, but presumably it would be greater than for laptop users.
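The comparison works out roughly as follows; a quick sketch pairing each LifeEST limit with its capacity point, which is my reading of the spec sheet:

```python
# Comparing WD's LifeEST write limits against the Toshiba usage-study figures.
lifeest_gb_per_day = {"64GB": 17.5, "128GB": 35.0, "256GB": 70.0}
typical_laptop_gb_per_day = 2.4
heavy_laptop_gb_per_day   = 9.2

for model, limit in lifeest_gb_per_day.items():
    print(f"{model}: {limit / heavy_laptop_gb_per_day:.1f}x a heavy laptop user, "
          f"{limit / typical_laptop_gb_per_day:.1f}x a typical one")
# 64GB: ~1.9x heavy / ~7.3x typical; 256GB: ~7.6x heavy / ~29x typical
```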

From my perspective, more information on the drive’s underlying NAND technology, on what a LifeEST specification actually means, and on how much NAND storage is actually present would be nice, but these are all personal nits.  All that aside, I applaud WD for standing up and saying what data write rate their drives can support.  This needs to be a standard part of any SSD specification sheet, and I look forward to seeing more information like this from other vendors as well.

Intel-Micron new 25nm/8GB MLC NAND chip

Intel and Micron 25nm NAND technology

Intel-Micron Flash Technologies just announced another increase in NAND density. This one manages to put 8GB on a single chip with MLC(2) technology in a 167mm² package, roughly half an inch per side.

You may recall that Intel-Micron Flash Technologies (IMFT) is a joint venture between Intel and Micron to develop NAND technology chips. IMFT chips can be used by any vendor and typically show up in Intel SSDs as well as other vendors’ systems. MLC technology is more suited to consumer applications, but at these densities it’s starting to make sense for data center use as well. We have written before about MLC NAND used in enterprise disk by STEC and about Toshiba’s MLC SSDs. But in essence, MLC NAND reliability and endurance will ultimately determine its place in the enterprise.

But at these densities, you can just throw more capacity at the problem to mask MLC endurance concerns. For example, with this latest chip, one could conceivably build a single-layer 2.5″ configuration with almost 200GB of MLC NAND. If you wanted to configure this as a 128GB SSD, you could use the additional 72GB of NAND to replace failing pages. Doing so could conceivably add more than 50% to the life of the SSD.
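A minimal sketch of that over-provisioning arithmetic, using the article’s round numbers (the ~24-die packaging detail is my assumption):

```python
# Sketch of the over-provisioning idea using the article's round numbers.
raw_nand_gb      = 200    # roughly 24 of the new 8GB MLC die in one 2.5" package
advertised_gb    = 128
spare_gb         = raw_nand_gb - advertised_gb
overprovisioning = spare_gb / advertised_gb

print(f"{spare_gb}GB spare, {overprovisioning:.0%} over-provisioning")  # 72GB, ~56%
# Spare pages absorb wear leveling and replace worn-out blocks, so a bigger
# spare pool stretches MLC NAND life well beyond its raw P/E rating.
```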

SLC still has better (~10X) endurance but being able to ship 2X the capacity in the same footprint can help.  Of course, MLC and SLC NAND can be combined in a hybrid device to give some approximation of SLC reliability at MLC costs.

IMFT made no mention of SLC NAND chips at the 25nm technology node, but presumably those will be forthcoming shortly.  If we assume the technology can support 4GB of SLC NAND in a 167mm² chip, it should be of significant interest to most enterprise SSD vendors.

A couple of things were missing from yesterday’s IMFT press release, namely:

  • read/write performance specifications for the NAND chip
  • write endurance specifications for the NAND chip

SSD performance is normally a function of all the technology that surrounds the NAND chip, but it all starts with the chip.  Also, MLC used to be capable of 10,000 write/erase cycles and SLC of 100,000 W/E cycles, but the most recent technology from Toshiba (presumably 34nm) shows an MLC NAND write/erase endurance of only 1,400 cycles.  This seems to imply that as NAND density increases, write endurance degrades. How much is subject to much debate, and with the lack of any standardized W/E endurance specifications and reporting, it’s hard to see how bad it gets.
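To see why this matters, here is a rough endurance model; the write amplification factor of 2 is purely my assumption, and real controllers vary widely.

```python
# Rough endurance model: total bytes written before wear-out is roughly
# capacity * P/E cycles / write amplification (WA value is an assumption).
def total_bytes_written_tb(capacity_gb, pe_cycles, write_amplification=2.0):
    return capacity_gb * pe_cycles / write_amplification / 1000

for label, cycles in (("older MLC, 10,000 cycles", 10_000),
                      ("SLC, 100,000 cycles", 100_000),
                      ("recent MLC, 1,400 cycles", 1_400)):
    print(f"200GB drive, {label}: ~{total_bytes_written_tb(200, cycles):,.0f} TB written")
# ~1,000 TB vs ~10,000 TB vs a mere ~140 TB -- which is why the endurance
# numbers matter at least as much as the density numbers.
```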

The bottom line: capacity is great, but we need to know W/E endurance to really see where this new technology fits.  Ultimately, if endurance degrades significantly, such NAND technology will only be suitable for consumer products.  Of course, at ~10X (just guessing) the size of the enterprise market, maybe that’s OK.

7 grand challenges for the next storage century

Clock tower (4) by TJ Morris (cc) (from flickr)

I saw a recent IEEE Spectrum article on engineering’s grand challenges for the next century and thought something similar should be done for data storage. So this is a start:

  • Replace magnetic storage – most predictions show that magnetic disk storage has another 25 years, and magnetic tape another decade after that, before they run out of steam. Such end dates have been wrong before, but it is unlikely that we will be using disk or tape 50 years from now. Some sort of solid state device seems the most probable next evolution of storage. I doubt this will be NAND, considering its write endurance and other long-term reliability issues, but if such issues could be resolved maybe it could replace magnetic storage.
  • 1000 year storage – paper can be printed today with non-acidic ink and retain its image for over 1,000 years. Nothing in data storage today can claim much more than 100-year longevity. The world needs data storage that lasts much longer than 100 years.
  • Zero energy storage – today SSD/NAND and rotating magnetic media consume energy constantly in order to remain accessible. Ultimately, the world needs some sort of storage that consumes energy only when read or written; such storage would provide “online access with offline power consumption”.
  • Convergent fabrics running divergent protocols – whether it’s Ethernet, InfiniBand, FC, or something new, all fabrics should be able to handle any and all storage (and data center) protocols. The internet has become so ubiquitous because it handles just about any protocol we throw at it. We need the same or something similar for data center fabrics.
  • Securing data – securing books or paper is relatively straightforward today: just throw them in a vault or safety deposit box. Securing data seems like it should be just as simple, yet it is not widely practiced today. It doesn’t have to be that way. We need better, longer-lasting tools and methodologies to secure our data.
  • Public data repositories – libraries exist to provide access to the output of society in the form of books, magazines, papers and other printed artifacts. No such repository exists today for data. Society would be better served if there were library-like institutions that could store data and provide access to it. Most of the obstacles here are legal, due to data ownership, but technological issues exist as well.
  • Associative accessed storage – sequential and random access have been around for over half a century now. Associative storage could complement these, offering another approach that allows data to be retrieved by its content. We can kind of do this today by keywording and indexing data (see the sketch after this list). Biological memory is accessed via associations or linkages to other concepts; once accessed, memories seem to be retrieved almost sequentially from there. Something comparable to biological memory may be required to build more intelligent machines.
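As a toy illustration of content-based retrieval via keyword indexing (the class and method names here are purely hypothetical, not any product’s API):

```python
# A toy content-addressable store: retrieve data by what it contains rather
# than by where it lives (names and structure are purely illustrative).
from collections import defaultdict

class AssociativeStore:
    def __init__(self):
        self.objects = {}                   # object_id -> content
        self.index = defaultdict(set)       # keyword -> object_ids

    def put(self, object_id, content):
        self.objects[object_id] = content
        for word in set(content.lower().split()):
            self.index[word].add(object_id)

    def recall(self, *keywords):
        """Return objects associated with all of the given keywords."""
        ids = set.intersection(*(self.index[k.lower()] for k in keywords))
        return [self.objects[i] for i in ids]

store = AssociativeStore()
store.put("note1", "MRAM uses spin alignment to record bits")
store.put("note2", "GMR heads read bits by a change in resistance")
print(store.recall("bits", "spin"))         # -> the MRAM note
```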

Some of these are already being pursued, while others receive no interest today. Nonetheless, I believe they all deserve investigation if storage is to continue to serve its primary role for society: as a long-term storehouse for our culture, thoughts and deeds.

Comments?