Toshiba’s New MLC NAND Flash SSDs

Toshiba has recently announced a new series of SSDs based on MLC NAND (Yahoo Biz story). This is only the latest in a series of MLC SSDs that Toshiba has released.

Historically, MLC (multi-level cell) NAND has supported higher capacity but has been slower and less reliable than SLC (single-level cell) NAND. The capacity points supplied for the new drives (64, 128, 256, & 512GB) reflect the higher density NAND. Toshiba’s performance numbers for the new drives also look appealing but are probably overkill for most desktop/notebook/netbook users.

Toshiba’s reliability specifications were not listed in the Yahoo story and would probably be hard to find elsewhere (I looked on the Toshiba America website and couldn’t locate any). However, the duty cycle for a desktop/notebook data drive is not that severe, so the fact that MLC can only endure ~1/10th the writes that SLC can is probably not much of an issue.
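
To put some rough numbers behind that, here’s a minimal back-of-envelope sketch. Every figure in it (10,000 MLC program/erase cycles vs. ~100,000 for SLC, a 128GB drive, 10GB/day of host writes, 2X write amplification) is my own illustrative assumption, not a Toshiba specification.

```python
# Back-of-envelope MLC wear estimate; every number here is an
# illustrative assumption, not a Toshiba specification.

capacity_gb = 128              # assumed drive capacity
pe_cycles_mlc = 10_000         # assumed MLC program/erase endurance (~1/10th of SLC)
host_writes_gb_per_day = 10    # assumed desktop/notebook duty cycle
write_amplification = 2.0      # assumed controller write overhead

total_writable_gb = capacity_gb * pe_cycles_mlc / write_amplification
lifetime_years = total_writable_gb / host_writes_gb_per_day / 365
print(f"Estimated wear-out lifetime: {lifetime_years:.0f} years")  # ~175 years
```

Even if these assumptions are off by an order of magnitude, the drive outlives any reasonable desktop/notebook service life, which is why the 1/10th endurance figure doesn’t worry me much here.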

SNIA is working on SSD (or SSS as SNIA calls it, see the SNIA SSSI forum website) reliability but has yet to publish anything externally. I’m unsure whether they will break out MLC vs. SLC drives, but it’s certainly worthy of discussion.

But the advantage of MLC NAND SSDs is that they should be 2 to 4X cheaper than SLC SSDs, depending on the number (2, 3 or 4) of bits/cell, and as such, more affordable. This advantage can be reduced by the need to over-provision the device and add more parallelism in order to improve MLC reliability and performance. But both of these facilities are becoming more commonplace and so should be relatively straightforward to support in an SSD.

The question remains, given the reliability differences, whether and when MLC NAND will become reliable enough for enterprise-class SSDs. Although many vendors make MLC NAND SSDs for the notebook/desktop market (Intel, SanDisk, Samsung, etc.), FusionIO is probably one of the few using a combination of SLC and MLC NAND for enterprise-class storage (see FusionIO press release), although calling the FusionIO device an SSD is probably a misnomer. What FusionIO does to moderate MLC endurance issues is not clear, but buffering write data to SLC NAND must certainly play some part.

Testing storage systems – Intel’s SSD fix

Intel’s latest (35nm NAND) SSD shipments were halted today because a problem was identified when modifying BIOS passwords (see IT PRO story). At least they gave a timeframe for a fix – a couple of weeks.

The real question is whether products can be tested sufficiently these days to ensure they work in the field. Many companies today will ship product to end-user beta testers to work out the bugs before the product reaches the field. But beta testing has got to be complemented with active product testing and validation. As such, unless you plan to enlist hundreds or perhaps thousands of beta testers, you could have a serious problem with field deployment.

And therein lies the problem: software products are relatively cheap and easy to beta test; just set up a download site and have at it. But with hardware products, beta testing involves shipping product to end users, which costs quite a bit more to support. So I understand why Intel might be having problems with field deployment.

So if you can’t beta test hardware products as easily as software, then you have to have a much more effective test process. Functional testing and validation is more of an art than a science and can cost significant money and, more importantly, time. All of which brings us back to some form of beta testing.

Perhaps Intel could use their own employees as beta testers, rotating new hardware products from one organization to another over time to get some variability in the use of a new product. Many companies use their new product hardware extensively in their own data centers to validate functionality prior to shipment. In the case of Intel’s SSD drives, these could be placed in the innumerable servers/desktops that Intel no doubt has throughout its corporation.

One can argue whether beta testing takes longer than extensive functional testing. However, given today’s diverse deployments, I believe beta testing can be a more cost-effective process when done well.

Intel is probably trying to figure out just what went wrong in their overall testing process today. I am sure, given their history, they will do better next time.

Digital Rosetta Stone vs. 3D Barcodes

The BBC reported today on a new way to store digital data for 1000 years coming out of Japan (BBC NEWS | Technology | ‘Rosetta stone’ offers digital lifeline). Personally, I don’t feel that silicon storage is the best answer to this problem, and “wireless” read-back may be problematic over protracted periods of time.

Something more like a 3-dimensional bar code makes a lot more sense to me. Such a recording device could easily record a lot more data than paper does today, be readable via laser scans, microscopes, or other light-based mechanisms, and, by being a physical representation, could be manufactured out of many different materials.

That’s not to say that silicon might not be a good, long-lasting material. The article did not go into detail on how the data was recorded, but presumably this etched storage device somehow traps a charge in a particular cell that can be read back electronically – not unlike NAND flash today, but with much better reliability. Still, it is unclear to me why the article states that humidity surrounding the Digital Rosetta Stone device impairs storage longevity. This seems to imply that even though the device is sealed, it can still be impacted by external environmental conditions.

That’s why having a recording device that can be made of many types of materials makes more sense to me. Such a device could conceivably be etched out of marble, ceramics, steel, or any number of other materials. Marble has lasted for millennia in Greece, Italy, and other places. Of course marble is subject to weather and acid rain. But the point is that by having multiple substances that can be used to record data for long periods, all using the same recording format and read-back mechanisms, we can ensure that any number of them can retain data far into the future. Such a 3D barcode could also be sealed in any transparent medium such as glass, which has also been known to last centuries.

Today 3D barcodes can be attached to the surface of a cube, but they could just as easily be attached to a plate, disk, or page. Once attached (or printed) they could easily record vast amounts of data.
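
To give a feel for the capacity argument, here’s a minimal sketch of what encoding data into such a 3D barcode might look like. The voxel grid, the 10-micron feature size, and the encode_3d_barcode helper are all hypothetical illustrations of mine, not anything from the BBC article or an actual 3D barcode format.

```python
import numpy as np

def encode_3d_barcode(data: bytes, side: int) -> np.ndarray:
    """Pack bytes into a side x side x side grid of bit-valued voxels (illustrative only)."""
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    grid = np.zeros(side ** 3, dtype=np.uint8)
    if bits.size > grid.size:
        raise ValueError("data too large for this grid")
    grid[:bits.size] = bits
    return grid.reshape(side, side, side)

cube = encode_3d_barcode(b"Digital Rosetta Stone", side=8)
print(cube.shape)  # (8, 8, 8) voxel cube holding 512 bits

# Hypothetical capacity estimate: a 10mm cube etched at a 10-micron
# voxel pitch holds 1000**3 voxels = 1 billion bits, roughly 125 MB
# of raw data before any error-correction overhead.
print(f"Raw capacity: {1000**3 / 8 / 1e6:.0f} MB")
```

A real format would obviously need registration marks and heavy error correction, but the point stands: a physical, etchable encoding can hold far more than paper.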

In my view magnetic storage cannot last for over 50 years and electronic storage will not last over 100 years; the only thing I know of that can last 1000 years is some physical mechanism. 3D barcodes easily emerge as the answer to this storage problem.

Does cloud storage need backup?

I was talking with a cloud storage vendor the other day and they made an interesting comment: cloud storage doesn’t need to back up data?! They told me that they and most cloud storage providers replicate customer file data so that there are always at least two (or more) copies of customer data residing in the cloud at different sites, zones, or locations. But does having multiple copies of file data eliminate the need to back up data?

Most people back up data to prevent data loss from hardware/software/system failures and from finger checks – user error. Nowadays, I back up my business stuff to an external hard disk nightly, add some family stuff to this and back it all up once a week to external removable media, and once a month take a full backup of all user data on my family Macs (photos, music, family stuff, etc.) to external removable media which is then saved offsite.
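
For what it’s worth, that regimen boils down to a simple tiered schedule. Here’s a minimal sketch of it; the choice of Sunday for the weekly run and the 1st of the month for the monthly run are just assumptions I made for illustration.

```python
from datetime import date

def backup_tiers_due(today: date) -> list[str]:
    """Return which backup tiers from the regimen above are due today (illustrative)."""
    tiers = ["nightly: business files -> external hard disk"]          # every night
    if today.weekday() == 6:                                           # assumed: Sundays
        tiers.append("weekly: business + family files -> removable media")
    if today.day == 1:                                                 # assumed: 1st of the month
        tiers.append("monthly: full user data backup -> removable media, taken offsite")
    return tiers

for task in backup_tiers_due(date.today()):
    print(task)
```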

Over my professional existence (30+ years) I have lost personal data to a hardware/software/system failure maybe a dozen times. These events have gotten much rarer in recent history (thank you, drive vendors). But about once a month I screw something up and delete or overwrite a file I need to keep around. Most often I restore from the hard drive but occasionally use the removable media to retrieve the file.

I am probably not an exception with respect to finger checks. People make mistakes. How cloud storage providers handle restoring deleted file data for user error will be a significant determinant of service quality for most novice and all professional users.

Now in my mind there are a couple of ways cloud storage providers can deal with this problem.

  • Support data backup, NDMP, or something similar which takes a copy of the data off the cloud and manages it elsewhere. This approach has worked for the IT industry for over 50 years now and still appeals to many of us.
  • Never “really” delete file data, by this I mean that you always keep replicated copies of all data that is ever written to the cloud. How a customer accesses such “not really deleted” data is open to debate but suffice it to say some form of file versioning might work.
  • “Delay” file deletion, don’t delete a file when the user requests it, but rather wait until some external event, interval, or management policy kicks in to “actually” delete the file from the cloud. Again some form of versioning may be required to access “delay deleted” data.

Never deleting a file is probably the easiest solution to the problem, but the cloud storage bill would quickly grow out of control. Delaying file deletion is probably a better compromise, but deciding which event, interval, or policy to use to trigger “actually deleting data” to free up storage space is crucial.

Luckily most people realize fairly quickly when they have made a finger check (although they may be reluctant to admit it). So waiting a week, month, or quarter before actually deleting file data would solve this problem. Mainframers may recall generation datasets (files), where one specified the number of generations (versions) of a file and, when this limit was exceeded, the oldest version would be deleted. Also, using some space-threshold trigger to delete old file versions may work, e.g., whenever the cloud reaches 60% of capacity it starts deleting old file versions. Any or all of these could be applied to different classes of data by management policy.
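
To make the “delay deletion” and generation-dataset ideas concrete, here’s a minimal sketch of how a provider might implement them. The class name, the 90-day retention window, and the 5-generation limit are all hypothetical choices of mine, not anything a particular cloud vendor actually does.

```python
from datetime import datetime, timedelta

class DelayedDeleteStore:
    """Sketch of a delay-deletion policy: user deletes only write a tombstone,
    and space is reclaimed later, generation-dataset style."""

    def __init__(self, retention=timedelta(days=90), max_versions=5):
        self.retention = retention        # assumed: keep deleted/old data for a quarter
        self.max_versions = max_versions  # assumed: keep at most 5 generations per file
        self.versions = {}                # path -> list of (timestamp, data, deleted_flag)

    def write(self, path, data):
        self.versions.setdefault(path, []).append((datetime.utcnow(), data, False))
        # enforce the generation limit: the oldest versions fall off first
        self.versions[path] = self.versions[path][-self.max_versions:]

    def delete(self, path):
        # a user-visible delete just appends a tombstone; nothing is freed yet
        self.versions.setdefault(path, []).append((datetime.utcnow(), None, True))

    def restore(self, path):
        # recover from a finger check: return the newest surviving non-deleted version
        for ts, data, deleted in reversed(self.versions.get(path, [])):
            if not deleted and data is not None:
                return data
        return None

    def reap(self, now=None):
        # actually free space for versions older than the retention window;
        # a real system would also keep the newest live version regardless of age
        now = now or datetime.utcnow()
        for path, vers in self.versions.items():
            self.versions[path] = [v for v in vers if now - v[0] < self.retention]
```

A space-threshold trigger would simply call something like reap() with a shorter window whenever cloud capacity crosses, say, 60%.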

Of course, all of this is pretty much what a sophisticated backup package does today. Backup software retains old file data for a defined timeframe, typically on some other media or storage than where the data is normally stored. Backup storage space/media can be reclaimed on a periodic basis, such as reusing backup media every quarter or only retaining a quarter’s worth of data in a VTL. Backup software removes the management of file versioning from the storage vendor and places it in the hands of the backup vendor. In any case, many of the same policies for dealing with deleted file versions discussed above can apply.

Nonetheless, in my view cloud storage providers must do something to support restoration of deleted file data. File replication is a necessary and great solution to deal with hardware/software/system failures but user error is much more likely. Not supplying some method to restore files when mistakes happen is unthinkable.

Why SO/CFS, Why Now

Why all the interest in Scale-out/Cluster File Systems (SO/CFS) and why now?

Why now is probably the easiest to answer: valuations are down. NetApp is migrating GX to their main platform, IBM continues to breathe life into GPFS, HP buys IBRIX, and now LSI buys ONStor. It seems every day brings some new activity with scale-out/cluster file system products. Interest seems to be based on the perception that SO/CFS would make a good storage backbone/infrastructure for Cloud Computing. But this takes some discussion…

What can one do with an SO/CFS?

  • As I see it, SO/CFS provides a way to quickly scale out and scale up NAS system performance. This doesn’t mean that file data can be in multiple locations/sites or that files can be supplied across the WAN, but file performance can be scaled independently of file storage.
  • What seems even more appealing is the amount of data/size of the file systems supported by SO/CFS systems. It seems like PBs of storage can be supported and served up as millions of files. Now that sounds like something useful to Cloud environments if one could front-end it with some Cloud-enabled services.

So why aren’t they taking off? Low valuations signal to me they aren’t doing well. I think today few end users need to support millions of files, PBs of data, or the performance these products could sustain. Currently, their main market is high performance computing (HPC) labs, but there are only so many physics/genomics labs out there that need this much data/performance.

That’s where the cloud enters the picture. The cloud’s promise is that it can aggregate everybody’s computing and storage demand into a service offering where 1,000s of users can log in from the internet and do their work. With 1,000s of users each with 1,000s of files, we now start to talk in the million-file range.

Ok, so if the cloud market is coming, then maybe SO/CFS has some lasting/broad appeal. One can see preliminary cloud services emerging today, especially in backup services such as Mozy or Norton Online Backup (see Norton Online Backup), but not many cloud services exist today with general purpose/generic capabilities, Amazon notwithstanding. If the Cloud market takes time to develop, then buying into SO/CFS technology while it’s relatively cheap and early in its adoption cycle makes sense.

There are many ways to supply cloud storage. Some companies have developed their own brand new solutions here, EMC/Atmos and DataDirect Network/WOS (see DataDirect Network WOS) seem most prominent. Many others exist, toiling away to address this very same market. Which of these solutions survive/succeed in the Cloud market is an open question that will take years to answer.

Quantum OEMs esXpress VM Backup SW

Quantum announced today that they are OEMing esXpress software (from PHD Virtual) to better support VMware VM backups (see press release). This software schedules VMware snapshots of VMs and can then transfer the VM snapshot (backup) data directly to a Quantum DXI storage device.

One free “Professional” esXpress license will ship with each DXI appliance, which allows for up to 4 esXpress virtual backup appliance (VBA) virtual machines to run in a single VMware physical server. An “Enterprise” license can be purchased for $1850, which allows for up to 16 esXpress VBA virtual machines to run on a single VMware physical server. Additional Professional licenses can be purchased for $950 each. The free Professional license also comes with free installation services from Quantum.

Additional esXpress VBAs can be used to support more backup data throughput from a single physical server. The VBA backup activity is a scheduled process and as such, when completed, the VBA can be “powered down” to save VMware server resources. Also, as VBAs are just VMs, they fully support the VMotion, DRS, and HA capabilities available from VMware. However, using any of these facilities to move a VBA to another physical server may require additional licensing.

The esXpress software eliminates the need for a separate VCB (VMware Consolidated Backup) proxy server and provides a direct interface to support Quantum DXI deduplicated storage for VM backups. This should simplify backup processing for VMware VMs using DXI archive storage.

Quantum also announced today a new key manager, the Scalar Key Manager, for Quantum LTO tape encryption, whose GUI is integrated with Quantum’s tape automation products. This gives a tape automation manager a single user interface to support tape automation and tape security/encryption. A single point of management should simplify the use of Quantum LTO tape encryption.

Chart of the Month

CIFS vs. NFS Throughput Results from SPECsfs(R) 2008 benchmarks
The chart to the left was sent out in last month’s SCI newsletter and shows the throughput attained by various storage systems when running the SPECsfs(R) 2008 CIFS and NFS benchmarks. The scatter plot shows data for NAS systems that have published both NFS and CIFS SPECsfs benchmark results for the same storage system. To date (June 2009), only 5 systems have published results for both benchmarks.

The scatter plot clearly shows, with a high regression coefficient (0.97), that the same system can typically provide over 2.4X the throughput using CIFS as it can using NFS. My friends at SPECsfs would want me to point out that these two benchmarks are not intended to be comparable, and I present this with a few caveats in the newsletter:

  • NFS operations are stateless while CIFS operations are stateful
  • The distribution of file sizes differs between the two benchmarks
  • The relative proportions of the respective I/O workloads don’t match up exactly (CIFS has more reads and fewer writes than NFS)
  • All remaining (non-read/write) operations are completely different for each workload
  • Most of these results come from the same vendor (Apple), and its implementation of NFS or CIFS target support may skew results
  • Only 5 storage systems have published results for both benchmarks, which in all honesty probably does not represent a statistically valid sample
  • Usually host implementations of CIFS or NFS impact results such as these, but for SPECsfs the benchmark implements each protocol stack. As such, the SPECsfs benchmark’s implementation of the CIFS or NFS protocols may also skew results

All that being said, I believe it’s an interesting and current fact that for SPECsfs 2008 benchmarks CIFS has 2.4X the throughput of NFS.
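
For anyone curious how a figure like that falls out of the published pairs, here’s a minimal sketch of the fit. The throughput numbers below are made up for illustration; they are not the actual SPECsfs 2008 submissions, and the resulting slope and correlation only mimic the shape of the newsletter’s result.

```python
import numpy as np

# Hypothetical paired throughput results (ops/sec) for the same systems;
# made-up numbers, not the actual SPECsfs 2008 submissions.
nfs_ops  = np.array([20_000, 40_000, 60_000, 80_000, 100_000])
cifs_ops = np.array([55_000, 90_000, 150_000, 180_000, 245_000])

# Least-squares fit of CIFS throughput vs. NFS throughput through the origin
slope = np.sum(nfs_ops * cifs_ops) / np.sum(nfs_ops ** 2)
r = np.corrcoef(nfs_ops, cifs_ops)[0, 1]

print(f"CIFS ~ {slope:.1f}x NFS throughput, correlation r = {r:.2f}")
```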

In talking with real-world customers and vendors about which is better, the story seems much more mixed. I heard of one O/S that had a much better implementation of the CIFS protocol, and as such customers moved to use CIFS on those systems. I haven’t seen much discussion about storage systems being better or worse on one protocol versus the other, but it’s certainly probable.

From my perspective any storage admin looking to configure new NAS storage should try out CIFS first to see if it performs well before trying NFS. Given my inclinations, I would probably try out both, but that’s just me.

If you are interested in seeing the full report on latest SPECsfs 2008 results please subscribe to my newsletter by emailing me at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.

Otherwise the full report will be on my website sometime next week.

On Storage Benchmarks

What is it about storage benchmarks that speaks to me? Is it the fact that they always present new data on current products, that there are always some surprises, or that they always reveal another facet of storage performance?

There are some who say benchmarks have lost their way, become too politicized, and as a result, become less realistic. All these faults can and do happen, but it doesn’t have to be this way. Vendors can do the right thing if enough of them are engaged, and end users can play an important part as well.

Benchmarks exist mainly to serve the end-user community by supplying an independent, auditable comparison of storage subsystem performance. To make benchmarks more useful, end users can help ensure that they model real-world workloads. But this only happens when end users participate in benchmark organizations, understand benchmark workloads, and understand, in detail, their own I/O workloads. Which end users can afford to do this, especially today?

As a result, storage vendors take up the cause. They argue amongst themselves to define “realistic end-user workloads”, put some approximation out as a benchmark, and tweak it over time. The more storage vendors involved, the better this process becomes.

When I was a manager of storage subsystem development, I hated benchmark results. Often it meant there was more work to do. Somewhere, somehow or someway we weren’t getting the right level of performance from our subsystem. Something had to change. We would end up experimenting until we convinced ourselves we were on the right track. That lasted until we exhausted that track and executed the benchmark again. It almost got to the point where I didn’t really want to know the results – almost but not quite. In the end, benchmarks caused us to create better storage, to understand the best of the storage world, and to look outside ourselves at what others could accomplish.

Is storage performance still important today? I was talking with a storage vendor a couple of months back who said that storage subsystems today perform so well that performance is no longer a major differentiator or a significant buying consideration. I immediately thought: then why all the interest in SSDs and 8GFC? To some extent, I suppose, raw storage performance is not as much a concern today, but it will never go away completely.

Consider the automobile: it’s over a century old now (see Wikipedia) and we still talk about car performance. Perhaps it’s no longer raw speed, but a car’s performance still matters to most of us. What’s happened over time is that the definition of car performance has become more differentiated, more complex – top speed is not the only metric anymore. I am convinced that similar differentiation will happen to storage performance, and storage benchmarks must lead the way.

So my answer is yes, storage performance still matters and benchmarks ultimately define storage performance. It’s up to all of us to keep benchmarks evolving to match the needs of end-users.

Nowadays, I can enjoy looking at storage benchmarks and leave the hard work to others.