“… would consume nearly half the world’s digital storage capacity.”

A National Geographic article on recent brain research (February 2014) said something I find intriguing. “Producing an image of an entire human brain at the same resolution [as a mouse brain] would consume nearly half of the world’s current digital storage capacity.”

They were imaging slices of a mouse brain with an electron microscope, each slice one millimeter square and a micron deep, representing about a million cubic microns per image. Such a scan of the full mouse brain would require 450,000 TB (0.45 EB, where an exabyte = 10^18 bytes) of storage for the images.

Getting an equivalent resolution image of a single human brain would require 1.3 billion TB (or 1.3 ZB, where a zettabyte = 10^21 bytes). They went on to say that the world’s digital storage was just 2.7 billion TB (or 2.7 ZB), which is where they came up with the “… nearly half the world’s digital storage capacity.”

So how much digital storage is there in the world today?

Setting aside the need for such a detailed map for the moment, let’s talk about the world’s digital storage.

  • Tape – I don’t have much information about the enterprise tape capacity currently available in IBM TS1120/TS1130 or Oracle T10000C/B/A, but a relatively recent article indicated that the 225 millionth LTO cartridge shipped sometime in 3Q13, representing roughly 90,000 PB (or 90 EB, where an exabyte = 10^18 bytes) of storage capacity.
  • Disk – Although I couldn’t find a reasonable estimate of installed disk capacity, IDC reported that 2012 disk capacity shipments were 20 EB and that 24.3 EB had shipped through 3Q13. It’s probably safe to assume that ~8.3 EB or more shipped in 4Q13, so roughly 32.5 EB of disk capacity shipped in 2013. IDC also estimates that worldwide disk storage capacity is doubling every two years, so one estimate of installed disk capacity as of the end of 4Q13 is something on the order of 113.6 EB.

I won’t delve into optical storage as that’ s even more difficult to get a handle on but my guess is it’s not quite to the level of LTO digital storage so maybe another 90EB there for a total of  ~0.3ZB of digital storage in disks, LTO tape and optical.

However, back in February of 2010, researchers reported in Science that the world’s information storage capacity was 2.0 ZB. Also, last October IDC reported that the US alone had a digital storage capacity of 2.6 ZB and that the US held somewhere between 24% and 40% of the world’s storage. Using 33% for simplicity’s sake, that would put the world’s digital capacity at around 7.8 ZB according to IDC.

Thankfully, a human brain scan at the resolution above would take only about a sixth of the world’s digital storage, based on my estimates.
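To make the back-of-envelope arithmetic explicit, here’s a minimal Python sketch using only the figures quoted above (the 113.6 EB installed-disk estimate, the 90 EB LTO figure, my 90 EB optical guess, and IDC’s 2.6 ZB US number at roughly a third of the world). It’s just a sanity check of my estimates, nothing more.

```python
# Sanity check of the storage estimates above; all inputs are the figures
# quoted in this post, not independent data.
EB, ZB = 10**18, 10**21

disk_installed_eb = 113.6   # rough IDC-doubling-based installed disk estimate
tape_eb = 90.0              # ~225M LTO cartridges shipped through 3Q13
optical_eb = 90.0           # my guess: roughly on par with LTO tape

media_total_zb = (disk_installed_eb + tape_eb + optical_eb) * EB / ZB
print(f"Disk + tape + optical: ~{media_total_zb:.2f} ZB")         # ~0.29 ZB

world_zb = 2.6 * 3          # US at 2.6 ZB, assumed ~1/3 of the world (IDC)
print(f"IDC-based world capacity: ~{world_zb:.1f} ZB")            # ~7.8 ZB

brain_scan_zb = 1.3         # human brain at mouse-brain scan resolution
print(f"Brain scan fraction: ~1/{world_zb / brain_scan_zb:.0f}")  # ~1/6
```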

But we really need to talk about data reduction techniques

I think we need to start discussing some form of data reduction: data compression, fractal compression or even graphical encoding. For example, with appropriate software and compute power the neural scans could be encoded, at appropriate levels of detail, into a graphical representation. Hopefully, this would be many orders of magnitude less storage intensive, so maybe only 1/600th to 1/60,000th of all the world’s digital storage (i.e., reduction factors of roughly 100x to 10,000x applied to the one-sixth figure above).

Another approach might be to use a form of fractal compression similar to that done in motion pictures/photographic images. Perhaps, I am being naive but it seems to me that there ought to be some form of fractal encoding of neural branching. Most of nature’s branching structures have an underlying fractal basis and I see nothing in neural anatomy that would show me it’s any different.

Of course, I am not a neural biologist, but I am a storage expert and there’s got to be a way to reduce this data load somehow.

Comments?

Photo Credit: Microscopic embryonic mouse brain (DAPI, GFP) by Joseph Elsbernd

Storage changes in vSphere 5.5 announced at VMworld 2013

Pat Gelsinger keynote, VMworld 2013: vSphere 5.5 storage changes

VMworld 2013 is going on in San Francisco this week. The big news is the rollout of network virtualization in NSX and vCloud Hybrid Service (vCHS), but there were a few tidbits in the storage arena worth discussing.

  • Virtual SAN public beta – VSAN was released as a public beta and customers can now download a copy of VSAN from www.vsanbeta.com. VSAN constructs a pool of storage out of locally attached disks and flash across two or more hosts, using the flash as a read-write cache for the local disks. With VSAN, customers can elect to have multiple tiers of storage supported within a single VSAN pool, as well as different availability (replication) levels and some other, select characteristics. VSAN can easily scale in performance and capacity by just adding more hosts that have local storage. Now all those stranded server-level local disk and flash resources can be used as a VM storage pool. VMware stated that they see VSAN as useful for tier 2/tier 3 application storage and/or backup-archive storage uses. However, they showed one chart with a View Planner application simulation using a 3-host VSAN (presumably with lots of SSD and disk storage) compared against an all-flash array (vendor unknown). In this benchmark the VSAN exactly matched the all-flash external storage in performance (VMs supported). [Late update] Lots of debate on what VSAN means to enterprise storage, but it appears to be limited in scope and mainly focused on SMB applications. Chad Sakac did a (really) lengthy post on EMC’s perspective on VSAN and Software Defined Storage; if you want to know more, check it out.
  • Virsto – VMware announced GA of Virsto, which uses any external storage and creates a new global storage pool out of it. Apparently, it maps a log-structured file system across the external SAN storage and by doing so sequentializes all the random write IO coming off of ESX hosts (a rough sketch of the idea appears after this list). It supports thin provisioning, snapshots and read-write clones. One could see this as almost a write cache for VM IO activity, but read IOs are also, by definition, spread (extremely wide striped) across the storage pool, which should improve read performance as well. You configure external storage as normal and present those LUNs to Virsto, which then converts that storage pool into “vDisks” which can be configured as VM storage. Probably more to see here, but it’s available today. Before the acquisition, one had to install Virsto into each physical host that was going to define VMs using Virsto vDisks. It’s unclear how much Virsto has been integrated into the hypervisor, but over time one would assume that, like VSAN, it would be buried underneath the hypervisor and be available to any vSphere host.
  • vSphere Flash Read Cache – customers with PCIe flash cards and vCenter Ops Manager can now use them to support a read cache for data access. vSphere Flash Read Cache is apparently vMotion aware, such that as you move VMs from one ESX host to another the read cache buffer moves with them. Flash Read Cache is transparent to the VMs and can be assigned on a VMDK basis.
  • vSphere 5.5 low-latency support – it’s unclear what VMware actually did, but they now claim vSphere 5.5 supports low-latency applications, like FinServ apps. They claim to have reduced the “jitter”, or variability, in IO latency that was present in previous versions of vSphere. Presumably they shortened the IO and networking paths through the hypervisor, which should help. I suppose if you have a VMDK which ends up on SSD storage someplace, one can have a more predictable response time. But the critical question is how much overhead the hypervisor IO path adds over the base OS. With all-flash arrays now sporting latencies under 100 µsec, adding another 10 or 100 µsec can make a big difference. In VMware’s quest to virtualize any and all mission-critical apps, low-latency apps are one of the last bastions of physical server apps left to conquer. Consider this a step to accommodate them.
  • vVols – VMware keeps talking about vVols as an attempt to extend their VSAN “policy driven control plane” functionality out to networked storage, but there’s still no GA yet. The (VASA 2 or vVol) specs seem to have been out for a while now, and I have heard from at least two “major” vendors that they have support in place today, but VMware still isn’t announcing formal availability. It’s unclear what the holdup is, but maybe the specs are more in a state of flux than what’s depicted externally.
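To illustrate the log-structured idea behind Virsto mentioned above, here’s a toy Python sketch of a remapping layer that turns random logical writes into sequential log appends while keeping a logical-to-physical map for reads. This is my own simplification to show the concept, not Virsto’s actual design or code (a real implementation would also need garbage collection of stale log entries).

```python
class LogStructuredPool:
    """Toy log-structured remap layer: random logical writes become
    sequential appends to a backing log; a map tracks where each
    logical block currently lives. Not Virsto's actual design."""

    def __init__(self):
        self.log = []          # backing "SAN" storage, appended sequentially
        self.block_map = {}    # logical block address -> position in the log

    def write(self, lba, data):
        # Every write, however random its LBA, lands at the log tail.
        self.block_map[lba] = len(self.log)
        self.log.append(data)

    def read(self, lba):
        # Reads follow the map; current data may be anywhere in the log.
        pos = self.block_map.get(lba)
        return self.log[pos] if pos is not None else None


pool = LogStructuredPool()
for lba in (17, 3, 94, 3):             # a random-looking write pattern
    pool.write(lba, f"data@{lba}")
print(pool.read(3))                    # returns the most recent write to LBA 3
```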

Most of this week was spent talking about NSX, VMware’s network virtualization, and vCloud Hybrid Service. When they flashed the list of NSX partners on the screen, Cisco was absent. Not sure what this means, but perhaps there’s some concern that NSX will take revenue away from Cisco.

As for vCHS, apparently this is a VMware-run public cloud with two, now expanding to three, data centers in the US that customers can use to support their own hybrid cloud services. VMware announced that SAVVIS is now offering vCHS services as well, with data centers in NY and Chicago. There was some talk about vCHS offering object storage services like Amazon’s S3, but nothing specific about when. [Late update] Pat did mention that a future offering will provide DR-as-a-Service using vCHS as a target for SRM. That seems to match what Microsoft appears to be planning for Azure and Hyper-V DR.

That’s about it as far as I can tell. Didn’t hear any other news on storage changes in vSphere 5.5. But this is the year of network virtualization. Can’t wait to see what they roll out next year.

HP Tech Day – StoreServ Flash Optimizations

Attended HP Tech Field Day late last month in Disneyland. Must say the venue was the best ever for HP, and getting in on Nth Generation Conference was a plus. Sorry it has taken so long for me to get around to writing about it.

We spent a day going over HP’s new converged storage, software defined storage and other storage topics. HP has segmented Software Defined Data Center (SDDC) storage requirements into cost-optimized Software Defined Storage and SLA-optimized Service Refined Storage. Under Software Defined Storage they talked about their StoreVirtual product line, an outgrowth of the LeftHand Networks VSA first introduced in 2007. This June, they extended SDS to include their StoreOnce VSA product to go after SMB and ROBO backup storage requirements.

We also discussed some of HP’s work to integrate their current block storage into OpenStack Cinder, as well as some of the integrations they plan for file and object storage.

However, what I mostly want to discuss in this post is the session on how HP StoreServ 3PAR has optimized its storage system for flash.

They showed an SPC-1 chart depicting various storage systems’ IOPS levels and response times as they ramped from 10% to 100% of their IOPS rate. StoreServ 3PAR’s latest entry showed a considerable band of IOPS (25K to over 250K) all within a sub-msec response time range, which was pretty impressive since at the time no other storage system seemed able to do this across its whole range of IOPS. (A more recent SPC-1 result from HDS, with an all-flash VSP using Hitachi Accelerated Flash, was also able to accomplish this [sub-msec response time throughout the whole benchmark], only in their case it reached over 600K IOPS – read about this in our latest performance report in our newsletter, sign up above right.) The flash optimizations they discussed included:

  • Adaptive Read – As I understood it, this changes the size of backend reads to match the size requested by the front end. For disk systems, a host read of, say, 4KB often causes a 16KB read from the backend, on the assumption that the host will request additional data after the block is read off of disk; 90% of the time spent on a disk read goes to getting the head to the correct track, and once there it takes almost no extra effort to read more data. With flash, however, there is no real effort to get to the proper location to read a block of data, and as such there is no advantage to reading more data than the host requests, because if the host comes back for more one can immediately read it from flash again.
  • Adaptive Write – Similar to adaptive read, adaptive write only writes the changed data to flash. So if a host writes a 4KB block, then 4KB is written to flash. This doesn’t help much for RAID 5 because of parity updates, but for RAID 1 (mirroring) it saves on flash writes, which ultimately lengthens flash life.
  • Adaptive Offload (destage) – This changes the frequency of destaging, or flushing, cache depending on the level of write activity (see the sketch after this list). Slower destaging allows written (dirty) data to accumulate in cache when there’s not much write activity going on, which means that in RAID 5 parity may not need to be updated as often, since one could potentially accumulate a whole stripe’s worth of data in cache. In low-activity situations such destaging could occur every 200 msec, whereas with high write activity destaging could occur as fast as every 3 msec.
  • Multi-tenant IO processing – For disk drives with sequential reads, one wants the largest stripes possible (due to the head positioning penalty), but for SSDs one wants the smallest stripe sizes possible. The other problem with large stripe sizes is that devices are busy for longer while performing the stripe writes (and reads). StoreServ modified the stripe size for SSDs to be 32KB so that other IO activity doesn’t have to wait as long for its turn in the (IO device) queue. The other advantage shows up during SSD rebuilds: with a 32KB stripe size one can intersperse more IO activity on the devices involved in the rebuild without impacting rebuild performance.
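Here’s the sketch promised above: a few lines of Python paraphrasing the adaptive read and adaptive offload ideas as I understood them from the session. The sizes and thresholds are just the figures quoted above; this is my interpretation for illustration, not HP’s actual algorithm or code.

```python
def backend_read_size(host_request_kb, device_is_flash, disk_prefetch_kb=16):
    """Adaptive read, as I understood it: prefetch on disk, where the seek
    dominates the cost; read only what was asked for on flash."""
    if device_is_flash:
        return host_request_kb
    return max(host_request_kb, disk_prefetch_kb)

def destage_interval_msec(write_activity_low):
    """Adaptive offload (destage): flush cache slowly when write activity is
    low (letting full RAID 5 stripes accumulate), quickly when it is high.
    200 msec and 3 msec are the figures quoted in the session."""
    return 200 if write_activity_low else 3

print(backend_read_size(4, device_is_flash=False))       # 16 KB read from disk
print(backend_read_size(4, device_is_flash=True))        # 4 KB read from flash
print(destage_interval_msec(write_activity_low=True))    # destage every 200 msec
print(destage_interval_msec(write_activity_low=False))   # destage every 3 msec
```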

Of course, the other major advantage the HP StoreServ 3PAR architecture provides for flash is its intrinsic wide striping across a storage pool. This way all the SSDs can be used optimally and equally to service customer IO.

I am certain there were other optimizations HP made to support SSDs in StoreServ storage, but these are the ones they were willing to talk publicly about.

No mention of when Memristor SSDs will be available, but stay tuned; HP let slip that sooner or later Memristor storage will show up in HP storage and servers.

Comments?

Photo Credits: (c) 2013 Silverton Consulting, Inc

Has latency become the key metric? SPC-1 LRT results – chart of the month

I was at EMCworld a couple of months back and they were showing off a preview of the next version of VNX storage, which was trying to achieve a million IOPS at under a millisecond latency. Then I attended NetApp’s analyst summit, where the discussion at their flash seminar was how latency is changing the landscape of data storage and how flash latencies are going to enable totally new applications.

One executive at NetApp mentioned that IOPS was never the real problem. As an example, he cited one large oil & gas firm whose peak workload was 35K IOPS.

Also, there was some discussion at NetApp of trying to come up with a way of segmenting customer applications by latency requirements.  Aside from high frequency trading applications, online payment processing and a few other high-performance database activities, there wasn’t a lot that could easily be identified/quantified today.

IO latencies have been coming down for years now. Sophisticated disk-only storage systems have been lowering latencies for a decade or more. But since the introduction of SSDs it’s been a whole new ballgame. For proof, all one has to do is examine the top 10 SPC-1 LRT (least response time, measured with workloads at 10% of peak activity) results.

Top 10 SPC-1 LRT results, SSD system response times

 

In looking over the top 10 SPC-1 LRT benchmarks (see figure above) one can see a general pattern. These systems mostly use SSD or flash storage, except for the TMS-400, TMS 320 (IBM FlashSystems) and Kaminario’s K2-D, which primarily use DRAM as their storage medium (with backup to other media).

Hybrid disk-flash systems seem to start with an LRT of around 0.9 msec (not on the chart above).  These can be found with DotHill, NetApp, and IBM.

Similarly, you have to get to as “slow” as 0.93 msec before you can find any disk-only storage systems, and most disk-only storage comes in with a latency of 1 msec or more. Between 1 and 2 msec LRT we see storage from EMC, HDS, HP, Fujitsu, IBM, NetApp and others.

There was a time when the storage world was convinced that to get really good response times you had to have a purpose-built storage system like TMS or Kaminario, or stripped-down functionality like IBM’s Power 595. But it seems that the general-purpose HDS HUS, IBM Storwize, and even Huawei OceanStore are all capable of providing excellent latencies with all-SSD storage behind them, and all seem to perform at least in the same ballpark as the purpose-built TMS RamSan-620 SSD storage system. These general-purpose storage systems have just about every advanced feature imaginable, with the exception of mainframe attach.

It seems nowadays that there is a trifurcation of latency results going on, based on the underlying storage (a small classification sketch follows this list):

  • DRAM-only systems at ~0.4 msec down to ~0.1 msec;
  • SSD/flash-only storage at 0.7 msec down to 0.2 msec;
  • Disk-only storage at 0.93 msec and above.
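A trivial way to express that trifurcation in code, using just the bands observed in these SPC-1 LRT results (the DRAM and flash bands overlap, so the cutoffs below are only indicative, not a formal rule):

```python
def latency_tier(lrt_msec):
    """Rough classification based on the SPC-1 LRT bands noted above.
    The DRAM and flash bands overlap, so the cutoffs are only indicative."""
    if lrt_msec <= 0.2:
        return "DRAM-class"        # ~0.1 to 0.4 msec observed
    elif lrt_msec < 0.93:
        return "SSD/flash-class"   # ~0.2 to 0.7 msec observed
    else:
        return "disk-class"        # 0.93 msec and above

for lrt in (0.15, 0.55, 1.2):
    print(f"{lrt} msec -> {latency_tier(lrt)}")
```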

The hybrid storage systems are attempting to mix the economics of disk with the speed of flash storage and seem to be contending with all of these single-technology storage solutions.

It’s a new IO latency world today. SSD-only storage systems are now available from every major storage vendor and many of them are showing pretty impressive latencies. Now, with fully functional storage latency below 0.5 msec, what’s the next hurdle for IT?

Comments?

Image: EAB 2006 by TMWolf

 


EMCworld 2013 Day 3

Rich Napolitano, President of the Unified Storage Division, got up and showed some technology demonstrations of what they had working in their labs, with some of his long-time engineers up on stage to run the demos.

  • First up was a dual-controller system with two 8-core processors per controller (32 cores in all) running against an all-SSD backend. The configuration was up for only a short time, but it looked like 96 SSDs, so an all-flash VNX array. They used Iometer with random 8KB IO to drive almost 975K IOPS at sub-msec response time, and hit 1M IOPS at just slightly above 1 msec response time. You could see the processor utilization of the 32 cores going up as the workload reached higher levels. I couldn’t see precisely, but all the cores were running at ~70-80% busy at the 1M IOPS level and it seemed like the system was entering the knee of the curve (see the queueing sketch after this list).
  • Next up was the new VNX data app store demonstration. Similar to the iPhone and Android app stores, EMC has identified a select set of apps that can be run directly on VNX hardware. The current demonstration had two versions of anti-virus, RecoverPoint Virtual Appliance (vRPA), (v?)VPLEX, CloudAccess and MySQL server. The engineers showed how AV software could be installed and run on the VNX, as well as how vRPA could be installed to provide onboard replication services.
  • Then they demonstrated a VNX virtual appliance (vVNX?) which was able to run on a white-box server that I think was running ESX. In this case, the vVNX was running with onboard DAS storage but had all the advanced functionality of VNX.
  • Finally, they showed a vVNX running in a cloud services environment. Not sure if this was VMware vCloud or some other compute cloud but Rich stated that they will support many clouds.  With vVNX running in the cloud accessing storage behind the compute engine it’s unclear what the performance would be and how one would access the storage (file or iSCSI no doubt) but it did open up new possibilities as to where one could run VNX services.
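The “knee of the curve” comment in the first demo above is just classic queueing behavior. As a textbook illustration (a simple M/M/1 approximation with a made-up per-IO service time, not EMC’s actual model), response time grows roughly as service time divided by (1 - utilization), so it climbs steeply somewhere past 70-80% busy:

```python
# Textbook M/M/1 knee-of-the-curve illustration; not EMC's model.
service_time_msec = 0.25   # assumed per-IO service time, purely illustrative

for utilization in (0.5, 0.7, 0.8, 0.9, 0.95):
    response = service_time_msec / (1.0 - utilization)
    print(f"{utilization:.0%} busy -> ~{response:.2f} msec response time")
```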

It’s readily apparent that the next iteration of VNX software seems focused on taking advantage of multi-core processing (called MCx) to boost storage system performance, providing a virtualized environment within the VNX engine to run specialized data services and supplying a new vVNX functionality which can be deployed just about anywhere you would want.

That’s all for the public sessions, spent much of the rest of the day in NDA sessions.

I had a good time at EMCworld 2013, seeing old friends again and meeting new ones, and I thank EMC for inviting me. For information on previous days at EMCworld 2013 please see my Day 1 and Day 2 posts.

EMCworld 2013 day 1

Lines for coffee at the Cafe were pretty long this morning and I missed my opportunity to have breakfast and do some work. But I eventually made my way to the press room and got some food and coffee.

Spent the morning in Analyst sessions mostly under NDA but it seems safe to say that EMC sees plenty of opportunity ahead.

The first session, a Q&A with BRS executives and customers, was enlightening, but the main message from the customers was that data protection is hard, legacy systems often can’t adjust quickly enough, and sometimes a completely new architecture is warranted. The executives were upbeat about current BRS business and where they were headed in the future.

The rest of the morning was with Jeremy Burton, EVP Product, Operations and Marketing, and John Roese, the new SVP and CTO of EMC (6 months on the job). Jeremy talked about an IDC insight that there’s a new world emerging: so-called 3rd platform applications based on mobile and consumer-grade technology, with literally billions of users and millions of apps built on mobile-cloud-big data-social infrastructure, which complements the 2nd platform built on LAN/WAN, client-server frameworks.

For an example of this environment Jeremy mentioned that AT&T provisions 12PB of storage a month.

What’s needed for this new platform is a new type of storage built for the 3rd platform but taking advantage of current enterprise storage characteristics. This is ViPR (more on that later).

John comes by way of Huawei, Nortel and a myriad of others and offers a broad insight into the way forward for EMC. It looks like a bright future ahead if they can do half of what John has outlined.

John talked about the intersections between the carrier market (or services), enterprise IT and consumer market.  There is convergence between these regions and at each of these intersections new technology is going to answer many of the problems which exist. For instance in the carrier space:

  • The amount of information they gather is frightening; they know everything about you. Pivotal will be key here because it’s good at 1) correlating information across different information sources (most carriers have a whole bunch of disparate information stores); and 2) treating Big Data not just as a non-realtime problem but providing realtime analytics as well.
  • Capital costs are going down, but $/bit is going way down. VMware and the software-defined data center are the right way to drive down costs. Today servers are ~50% virtualized, but networking is not virtualized at all.
  • Customers are dissatisfied with service providers (carriers). Again Pivotal is key here. One carrier customer was focused on customer churn and tried to figure out how to minimize it. They used GemFire’s high-speed infrastructure to watch all transactions on the cell tower infrastructure, pick out dropped calls, send them to Greenplum, correlate them with the customer’s attributes (good or bad), and within 100 msec supply an interaction with the customer to apologize and offer some services to make it better.
  • The Internet is the new wild west – use at your own risk: spoofed websites, email that could be from anyone, chaos for security. RSA can become the trusted internet provider by looking at the internet holistically, combining information from many customers, and aggregating and sharing these interactions to determine the trust of every transaction. Trust is becoming a new big data problem.
  • Hybrid and public cloud is their biggest opportunity but they don’t know how to attack it. VMware and SDDC will evolve to provide orchestrated movement from private to public and closed to open.

The thinking seems pretty straightforward given what they are trying to accomplish and the framework he applied to EMC’s strategy going forward made a lot of sense.

Brian Gallagher did a keynote on enterprise storage new functions and features which covered VMAX, VPLEX, RecoverPoint, and XtremIO/SF/SW. He mentioned the RecoverPoint virtual appliance and gave a sort of statement of direction on being able to move application functionality directly onto VMAX. He kind of demoed this with VPLEX running on VMAX.

He also talked about FAST’s speed of reaction versus the competition and mentioned that FAST can provide information about storage tiering to up to 4 different VMAX arrays. He showed a comparison of the VMAX 10K against another prime competitor that looked downright embarrassing, and talked about VMAX Cloud Edition.

After that came 1-on-1 meetings, all under strict NDA. Then the big keynote with Jeremy again and David Goulden, President and COO, on ViPR. They have implemented software defined storage (SDS). Last week I did a post on SDS trying to lay out some of the problems and promises of SDS (please see The promise of SDS post).

But what I had missed was the data path transformation that ViPR can do to provide object and HDFS access to traditional and commodity storage systems. ViPR starts out primarily in the control layer, providing automated provisioning and self-management across heterogeneous storage pools. With ViPR one can define virtual storage arrays and then configure virtual storage pools across those arrays regardless of the physical infrastructure underneath them.
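To make the control-plane idea a bit more concrete, here’s a purely hypothetical Python sketch of what “virtual storage arrays” and “virtual storage pools” layered over heterogeneous physical arrays might look like as a data model. None of the class or field names come from ViPR; they are my own illustration of the concept, not EMC’s API.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalArray:
    name: str              # e.g. a VMAX, VNX, Isilon or commodity box
    capacity_tb: float

@dataclass
class VirtualPool:
    name: str                                      # a service level, e.g. "gold"
    members: list = field(default_factory=list)    # physical arrays backing the pool

    def capacity_tb(self):
        return sum(a.capacity_tb for a in self.members)

@dataclass
class VirtualArray:
    name: str                                      # a connectivity/fault domain
    pools: dict = field(default_factory=dict)

    def provision(self, pool_name, size_tb):
        # Control-plane decision only: pick a pool with enough capacity;
        # the data path stays with (or is transformed above) the arrays below.
        pool = self.pools[pool_name]
        assert size_tb <= pool.capacity_tb(), "not enough capacity"
        return f"{size_tb}TB volume from pool '{pool_name}' in varray '{self.name}'"

varray = VirtualArray("site-A")
varray.pools["gold"] = VirtualPool("gold", [PhysicalArray("vmax-1", 500.0)])
print(varray.provision("gold", 10))
```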

More on ViPR in a separate post, but suffice it to say EMC has been working on this for a while now. How it’s positioned with VPLEX and the other storage virtualization capabilities in VMAX and other products is another matter. But it seems they are carving out a space for ViPR between and above the current storage solutions.

End of day one is in the Expo and then cocktail parties… stay tuned for day 2.

 

The antifragility of disk RAID groups, the fragility of SSDs and what to do about it

Hard Disk by Jeff Kubina (cc) (from Flickr)

[A long post today] I picked up the book Antifragile: Things That Gain from Disorder, by Nassim N. Taleb and, despite trying to put it away at least 3 times now, can’t stop turning back to it. In his view fragility is defined by having a negative (or bad) response to variation, volatility or randomness in general. Antifragile is the exact opposite of fragile in that it has a positive (or good) response to more variation, volatility or randomness. Somewhere between antifragility and fragility is robustness, which has neither a positive nor a negative response (is indifferent) to high volatility, variation or randomness.

Why disks are robust …

To me there are plenty of issues with disks. To name just a few:

  • They are energy hogs,
  • They are slow (at least in comparison to SSDs and flash memory), and
  • They are mechanical contrivances which can be harmed by excess shock/vibration.

But, aside from their capacity benefits, they have a tendency to fail at a normalized failure rate unless there is a particular (special) problem with media, batch, electronics or micro-programming. I saw plenty of these other types of problems at StorageTek over the years, so I know there are many things that can disturb disk failure rate normalization. However, in general, absent some systematic cause of failure, disks fail at a predictable rate with a relatively wide distribution (although, being away from the engineering of storage systems, I have no statistics for the standard deviation of disk failures – it just feels right [Nassim would probably disavow me for saying that]).

The other aspect of disk robustness is that as drives degrade over time, they seem to get slower and louder. The former is predominantly due to defect skipping, an error recovery procedure for bad blocks. And they get louder as bearings start to wear out, signaling imminent failure ahead.

In defect skipping, when a disk drive detects a bad block, the drive marks the block as bad and uses a spare block somewhere else on the disk for all subsequent writes. The new block is typically “far” away from the old block, so when reading multiple blocks the drive now has to seek to the new block and back again to read them, increasing response time in the process.

The other phenomenon disk drives exhibit is the head crash. These seem to occur completely at random with disks from “mature processes”.

So, I believe disks from mature processes have a normalized failure rate with a reasonably wide standard deviation around their MTBF. As such, disk drives should be classified as robust.

… and RAID groups of disk drives are antifragile

But, while disk drives are robust, when you place such devices in a RAID group with others, the RAID group survives better. As long as the failure rate of the devices is randomized and there is a wide variance in this failure rate, when a RAID group encounters a single drive failure it is unlikely that a second, or third (RAID DP/6), drive will also fail while trying to recover from the first. (Yes, as disk drives get larger the time to recover gets longer, thus increasing the probability of multiple drive failures, but absent systematic causes of drive failure, data loss should be rare.)
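A crude way to see why wide, independent failure distributions help: if you assume independent exponential drive lifetimes (a big simplification), the chance that one of the remaining drives fails during the rebuild window is roughly 1 - exp(-(n-1) × T_rebuild / MTBF). A minimal sketch, with made-up MTBF and rebuild-time numbers purely for illustration:

```python
import math

def p_second_failure(n_drives, mtbf_hours, rebuild_hours):
    """P(at least one of the n-1 surviving drives fails during rebuild),
    assuming independent exponential lifetimes -- a rough approximation."""
    return 1.0 - math.exp(-(n_drives - 1) * rebuild_hours / mtbf_hours)

# Illustrative numbers only: an 8-drive group, 1M-hour MTBF,
# 12-hour vs 48-hour rebuilds (bigger drives take longer to rebuild).
for rebuild in (12, 48):
    p = p_second_failure(8, 1_000_000, rebuild)
    print(f"{rebuild}-hour rebuild: P(second failure) ~ {p:.5f}")
```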

In a past life we had multiple disk systems in a location subject to volcanic activity. Somehow, sulfurous fumes from the volcano had found their way into the computer room and were degrading the optical transceivers in our disk drives, causing drive failures. The subsystem at the time had RAID 6 (dual parity) and over the course of a few weeks we had 20 or more disk drives die in these systems. The customer lost no data during this time, but only because the disk drive failures were randomly distributed over time with a wide dispersion.

So, by Nassim’s definition, disk RAID groups are antifragile; they do operate better with more randomness.

Why SSD and SSD RAID groups are fragile

Toshiba’s New 2.5″ SSD from SSD.Toshiba.com

SSDs have a number of good things going for them. For example:

  • They are blistering fast, at least compared to rotating disks.
  • They are relatively green storage devices, meaning they use less energy than rotating disks.
  • They are semiconductor devices and as such are relatively immune to shock and vibration.

Despite all that, given today’s propensity to use wear leveling, RAID groups composed of SSDs can exhibit fragility because all the SSDs will fail at approximately the same number of program/erase (P/E) cycles.

My assumption is that, because NAND wear-out is essentially an electro-chemical phenomenon, its failure rate, while normally distributed, probably has a very narrow variance. Now, given the technology, NAND pages will fail after so many writes; it may be 10K, 30K or 100K (for MLC, eMLC, or SLC), but all the NAND pages from the same technology (manufactured on the same fab line) will likely fail at about the same number of P/E cycles. With wear leveling equalizing the P/E cycles across all pages in an SSD, this means there is some number of writes that an SSD will endure and then go no farther. (Again, I have no hard statistics to support this presumption and Nassim will probabilistically not be pleased with me for saying this.)

As such, for a RAID group made up of wear-leveling SSDs, especially with data striping across the group, all the SSDs will probabilistically fail at almost the same time because they all will have had the same amount of data written to them. This means that as we reach wear-out on one SSD in the group, assuming all the others were also fresh when the group was originally created, all the other devices will be near wear-out too. As a result, when one SSD fails, the others in the RAID group will have a high probability of failure, leading to data loss.
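A small Monte Carlo sketch of the argument (my own illustration, with made-up numbers): give each device in an 8-drive group a normally distributed lifetime and compare how close the first and second failures land when the spread is wide (disk-like) versus narrow (wear-leveled SSDs written in lockstep).

```python
import random

def gap_between_first_two_failures(n_devices, mean_life, sigma, trials=10_000):
    """Average time between the 1st and 2nd device failure in a group,
    assuming independent normal lifetimes -- a rough illustration only."""
    total_gap = 0.0
    for _ in range(trials):
        lifetimes = sorted(random.gauss(mean_life, sigma) for _ in range(n_devices))
        total_gap += lifetimes[1] - lifetimes[0]
    return total_gap / trials

random.seed(42)
# Same mean lifetime (~5 years in hours), very different spreads.
print("disk-like (wide spread): ", gap_between_first_two_failures(8, 43_800, 8_000))
print("SSD-like (narrow spread):", gap_between_first_two_failures(8, 43_800, 400))
```

With the narrow spread, the second failure follows the first almost immediately, which is exactly the window in which a rebuild has to complete.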

I have written about this before, see my Potential data loss using SSD RAID groups post for more information.

What can we do about the fragility of SSD RAID groups?

A couple of items come to mind that can be done to reduce the fragility of a RAID group of SSDs:

  • Intermix older and newer (fresher) SSDs in a single RAID group so they don’t all fail at the same time.
  • Don’t use data striping across RAID groups of SSDs; this would allow some devices to be written more than others and thereby introduce some randomness into the SSD failures in the group.
  • Don’t use RAID 1, as this will always cause the same number of writes to be written to each SSD in a pair.
  • Don’t use RAID 5 or other protection methodologies that spread parity writes across the group; using these would be akin to data striping in that all parity writes would be spread evenly across the group.
  • Consider using different technology SSDs in a RAID group; intermixing MLC, eMLC and SLC drives in a RAID group would have the effect of varying the SSD failure rates.
  • Move away from wear leveling to defect skipping; while doing so will cause some SSDs to fail earlier than today, their failure rates will be more randomly distributed.

The last one probably deserves some discussion. There are many reasons for wear leveling, one of which is to speed up writes (by always having a fresh page to write to); another is that NAND blocks cannot be updated in place, they need to be erased before being rewritten. But another major reason is to distribute write activity across all NAND pages to equalize wear-out.

In order to speed up writing sans wear leveling, one would need some sort of DRAM buffer to absorb the write activity and later destage it to NAND when available. The inability to update in place is more problematic, but could potentially be dealt with by using the same DRAM cache to read in the previous information and write back the updates. Other solutions to this latter problem exist but seem to be more trouble than they are worth.

But for the aspect of wear leveling done to equalize NAND page wearout, I believe there’s a less fragile solution.  If we were to institute some form of defect skipping with a certain amount of spare NAND pages, we could easily extend the life of an SSD, at least until we run out of spare pages.

Today, a considerable amount of spare capacity ships with most SSDs, over 10% in most enterprise-class storage and more with consumer grade. With this much spare capacity, a single NAND logical block could be rewritten an awfully high number of times. For instance, using defect skipping with a 100GB MLC SSD at 10K write endurance, 10% spare pages and a 1MB page size, one single logical block address could be written ~100 million times (assuming no other pages were being written beyond their maximum).
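Checking that arithmetic with a quick sketch (the 100GB, 10K P/E, 10% spare and 1MB page figures are the ones used above):

```python
# Defect-skipping endurance for a single hot logical block, per the figures
# above: 100GB MLC SSD, 10K P/E cycles, 10% spare pages, 1MB page size.
capacity_bytes = 100 * 10**9
page_bytes = 1 * 10**6
pe_cycles = 10_000
spare_fraction = 0.10

spare_pages = int(capacity_bytes * spare_fraction / page_bytes)   # 10,000 spare pages
writes_for_one_hot_lba = (spare_pages + 1) * pe_cycles            # original page + spares
print(f"{spare_pages:,} spare pages -> ~{writes_for_one_hot_lba:,} writes")
# -> ~100,010,000 writes, i.e. roughly 100 million
```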

The main advantage is that SSD failure rates would now be more widely distributed. Yes, there would be more early-life failures, especially for SSDs that get hit a lot. But they would no longer fail in unison at some magical write level.

Making SSDs less fragile

While doing all the above may help a RAID group full of SSDs be less fragile, addressing the inherent fragility of an SSD itself is more problematic. Nonetheless, some ideas do come to mind:

  • Randomly mix NAND chips from different fabs/vendors; the SSDs that use this intermixture could then have a more randomly distributed failure rate, which should increase the standard deviation of MTBF.
  • Use different NAND technologies in an SSD, say MLC for the bulk of the storage capacity and SLC for the defect-skip capacity (with no wear leveling). Doing this would elongate the lifetime of the average SSD and randomly distribute SSD failures based on write locality of reference, thereby increasing the standard deviation of MTBF. Of course, this would also have the effect of speeding up access to heavily written blocks, now coming out of SLC rather than slower MLC, making these SSDs even faster for those blocks which are written more frequently.
  • Use more random, less deterministic predictive maintenance. SSD predictive maintenance is used to limit the damage from a failing SSD by replacing it before death. By using less deterministic, more randomized algorithms (for instance, varying how close to wear-out we let an SSD get before signaling failure) we would increase the variance of failure times.

This post is almost too long now but there are probably other ideas to increase the robustness of SSDs and PCIe Flash cards that deserve mention someplace. Maybe we can explore these in a subsequent post.

Comments?

[Full disclosure: I have a number of desktops that use single disk drives (without RAID) that are backed up to other disk drives. I own and use a laptop, iPads, and an iPhone that all use SSDs or NAND technology (without RAID). I own neither disk nor SSD storage subsystems.]

 

SNWUSA Spring 2013 summary

For starters, the parties were a bit more subdued this year, although I heard Wayne’s suite was hopping until 4am last night (not that I would ever be there that late).

And a trend seen over the past couple of years was even more evident this year: many large vendors and vendor spokespeople went missing. I heard there were a lot more analyst presentations at this SNW than prior ones, although it was hard to quantify. But it did seem that the analyst community was pulling double duty in presentations.

I would say that SNW still provides a good venue for storage marketing across all verticals. But these days many large vendors find success elsewhere, leaving the SNW Expo mostly to smaller vendors and niche products. Nonetheless, there were a few big vendors (Dell, Oracle and HP) still in evidence. But EMC, HDS, IBM and NetApp were not showing on the floor.

I would have to say the theme for this year’s SNW was hybrid storage. Last fall the products that impressed me were either cloud storage gateways or all-flash arrays; this year there weren’t as many of those at the show, but hybrid storage has certainly arrived.

Best hybrid storage array of the show

It’s hard to pick just one hybrid storage vendor as my top pick, especially since there were at least 3 others talking to me under NDA, but from my perspective the hybrid vendor of the show had to be Tegile (pronounced, I think, as te’-jile). They seemed to have a fully functional system with snapshots, thin provisioning, deduplication and pretty good VMware support (the only time I have heard a vendor talk about VASA “stun” support for thin provisioned volumes).

They made the statement that SSD in their system is used as a cache, not a tier. This use is similar to NetApp’s FlashCache and has proven to be a particularly well-performing approach to hybrid storage. (For more information on that, take a look at some of my NFS and recent SPC-1 benchmark review dispatches.) How well this is integrated with their home-grown dedupe logic is another question.

On the negative side, they seem to be lacking a true HA/dual-controller version, though they could use two separate systems with synchronous (I think) replication between them to cover this ground. They also claimed their dedupe has no performance penalty, a pretty bold claim that cries out for some serious lab validation and/or benchmarking to prove. And they offer an all-flash version of their storage (but then how can the SSD be used as a cache?).

The marketing team seemed pretty knowledgeable about the market space and they seem to be going after the mid-range storage segment.

The product supports file (NFS and CIFS/SMB), iSCSI and FC with GigE, 10GbE and 8Gbps FC. They quote “effective” capacities with dedupe enabled but it can be disabled on a volume basis.

Overall, I was impressed by their marketing and the product (what little I saw).

Best storage tool of the show

Moving on to other product categories, it was hard to see anything that caught my eye. Perhaps I have just been to too many storage conferences, but I did get somewhat excited when I looked at SwiftTest. Essentially, they offer an application profiling, storage modeling and workload generation tool set.

The team seems to be branching out of their traditional vendor market focus and going after large service providers and F100 companies with large storage requirements.

Way back, when I was in engineering, we were always looking for information on how customers actually used storage products. Well, what SwiftTest has is an appliance that instruments your application environment (through network taps/hardware port connections) to monitor your storage IO and create a statistical operational profile of your I/O environment. It then takes that profile and plays it against a storage configuration model to show how well the configuration is likely to perform. And if that’s not enough, the same appliance can be used to drive a simulated version of the operational profile back against a real storage system.

It offers NFS (v2, v3, v4), CIFS/SMB (SMB1, SMB2, SMB3), FC, iSCSI, and HTTP/REST (what, no FCoE?). They mentioned an $80K price tag for the base appliance (one protocol?), but that grows pretty fast if you want all of them. They also seem to have three levels of appliances (my guess is more performance and more protocols come with the bigger boxes).

Not sure where they top out, but simulating an operational profile can be quite complex, especially when you have to be able to control data patterns to match the deduplication potential in customer data, drive Markov chains with probabilistic representations of operational profiles, and actually execute the IO operations. They said something about their ports having dedicated CPU cores to ensure adequate performance, or something similar, but it still seems too little to hit high IO workloads.
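As a flavor of what a Markov-chain driven workload generator involves (my own toy sketch, not SwiftTest’s implementation), take a transition matrix over operation types, as measured from an operational profile, and walk it to emit a synthetic IO stream. The probabilities below are made up, standing in for what a measured profile would supply.

```python
import random

# Toy Markov-chain workload generator; the transition probabilities are
# invented here, standing in for a measured operational profile.
ops = ["read", "write", "metadata"]
transitions = {
    "read":     {"read": 0.70, "write": 0.20, "metadata": 0.10},
    "write":    {"read": 0.30, "write": 0.60, "metadata": 0.10},
    "metadata": {"read": 0.50, "write": 0.30, "metadata": 0.20},
}

def generate(n_ops, start="read"):
    """Walk the transition matrix and emit a synthetic stream of operations."""
    op, stream = start, []
    for _ in range(n_ops):
        stream.append(op)
        probs = transitions[op]
        op = random.choices(ops, weights=[probs[o] for o in ops])[0]
    return stream

random.seed(1)
print(generate(10))
```

A real tool would also have to attach block sizes, offsets and data patterns (for dedupe realism) to each operation, which is where the complexity piles up.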

Like I said, when I was in engineering we were searching for this type of solution back in the late ’90s and would probably have bought it in a moment, had it been available.

GoDaddy.com, the domain/web site services provider, was one of the customers that used the appliance to test storage configurations. They presented at SNW on some of their results, but I missed their session (the case study is available on SwiftTest’s website).

~~~~

In short, SNW had a diverse mixture of end-user customers but lacked a full complement of vendors to show off to them. The ratio of vendors to customers has definitely shifted toward end-users over the last couple of years and, if anything, has gotten more skewed to end-users (which paradoxically should appeal to more storage vendors?!).

I talked with lots of end-users, from companies like FedEx, Northrop Grumman and AOL to name just a few big ones. But there were plenty of smaller ones as well.

The show lasted three days and had sessions scheduled all the way to the end. I was surprised at the length and the fact that it started on Tuesday rather than Monday as in years past.  Apparently, SNIA and Computerworld are still tweaking the formula.

It seemed to me there were more cancelled sessions than in years past but again this was hard to quantify.

Some of the customers I talked with thought SNW should go to once a year and couldn’t understand why it’s still twice a year. Many mentioned VMworld as having taken the place of SNW as a showplace for storage vendors of all sizes and styles, along with the vendor-specific shows from EMC, IBM, Dell and others.

The fall show is moving to Long Beach, CA. Probably, a further experiment to find a formula that works.  Let’s hope they succeed.

Comments?