IT in space

Read an article last week about all the startup activity that’s taking place in space systems and infrastructure (see: As rocket companies proliferate … new tech emerges leading to a new space race). This is a consequence of cheap(er) launch systems from SpaceX, Blue Origin, Rocket Lab and others.

SpaceBelt, storage in space

One startup that caught my eye was SpaceBelt from Cloud Constellation Corporation, which is planning to put petabytes (~4X the Library of Congress) of data storage in a constellation of LEO satellites.

The LEO storage pool will be populated by multiple nodes (satellites) with a set of geo-synchronous access points to the LEO storage pool. Customers use ground based secure terminals to talk with geosynchronous access satellites which communicate to the LEO storage nodes to access data.

Their main selling points appear to be data security and availability. The only way to access the data is through secured satellite downlinks/uplinks and then you only get to the geo-synchronous satellites. From there, those satellites access the LEO storage cloud directly. Customers can’t access the storage cloud without going through the geo-synchronous layer first and the secured terminals.

The problem with terrestrial data is that it is prone to security threats as well as natural disasters that can take out a data center or a region. But with all your data residing in a space cloud, such concerns shouldn't be a problem. (However, gaining access to your ground stations is a whole different story.)

AWS and Lockheed-Martin supply new ground station service

The other company of interest is not a startup but a link up between Amazon and Lockheed Martin (see: Amazon-Lockheed Martin …) that supplies a new cloud based, satellite ground station as a service offering. The new service will use Lockheed Martin ground stations.

Currently, the service is limited to S-Band and antennas located in Denver, but plans are to expand to X-Band and locations throughout the world. The plan is to have ground stations located close to AWS data centers, so data center customers can have high-speed access to satellite data.

There are other startups in the ground station as a service space, but none with the resources of Amazon-Lockheed. All of this competition is just getting off the ground, but a few have been leasing idle ground station resources to customers. The AWS service already has a few big customers, like DigitalGlobe.

One thing we have learned is that the appeal of cloud services is as much about the ecosystem that surrounds them as the service offering itself. So having satellite ground stations as a service is good, but having these services tied directly into other public cloud computing infrastructure is much, much better. Google, Microsoft, IBM: are you listening?

Data centers in space

Why stop at storage? Wouldn't it be better to support both storage and computation in space? That way access latencies wouldn't be a concern. When terrestrial disasters occur, it's not just data at risk. Ditto for security threats.

Having whole data centers in orbit would represent a whole new stratum of cloud computing. Also, IT could then implement space-native applications.

If Microsoft can run a data center under the oceans, I see no reason they couldn't do so in orbit, especially once human spaceflight returns to NASA/SpaceX. Just imagine admins and service techs as astronauts.

And yet, security and availability aren't the only threats one has to deal with. What happens to the space cloud when war breaks out and satellite killers are set loose?

Yes, space infrastructure is not subject to terrestrial disasters or internet-based security risks, but besides those and war there are other problems, such as solar storms and space debris clouds.

In the end, it’s important to have multiple, non-overlapping risk profiles for your IT infrastructure. That is, each IT deployment may be subject to one set of risks, but those sets should be disjoint from the risks of another deployment option. IT in space, subject to solar storms, space debris, and satellite killers, is a nice complement to terrestrial cloud data centers, which are subject to natural disasters, internet security risks, and other earth-based, man-made disasters.

On the other hand, a large solar storm like the 1859 Carrington event could knock out every data system on the ground or in orbit. As for under the sea, it probably depends on how deep it's submerged!

Photo Credit(s): Screen shots from SpaceBelt youtube video (c) SpaceBelt

Screen shots from AWS Ground Station as a Service sign-up page (c) Amazon-Lockheed

Screen shots from Microsoft’s Under the sea news feature (c) Microsoft

Western Digital at SFD15: ActiveScale object storage

Phill Bullinger and his staff from Western Digital presented at Storage Field Day 15 (SFD15) on a number of their enterprise products, including Tegile and IntelliFlash, but the one that caught my interest was their ActiveScale object store, acquired from Amplidata back in 2015.

ActiveScale is an on-prem object storage system that provides cloud-like economics for customer data.

ActiveScale Hardware

ActiveScale systems can both scale up and scale out within a single site. They have both storage and system nodes: storage nodes perform erasure coding, while system nodes are control points and metadata managers for the object store.

ActiveScale comes in two appliance configurations, each of which contains the storage nodes, system nodes and storage required.  The two appliances are:

  • ActiveScale P100 is a 7U, 720TB pod system. A full rack of P100s can read 8GB/sec and offers 17-9s data availability. The P100 can scale up to 2.1PB in a single rack and up to 18PB in the same namespace. The P100 is the higher performing solution, with better performing storage and system nodes.
  • ActiveScale X100 is a 42U rack scale solution that holds up to 588 12TB drives or 5.8PB per rack. The X100 can scale up to 9 racks or 52PB in the same namespace. The X100 is a denser configuration with only 6 storage nodes and as such, has a better $/GB than the P100 above.
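
A quick back-of-the-envelope check of the capacity claims in the two bullets above (my own arithmetic, not WDC's numbers):

    # Rough capacity check of the P100/X100 figures quoted above (my math, not WDC's).
    TB = 1
    PB = 1000 * TB

    # P100: 720TB per 7U pod, 2.1PB per rack, 18PB per namespace
    p100_pods_per_rack = 2.1 * PB / (720 * TB)       # ~2.9, i.e. about 3 pods per rack
    p100_racks_per_namespace = 18 * PB / (2.1 * PB)  # ~8.6, i.e. roughly 9 racks per namespace

    # X100: 588 x 12TB drives per rack, quoted at 5.8PB per rack and 52PB per namespace
    x100_raw_per_rack = 588 * 12 * TB                # ~7.1PB raw, before erasure coding overhead
    x100_namespace = 9 * 5.8 * PB                    # 9 racks * 5.8PB ~= 52PB, matching the quoted max

    print(p100_pods_per_rack, p100_racks_per_namespace, x100_raw_per_rack / PB, x100_namespace / PB)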

As WDC is both the supplier of the ActiveScale appliance and a supplier of disk storage, they can be fairly aggressive with pricing on these appliance systems.

Data integrity in ActiveScale

They make a point of saying that ActiveScale object metadata and data are stored separately. By separating data and metadata, they claim to be more resilient to system failures. Object metadata is 3-way replicated, in a replicated database residing in the system nodes. Other object systems often store metadata and object data together.

Object data can be erasure coded. That is, object data is chunked, erasure coding protected and then spread across multiple disk drives for data protection. ActiveScale erasure coding is called BitSpread. With BitSpread customers identify the number of disk drives to spread object data across and the number of drive failures the system should recover from without data loss.

A typical BitSpread configuration splits object data into 18 chunks and spreads these chunks across storage columns. A storage column is from 6-18 storage nodes. There’s no pre-allocated space in BitSpread. Object data chunks are allocated to disk storage based on current capacity and performance of the system, within redundancy constraints.
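
To make the BitSpread parameters a little more concrete, here's a minimal sketch of the spread math (my own illustration, not WDC code; the 4-failure tolerance is an example parameter I picked, not a published default):

    # Illustrative BitSpread-style spread math (not WDC code).
    # An object is split into `total_chunks`, of which any
    # `total_chunks - tolerated_failures` chunks suffice to reconstruct the object.

    def spread_overhead(total_chunks: int, tolerated_failures: int) -> float:
        """Storage overhead factor for a spread that survives `tolerated_failures` drive losses."""
        data_chunks = total_chunks - tolerated_failures
        return total_chunks / data_chunks

    # e.g. 18 chunks spread across a storage column, surviving 4 simultaneous drive failures
    print(spread_overhead(18, 4))   # 18/14 ~= 1.29x raw capacity per byte stored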

In addition, ActiveScale has a background task called BitDynamics that scans  erasure coded chunks and does a mathematical health check on the object data. If a chunk is bad, the object data chunk can be recovered and re-erasure coded back to proper health.

WDC performance testing shows that BitDynamics has 0 performance degradation when performing re-erasure coding. Indeed, they took out 98 drives in an ActiveScale cluster and BitDynamics re-coded all that data onto other disk drives and detected no performance impact. No indication how long re-encoding 98 disk drives of data took, nor the % of object store capacity utilization at the time of the test, but presumably there's a report someplace to back this up.
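
A background scrubber like BitDynamics presumably works along these general lines (a sketch of the pattern only, assuming nothing about WDC's actual implementation; the read/checksum/re-encode helpers are hypothetical placeholders):

    # Generic background-scrub sketch: verify each stored chunk, re-encode anything that fails.
    import time

    def scrub_pass(chunk_ids, read_chunk, checksum_ok, reencode_from_survivors, pace_seconds=0.01):
        for cid in chunk_ids:
            data = read_chunk(cid)
            if not checksum_ok(cid, data):
                # Rebuild the bad chunk from the remaining erasure-coded chunks of the object
                reencode_from_survivors(cid)
            time.sleep(pace_seconds)   # throttle so scrubbing doesn't impact foreground I/O

    # Example wiring with dummy placeholders:
    scrub_pass(range(3), read_chunk=lambda c: b"", checksum_ok=lambda c, d: True,
               reencode_from_survivors=lambda c: None, pace_seconds=0)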

Unlike many public cloud based object storage systems, ActiveScale is strongly consistent. That is, object puts (writes) are not acknowledged back to the entity doing the put until the object metadata and object data are properly and safely recorded in the object store.

ActiveScale also supports 3 site erasure coding. GeoSpread is their approach to erasure coding across sites. In this case, object metadata is replicated across 3 system nodes across the sites. Object data and erasure coded information is split into 20 chunks which are then spread across the three sites.  This way if any one site goes down, the other two sites have sufficient metadata, object data chunks and erasure coded information to reconstruct the data.
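
The 20-chunk, 3-site spread can be sanity-checked with a little arithmetic (my sketch; the actual data/parity split inside GeoSpread isn't published in the talk):

    # GeoSpread-style sanity check (illustrative; the real data/parity split isn't given).
    # 20 chunks spread round-robin over 3 sites leaves 13 or 14 chunks if any one site is lost,
    # so the code must be reconstructable from <= 13 chunks to survive a full site failure.

    def chunks_per_site(total_chunks=20, sites=3):
        base, extra = divmod(total_chunks, sites)
        return [base + (1 if i < extra else 0) for i in range(sites)]

    placement = chunks_per_site()                             # [7, 7, 6]
    worst_case_survivors = sum(placement) - max(placement)    # 13 chunks left after losing the biggest site
    print(placement, worst_case_survivors)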

ActiveScale 5.2 now supports asynch replication. That is, any one ActiveScale cluster can replicate to any other ActiveScale cluster located continental distances away.

Unclear how GeoSpread and asynch replication would interact together, but my guess is that each of the 3 GeoSpread sites could be asynchronously replicated to 3 other sites for maximum redundancy.

Both GeoSpread and ActiveScale replication impact performance,  depending on how far the sites are from one another and the speed and bandwidth of the links between sites.

ActiveScale markets

ActiveScale’s biggest market is media and entertainment (M&E), mostly used for media archive or tape replacement/augmentation. WDC showed one customer case study for the Montreux Jazz Festival, which migrated 49 years of performance videos up to ActiveScale and can now stream any performance, on request, without delay. Montreux media is GeoSpread across 3 sites in France. Another option is to perform transcoding on the object media in real time and stream the transcoded media.

Another large market is Bio/Life Sciences. Medical & biological scanners are transitioning to higher resolution scans, which take up more data space. And this sort of medical information needs to be kept for a long time.

Data analytics on ActiveScale

One other emerging market is data analytics. With the new S3A (S3 adapter), Hadoop clusters can now support object storage as a 2nd tier. One problem with data analytics is that it involves lots of data, and storing it in triplicate costs an awful lot.

In the big data world, datasets can get very large very quickly. Indeed, PB-sized data sets aren’t that unusual, and with triple replication (native to HDFS) the raw capacity required triples. When HDFS runs out of space you have to delete data. Before S3A, the only way to increase storage was to scale out (adding compute, storage and networking together) in order to add capacity.

Using Hadoop’s S3A, ActiveScale can provide a cold archive for data analytics. From a Hadoop user/application perspective, S3A ActiveScale storage looks like just another directory alongside HDFS (Hadoop Distributed File System). You can run MapReduce or other Hadoop applications directly against object buckets. But a more realistic approach is to move inactive or cold data from a disk-resident HDFS directory to an S3A directory.
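
As a rough illustration, reading a cold dataset out of an S3A bucket from Spark looks something like the following (a sketch assuming a Hadoop/Spark stack with the S3A connector installed; the endpoint, bucket and dataset names are made up):

    # Sketch: point the S3A connector at an S3-compatible object store (e.g. an ActiveScale
    # endpoint) and read data the same way you would from HDFS. Names are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("cold-archive-read")
             .config("spark.hadoop.fs.s3a.endpoint", "https://activescale.example.com")
             .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")
             .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")
             .getOrCreate())

    hot = spark.read.parquet("hdfs:///warehouse/clicks/2018/")      # active data on HDFS disk
    cold = spark.read.parquet("s3a://archive-bucket/clicks/2015/")  # inactive data on the object store

    hot.union(cold).groupBy("campaign").count().show()              # analyze hot + cold together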

HDFS and MapReduce are tightly coupled and were designed to have data close to where computation happens. So, as long as the active or working set data is on HDFS disk storage or directly in memory, the rest of the (inactive) data can be placed on S3A object storage. Inactive data is normally historical data no longer being actively analyzed, while newer data is actively analyzed. Older, inactive data can be manually or automatically archived off to S3A. With Hive, you can partition your database to have active data in HDFS disk storage and inactive data in S3A.
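
At the Hive level, that tiering can be as simple as re-pointing older partitions at S3A locations. A sketch (table name, partition column and paths are made up; this reuses the Spark session from the sketch above and assumes Hive support is enabled):

    # Sketch: keep recent partitions on HDFS, re-point old partitions at S3A object storage.
    spark.sql("""
        ALTER TABLE clicks PARTITION (year=2015)
        SET LOCATION 's3a://archive-bucket/clicks/2015/'
    """)

    # Queries keep working unchanged; Hive reads the 2015 data from the object store
    spark.sql("SELECT year, COUNT(*) FROM clicks GROUP BY year").show()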

Another approach: if the active working set can fit entirely in memory, then the data can reside on S3A object storage. This way the data is read from S3A storage into memory, analyzed there, and output written back to the object store or HDFS disk. Because the data is only read (loaded) once, there’s only a minimal performance penalty for using S3A storage.

Western Digital is an active contributor to Hadoop S3A and has recently added performance improvements to S3A, such as better caching, partial object reading, and core XML performance tuning options.

~~~~
If you're interested in learning more about Western Digital ActiveScale, check out the videos referenced earlier and their website.

Also you may be interested in these other posts on the WD sessions at SFD15:

The A is for Active, The S is for Scale by Dan Firth (@PenguinPunk)

Comments?

The wizardry of StorMagic

We talked with Hans O’Sullivan, CEO, and Chris Farey, CTO, of StorMagic during Storage Field Day 6 (SFD6, view videos of their session) a couple of weeks back, and they presented some interesting technology, at least to me.

Their SvSAN software-defined storage solution has been around since 2009 and was originally intended to provide shared storage for SMB environments, but was refocused in 2011 on remote office/branch office (ROBO) deployments for larger customers.

What makes the SvSAN such an appealing solution is that it’s a software-only storage solution that can use a minimum of 2 servers to provide a high availability, shared block storage cluster which can all be managed from one central site. Their SvSAN installs as a virtual storage appliance that runs as a virtual machine under a hypervisor and you can assign it to manage as much or as little of the direct access or SAN attached storage available to the server.

SvSAN customers

As of last count they had 30K licenses, in 64 countries, across 6 continents, were managing over 57PB of data, and had one (large retail) customer with over 2000 sites managed from one central location. They had pictures of one customer in their presentation; judging by the colors it was obvious who it was, but they couldn’t actually say.

One customer with 1000’s of sites had prior storage that was causing 100’s of store outages a year, each of which averaged 6 hours to recover and cost them $6K. Failure costs could be much larger, and outages much longer, if there was a data loss. They obviously needed a much more reliable storage system and wanted to reduce their cost of maintenance. Turning to SvSAN saved them lots of money and time and eliminated their maintenance downtime.

Their largest vertical is retail, but StorMagic does well in most ROBO environments, which have limited IT staff and limited data requirements. Other verticals they mentioned included defense (they specifically mentioned the German Army, who have a parachute-deployable, all-SSD SvSAN storage/data center), manufacturing (with small remote factories), government (with numerous sites around the world), financial services (banks with many remote offices), restaurant and hotel chains, large energy companies, wind farms, etc. Hans mentioned one large wind farm operator that said their “field” data centers were so remote it took 6 days to get someone out to them to solve a problem, but they needed 600GB of shared storage to manage the complex.

SvSAN architecture

SvSAN uses synchronous mirroring between pairs of servers so that the data is constantly available on both servers of a pair. Presumably the storage available to the SvSAN VSAs running in the two servers has to be similar in capacity and performance.

An SvSAN cluster can grow by adding pairs of servers or by adding storage to an existing SvSAN cluster. You can have as many pairs of servers in an SvSAN local cluster as you want (there’s probably some maximum here, but I can’t recall what they said). The cluster interconnect is 1GbE or 10GbE. Most (~90%) of SvSAN implementations are under 2TB of data, but their largest single clustered configuration is 200TB.

SvSAN supplies iSCSI storage services and runs inside a Linux virtual machine. It can support both bare metal and virtualized server environments.

All the storage within a server that is assigned to SvSAN is pooled together and carved out as iSCSI virtual disks. SvSAN can make use of RAID controllers with JBODs, DAS or even SAN storage; anything that is accessible to a virtual machine can be configured as part of SvSAN’s storage pool.

Servers that are accessing the shared iSCSI storage may access either of the servers in a synchronous mirrored pair. As it’s a synchronous mirror, any write to one of the servers is automatically mirrored to the other side before an acknowledgement is sent back to the host. Synchronous mirroring depends on multi-pathing software at the host.

As in any solution that supports active-active read-write access, there is a need for a Quorum service to be hosted somewhere in the environment, hopefully at some location distinct from where a problem could potentially occur, but it doesn’t have to be. In StorMagic’s case this could reside on any physical server, even in the same environment. The Quorum service is there to decide which of the two copies is “more” current when there is some sort of split-brain scenario, that is, when the two servers in a synchronized pair lose communication with one another. At that point the Quorum service declares one dead and the other active, and from that point on all IO activity must be done through the active SvSAN server. The Quorum service can run on Linux or Windows, remotely or locally. Any configuration changes will need to be communicated to the Quorum service.
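
In pseudo-code form, split-brain arbitration boils down to something like this (a generic sketch of the concept, not StorMagic's actual algorithm; the server names are made up):

    # Generic split-brain arbitration sketch (not StorMagic code).
    # When the mirrored pair lose contact with each other, each side asks the quorum witness
    # which one should remain active; the loser stops serving I/O until resync completes.

    class QuorumWitness:
        def __init__(self):
            self.active = None

        def arbitrate(self, requester: str, peer_reachable: bool) -> bool:
            if peer_reachable:
                return True                  # no split brain, both sides keep serving
            if self.active is None:
                self.active = requester      # first surviving node to ask wins
            return self.active == requester  # everyone else must stand down

    witness = QuorumWitness()
    print(witness.arbitrate("server-a", peer_reachable=False))  # True: server-a stays active
    print(witness.arbitrate("server-b", peer_reachable=False))  # False: server-b stops serving I/O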

They have a bare metal recovery solution. Specifically, when one server fails, customers can ship out another server with a matching configuration to be installed in the remote site. When the new server comes up, it auto-configures its storage and networking by using the currently active server in the environment and starts a resynchronization process with that server. All of which means it can be brought up into high availability mode with almost no IT support, other than what it takes to power the server and connect some networking ports. This was made for ROBO!

Code upgrades can be done by taking one server of the pair down, loading the new code and resynching its data. Then, once resynch completes, you can do the same with the other server.

They support a fast-resynch service for when one of the pair goes down for any reason. At that point the active server starts tracking any changes in a journal, and when the other server comes back up it just resends the changes that have occurred since the last time it was up.
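
Conceptually, that fast-resynch journal works like this (my sketch of the general change-tracking technique, not StorMagic's implementation; the read/send helpers are hypothetical):

    # Change-tracking journal sketch: while the peer is down, remember which blocks changed;
    # on reconnect, resend only those blocks instead of re-mirroring everything.

    class ResyncJournal:
        def __init__(self):
            self.dirty_blocks = set()

        def record_write(self, block_number: int):
            self.dirty_blocks.add(block_number)        # remember only *which* blocks changed

        def resync(self, read_block, send_to_peer):
            for block in sorted(self.dirty_blocks):
                send_to_peer(block, read_block(block)) # ship just the changed blocks
            self.dirty_blocks.clear()

    journal = ResyncJournal()
    journal.record_write(42)
    journal.record_write(43)
    journal.resync(read_block=lambda b: b"...", send_to_peer=lambda b, data: None)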

SvSAN has support for SSDs and just released an SSD write back caching feature to help improve disk write speeds. They also support an all SSD configuration for harsh environments.

StorMagic also offers an option for non-mirrored disk but I can’t imagine why anyone would use it.

They can dynamically move one mirrored iSCSI volume from one pair of servers to another, without disrupting application activity.

The minimum hardware configuration requires a single-core server, but SvSAN can use as many cores as you can give it. StorMagic commented that a single core maxes out at 50-60K IOPS, but you can always just add more cores to the solution.

The SvSAN cluster can be managed in VMware vCenter or Microsoft System Center (MSSC) and it maintains statistics which help monitor the storage clusters in the remote office environments.

They also have a scripted recipe to help bring up multiple duplicate remote sites where local staff only need to plug in minimal networking and some storage information and they are ready to go.

SvSAN pricing and other information

Their product lists at $2K for 2 servers and 2TB of data storage, and they have standard license options for 4, 8, and 16TB across a server pair, after which it’s unlimited storage for the same price of $10K. This doesn’t include hardware or physical data storage; it’s just for the SvSAN software and management.

They offer a free 60 day evaluation license on their website (see link above).

There was a lot of twitter traffic and onsite discussion as to how this compared to HP’s StoreVirtual VSA solution. The contention was that StoreVirtual required more nodes, but there was no one from HP there to dispute this.

Didn’t hear much about snapshots, thin provisioning, remote replication, deduplication or encryption. But for ROBO environments, which are typically under 2TB, most of these features are probably overkill, especially when there’s no permanent onsite IT staff to support the local storage environment.

~~~~

I had talked with StorMagic previously at one or more of the storage/IT conferences we have attended in the past and had relegated them to SMB storage solutions. But after talking with them at SFD6, their solution became much clearer. All of the sophisticated functionality they have developed, together with their software-only approach, seems to be a very appealing solution for these ROBO environments.


Who’s the next winner in data storage?

Strange Clouds by michaelroper (cc) (from Flickr)

“The future is already here – just not evenly distributed”, W. Gibson

It starts, as it always does, outside the enterprise data center: in the lines of business, in the development teams, in the small business organizations that don’t know any better but still have an unquenchable need for data storage.

It’s essentially an Innovator’s Dilemma situation. The upstarts are coming into the market at the lower end, lower margin side of the business that the major vendors don’t seem to care about, don’t service very well and are ignoring to their peril.

Yes, it doesn’t offer all the data services that the big guns (EMC, Dell, HDS, IBM, and NetApp) have. It doesn’t offer the data availability and reliability that enterprise data centers have come to demand from their storage. And it doesn’t have the performance of major enterprise data storage systems.

But what it does offer is lower CapEx, unlimited scalability, and data storage that is much easier to manage and adopt, albeit using a new protocol. It does have some inherent, hard-to-get-around problems, not the least of which are speed of data ingest/egress, highly variable latency and eventual consistency. There are other problems which are more easily solvable, with work, but the three listed above are intrinsic to the solution and need to be dealt with systematically.

And the winner is …

It has to be cloud storage providers, and the big elephant in the room has to be Amazon. I know there’s a lot of hype surrounding AWS S3 and EC2, but you must admit that they are growing, doubling year over year. Yes, it is starting from a much lower capacity point and yes, they are essentially providing “rentable” data storage space with limited or even non-existent storage services. But they are opening up whole new ways to consume storage that never existed before. And therein lies their advantage and threat to the major storage players today, unless they act to counter this upstart.

On AWS’s EC2 website there must be 4 dozen different applications that can be fired up in a matter of a click or two. When I checked out S3, you only need to sign up and identify a bucket name to start depositing data (files, objects). After that, you are charged for the storage used, data transfer out (data in is free), and the number of HTTP GETs, PUTs, and other requests that are made on a per-month basis. The first 5GB is free and comes with a judicious amount of gets, puts, and outbound data transfer bandwidth.
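
Today that workflow is a few lines of code; a minimal sketch using the boto3 Python SDK (the bucket name is made up, and AWS credentials are assumed to be configured):

    # Minimal S3 usage sketch with boto3 (bucket name hypothetical, credentials assumed configured).
    import boto3

    s3 = boto3.client("s3")
    s3.create_bucket(Bucket="my-example-bucket")                       # pick a bucket name, start storing
    s3.put_object(Bucket="my-example-bucket", Key="reports/q1.csv",
                  Body=b"date,revenue\n2013-01-01,100\n")              # a PUT request (charged per request)
    obj = s3.get_object(Bucket="my-example-bucket", Key="reports/q1.csv")  # a GET, plus data transfer out
    print(obj["Body"].read())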

… but how can they attack the enterprise?

Aside from the three systemic weaknesses identified above, for enterprise customers they seem to lack enterprise security, advanced data services and high availability storage. Yes, NetApp’s Amazon Direct addresses some of the issues by placing enterprise owned, secured and highly available storage to be accessed by EC2 applications. But to really take over and make a dent in enterprise storage sales, Amazon needs something with enterprise class data services, availability and security with an on premises storage gateway that uses and consumes cloud storage, i.e., a cloud storage gateway. That way they can meet or exceed enterprise latency and services requirements at something that approximates S3 storage costs.

We have talked about cloud storage gateways before, but none offer this level of storage service. An enterprise class S3 gateway would need to support all storage protocols, especially block (FC, FCoE, & iSCSI) and file (NFS & CIFS/SMB). It would need enterprise data services, such as read-writeable snapshots, thin provisioning, data deduplication/compression, and data mirroring/replication (synch and asynch). It would need to support standard management configuration capabilities, like VMware vCenter, Microsoft System Center, and SMI-S. It would need to mask the inherent variable latency of cloud storage through memory, SSD and hard disk data caching/tiering. It would need to conceal the eventual consistency nature of cloud storage (see link above). And it would need to provide iron-clad data security for cloud storage.

It would also need to be enterprise hardened, highly available and highly reliable. That means dually redundant, highly serviceable hardware FRUs, concurrent code load, and multiple controllers with multiple, independent, high-speed links to the internet. Today's highly available data storage requires multi-path storage networks, multiple independent power sources and resilient cooling, so adding multiple independent, high-speed internet links to use Amazon S3 in the enterprise is not out of the question. In addition to the highly available and serviceable storage gateway capabilities described above, it would need to supply high data integrity and reliability.

Who could build such a gateway?

I would say any of the major and some of the minor data storage players could easily do an S3 gateway if they desired. There are a couple of gateway startups (see link above) that have made a stab at it but none have it quite down pat or to the extent needed by the enterprise.

However, the problem with standalone gateways from other, non-Amazon vendors is that they could easily support other cloud storage platforms and most do. This is great for gateway suppliers but bad for Amazon’s market share.

So, I believe Amazon has to invest in its own storage gateway if they want to go after the enterprise. Of course, when they create an enterprise cloud storage gateway they will piss off all the other gateway providers and will signal their intention to target the enterprise storage market.

So who is the next winner in data storage? I have to believe it's going to be, and already is, Amazon. Even if they don’t go after the enterprise, which I feel is the major prize, they have already carved out an unbreachable market share in a new way to implement and use storage. But when (not if) they go after the enterprise, they will threaten every major storage player.

Yes but what about others?

Arguably, Microsoft Azure is in a better position than Amazon to go after the enterprise. Since their acquisition of StorSimple last year, they already have a gateway that, with help, could be just what they need to provide enterprise class storage services using Azure. And they already have access to the enterprise, and already have the services, distribution and go-to-market capabilities that address enterprise needs and requirements. Maybe they have it all, but they are not yet at the scale of Amazon. Could they go after this? Certainly, but will they?

Google is the other major unknown. They certainly have the capability to go after enterprise cloud storage if they want. They already have Google Cloud Storage, which is priced under Amazon’s S3 and provides similar services as far as I can tell. But they have even farther to go to get to the scale of Amazon. And they have less of the marketing, selling and service capabilities that are required to be an enterprise player. So I think they are the least likely of the big three cloud providers to be successful here.

There are many other players in cloud services that could make a play for enterprise cloud storage and emerge out of the pack, namely Rackspace, Savvis, Terremark and others. I suppose DropBox, Box and the other file sharing/collaboration providers might also be able to take a shot at it, if they wanted. But I am not sure any of them have enterprise storage on their radar just yet.

And I wouldn’t leave out the current major storage, networking and server players as they all could potentially go after enterprise cloud storage if they wanted to. And some are partly there already.

Comments?


The antifragility of disk RAID groups, the fragility of SSDs and what to do about it

Hard Disk by Jeff Kubina (cc) (from Flickr)

[A long post today] I picked up the book Antifragile: Things That Gain from Disorder by Nassim N. Taleb and, despite trying to put it away at least 3 times now, can’t stop turning back to it.  In his view fragility is defined by having a negative (or bad) response to variation, volatility or randomness in general.  Antifragile is the exact opposite of fragile in that it has a positive (or good) response to more variation, volatility or randomness.  Somewhere between antifragility and fragility is robustness, which has neither a positive nor a negative response (it is indifferent) to high volatility, variation or randomness.

Why disks are robust …

To me there are plenty of issues with disks. To name just a few:

  • They are energy hogs,
  • They are slow (at least in comparison to SSDs and flash memory), and
  • They are mechanical contrivances which can be harmed by excess shock/vibration.

But, aside from their capacity benefits, they have a tendency to fail at a normalized failure rate unless there is a particular (special) problem with media, batch, electronics or micro-programming.  I saw plenty of these other types of problems at StorageTek over the years, enough to know that there are many things that can disturb disk failure rate normalization. However, in general, absent some systematic cause of failure, disks fail at a predictable rate with a relatively wide distribution (although, being away from the engineering of storage systems, I have no statistics for the standard deviation of disk failures – it just feels right [Nassim would probably disavow me for saying that]).

The other aspect of disk anti-fragility is that as they degrade over time, they seem to get slower and louder.  The former is predominantly due to defect skipping, an error recovery procedure for bad blocks.  And they get louder as bearings start to wear out, signaling imminent failure ahead.

In defect skipping, when a disk drive detects a bad block, it marks the block as bad and uses a spare block from somewhere else on the disk for all subsequent writes. The new block is typically “far” away from the old block, so when reading multiple blocks the drive now has to seek to the new block and seek back to read the rest, increasing response time in the process.
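
A toy model of defect skipping (my own illustration, not any vendor's firmware): the drive keeps a remap table from bad blocks to spares, and every access consults the table first, which is exactly why remapped reads pick up extra seeks.

    # Toy defect-skipping model: bad blocks are remapped to spare blocks elsewhere on the disk,
    # which is why later reads incur extra seeks to the "far away" spare area.

    class DefectSkippingDisk:
        def __init__(self, spare_blocks):
            self.remap = {}                    # bad block -> spare block
            self.free_spares = list(spare_blocks)

        def mark_bad(self, block):
            if block not in self.remap:
                self.remap[block] = self.free_spares.pop(0)   # assign the next spare

        def resolve(self, block):
            return self.remap.get(block, block)               # where the data actually lives now

    disk = DefectSkippingDisk(spare_blocks=[1_000_000, 1_000_001])
    disk.mark_bad(12345)
    print(disk.resolve(12345), disk.resolve(12346))   # 1000000 (remapped), 12346 (untouched)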

The other phenomenon behind disk failures is a head crash. These seem to occur completely at random with disks from “mature processes”.

So, I believe disks from mature processes have a normalized failure rate with a reasonably wide standard deviation around this MTBF rate. As such, disk drives should be classified as robust.

… and RAID groups of disk drives are antifragile

But, while disk drives are robust, when you place such devices in a RAID group with others, the RAID groups survive better.  As long as the failure rate of the devices is randomized and there is a wide variance on this failure rate, when a RAID group encounters a single drive failure it is unlikely that a second, or third (RAID DP/6), will also fail while trying to recover from the first.  (Yes, as disk drives get larger, the time to recover gets longer, thus increasing the probability of multiple drive failures, but absent systematic causes of drive failures, data loss should be rare).

In a past life we had multiple disk systems in a location subject to volcanic activity. Somehow, sulfuric fumes from the volcano had found their way into the computer room and were degrading the optical transceivers in our disk drives, causing drive failures.   The subsystem at the time had RAID 6 (dual parity), and over the course of a few weeks we had 20 or more disk drives die in these systems. The customer lost no data during this time, but only because the disk drive failure rate was randomly distributed over time with a wide dispersion.

So, by Nassim’s definition, disk RAID groups are antifragile; they do operate better with more randomness.

Why SSD and SSD RAID groups are fragile

Toshiba’s New 2.5″ SSD from SSD.Toshiba.com

SSDs have a number of good things going for them. For example:

  • They are blistering fast,  at least compared to rotating disks.
  • They are relatively green storage devices, meaning they use less energy than rotating disks, and
  • They are semiconductor devices and as such, are relatively immune to shock and vibration.

Despite all that, given today's propensity to use wear leveling, RAID groups composed of SSDs can exhibit fragility because all the SSDs will fail at approximately the same number of Program/Erase cycles.

My assumption is that because NAND wear-out is essentially an electro-chemical phenomenon, its failure rate, while normally distributed, probably has a very narrow variation.  Now, given the technology, NAND pages will fail after so many writes; it may be 10K, 30K or 100K (for MLC, eMLC, or SLC), but all the NAND pages from the same technology (manufactured on the same fab line) will likely fail at about the same number of P/E cycles. With wear leveling equalizing the P/E cycles across all pages in an SSD, this means there is some number of writes that an SSD will endure and then go no farther.  (Again, I have no hard statistics to support this presumption and Nassim will probabilistically not be pleased with me for saying this.)

As such, for a RAID group made up of wear-leveling SSDs, especially with data striping across the group, all the SSDs will probabilistically fail at almost the same time because they all will have had the same amount of data written to them.  This means that as we reach wear-out on one SSD in the group, assuming all the others were also fresh at the time of original creation of the group, then all the other devices will be near wear-out.  As a result, when one SSD fails, the others in the RAID group will have a high probability of failure, leading to data loss.
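
A quick simulation makes the contrast concrete (my own toy model with made-up parameters, not measured data): with widely dispersed failure times (disk-like), a second failure landing inside a rebuild window is rare; with narrowly dispersed failure times (wear-leveled-SSD-like), it becomes almost certain.

    # Toy Monte Carlo: probability that a second device in a 4-drive group fails within the
    # rebuild window of the first failure, for wide vs. narrow failure-time dispersion.
    # All parameters are invented purely to illustrate the argument.
    import random

    def second_failure_prob(mean_life, sigma, drives=4, rebuild_window=0.01, trials=100_000):
        hits = 0
        for _ in range(trials):
            lives = sorted(random.gauss(mean_life, sigma) for _ in range(drives))
            if lives[1] - lives[0] < rebuild_window:   # second failure lands inside the rebuild window
                hits += 1
        return hits / trials

    print("disk-like (wide variance): ", second_failure_prob(mean_life=1.0, sigma=0.20))
    print("SSD-like (narrow variance):", second_failure_prob(mean_life=1.0, sigma=0.001))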

I have written about this before, see my Potential data loss using SSD RAID groups post for more information.

What we can do about the fragility of SSD RAID groups?

A couple of items come to mind that can be done to reduce the fragility of a RAID group of SSDs:

  • Intermix older and newer (fresher) SSDs in a single RAID group to not cause them all to fail at the same time.
  • Don’t use data striping across RAID groups of SSDs; this would allow some devices to be written more than others and, by doing so, introduce some randomness into the SSD failures in the group.
  • Don’t use RAID 1, as this will always cause the same number of writes to be written to both SSDs of a pair.
  • Don’t use RAID 5 or other protection methodologies that spread parity writes across the group; using these would be akin to data striping, in that all parity writes would be spread evenly across the group.
  • Consider using different technology SSDs in a RAID group; if one intermixed MLC, eMLC and SLC drives in a RAID group this would have the effect of varying the SSD failure rates.
  • Move away from wear leveling to defect skipping; while doing so will cause some SSDs to fail earlier than today, their failure rates will be more randomly distributed.

The last one probably deserves some discussion.  There are many reasons for wear leveling, one of which is to speed up writes (by always having a fresh page to write to); another is that NAND blocks cannot be updated in place, they need to be erased before being rewritten.  But another major reason is to distribute write activity across all NAND pages to equalize wear-out.

In order to speed up writing sans wear leveling, one would need some sort of DRAM buffer to absorb the write activity and then later destage it to NAND when available.   The inability to update in place is more problematic, but could potentially be dealt with by using the same DRAM cache to read in the previous information and write back the updates.  Other solutions to this latter problem exist but seem to be more problematic than they are worth.

But for the aspect of wear leveling done to equalize NAND page wearout, I believe there’s a less fragile solution.  If we were to institute some form of defect skipping with a certain amount of spare NAND pages, we could easily extend the life of an SSD, at least until we run out of spare pages.

Today, there is a considerable amount of spare capacity shipped with most SSDs, over 10% in most enterprise class storage and more with consumer grade. With this much capacity a single NAND logical block could be rewritten an awfully high number of times. For instance, using defect skipping with a 100GB MLC SSD at 10K write endurance, 10% spare pages and a 1MB page size, one single logical block address page could be written ~100 million times (assuming no other pages were being written beyond their maximum).
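
The arithmetic behind that ~100 million figure:

    # Back-of-the-envelope for the ~100 million number above.
    spare_capacity = 0.10 * 100e9             # 10% of a 100GB SSD = 10GB of spare pages
    page_size = 1e6                           # 1MB pages
    spare_pages = spare_capacity / page_size  # 10,000 spare pages
    writes_per_page = 10_000                  # MLC P/E endurance
    max_writes_to_one_lba = spare_pages * writes_per_page
    print(max_writes_to_one_lba)              # 100,000,000 writes to a single logical block address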

The main advantage is that now SSD failure rates would be more widely distributed. Yes, there would be more early-life failures, especially for SSDs that get hit a lot. But they would no longer fail in unison at some magical write level.

Making SSDs less fragile

While doing all the above may help a RAID group full of SSDs be less fragile, addressing the inherent fragility of an individual SSD is more problematic.  Nonetheless, some ideas do come to mind:

  • Randomly mix NAND chips from different fabs/vendors; the SSDs that use this intermixture could have a more randomly distributed failure rate, which should increase the standard deviation of MTBF.
  • Use different NAND technologies in an SSD, using say MLC for the bulk of the storage capacity and SLC for the defect skip capacity (with no wear leveling). Doing this would elongate the lifetime of the average SSD and randomly distribute SSD failures based on write locality of reference, thereby increasing the standard deviation of MTBF.  Of course this would also have the effect of speeding up heavily written blocks, now coming out of SLC rather than slower MLC, making these SSDs even faster for those blocks which are written more frequently.
  • Use more random, less deterministic predictive maintenance. SSD predictive maintenance is used to limit the damage from a failing SSD by replacing it before death. By using less deterministic, more randomized algorithms (such as varying how close to wear-out we let the SSD get before signaling failure) we would increase the variance of failure times.

This post is almost too long now but there are probably other ideas to increase the robustness of SSDs and PCIe Flash cards that deserve mention someplace. Maybe we can explore these in a subsequent post.

Comments?

[Full disclosure:  I have a number of desktops that use single disk drives (without RAID) that are backed up to other disk drives.  I own and use a laptop, iPads, and an iPhone that all use SSDs or NAND technology (without RAID). I own no disk or SSD storage subsystems.]


HDS Influencer Summit wrap up

[Sorry for the length, it was a long day] There was an awful lot of information supplied today. The morning sessions were all open but most of the afternoon was under NDA.

Jack Domme, HDS CEO, started the morning off talking about the growth in HDS market share.  Another 20% y/y growth in revenue for HDS.  They seem to be hitting the right markets with the right products.  They have found a lot of success in emerging markets in Latin America, Africa and Asia.  As part of this thrust into emerging markets, HDS is opening up a manufacturing facility in Brazil and a sales/solution center in Colombia.

Jack spent time outlining the infrastructure cloud to content cloud to information cloud transition that they believe is coming in the IT environment of the future.   In addition, there has been even greater alignment within Hitachi Ltd and consolidation of engineering teams to tackle new converged infrastructure needs.

Randy DeMont, EVP and GM Global Sales, Services and Support got up next and talked about their success with the channel. About 50% of their revenue now comes from indirect sources. They are focusing some of their efforts to try to attract global system integrators that are key purveyors to Global 500 companies and their business transformation efforts.

Randy talked at length about some of their recent service offerings, including managed storage services. As customers begin to trust HDS with their storage, they start considering moving their whole data center to HDS. Randy said this was a $1B opportunity for HDS and the only thing holding them back is finding the right people with the skills necessary to provide this service.

Randy also mentioned that over the last 3-4 years HDS has gained 200-300 new clients a quarter, which is introducing a lot of new customers to HDS technology.

Brian Householder, EVP, WW Marketing, Business Development and Partners got up next and talked about how HDS has been delivering on their strategic vision for the last decade or so.    With HUS VM, HDS has moved storage virtualization down market, into a rack mounted 5U storage subsystem.

Brian mentioned that 70% of their customers are now storage virtualized (meaning that they have external storage managed by VSP, HUS VM or prior versions).  This is phenomenal seeing as how only a couple of years back this number was closer to 25%.  Later at lunch I probed as to what HDS thought was the reason for this rapid adoption, but the only explanation was the standard S-curve adoption rate for new technologies.

Brian talked about some big data applications where HDS and Hitachi Ltd business units collaborate to provide business solutions. He mentioned the London Summer Olympics sensor analytics, medical imaging analytics, and heavy construction equipment analytics. Another example he mentioned was financial analysis firms using satellite images of retail parking lots to predict retail revenue growth or loss.  HDS’s big data strategy seems to be vertically focused, building on the strength of Hitachi Ltd’s portfolio of technologies. This was the subject of a post-lunch discussion between John Webster of Evaluator Group, myself and Brian.

Brian talked about their storage economics professional services engagement. HDS has done over 1200 storage economics engagements, has written books on the topic and has iPad apps to support it.  In addition, Brian mentioned that in a recent The Info Pro survey, HDS was rated number 1 in value for storage products.

Brian talked some about HDS strategic planning frameworks, one of which was an approach to identify investments to maximize share of IT spend across various market segments.  From 2003, when HDS was an 80% hardware revenue company, to today, when they are over 50% software and services revenue, they have broadened their portfolio extensively.

John Mansfield, EVP Global Solutions Strategy and Development and Sean Moser, VP Software Platforms Product Management spoke next and talked about HCP and HNAS integration over time. It was just 13 months ago that HDS acquired BlueArc and today they have integrated BlueArc technology into HUS VM and HUS storage systems (it was already the guts of HNAS).

They also talked about the success HDS is having with HCP, their content platform. One bank they are working with plans to have 80% of their data in an HCP object store.

In addition there was a lot of discussion on UCP Pro and UCP Select, HDS’s converged server, storage and networking systems for VMware environments. With UCP Pro the whole package is ordered as a single SKU. In contrast, with UCP Select partners can order different components and put it together themselves.  HDS had a demo of their UCP Pro orchestration software under VMware vSphere 5.1 vCenter that allowed VMware admins to completely provision, manage and monitor servers, storage and networking for their converged infrastructure.

They also talked about their new Hitachi Accelerated Flash storage which is an implementation of a Flash JBOD using MLC NAND but with extensive Hitachi/HDS intellectual property. Together with VSP microcode changes, the new flash JBOD provides great performance (1 Million IOPS) in a standard rack.  The technology was developed specifically by Hitachi for HDS storage systems.

Mike Walkey, SVP Global Partners and Alliances, got up next and talked about their vertically oriented channel strategy.  HDS is looking for channel partners that can expand their reach to new markets, provide services along with the equipment, and make a difference in these markets.  They have been spending more time and money on vertical shows such as VMworld, SAPPHIRE, etc. rather than horizontal storage shows (such as SNW). Mike mentioned key high-level partnerships with Microsoft, VMware, Oracle, and SAP as helping to drive solutions into these markets.

Hicham Abhessamad, SVP, Global Services got up next and talked about the level of excellence available from HDS services.  He indicated that professional services grew by 34% y/y while managed services grew 114% y/y.  He related a McKinsey study that showed that IT budget priorities will change over the next couple of years away from pure infrastructure to more analytics and collaboration.  Hicham talked about a couple of large installations of HDS storage and what they are doing with it.

There were a few sessions of one-on-ones with HDS executives and a couple of other speakers later in the day, mainly on NDA topics.  That’s about all I took notes on.  I was losing steam toward the end of the day.

Comments?

IBM’s 120PB storage system

Susitna Glacier, Alaska by NASA Goddard Photo and Video (cc) (from Flickr)

Talk about big data, Technology Review reported this week that IBM is building a 120PB storage system for some unnamed customer.  Details are sketchy and I cannot seem to find any announcement of this on IBM.com.

Hardware

It appears that the system uses 200K disk drives to support the 120PB of storage.  The disk drives are packed in a new wider rack and are water cooled.  According to the news report the new wider drive trays hold more drives than current drive trays available on the market.

For instance, HP has a hot pluggable, 100 SFF (small form factor 2.5″) disk enclosure that sits in 3U of standard rack space.  200K SFF disks would take up about 154 full racks, not counting the interconnect switching that would be required.  Unclear whether water cooling would increase the density much but I suppose a wider tray with special cooling might get you more drives per floor tile.

There was no mention of interconnect, but today’s drives use either SAS or SATA.  SAS interconnects for 200K drives would require many separate SAS busses. With a SAS expander addressing 255 drives or other expanders, one would need at least 4 SAS busses, but this would have ~64K drives per bus and would not perform well.  Something more like 64-128 drives per bus would perform much better, and each drive would need dual pathing; if we use 100 drives per SAS string, that’s 2000 SAS drive strings, or at least 4000 SAS busses (dual-port access to the drives).
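
The arithmetic behind those enclosure and SAS numbers (my estimates, under the same assumptions as above):

    # Back-of-the-envelope for the enclosure and SAS figures above (my assumptions).
    drives = 200_000

    # HP-style density: 100 SFF drives per 3U enclosure, ~13 enclosures (39U) per rack
    enclosures = drives / 100                      # 2000 enclosures
    racks_raw = enclosures * 3 / 39                # ~154 racks of drive enclosures alone

    # SAS fan-out: ~100 drives per SAS string, dual-ported for redundancy
    sas_strings = drives / 100                     # 2000 strings
    sas_busses = sas_strings * 2                   # ~4000 busses with dual-port access

    print(racks_raw, sas_strings, sas_busses)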

The report mentioned GPFS as the underlying software which supports three cluster types today:

  • Shared storage cluster – where GPFS front end nodes access shared storage across the backend. This is generally SAN storage system(s).  But given the requirements for high density, it doesn’t seem likely that the 120PB storage system uses SAN storage in the backend.
  • Network based cluster – here the GPFS front end nodes talk over a LAN to a cluster of NSD (Network Shared Disk) servers which can have access to all or some of the storage. My guess is this is what will be used in the 120PB storage system.
  • Shared Network based clusters – this looks just like a bunch of NSD servers but provides access across multiple NSD clusters.

Given the above, ~100 drives per NSD server means an extra 1U per 100 drives, or (given HP drive density) 4U per 100 drives; that’s 1000 drives and 10 NSD servers per 40U rack (not counting switching).  At this density it takes ~200 racks for 120PB of raw storage and NSD nodes, or 2000 NSD nodes.

Unclear how many GPFS front end nodes would be needed on top of this but even if it were 1 GPFS frontend node for every 5 NSD nodes, we are talking another 400 GPFS frontend nodes and at 1U per server, another 10 racks or so (not counting switching).
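
Pulling the node and rack estimates together (again, my arithmetic on the assumptions above):

    # Node and rack estimates from the assumptions above (my arithmetic).
    drives = 200_000
    drives_per_nsd = 100

    nsd_nodes = drives // drives_per_nsd            # 2000 NSD servers
    gpfs_nodes = nsd_nodes // 5                     # 400 GPFS front-end nodes at 1 per 5 NSD nodes

    # 4U per 100 drives (3U enclosure + 1U NSD server) -> 1000 drives + 10 NSD nodes per 40U rack
    storage_racks = drives / 1000                   # ~200 racks of drives + NSD nodes
    frontend_racks = gpfs_nodes * 1 / 40            # ~10 racks of 1U GPFS front ends

    print(nsd_nodes, gpfs_nodes, storage_racks + frontend_racks)   # ~210 racks before switching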

If my calculations are correct we are talking over 210 racks with switching thrown in to support the storage.  According to IBM’s discussion on the Storage challenges for petascale systems, it probably provides ~6TB/sec of data transfer which should be easy with 200K disks but may require even more SAS busses (maybe ~10K vs. the 2K discussed above).

Software

IBM GPFS is used behind the scenes in IBM’s commercial SONAS storage system but has been around as a cluster file system designed for HPC environments for 15 years or more now.

Given this many disk drives something needs to be done about protecting against drive failure.  IBM has been talking about declustered RAID algorithms for their next generation HPC storage system which spreads the parity across more disks and as such, speeds up rebuild time at the cost of reducing effective capacity. There was no mention of effective capacity in the report but this would be a reasonable tradeoff.  A 200K drive storage system should have a drive failure every 10 hours, on average (assuming a 2 million hour MTBF).  Let’s hope they get drive rebuild time down much below that.

The system is expected to hold around a trillion files.  Not sure but even at 1024 bytes of metadata per file, this number of files would chew up ~1PB of metadata storage space.
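
Two more quick checks on the failure-rate and metadata claims above (my arithmetic):

    # Quick checks on the failure-rate and metadata numbers above.
    drive_mtbf_hours = 2_000_000
    drives = 200_000
    hours_between_failures = drive_mtbf_hours / drives     # ~10 hours between drive failures, on average

    files = 1e12                                           # a trillion files
    metadata_bytes = files * 1024                          # ~1PB of metadata at 1KB per file

    print(hours_between_failures, metadata_bytes / 1e15)   # 10.0 hours, ~1.02 PB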

GPFS provides ILM (information life cycle management, or data placement based on information attributes) using automated policies and supports external storage pools outside the GPFS cluster storage.  ILM within the GPFS cluster supports file placement across different tiers of storage.

All the discussion up to now revolved around homogeneous backend storage but it’s quite possible that multiple storage tiers could also be used.  For example, a high density but slower storage tier could be combined with a low density but faster storage tier to provide a more cost effective storage system.  Although, it’s unclear whether the application (real world modeling) could readily utilize this sort of storage architecture or whether they would care about system cost.

Nonetheless, presumably an external storage pool would be a useful adjunct to any 120PB storage system for HPC applications.

Can it be done?

Let’s see: 400 GPFS nodes, 2000 NSD nodes, and 200K drives. Seems like the hardware would be readily doable (not sure why they needed water cooling, but hopefully they obtained better drive density that way).

Luckily GPFS supports Infiniband which can support 10,000 nodes within a single subnet.  Thus an Infiniband interconnect between the GPFS and NSD nodes could easily support a 2400 node cluster.

The only real question is can a GPFS software system handle 2000 NSD nodes and 400 GPFS nodes with trillions of files over 120PB of raw storage.

As a comparison, consider recent examples of scale out NAS systems such as EMC Isilon clusters and IBM’s own SONAS.

It would seem that a 20X multiplier times a current Isilon cluster or even a 10X multiple of a currently supported SONAS system would take some software effort to work together, but seems entirely within reason.

On the other hand, Yahoo supports a 4000-node Hadoop cluster and it seems to work just fine.  So from a feasibility perspective, a 2400-node GPFS-NSD system seems like a walk in the park compared to Hadoop.

Of course, IBM Almaden is working on a project to support Hadoop over GPFS, which might not be optimum for real world modeling but would nonetheless support the node count being talked about here.

——

I wish there were some real technical information on the project out on the web, but I could not find any. Much of this is informed conjecture based on current GPFS system and storage hardware capabilities. But hopefully, I haven’t traveled too far astray.

Comments?


Potential data loss using SSD RAID groups

Ultrastar SSD400 4 (c) 2011 Hitachi Global Storage Technologies (from their website)

The problem with SSDs is that they typically all fail at some level of data writes, called the write endurance specification.

As such, if you purchase multiple drives from the same vendor and put them in a RAID group, this can sometimes lead to multiple, near-simultaneous failures.

Say the SSD write endurance is 250TB (you can only write 250TB to the SSD before write failure), and you populate a RAID 5 group with them in a 3-data-drive + 1-parity-drive configuration.  As it’s RAID 5, parity rotates around to each of the drives, roughly equalizing the parity write activity.

Now every write to the RAID group is actually two writes, one for data and one for parity.  Thus, the 250TB of write endurance per SSD, which would otherwise imply 1000TB of write endurance for the RAID group, is reduced to something more like 125TB*4, or 500TB.  Specifically:

  • Each write to a RAID 5 data drive is replicated to the RAID 5 parity drive,
  • As parity rotates from drive to drive, each drive can absorb at most 125TB of data writes and 125TB of parity writes before it uses up its write endurance spec.
  • So for the 4-drive RAID group, only 500TB of data, evenly spread across the group, can be written before the group hits its write endurance limit.

As for RAID 6, it almost looks the same, except that you lose more SSD life, as you write parity twice.  E.g., for a 6-data-drive + 2-parity-drive RAID 6 group with similar write endurance, you should get 83.3TB of data writes and 166.7TB of parity writes per drive, which for the 8-drive group is ~667TB of data writes before the RAID group's write endurance lifetime is used up.

For RAID 1 with 2 SSDs in the group, as every write is mirrored to both drives, you can only get 250TB of total data writes per RAID group (each drive’s 250TB of endurance absorbs the full write stream).
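
Summarizing the endurance math for the three RAID layouts (my arithmetic from the 250TB-per-drive assumption above):

    # Group-level write endurance from the 250TB-per-SSD assumption above.
    endurance_per_ssd = 250   # TB of writes each SSD can absorb

    def group_data_endurance(drives: int, writes_per_host_write: int) -> float:
        """Host data (TB) a group can accept when every host write costs `writes_per_host_write` device writes."""
        return drives * endurance_per_ssd / writes_per_host_write

    print("RAID 5 (3+1):", group_data_endurance(4, 2))   # 500 TB  (1 data + 1 parity write)
    print("RAID 6 (6+2):", group_data_endurance(8, 3))   # ~667 TB (1 data + 2 parity writes)
    print("RAID 1 (1+1):", group_data_endurance(2, 2))   # 250 TB  (every write mirrored)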

But the “real” problem is much worse

If I am writing the last TB to my RAID group and if I have managed to spread the data writes evenly across the RAID group, one drive will go out right away.  Most likely the current parity drive will throw a write error. BUT the real problem occurs during the rebuild.

  • With a 256GB SSD in the RAID 5 group and a 100MB/s read rate, reading the 3 remaining drives in parallel to rebuild the fourth will take ~43 minutes.  However, that assumes all the good SSDs are doing nothing except rebuild IO.  Most systems limit drive rebuild IO to no more than 1/2 to 1/4 of the drive activity (possibly much less) in the RAID group.  As such, a more realistic rebuild time can be anywhere from ~86 to ~170 minutes or more.
  • Now, because the rebuild takes a long time, data must continue to be written to the RAID group.   But as we are aware, most of the remaining drives in the RAID group are likely to be at the end of their write endurance already.
  • Thus, it’s quite possible that another SSD in the RAID group will fail while the first drive is rebuilt.

Resulting in a catastrophic data loss (2 bad drives in a RAID 5, 3 drives in a RAID 6 group).
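
For reference, the rebuild-window arithmetic behind the first bullet above (my numbers):

    # Rebuild-window arithmetic for the 256GB SSD example above.
    capacity_gb = 256
    read_mb_per_sec = 100

    full_speed_minutes = capacity_gb * 1000 / read_mb_per_sec / 60   # ~43 minutes at full read rate
    throttled = [full_speed_minutes * f for f in (2, 4)]             # rebuild limited to 1/2 or 1/4 of drive activity

    print(round(full_speed_minutes), [round(m) for m in throttled])  # ~43 min full speed, ~85-171 min throttled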

RAID 1 groups with SSDs are probably even more prone to this issue. When the first drive fails, the second should follow closely behind.

Yes, but is this probable

First, we are talking TBs of data here. It is somewhat unlikely that a RAID group's worth of drives would all have the same amount of data written to them, even to within a few hours' worth of rebuild time. That being said, the lower the write endurance of the drives, the more equal the SSD write endurance at the creation of the RAID group, and the longer it takes to rebuild failing SSDs, the higher the probability of this type of catastrophic failure.

In any case, the problem is highly likely to occur with RAID 1 groups using similar SSDs as the drives are always written in pairs.

But for RAID 5 or 6, it all depends on how well data striping across the RAID group equalizes data written to the drives.

For hard disks this was a good thing and customers or storage systems all tried to equalize IO activity across drives in a RAID group. So with good (manual or automated) data striping the problem is more likely.

Automated storage tiering using SSDs is not as easy to fathom with respect to write endurance catastrophes.  Here a storage system automatically moves the hottest data (highest IO activity) to SSDs and the coldest data down to hard disks.  In this fashion, they eliminate any manual tuning activity but they also attempt to minimize any skew to the workload across the SSDs. Thus, automated storage tiering, if it works well, should tend to spread the IO workload across all the SSDs in the highest tier, resulting in similar multi-SSD drive failures.

However, with some vendors’ automated storage tiering, the data is actually copied and not moved (that is, the data resides both on disk and SSD).  In this scenario losing an SSD RAID group or two might severely constrain performance, but does not result in data loss.   It’s hard to tell which vendors do which, but customers should be able to find out.

So what’s an SSD user to do

  • Using RAID 4 for SSDs seems to make sense.  The reason we went to RAID 5 and 6 was to avoid hot (parity write) drive(s) but with SSD speeds, having a hot parity drive or two is probably not a problem. (Some debate on this, we may lose some SSD performance by doing this…). Of course the RAID 4 parity drive will die very soon, but paradoxically having a skewed workload within the RAID group will increase SSD data availability.
  • Mixing SSD ages within RAID groups as much as possible. That way a single data load level will not impact multiple drives.
  • Turning off LUN data striping within a SSD RAID group so data IO can be more skewed.
  • Monitoring write endurance levels for your SSDs, so you can proactively replace them long before they fail.
  • Keeping good backups and/or replicas of SSD data.

I learned the other day that most enterprise SSDs provide some sort of write endurance meter that can be seen at least at the drive level.  I would suggest that all storage vendors make this sort of information widely available in their management interfaces.  Sophisticated vendors could use such information to analyze the SSDs being used for a RAID group and suggest which SSDs to use to maximize data availability.

But in any event, for now at least, I would avoid RAID 1 using SSDs.

Comments?