IBM’s 120PB storage system

Susitna Glacier, Alaska by NASA Goddard Photo and Video (cc) (from Flickr)

Talk about big data: Technology Review reported this week that IBM is building a 120PB storage system for an unnamed customer.  Details are sketchy and I cannot find any announcement of this on IBM.com.

Hardware

It appears that the system uses 200K disk drives to support the 120PB of storage.  The disk drives are packed in a new wider rack and are water cooled.  According to the news report the new wider drive trays hold more drives than current drive trays available on the market.

For instance, HP has a hot pluggable, 100 SFF (small form factor 2.5″) disk enclosure that sits in 3U of standard rack space.  200K SFF disks would take up about 154 full racks, not counting the interconnect switching that would be required.  Unclear whether water cooling would increase the density much but I suppose a wider tray with special cooling might get you more drives per floor tile.
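Here's the back-of-the-envelope math behind that rack count (a minimal sketch; the 100-drive 3U enclosure figure comes from the HP example above, while the 42U usable per rack is my assumption, not anything IBM has disclosed):

```python
# Back-of-the-envelope rack footprint for 200K SFF drives.
# Assumptions: 100 drives per 3U enclosure (HP-style), 42U usable per rack.
DRIVES = 200_000
DRIVES_PER_ENCLOSURE = 100
U_PER_ENCLOSURE = 3
U_PER_RACK = 42

enclosures = DRIVES // DRIVES_PER_ENCLOSURE            # 2,000 enclosures
enclosures_per_rack = U_PER_RACK // U_PER_ENCLOSURE    # 13 enclosures (39U) per rack
racks = enclosures / enclosures_per_rack               # ~154 racks, drives only

print(f"{enclosures} enclosures, ~{racks:.0f} racks (before switching)")
```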

There was no mention of interconnect, but today’s drives use either SAS or SATA.  SAS interconnects for 200K drives would require many separate SAS busses. With a SAS expander addressing 255 drives or other expanders, one could reach 200K drives with as few as 4 SAS busses, but at ~64K drives per bus this would not perform well.  Something more like 64-128 drives per bus would perform much better, and each drive would need dual pathing. If we use 100 drives per SAS string, that’s 2,000 SAS drive strings, or at least 4,000 SAS busses (dual-port access to the drives).
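Here's a quick sketch of that SAS math (the cascaded-expander fan-out and the 100-drives-per-string figure follow the reasoning above; none of it comes from the report):

```python
# SAS fan-out vs. performance for 200K dual-ported drives.
# An expander can address 255 devices, so two expander levels give roughly
# 255 * 255 ~ 65K drives per bus -- enough to reach 200K drives on 4 busses,
# but far too many drives per bus to perform well.
DRIVES = 200_000
max_drives_per_bus = 255 * 255                   # ~65K drives on one deeply cascaded bus
min_busses = -(-DRIVES // max_drives_per_bus)    # 4 busses (ceiling division)

# Assumption for a usable design: ~100 drives per SAS string, dual-pathed.
strings = DRIVES // 100                          # 2,000 SAS drive strings
busses = strings * 2                             # ~4,000 busses (dual-port access)

print(min_busses, strings, busses)               # 4 2000 4000
```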

The report mentioned GPFS as the underlying software, which supports three cluster types today:

  • Shared storage cluster – where GPFS front-end nodes access shared storage across the backend, generally SAN storage system(s).  But given the requirements for high density, it doesn’t seem likely that the 120PB storage system uses SAN storage in the backend.
  • Network-based cluster – here the GPFS front-end nodes talk over a LAN to a cluster of NSD (Network Shared Disk) servers, which can have access to all or some of the storage. My guess is this is what will be used in the 120PB storage system.
  • Shared network-based clusters – this looks just like a bunch of NSD servers but provides access across multiple NSD clusters.

Given the above, ~100 drives per NSD server means an extra 1U per 100 drives, or (given HP drive density) 4U per 100 drives; that’s 1,000 drives and 10 IO servers per 40U rack (not counting switching).  At this density it takes ~200 racks for 120PB of raw storage plus NSD nodes, and 2,000 NSD nodes in total.

Unclear how many GPFS front-end nodes would be needed on top of this, but even at 1 GPFS front-end node for every 5 NSD nodes, we are talking another 400 GPFS front-end nodes and, at 1U per server, another 10 racks or so (not counting switching).
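Pulling the NSD and front-end node sizing together (the 100-drives-per-NSD-server and 1-to-5 front-end ratios are my guesses, not anything IBM has said):

```python
# NSD server and GPFS front-end node sizing.
# Assumptions: 1U NSD server per 100-drive 3U enclosure (4U per 100 drives),
# and 1 GPFS front-end node per 5 NSD nodes.
DRIVES = 200_000

nsd_nodes = DRIVES // 100                  # 2,000 NSD nodes
drives_per_rack = 10 * 100                 # 10 x 4U (enclosure + NSD server) per 40U rack
storage_racks = DRIVES // drives_per_rack  # ~200 racks of drives + NSD nodes

frontend_nodes = nsd_nodes // 5            # ~400 GPFS front-end nodes
frontend_racks = frontend_nodes // 40      # ~10 racks of 1U front-end servers

print(nsd_nodes, storage_racks, frontend_nodes, frontend_racks)   # 2000 200 400 10
```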

If my calculations are correct, we are talking over 210 racks, with switching thrown in, to support the storage.  According to IBM’s discussion of storage challenges for petascale systems, it probably provides ~6TB/sec of data transfer, which should be easy with 200K disks but may require even more SAS busses (maybe ~10K vs. the 2K strings discussed above).
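The ~6TB/sec figure is easy to sanity check per drive and per string (the ~2.4GB/s wide-port number below is a generic 4-lane 6Gb/s SAS-2 assumption, not from the report):

```python
# Sanity check on the ~6TB/sec transfer target.
# Assumption: a 4-lane 6Gb/s SAS-2 wide port moves roughly 2.4GB/s.
AGG_BW = 6e12            # bytes/sec
DRIVES = 200_000
STRINGS = 2_000

per_drive = AGG_BW / DRIVES     # 30 MB/s per drive -- trivial for any disk
per_string = AGG_BW / STRINGS   # 3 GB/s per string -- more than one wide port carries,
                                # which is why many more busses may be needed

print(per_drive / 1e6, "MB/s per drive;", per_string / 1e9, "GB/s per string")
```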

Software

IBM GPFS is used behind the scenes in IBM’s commercial SONAS storage system but has been around as a cluster file system designed for HPC environments for over 15 years now.

Given this many disk drives, something needs to be done about protecting against drive failure.  IBM has been talking about declustered RAID algorithms for its next generation HPC storage system, which spread the parity across more disks and thereby speed up rebuild time at the cost of reduced effective capacity. There was no mention of effective capacity in the report, but this would be a reasonable tradeoff.  A 200K drive storage system should have a drive failure every 10 hours, on average (assuming a 2 million hour MTBF).  Let’s hope they get drive rebuild time down much below that.
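The failure-rate arithmetic is straightforward (the 2 million hour MTBF is assumed, not quoted anywhere in the report):

```python
# Expected drive failure interval across the whole system.
# Assumption: 2M-hour MTBF per drive, typical of enterprise drive specs.
MTBF_HOURS = 2_000_000
DRIVES = 200_000

hours_between_failures = MTBF_HOURS / DRIVES             # 10 hours, on average
failures_per_year = 24 * 365 / hours_between_failures    # ~876 drive failures a year

print(hours_between_failures, failures_per_year)
```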

The system is expected to hold around a trillion files.  Not sure but even at 1024 bytes of metadata per file, this number of files would chew up ~1PB of metadata storage space.
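And the metadata estimate works out like this (the 1,024 bytes per file is purely an assumption):

```python
# Metadata footprint for ~1 trillion files.
# Assumption: 1,024 bytes of metadata per file.
FILES = 1_000_000_000_000
BYTES_PER_FILE = 1024

metadata_pb = FILES * BYTES_PER_FILE / 1e15
print(f"~{metadata_pb:.2f} PB of metadata")    # ~1.02 PB
```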

GPFS provides ILM (information life cycle management, or data placement based on information attributes) using automated policies and supports external storage pools outside the GPFS cluster storage.  ILM within the GPFS cluster supports file placement across different tiers of storage.

All the discussion up to now revolved around homogeneous backend storage, but it’s quite possible that multiple storage tiers could also be used.  For example, a high density but slower storage tier could be combined with a low density but faster storage tier to provide a more cost effective storage system.  That said, it’s unclear whether the application (real-world modeling) could readily utilize this sort of storage architecture, or whether they would care about system cost.

Nonetheless, presumably an external storage pool would be a useful adjunct to any 120PB storage system for HPC applications.

Can it be done?

Let’s see: 400 GPFS nodes, 2,000 NSD nodes, and 200K drives. Seems like the hardware would be readily doable (not sure why they needed water cooling, but hopefully they obtained better drive density that way).

Luckily GPFS supports InfiniBand, which can handle 10,000 nodes within a single subnet.  Thus an InfiniBand interconnect between the GPFS and NSD nodes could easily support a 2,400-node cluster.

The only real question is whether a GPFS software system can handle 2,000 NSD nodes and 400 GPFS nodes with a trillion files over 120PB of raw storage.

As a comparison, consider some recent examples of scale-out NAS systems.

It would seem that a 20X multiple of a current Isilon cluster, or even a 10X multiple of a currently supported SONAS system, would take some software effort to make work, but seems entirely within reason.

On the other hand, Yahoo supports a 4,000-node Hadoop cluster that seems to work just fine.  So from a feasibility perspective, a ~2,400-node GPFS-NSD node system would be a walk in the park for Hadoop.

Of course, IBM Almaden is working on a project to support Hadoop over GPFS, which might not be optimum for real-world modeling but would nonetheless support the node counts being talked about here.

——

I wish there were some real technical information on the project out on the web, but I could not find any. Much of this is informed conjecture based on current GPFS systems and storage hardware capabilities. But hopefully I haven’t traveled too far astray.

Comments?

EMCWorld day 3 …

Sometime this week EMC announced a new generation of Isilon NearLine storage which now includes HGST 3TB SATA disk drives.  With the new capacity, a multi-node (144) Isilon cluster using 108NL nodes can support 15PB of file data in a single file system.
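A quick check on where that 15PB number plausibly comes from (assuming, as the 108NL name suggests, 36 3TB drives or 108TB raw per node):

```python
# Where the 15PB single-file-system figure plausibly comes from.
# Assumption: the "108NL" name reflects 36 x 3TB drives = 108TB raw per node.
NODES = 144
TB_PER_NODE = 36 * 3      # 108 TB raw per 108NL node

raw_pb = NODES * TB_PER_NODE / 1000
print(f"~{raw_pb:.1f} PB raw across the cluster")   # ~15.6 PB
```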

Some of the booths along the walk to the solutions pavilion highlight EMC innovation winners. Two that caught my interest included:

  • Constellation computing – not quite sure how to define this, but it’s distributed computing along with distributed data creation.  The intent is to move the data processing to the source of the data creation and keep the data there.  This might be very useful for applications that have many data sources and where data processing capabilities can be moved out to the nodes where the data was created. Seems highly scalable but may depend on the ability to carve up the processing to work on the local data. I can see where compression, encryption, indexing and some statistical summarization could be done at the data creation site before it’s sent elsewhere. Sort of like a sensor mesh with processing nodes attached to the sensors, configured as a sensor-processing grid.  Only one thing concerned me: there didn’t seem to be any central repository or control to this computing environment.  Probably what they intended, as the distributed solution is more adaptable and more scalable than a centrally controlled environment.
  • Developing world healthcare cloud – seemed to be all about delivering healthcare to the bottom of the pyramid.  They won EMC’s social innovation award and are working with a group in Rwanda to try to provide better healthcare to remote villages.  It’s built around OpenMRS as a backend medical record archive, hosted on EMC DC-powered Iomega NAS storage, and uses Google’s OpenDataKit to work with the data on mobile and laptop devices.  They showed a mobile phone which could be used to create, record and retrieve healthcare information (OpenMRS records) remotely and upload it later when in range of a cell tower.  The solution also supports downloading a portion of the medical center’s health record database (e.g., a “cohort” slice, think a village’s healthcare records) onto a laptop, usable offline by a healthcare provider to update and record patient health changes onsite and remotely.  Pulling all the technology together and delivering this as an application stack usable on mobile and laptop devices with minimal IT sophistication, storage and remote/mobile access are where the challenges lie.

Went to Sanjay’s (EMC’s CIO) keynote on EMC IT’s journey to IT-as-a-Service. As you can imagine, it makes extensive use of VMware’s vSphere, vCloud, and vShield capabilities, primarily in a private cloud infrastructure, but they seem agnostic to a build-it or buy-it approach. EMC is about 75% virtualized today and is starting to see significant and tangible OpEx and energy savings. They designed their North Carolina data center around the vCloud architecture and are now offering business users self-service portals to provision VMs and business services…

Only caught the first section of BJ’s (President of BRS) keynote, but he said recent analyst data (think IDC?) showed that EMC was the overall leader (>64% market share) in purpose-built backup appliances (Data Domain, Disk Library, Avamar data stores, etc.).  Too bad I had to step out, but he looked like he was on a roll.

Comments?

EMCWorld news Day1 1st half

EMC World keynote stage, storage, vblocks, and cloud...

EMC announced today a couple of new twists on the flash/SSD storage end of the product spectrum.  Specifically,

  • They now support all-flash/no-disk storage systems. Apparently they have been getting requests to eliminate disk storage altogether. Probably government IT, but maybe some high-end enterprise customers with low-power, high-performance requirements.
  • They are going to roll out enterprise MLC flash.  It’s unclear when it will be released, but it’s coming soon: a different price curve, different longevity (maybe), but it brings down the cost of flash by ~2X.
  • EMC is going to start selling server-side flash, using FAST-like caching algorithms to knit the storage to the server-side flash.  Unclear what server flash they will be using, but it sounds a lot like a Fusion-io type of product.  How well the server cache and the storage cache talk to each other is another matter.  Chuck Hollis said EMC decided to redraw the boundary between storage and server, and now there is a dotted line that spans the SAN/NAS boundary and carves out a piece of the server for on-server caching.

Interesting to say the least.  How well it’s tied to the rest of the FAST suite is critical. What happens when one or the other loses power? As flash is non-volatile, no data would be lost, but the currency of the data for shared storage may be another question.  Also, having multiple servers in the environment may require cache coherence across the servers and storage participating in this data network!?
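To see why cache coherence matters here, consider a minimal, purely illustrative sketch (not EMC’s design) of two servers each holding a local flash cache in front of the same shared LUN:

```python
# Illustrative only -- not EMC's design. Two servers cache blocks from the same
# shared LUN; without cross-server invalidation, server B keeps serving stale
# data after server A writes through its own cache.
shared_lun = {42: "old data"}      # stands in for the shared array

class ServerCache:
    def __init__(self):
        self.cache = {}            # local server-side flash cache (block -> data)

    def read(self, block):
        if block not in self.cache:                   # miss -> fetch from the array
            self.cache[block] = shared_lun.get(block)
        return self.cache[block]

    def write(self, block, data):
        self.cache[block] = data                      # update local cache
        shared_lun[block] = data                      # write through to the array

a, b = ServerCache(), ServerCache()
b.read(42)                   # B caches "old data" for block 42
a.write(42, "new data")      # A updates block 42 through its own cache
print(b.read(42))            # prints "old data" -- B's copy is stale without invalidation
```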

Some teaser announcements from Joe’s keynote:

  • VPLEX asynchronous, active-active support, giving two data centers over 1700km apart (Pittsburgh to Dallas) access to the same data.
  • New Isilon record scalability and capacity with the NL appliance, which can now support a 15PB file system with trillions of files in it.  One gene sequencing customer says a typical assay generates 500M objects/files…
  • Embracing Hadoop open source products, so that EMC will support a Hadoop distro in an appliance or software-only solution.

Pat G also showed an EMC Greenplum appliance searching an 8B-row database to find out how many products had been shipped to a specific zip code…