MCS, UltraDIMMs and memory IO, the new path ahead – part 2

In part 1 (see previous post here), we discussed the underlying technology for SanDisk's UltraDIMMs, based on Diablo Technologies' MCS hardware and software. IBM will be shipping UltraDIMMs in their high-end servers later this year as their new eXFlash.

In this segment, we will discuss what SanDisk has put on top of Diablo Technologies' MCS to supply SSD storage.

SanDisk UltraDIMM SSD storage

In the UltraDIMM package, SanDisk supports 200 or 400GB of 19nm MLC NAND SSD storage that is accessed internally via SATA [corrected after this went out, Ed.], but the main interface to the UltraDIMMs is 1600MHz DDR3. As each UltraDIMM card plugs into any DDR3 memory slot, you can potentially support multiple cards in a single server. I believed the maximum was 7 UltraDIMMs and wasn't sure if IBM supported that many [corrected after this went out, Ed.]; it depends on the number of memory slots in your server. IBM, on their x3850 and x3950, can support up to 32 UltraDIMMs per server.

SanDisk uses their Guardian Technology to enhance NAND endurance beyond what's possible with native NAND controllers. One of the things that Guardian Technology does is vary the voltage used to program the NAND bits over the life of the bit cells/pages. So early on, when the cell is fresh, they can use less voltage, and as it ages they increase the voltage to ensure that the bits are properly programmed. Other NAND controllers, which use the same voltage across the whole NAND lifetime, unduly stress the NAND bits early on; later, as the bits age, they can no longer be programmed properly and must be flagged as bad. The NAND chips/bits are characterized so that SanDisk's Guardian Technology can use an optimum voltage curve over the chip's lifetime.
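
To make the idea concrete, here's a minimal, purely illustrative Python sketch of a controller stepping up programming voltage as cells accumulate program/erase cycles. The cycle thresholds and voltages are invented for the example and are not SanDisk's actual Guardian parameters.

```python
# Purely illustrative: a hypothetical voltage schedule for programming NAND
# cells as they accumulate program/erase (P/E) cycles. Real controllers use
# per-chip characterization data; these numbers are made up for the sketch.
PROGRAM_VOLTAGE_SCHEDULE = [
    (1_000, 15.0),   # fresh cells: lower voltage is enough, less stress
    (5_000, 16.5),   # mid-life: step the voltage up
    (10_000, 18.0),  # late life: higher voltage needed to program reliably
]

def program_voltage(pe_cycles: int) -> float:
    """Return the programming voltage to use for a cell with this many P/E cycles."""
    for max_cycles, volts in PROGRAM_VOLTAGE_SCHEDULE:
        if pe_cycles <= max_cycles:
            return volts
    return PROGRAM_VOLTAGE_SCHEDULE[-1][1]  # beyond characterized life: use the max

if __name__ == "__main__":
    for cycles in (100, 3_000, 9_000, 20_000):
        print(f"{cycles:>6} P/E cycles -> {program_voltage(cycles):.1f} V")
```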

The UltraDIMMs also have power-loss protection. This means that any write to UltraDIMM memory that has been acknowledged to the server is guaranteed sufficient power to make it all the way to the SSD storage.

Another thing that the MCS memory interface brings to the picture is Error Correction Circuitry (ECC). Data written to UltraDIMMs has ECC protection throughout the data path, from the server DRAM memory, through the DIMM socket, all the way to the SSD flash.

As discussed extensively in Part 1 of this post, access times for UltraDIMM storage are on the order of 7µsec, which is ~7X faster than best-of-class PCIe flash storage, and a single UltraDIMM card is capable of sustaining 20GB/second of data throughput. I know of enterprise-class storage systems that can't do half that throughput.

On the other hand, one problem with UltraDIMMs is that they are not hot swappable. This is primarily a memory interface problem and not an UltraDIMM issue, but nonetheless you can't swap an UltraDIMM module until the server is powered down. And who would want to do such a thing while the server is powered up anyway?

SanDisk's long history in NAND

As you can see from the three photos at right, SanDisk seems to have been involved in flash/NAND technology innovation since the early 1990s. At the time, NOR and NAND were competing for almost the same market.

But sometime in the mid-to-late 1990s, NAND found a niche in consumer cameras and never looked back. Not sure where the NOR market is today, but it's a drop in the bucket compared to the NAND market.

The UltraDIMM is just the latest platform to support NAND storage access. It happens to be one with blazingly fast access times and high IO parallelism, but in the end it just represents another way to obtain the benefits of NAND for IT customers.

Also, SanDisk's commercial NAND (memory card) business seems to be very healthy. What with higher resolution photos/video/audio coming online over the next decade or so, it doesn't seem to be going away anytime soon.

SanDisk is in a new joint venture (JV) with Toshiba to produce 3D NAND flash. But in the meantime they are still using 2D flash for their current SSD storage. Toshiba and SanDisk, in their current JV, together manufacture about half the NAND bits in the world today.

The rest of SanDisk's NAND business also seems to be doing well. And the aforementioned JV with Toshiba on 3D NAND looks positioned to take all of this NAND to the next level of density as well, which should make all of us happy.

SanDisk acquiring FusionIO

SanDisk was in the news lately, as they recently announced an agreement to acquire FusionIO, a prominent and early PCIe flash supplier that in recent years has broadened its portfolio to include enterprise storage with its acquisition of NexGen Storage (renamed ioControl).

When FusionIO IPO'd, the stock sold at ~$19/share; SanDisk is purchasing the company in an all-cash deal for $11.25/share, roughly a 40% reduction in share price in 3 years (June '11 IPO) – ouch. At IPO the company was valued at ~$2B (some pundits said ~$1.5B, so there's some debate on the original valuation). SanDisk is buying the company for ~$1.1B in cash. Any way you look at it, they paid significantly less than what the company was worth at IPO. Granted, it was valued at 41X earnings then, and its recent stock price of $11.59 represents a 3.3 P/E (ttm).

Not exactly certain what happened. Analysts seem to indicate that Apple and Facebook, FusionIO's biggest customers, were buying less FusionIO product. I also happen to think that the PCIe flash space has gotten pretty crowded over the last 3 years, with entrants from Micron Technology, Intel, LSI, Virident/Western Digital, and others.

In addition, for PCIe flash to broaden its market, there's a serious need to surround it with sophisticated caching software to enable a more general-purpose IO solution (see PernixData, Proximal Data, and others). These general-purpose caching solutions have finally reached high levels of sophistication and are just now becoming more widely available.

~~~~

Originally, part 3 of this series was going to be on IBM's release of the UltraDIMM technology as their new eXFlash. However, I am somewhat surprised not to see other vendors taking up the MCS/UltraDIMM technology, but IBM may have limited exclusivity to it.

The only other thing that's this interesting happening in solid state storage is HP's Memristor Machine, which is still a ways off.

Nonetheless, a new, much faster, DIMM-based SSD is hitting the market, and if history is any indication, it won't be long before the data storage world sits up and takes notice.

Comments?

Veeam’s upcoming V8 virtues

[Not] Vamoosing VMworld

We were at Storage Field Day 5 (SFD5, see the videos here) last month and had a briefing on Veeam’s upcoming V8 release.

They also told us (news to me) that they are leaving VMworld [I sit corrected; I have been informed after this went to press that Veeam is not leaving VMworld 2014 and never said anything about it at the session – my mistake, I take full responsibility, sorry for any confusion] (sigh, now who's going to have THE after-conference KILLER PARTY at VMworld) and moving to [but they did say they are definitely starting up] their own VeeamON conference at The Cosmopolitan in Las Vegas on October 6, 7 & 8 this year. If their VMworld parties are any indication, the conference at the Cosmo should be a fun and rewarding time for all. Pre-registration is open and they have a call out for papers.

Doug Hazelman (@VMDoug), Rick Vanover (@RickVanover) and Luca Dell'Oca (@dellock6) all presented, although Luca's session was under strict NDA, to be revealed later – I think sometime this summer.

Doug mentioned that after 6 years, Veeam now has over 100,000 customers worldwide. One of their more popular early innovations was the ability to run a VM directly off of a backup, and sometime over the past couple of years they have moved from a VMware-only backup & replication solution to also supporting Microsoft Hyper-V (more news to me).

V8’s virtues

Veeam V8 will add some interesting capabilities to the Veeam product solutions:

  • (VMware only) Built-in backups from storage snapshots – (Enterprise Plus edition only) Backup from VMware snapshots can sometimes impact app performance, especially when it comes time to commit changes. But starting with V7, Veeam offers backup utilizing VMware's Change Block Tracking (CBT) and taking backups directly from storage snapshots for HP 3PAR StoreServ and HP (Lefthand) StoreVirtual/StoreVirtual VSA, and in the soon-to-be-available V8, NetApp FAS (Data ONTAP 8.1 or above, 7- or cluster-mode, clones too) storage systems. First, Veeam does its application-level processing (under Windows Server, VSS operations); when that completes, it tells VMware to take a (VMware) snapshot; when that completes, it tells the storage to take a (storage) snapshot; and when that completes, it releases the VMware snapshot (see the sketch after this list). All this allows them to utilize VMware CBT as well as storage snapshots, which makes it up to 20 times faster than normal VMware snapshot backups. This way they can back up directly from the storage snapshot using the Veeam proxy. And because the VMware snapshot is so short-lived, there is little overhead for committing any changes. There is also no need to use a proxy ESX server to do this, i.e., promote the storage snapshot to a LUN, add it to an ESX host, resignature, register the VM, and do all the backups – which, of course, destroys CBT. This works for FC, iSCSI and NFS data stores. With NetApp storage you can also take the (VSS) application-consistent snapshot and copy it to SnapVault.
  • Veeam Explorer (recovery) for storage snapshots – (Free backup edition) Recovery from (HP in V7 & NetApp in V8) storage snapshots is yet another feature, providing item-level (e.g., emails, contacts, email folders for Exchange), granular (VM-level or file-level) or full (volume) recovery from storage-based snapshots, regardless of how those storage snapshots were created.
  • Veeam Explorer for SQL Server (V8 only) – (unsure what license is required) Similar to the Explorer for snapshots discussed above, this would allow a Veeam admin to do item-level recovery for an SQL database. This also includes recovery from Veeam backup repositories as well as storage snapshots. It means that you could restore a ROW of an SQL table, an SQL TABLE, or a whole SQL database. Now, DBAs have always had these sorts of abilities, which required using log services. But allowing a Veeam admin to do these sorts of activities seems like putting a gun in the hands of a child (or maybe a bazooka in the hands of an untrained civilian).
  • Veeam Explorer for Active Directory (V8 only) – (unsure what license is required) You've seen what's available above; just consider these same capabilities applied to Active Directory. This means you can restore a password hash, user, group or organizational unit (OU). I don't know about you, but this seems more akin to a howitzer in the hands of a civilian.
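
Below is a minimal, hypothetical Python sketch of the backup-from-storage-snapshot sequence described in the first bullet above. None of the step names are real Veeam, VMware or array APIs; it just prints the workflow order as described.

```python
# Hypothetical sketch of the backup-from-storage-snapshot sequence described
# above. None of these steps call real Veeam/VMware/array APIs; each one just
# prints what would happen, in the order the workflow runs.

def step(msg: str) -> None:
    print(f"-> {msg}")

def backup_vm_from_storage_snapshot(vm_name: str) -> None:
    step(f"quiesce applications in {vm_name} (e.g., VSS freeze on Windows)")
    step("take a short-lived VMware snapshot of the VM")
    step("take a storage snapshot of the datastore volume (3PAR/StoreVirtual/NetApp)")
    step("release the VMware snapshot - little change to commit, so low overhead")
    step("read changed blocks (per VMware CBT) directly from the storage snapshot "
         "via the backup proxy - no need to promote the snapshot to an ESX host")
    step("write the backup to the repository, then delete the storage snapshot")

if __name__ == "__main__":
    backup_vm_from_storage_snapshot("exchange-vm01")
```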

They showed an example of a competitive situation running V8 (in beta?) with NetApp backups using snapshots versus some unnamed competitor. They were able to complete a full backup in one quarter the time of the competition (2hrs vs. 8hrs) and completed incremental backups in 35min vs. 2hrs for the competition.

“Thar be dragons there …”

Ok, maybe I am a little more paranoid than the average IT guy/gal. But in my (old-world greybeard's) view, SQL databases belong in the realm of DBAs and Active Directory databases belong to domain controller admins. Messing around with production versions of SQL DBs or AD DBs seems hazardous to a data center's health. We're not just talking files anymore here, guys.

In Veeam's defense, these new Explorer recovery tools are probably only going to be used to do something that needs to be done right away, to get things operating again, and would not be used unless there's a real need/emergency. Otherwise, let the DBA and security admins do it with their log recovery tools. And another thing: they have had similar capabilities for Exchange emails, folders, contacts, etc. and no one's shot their foot off yet, so why the concern?

Nonetheless, I feel strongly that these tools ought to be placed under lock and key and the key put in a safe with the combination under a glass case labeled IN CASE OF EMERGENCY, BREAK GLASS.

Comments?

Peak server, the cloud & NetApp storage for AWS

I was at a conference a month or so ago, and one speaker mentioned that the number of x86 servers being sold has peaked and is dropping. I can imagine a number of reasons for this, the main one being server virtualization. But this speaker had a different view: the cloud.

Peak server is here.

He said that three companies were purchasing over half the x86 servers these days. I feel that there should be at least four: Google, Facebook, Amazon & Microsoft, and maybe five if you add in Apple.

Something has happened over the past year or so. Enterprise IT has continued along its merry way but the adoption of cloud services is starting to take off.

I have seen this before: with mainframes, then minicomputers, and now client-server. Minicomputers came out and were so easy to use and to develop/deploy applications on that people stopped creating new apps on the mainframe. Mainframes never died out, and probably have never stopped shipping more MIPS every year. But the mainframe share of WW installed MIPS has been shrinking for decades and has never recovered.

Ultimately, the proprietary minicomputer was just a passing fad and only lasted about 25 years or so. It was wounded by the PC, and then killed off by proprietary Unix workstations.

Then it happened again; the new upstarts this time were Windows Server and Linux. Once again it was just easier to build apps on these new and cheaper servers than on any of the older Unix servers. Of course there's still plenty of business in proprietary Unix servers, but again I would venture to say that their share of WW installed MIPS has been shrinking for a long time.

Nowadays, the cloud is mortally wounding the server market. Server virtualization is helping a lot, but it's also enabling the cloud to eliminate many physical server sales. This is because new applications and new IT environments are being ported/moved/deployed onto the cloud.

Peak server means less enterprise networking, storage and server hardware

In this new cloud world, customers need fewer servers, less networking and less enterprise-class storage. Yes, not every application is suitable for cloud deployment, but that's why there are still mainframes, still Unix servers, and a continuing need for standalone, physical or virtual x86 servers in the enterprise. But their share of MIPS will start shrinking soon, if it hasn't already.

Ok, so the enterprise data center share of MIPS will start shrinking vis-à-vis cloud data centers. But what happens to networking and storage? My view is that networking becomes software defined, with a component that operates on special-purpose hardware. That hardware will increase in shipments, but the more complex, enterprise-class networking equipment will flatline and never see substantial growth again.

And up until yesterday I felt much the same about enterprise-class storage: software defined storage is the future, with DAS and SSDs for capacity and the smarts, if any, in software. Today, most of the cloud and many service providers have been moving off enterprise-class storage and onto DAS.

NetApp’s new enterprise storage in AWS

But yesterday I heard about NetApp private storage for the cloud. This is a configuration of NetApp storage installed in a colo facility with a "direct connection" to Amazon's compute cloud. In this way, enterprise customers can maintain data stewardship/ownership/governance over their data while at the same time deploying applications onto the AWS compute cloud.

This seems to address one of the sticking points for enterprise customers adopting the cloud. With (data) storage owned lock, stock & barrel by the enterprise, it seems much easier and less risky to deploy new and old applications to the cloud.

Whether this pans out and can provide enough value to cover the added expense of the enterprise class storage, only the market can decide. But this is the first time I can remember, where any vendor has articulated a role for enterprise class storage in the cloud. Let’s hope it works.

Image: PDP8/s by ajmexico

EU vs. US on data protection

Prison Planet by AZRainman (cc) (from Flickr)

Last year I was at SNW and talking to a storage admin from a large, international company who mentioned how data protection policies in the EU were forcing them to limit where data gets copied and replicated. Some of their problem was due to different countries having dissimilar legislation regarding data privacy and protection.

However, their real concern was how to effectively and automatically sanitize this information. It seems they would like to analyze it offshore but still adhere to each EU country's data protection legislation.

Recently, there have been more discussions in the EU about data protection requirements (see the NY Times post Consumer Data Protection Laws, an Ocean Apart and the Ars Technica post Proposed EU data protection reform could start a "trade war"). It seems EU proposals are becoming even more at odds with the current US data protection environment.

Compartmentalized US data privacy

In the US, data protection seems much more compartmentalized and decentralized. We have data protection for health care information, video rentals, credit reports, etc., each with its own provisions and protection regime.

This allows companies in different markets pretty much internal control over what they do with customer information but tightly regulates what happens with the data as it moves outside that environment.

Within such a data protection regime, an internet company can gather all the information it wants on a person's interaction with its web services and thereby better target services and advertising for the user.

EU’s broader data protection regime

In contrast, EU countries have a much broader regime in place that covers any and all personal information. The EU ultimately wants to control how much information a company can gather about what a person does online and to provide an expunge-on-demand capability directly to the individual.

The EU's proposed new rules would standardize data privacy rules across the 27-country region but would also strengthen them in the process. Doing so would make it much harder to personalize services, and the presumption is that internet companies would not make as much revenue in the EU because of this.

Although US companies and government officials have been lobbying heavily to change the new proposals, it appears to be backfiring and causing a backlash. The EU considers the US position to be biased toward commerce and commercial interests, whereas the US considers the EU position to be biased toward the individual.

US data privacy is evolving

On this side of the Atlantic, the privacy tide may be rising as well. Recently, the President proposed a "Consumer Privacy Bill of Rights" which would enshrine some of the same privacy rights present in the EU proposals. For instance, such a regime would include rights for individuals to see any and all information companies have on them, rights to correct such information, and rights to limit how much information companies collect on individuals.

This all sounds a lot closer to what the EU currently has and where they seem to want to go.

However, how this plays out in Congress and what ultimately emerges as data protection and privacy legislation is another matter. But for the moment it seems that governments on both sides of the Atlantic are pushing for more data protection not less.

Comments?

 

Dell Storage Forum 2012 – day 2

At the second day of Dell Storage Forum in Boston, they announced:

  • New FluidFS (Exanet) FS8600 front-end NAS gateway for Dell Compellent storage. The new gateway can be scaled from 1 to 4 dual-controller configurations and can support a single file system/namespace of up to 1PB in size. The FS8600 is available with 1GbE or 10GbE options and supports 8Gbps FC attachment to backend storage.
  • New Dell Compellent SC8000 controllers based on Dell's 2U, 12th generation server hardware, which can now be cooled with ambient air (115F?) and consume less power than the previous Series 40 whitebox server controllers. The new hardware also comes with dual 6-core processors and supports 16 to 64GB of DRAM per controller, or up to 128GB with dual controllers. The new controllers, which GA this month, support PCIe slots for backend 6Gbps SAS and frontend connectivity of 1GbE or 10GbE iSCSI, 10GbE FCoE or 8Gbps FC, with 16Gbps FC coming in 2H2012.
  • New Dell Compellent SC200 and SC220 drive enclosures, in either a 2U, 24-drive SFF or a 2U, 12-drive LFF configuration, supporting 6Gbps SAS connectivity.
  • New Dell Compellent SC6.0 operating software supporting a 64-bit O/S for larger memory and dual/multi-core processing.
  • New FluidFS FS7600 (1GbE)/FS7610 (10GbE) 12th generation server front-end NAS gateways for Dell EqualLogic storage, which support asynchronous replication at the virtual file system level. The new gateways also support 10GbE iSCSI and can be scaled up to 507TB in a single namespace.
  • New FluidFS NX3600 (1GbE)/NX3610 (10GbE) 12th generation server front-end NAS gateways for PowerVault storage systems, which can support up to 576TB of raw capacity for a single gateway or scale to two gateways for up to 1PB of raw storage in a single namespace/file system.
  • AppAssure 5, which includes better performance based on a new backend object store to protect even larger datasets. At the moment AppAssure is a Windows-only solution, but with block deduplication/compression and change block tracking it is already WAN optimized. Dell announced Linux support will be available later this year.

Probably more interesting was the talk about, and demo of, a prototype from their RNA Networks acquisition, which supports cache-coherent PCIe SSD cards in Dell servers. The new capability is still on the drawing board but is intended to connect to Dell Compellent storage and move tier 1 out to the server. Lots more to come on this. They call it Project Hermes, after the Greek messenger god. Not sure, but something about having lightning bolts on his shoes comes to mind…

Comments?

 


NSA’s huge (YBs) new data center to turn on in 2013

 


Ran across a story in Wired today about the new NSA Utah data center, which is scheduled to be operational in September of 2013.

This new data center is intended to house copies of all communications intercepted by the NSA. We have talked about this data center before and how it's going to store YBs of data (see my Yottabytes by 2015?! post).

One major problem with having a YB of communications intercepts is that you need to have multiple copies of it for protection in case of human or technical error.

Apparently, the NSA has a secondary data center in San Antonio to back up its Utah facility. That's one copy. We also wrote another post on protecting and indexing all this data (see my Protecting the Yottabyte Archive post).

NSA data centers

The Utah facility has enough fuel onsite to power and cool the data center for 3 days. They have a special power station to supply the 65MW of power needed. They have two side-by-side raised-floor halls for servers, storage and switches, each with 25K square feet of floor space. That doesn't include another 900K square feet of technical support and office space to secure and manage the data center.

In order to help collect and temporarily store all this information, the agency has apparently been undergoing a data center building boom, renovating and expanding its data centers throughout the country. The article discusses some of the other NSA information collection points/data centers, in Texas, Colorado, Georgia, Hawaii, Tennessee, and of course, Maryland.

New NSA super computers

In addition to the communication intercept storage, the article also talks about a special-purpose decryption supercomputer that the NSA has developed over the past decade, which will also be housed in the Utah data center. The NSA seems to have created a machine that dwarfs today's best Cray XT5 supercomputer clusters, which operate at 1.75 petaflops.

I suppose, what with all the encrypted traffic now being generated, the NSA would need some way to decrypt this information in order to understand it. I was under the impression that they were interested in the non-encrypted communications, but I guess the NSA is even more interested in any encrypted traffic.

Decrypting old data

With all this data being stored, the thought is that data now encrypted with unbreakable AES-128, -192 or -256 encryption will eventually become decipherable. At that time, foreign government and other secret communications will all be readable.

By storing these secret communications now, they can scan this treasure trove for patterns; once found, such patterns will ultimately help in decrypting the data. Now we know why they need YBs of storage.

So the NSA will at least know what was going on in the past. However, how soon they can move up to real-time decryption of today's communications is another question. But knowing the past may help in understanding what's going on today.

~~~~

So be careful what you say today, even if it's encrypted. Someone (the NSA and its peers around the world) will probably be listening in and, someday soon, will understand every word that's been said.

Comments?

IBM’s 120PB storage system

Susitna Glacier, Alaska by NASA Goddard Photo and Video (cc) (from Flickr)

Talk about big data: Technology Review reported this week that IBM is building a 120PB storage system for some unnamed customer. Details are sketchy, and I cannot seem to find any announcement of this on IBM.com.

Hardware

It appears that the system uses 200K disk drives to support the 120PB of storage. The disk drives are packed in a new, wider rack and are water cooled. According to the news report, the new, wider drive trays hold more drives than current drive trays available on the market.

For instance, HP has a hot-pluggable, 100-SFF (2.5″ small form factor) disk enclosure that sits in 3U of standard rack space. 200K SFF disks would take up about 154 full racks, not counting the interconnect switching that would be required. It's unclear whether water cooling would increase the density much, but I suppose a wider tray with special cooling might get you more drives per floor tile.

There was no mention of interconnect, but today's drives use either SAS or SATA. SAS interconnects for 200K drives would require many separate SAS busses. With a SAS expander addressing 255 drives or other expanders, one would need at least 4 SAS busses, but this would put ~64K drives per bus and would not perform well. Something more like 64-128 drives per bus would perform much better; each drive would also need dual pathing, and if we use 100 drives per SAS string, that's 2,000 SAS drive strings, or at least 4,000 SAS busses (for dual-port access to the drives).
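
Here's the back-of-the-envelope SAS arithmetic from the paragraph above as a quick Python sketch; the 100-drives-per-string and dual-port figures are the assumptions stated there, not anything IBM has published.

```python
# Back-of-envelope SAS math from the paragraph above (assumed figures).
total_drives = 200_000
drives_per_string = 100          # assumed for reasonable performance
ports_per_drive = 2              # dual-ported drives for redundant pathing

sas_strings = total_drives // drives_per_string        # 2,000 drive strings
sas_busses = sas_strings * ports_per_drive             # ~4,000 busses (dual path)
print(f"{sas_strings:,} SAS strings, ~{sas_busses:,} SAS busses")
```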

The report mentioned GPFS as the underlying software which supports three cluster types today:

  • Shared storage cluster – where GPFS front-end nodes access shared storage across the backend. This is generally SAN storage system(s). But given the requirements for high density, it doesn't seem likely that the 120PB storage system uses SAN storage in the backend.
  • Network-based cluster – here the GPFS front-end nodes talk over a LAN to a cluster of NSD (Network Shared Disk) servers, which can have access to all or some of the storage. My guess is this is what will be used in the 120PB storage system.
  • Shared network-based clusters – this looks just like a bunch of NSD servers but provides access across multiple NSD clusters.

Given the above, ~100 drives per NSD server means another 1U per 100 drives, or (given HP drive density) 4U per 100 drives; that's 1,000 drives and 10 NSD servers per 40U rack (not counting switching). At this density it takes ~200 racks for 120PB of raw storage plus NSD nodes, or ~2,000 NSD nodes.

It's unclear how many GPFS front-end nodes would be needed on top of this, but even at 1 GPFS front-end node for every 5 NSD nodes, we are talking another 400 GPFS front-end nodes, and at 1U per server, another 10 racks or so (not counting switching).

If my calculations are correct, we are talking over 210 racks, with switching thrown in, to support the storage. According to IBM's discussion on the storage challenges for petascale systems, it probably provides ~6TB/sec of data transfer, which should be easy with 200K disks but may require even more SAS busses (maybe ~10K vs. the 2K discussed above).
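
And here's a similar sketch of the rack and node estimates, using the densities assumed above (a 100-drive/3U enclosure, 1U per NSD server, 40U racks and 1 GPFS front-end node per 5 NSD nodes). Again, these are my assumptions, not IBM's numbers.

```python
# Rough rack/node estimate using the assumptions above (not IBM's numbers).
total_drives = 200_000
drives_per_enclosure, enclosure_u = 100, 3    # HP-style 100 SFF drives in 3U
nsd_u_per_enclosure = 1                       # 1U NSD server per 100 drives
rack_u = 40

u_per_100_drives = enclosure_u + nsd_u_per_enclosure                   # 4U per 100 drives
drives_per_rack = (rack_u // u_per_100_drives) * drives_per_enclosure  # 1,000 drives/rack
racks = total_drives // drives_per_rack                                # ~200 racks
nsd_nodes = total_drives // drives_per_enclosure                       # ~2,000 NSD nodes
gpfs_nodes = nsd_nodes // 5                                            # ~400 GPFS front ends
print(f"~{racks} racks, {nsd_nodes:,} NSD nodes, {gpfs_nodes} GPFS front-end nodes")
```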

Software

IBM GPFS is used behind the scenes in IBM's commercial SONAS storage system but has been around as a cluster file system designed for HPC environments for over 15 years now.

Given this many disk drives, something needs to be done about protecting against drive failure. IBM has been talking about declustered RAID algorithms for their next-generation HPC storage system, which spread the parity across more disks and, as such, speed up rebuild time at the cost of reduced effective capacity. There was no mention of effective capacity in the report, but this would be a reasonable tradeoff. A 200K-drive storage system should have a drive failure every 10 hours, on average (assuming a 2-million-hour MTBF). Let's hope they get drive rebuild time down to well below that.

The system is expected to hold around a trillion files. Not sure, but even at 1024 bytes of metadata per file, this number of files would chew up ~1PB of metadata storage space.
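
The failure-interval and metadata numbers work out the same way; here's a quick sketch using the 2-million-hour MTBF and 1KB-per-file assumptions from the two paragraphs above.

```python
# Failure-interval and metadata estimates from the assumptions above.
total_drives = 200_000
drive_mtbf_hours = 2_000_000                   # assumed per-drive MTBF
hours_between_failures = drive_mtbf_hours / total_drives   # ~10 hours

files = 1_000_000_000_000                      # ~1 trillion files
metadata_bytes_per_file = 1024                 # assumed metadata per file
metadata_pb = files * metadata_bytes_per_file / 1e15       # ~1 PB (decimal)

print(f"a drive failure roughly every {hours_between_failures:.0f} hours")
print(f"~{metadata_pb:.1f} PB of metadata")
```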

GPFS provides ILM (information life cycle management, or data placement based on information attributes) using automated policies and supports external storage pools outside the GPFS cluster storage.  ILM within the GPFS cluster supports file placement across different tiers of storage.

All the discussion up to now revolved around homogeneous backend storage, but it's quite possible that multiple storage tiers could also be used. For example, a high-density but slower storage tier could be combined with a low-density but faster storage tier to provide a more cost-effective storage system. Although it's unclear whether the application (real-world modeling) could readily utilize this sort of storage architecture, or whether they would care about system cost.

Nonetheless, presumably an external storage pool would be a useful adjunct to any 120PB storage system for HPC applications.

Can it be done?

Let's see: 400 GPFS nodes, 2,000 NSD nodes, and 200K drives. Seems like the hardware would be readily doable (not sure why they needed water cooling, but hopefully they obtained better drive density that way).

Luckily, GPFS supports InfiniBand, which can support 10,000 nodes within a single subnet. Thus an InfiniBand interconnect between the GPFS and NSD nodes could easily support a 2,400-node cluster.

The only real question is whether a GPFS software system can handle 2,000 NSD nodes and 400 GPFS nodes with a trillion files over 120PB of raw storage.

As a comparison, consider some recent examples of scale-out NAS systems, such as Isilon clusters and IBM SONAS systems.

It would seem that scaling to 20X a current Isilon cluster, or even 10X a currently supported SONAS system, would take some software effort to get working, but it seems entirely within reason.

On the other hand, Yahoo supports a 4,000-node Hadoop cluster that seems to work just fine. So from a feasibility perspective, a 2,500-node GPFS-NSD system seems like a walk in the park compared to Hadoop.

Of course, IBM Almaden is working on a project to support Hadoop over GPFS, which might not be optimal for real-world modeling but would nonetheless support the node count being talked about here.

——

I wish there were some real technical information on the project out on the web, but I could not find any. Much of this is informed conjecture based on current GPFS system and storage hardware capabilities. But hopefully I haven't strayed too far.

Comments?

 

SNIA CDMI plugfest for cloud storage and cloud data services

Plug by Samuel M. Livingston (cc) (from Flickr)

Was invited to the SNIA tech center to witness the CDMI (Cloud Data Management Interface) plugfest that was going on down in Colorado Springs.

It was somewhat subdued. I always imagine racks of servers, with people crawling all over them with logic analyzers, laptops and other electronic probing equipment. But alas, software plugfests are generally just a bunch of people with laptops and ethernet/wifi connections, all sitting around a big conference table.

The team was working to define an errata sheet for CDMI v1.0 to be completed prior to ISO submission for official standardization.

What’s CDMI?

CDMI is an interface standard for clients talking to cloud storage servers and provides a standardized way to access all such services. With CDMI you can create a cloud storage container, define its attributes, and deposit and retrieve data objects within that container. Mezeo had announced support for CDMI v1.0 a couple of weeks ago at SNW in Santa Clara.

CDMI provides for attributes to be defined at the cloud storage server, container or data object level, such as: standard redundancy degree (number of mirrors, RAID protection), immediate redundancy (synchronous), infrastructure redundancy (across same storage or different storage), data dispersion (physical distance between replicas), geographical constraints (where it can be stored), retention hold (how soon it can be deleted/modified), encryption, data hashing (having the server provide a hash used to validate end-to-end data integrity), latency and throughput characteristics, sanitization level (secure erasure), RPO, and RTO.
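
To give a flavor of what the interface looks like on the wire, here's a rough, untested Python sketch of creating a CDMI container and requesting a couple of the attributes above via metadata. The server URL and credentials are made up, and the exact metadata key names should be checked against the CDMI v1.0 specification rather than taken from this sketch.

```python
# Rough sketch (not a tested client) of creating a CDMI container over HTTP
# with the Python "requests" library. The endpoint and credentials are
# hypothetical; metadata key names should be verified against the CDMI spec.
import json
import requests

CDMI_SERVER = "https://cloud.example.com"   # hypothetical CDMI server

headers = {
    "X-CDMI-Specification-Version": "1.0",
    "Content-Type": "application/cdmi-container",
    "Accept": "application/cdmi-container",
}
body = {
    "metadata": {
        "cdmi_data_redundancy": "3",            # ask for 3 copies of the data
        "cdmi_geographic_placement": ["US"],    # constrain where it may be stored
    }
}

resp = requests.put(f"{CDMI_SERVER}/my_container/",
                    headers=headers, data=json.dumps(body),
                    auth=("user", "password"))
print(resp.status_code, resp.json().get("metadata", {}))
```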

A CDMI client is free to implement compression and/or deduplication, as well as other storage efficiency characteristics, on top of CDMI server characteristics. There's probably something I am missing here, but it seems pretty complete at first glance.

SNIA has defined a reference implementation of a CDMI v1.0 server [and I think client] which can be downloaded from their CDMI website. [After filling out the "information on me" page, SNIA sent me an email with the download information, but I could only recognize the CDMI server in the download, not the client (although it could have been there). The CDMI v1.0 specification is freely available as well.] The reference implementation can be used to test your own CDMI clients if you wish. It is Java based and apparently runs on Linux systems, but shouldn't be too hard to run elsewhere (one CDMI server at the plugfest was running on a Mac laptop).

Plugfest participants

There were a number of people from both big and small organizations at SNIA's plugfest.

Mark Carlson from Oracle was there and seemed to be leading the activity. He said I was free to attend but that I couldn't say anything about what was and wasn't working. Didn't have the heart to tell him that I couldn't tell what was working or not from my limited time there. But everything seemed to be working just fine.

Carlson said that SNIA's CDMI reference implementations had been downloaded 164 times, with the majority of downloads coming from China, the USA, and India, in that order. But he said there were people in just about every geo looking at it. He also said this was the first annual CDMI plugfest, although they had CDMI v0.8 running at other shows (e.g., SNIA SDC) before.

David Slik, from NetApp's Vancouver Technology Center, was there showing off his demo CDMI Ajax client and laptop CDMI server. He was able to use the Ajax client to access all the CDMI capabilities of the cloud data object he was presenting and displayed the binary contents of an object. Then he showed me that the exact same data object (file) could be easily accessed by just typing the proper URL into any browser; it turned out the binary was a GIF file.
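
Slik's dual-access demo boils down to something like the following rough sketch: the same object URI can be read as a CDMI object (a JSON envelope with metadata) or fetched as its raw value, which is what a browser does. The URL is hypothetical and this hasn't been run against a real CDMI server.

```python
# Sketch of the dual-access idea: one object URI answered either as a CDMI
# object (JSON envelope with metadata) or as its raw value for a browser.
# The URL is hypothetical; untested against a real CDMI server.
import requests

url = "https://cloud.example.com/my_container/satellite.gif"

# 1) CDMI-style read: a JSON envelope including the object's metadata
cdmi = requests.get(url, headers={
    "X-CDMI-Specification-Version": "1.0",
    "Accept": "application/cdmi-object",
})
print(cdmi.json().get("objectName"), cdmi.json().get("metadata", {}))

# 2) Plain HTTP read: just the bytes, which is what a browser would render
raw = requests.get(url)
print(raw.headers.get("Content-Type"), len(raw.content), "bytes")
```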

The other thing that Slik showed me was a display of a cloud data object which was created via a "cron job" referencing a satellite image website and depositing the data directly into cloud storage, entirely at the server level. Slik said that CDMI also specifies a cloud-storage-to-cloud-storage protocol which could be used to move cloud data from one cloud storage provider to another without having to retrieve the data back to the user. Such a capability would be ideal for exporting user data from one cloud provider and importing it to another cloud storage provider over their high-speed backbone, rather than having to transmit the data to and from the user's client.

Slik was also instrumental in the SNIA XAM interface standard for archive storage. He said that CDMI is much more lightweight than XAM, as there is no requirement for a runtime library whatsoever and it depends only on HTTP standards as the underlying protocol. From his viewpoint, CDMI is almost XAM 2.0.

Gary Mazzaferro from AlloyCloud was talking as if CDMI would eventually take over not just cloud storage management but local data management as well. He called CDMI a strategic standard that could potentially be implemented in OSs, hypervisors and even embedded systems to provide a standardized interface for all data management – cloud or local storage. When I asked what happens to SMI-S in this future, he said they would co-exist as independent but cooperative management schemes for local storage.

Not sure how far this goes. I asked if he envisioned a bootable CDMI driver. He said yes, a BIOS CDMI driver is something that will come once CDMI is more widely adopted.

Other people I talked with at the plugfest consider CDMI the new web file services protocol, akin to NFS as the LAN file services protocol. In comparison, they see Amazon S3 as similar to CIFS (SMB1 & SMB2), in that it's a proprietary cloud storage protocol but will also be widely adopted and available.

There were a few people from startups at the plugfest, working on various client and server implementations. Not sure they wanted to be identified, or for me to mention what they were working on. Suffice it to say the potential for CDMI is pretty hot at the moment, as is cloud storage in general.

But what about cloud data consistency?

I had to ask how the CDMI standard deals with eventual consistency – it doesn't. The crowd chimed in: relaxed consistency is inherent in any distributed service. Any distributed service really has three characteristics, consistency, availability and partition tolerance (CAP), and you can elect to have any two of these but must give up the third. Sort of like the Heisenberg uncertainty principle applied to data.

They all said that consistency is mainly a CDMI client issue outside the purview of the standard, associated with server SLAs, replication characteristics and other data attributes.   As such, CDMI does not define any specification for eventual consistency.

Slik did say, though, that the standard guarantees that if you modify an object and then request a copy of it from the same location during the same internet session, you get the one you last modified. Seems like long odds in my experience. It's unclear how CDMI, with relaxed consistency, can ever take the place of primary storage in the data center, but maybe it's not intended to.

—–

Nonetheless, what I saw was impressive: cloud storage from multiple vendors, all being accessed from the same client, using the same protocol. And if that wasn't simple enough for you, just use your browser.

If CDMI becomes popular, it certainly has the potential to be the new web file system.

Comments?