Why Bus-Tech, why now – Mainframe/System z data growth

Z10 by Roberto Berlim (cc) (from Flickr)

Yesterday, EMC announced the purchase of Bus-Tech, their partner for mainframe (System z) attachment in the Disk Library for mainframe (DLm) product line.

The success of open systems products that attach to mainframes via Bus-Tech or competing technology is subject to some debate, but it's the only inexpensive way to bring such functionality to the mainframe.  The other, more expensive approach is to build System z attachment directly into the storage system's hardware and software.

Most mainframers know that FC and FICON (the System z storage interface) use the same underlying transport technology.  However, FICON has a few crucial differences when it comes to data integrity, device commands and other nuances, which make easy interoperability more of a challenge.

But that's just the underlying hardware.  Once you factor in disk layout (CKD), tape formats, and disk and tape channel commands (CCWs), System z interoperability becomes quite an undertaking.

Bus-Tech's virtual tape library maps mainframe tape/tape library commands and FICON protocols into standard FC and SCSI tape command sets. This way one could theoretically attach anybody's open systems tape or virtual tape system to System z.  Looking at Bus-Tech's partner list, quite a few organizations besides EMC, including Hitachi, NetApp, HP and others, were using them to do just that.
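To make the translation concrete, here's a minimal sketch (in Python) of the kind of command mapping such a gateway performs.  The CCW and SCSI command names are illustrative placeholders of mine, not anything from Bus-Tech documentation, but the flow is the same: a channel program arrives over FICON and gets re-issued as SCSI streaming-device commands over FC.

```python
# Illustrative sketch of FICON/CCW-to-SCSI translation in a mainframe-attach
# gateway. Command names below are examples, not taken from any Bus-Tech spec.

CCW_TO_SCSI = {
    "WRITE":              "WRITE(6)",         # write a tape block
    "READ":               "READ(6)",          # read a tape block
    "REWIND":             "REWIND",           # rewind to beginning of tape
    "WRITE_TAPE_MARK":    "WRITE FILEMARKS",  # end-of-file marker
    "FORWARD_SPACE_FILE": "SPACE",            # skip forward over filemarks
}

def translate_ccw(ccw_cmd, payload=None):
    """Map a mainframe channel command (CCW) onto a SCSI streaming command."""
    scsi_cmd = CCW_TO_SCSI.get(ccw_cmd)
    if scsi_cmd is None:
        raise ValueError("unsupported CCW: " + ccw_cmd)
    # A real gateway also converts record formats, block sizes, and
    # status/sense data back into channel status words -- omitted here.
    return scsi_cmd, payload

print(translate_ccw("WRITE_TAPE_MARK"))   # ('WRITE FILEMARKS', None)
```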

Surprise – Mainframe data growth

Why is there such high interest in mainframes? Mainframe data is big and growing, in some markets almost at open systems/distributed systems growth rates.  I always thought mainframes made better use of data storage, had better utilization, and controlled data growth better.  However, this can only delay growth; it can't stop it.

Although I have no hard numbers to back up mainframe data market size or growth rates, I do have anecdotal evidence.  I was talking with an admin at one big financial firm a while back and he casually mentioned they had 1.5PB of mainframe data storage under management!  I didn't think this was possible.  He replied that not only was it possible, he was certain they weren't the largest in their vertical or East coast area by any means.

Ok, so mainframe data is big and needs lots of storage, but this also means that mainframe backup needs storage as well.

Surprise 2 – dedupe works great on mainframes

Which brings us back to EMC DLm and its deduplication option.  Recently, EMC announced a deduplication storage target for disk library data as an alternative to their previous CLARiiON target.  This just happens to be a Data Domain 880 appliance behind a DLm engine.

Another surprise: data deduplication works great for mainframe backup data.  It turns out that z/OS users have been doing incremental and full backups for decades.  Obviously, anytime a system takes regular full backups, dedupe technology can reduce storage requirements substantially.
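As a rough illustration of why full backups dedupe so well, here's a back-of-the-envelope sketch.  The weekly-full schedule, retention period and daily change rate are my own assumptions for illustration, not figures from EMC or any particular z/OS shop.

```python
# Back-of-the-envelope dedupe ratio for a weekly-full backup cycle.
# All parameters are assumptions for illustration only.

primary_tb   = 100.0   # size of the dataset being protected (TB)
weekly_fulls = 12      # full backups retained (~3 months)
daily_change = 0.01    # fraction of data that changes per day

logical_tb = weekly_fulls * primary_tb   # what the backup app writes out
# Stored (deduped) data: one baseline copy plus the changed blocks
# accumulated over the retention period, assuming changes don't overlap.
unique_tb = primary_tb + weekly_fulls * 7 * daily_change * primary_tb

print(f"logical backup data: {logical_tb:.0f} TB")
print(f"unique data stored : {unique_tb:.0f} TB")
print(f"dedupe ratio       : {logical_tb / unique_tb:.1f}:1")
```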

I talked recently with Tom Meehan at Innovation Data Processing, creators of FDR, one of only two remaining mainframe backup packages (the other being IBM DFSMShsm).  He reiterated that deduplication works just fine on mainframes, assuming you can separate the metadata from the actual backup data.

System z and distributed systems

Meanwhile, this past July IBM announced the zBX (zEnterprise BladeCenter Extension), a hardware system which brings Power7 blade servers running AIX under System z management and control.  As such, the zBX brings some of the reliability and availability of System z to the AIX open systems environment.

IBM had already supported Linux on System z, but that was just a software port.  With zBX, System z can now support open systems hardware as well.  Where this goes from here is anybody's guess, but it's not much of a stretch to imagine x86 servers running under System z's umbrella.

—-

So there you have it: Bus-Tech is the front end of the EMC DLm system.  As such, it made logical sense for EMC to lock up a critical technology partner if it was going to focus more resources on the mainframe dedupe market.  Also, given market valuations these days, perhaps the opportunity was too good to pass up.

However, this leaves Luminex as the last independent vendor providing mainframe attach for open systems.  Luminex and EMC Data Domain already have a “meet-in-the-channel” model to sell low-end deduplication appliances to the mainframe market.  But with the Bus-Tech acquisition, I expect that arrangement to slowly fade, with current non-EMC Bus-Tech partners migrating to Luminex or abandoning the mainframe attach market altogether.

[I almost spun up a whole section on CCWs, CKD and other mainframe I/O oddities, but it would have detracted from this post's main topic.  Perhaps another post will cover mainframe I/O oddities; stay tuned.]

Protecting the Yottabyte archive

blinkenlights by habi (cc) (from flickr)

In a previous post I discussed what it would take to store 1YB of data in 2015 for the National Security Agency (NSA). Due to length, that post did not discuss many other aspects of the 1YB archive such as ingest, indexing, data protection, etc. I will attempt to cover each of these in turn; this post covers some of the data protection aspects of the 1YB archive and its catalog/index.

RAID protecting 1YB of data

Protecting the 1YB archive will require some sort of parity protection. RAID data protection could certainly be used and may need to be extended to removable media (RAID for tape), but that would require somewhere in the neighborhood of 10-20% additional storage (RAID5 across a 10- to 5-wide tape drive stripe). With Reed-Solomon encoding and RAID6 we could possibly take this down to 5-10% additional storage (RAID6 across a 40- to 20-wide tape drive stripe). Other forms of ECC (such as turbo codes) might be usable in a RAID-like configuration, which would give even better reliability with less additional storage.
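For reference, here's a quick sketch of where those overhead percentages come from, counting parity devices as a fraction of the stripe widths mentioned above:

```python
# Parity overhead expressed as parity devices / total devices in a stripe,
# matching the 10-20% (RAID5) and 5-10% (RAID6) figures above.
def parity_overhead(stripe_width, parity_devices):
    return parity_devices / stripe_width

for label, width, parity in [("RAID5, 10 drives", 10, 1),
                             ("RAID5,  5 drives",  5, 1),
                             ("RAID6, 40 drives", 40, 2),
                             ("RAID6, 20 drives", 20, 2)]:
    print(f"{label}: {parity_overhead(width, parity):.0%} additional storage")
```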

But RAID-like protection also applies to the data catalog and indexes required to access the 1YB archive, and to the online data itself while it's being ingested, indexed, or read back. For the remainder of this post I'll ignore the RAID overhead; suffice it to say that an additional ~10% of storage for parity won't change this discussion much.

Also, in the original post I envisioned a multi-tier storage hierarchy where the lowest tier always held a copy of any files residing in the upper tiers. This would provide some RAID1-like redundancy for any online data. This might be pretty useful: if a file is of high interest, it has likely been accessed recently and therefore resides in upper storage tiers, so multiple copies of interesting files could exist.

Catalog and indexes backups for 1YB archive

IMHO, RAID or other parity protection is different from data backup. Backup is generally used as a last line of defense against hardware failure, software failure or user error (deleting the wrong data). It's certainly possible that the lowest-tier data is stored on some sort of WORM (write once, read many) media, meaning it cannot be overwritten, which eliminates one class of user error.

But this presumes the catalog is available and the media is locatable, which means the catalog has to be preserved/protected from user error, HW and SW failures. I wrote in a prior post about whether cloud storage needs backup, and I feel strongly that the 1YB archive would require backups as well.

In general, backup today is done by copying the data to some other storage and keeping that storage offsite from the original data center. At this scale, most likely the 2.1×10**21 bytes of catalog and index data (see the original post) would be copied to some form of removable media. The catalog is the most important, as the other two indexes could potentially be rebuilt from the catalog and original data. Assuming we are unwilling to re-index the data, the catalog and index backups would take 1.3×10**9 LTO-6 tape cartridges (at 1.6×10**12 bytes/cartridge).
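The cartridge count is straightforward division, using the capacity figure assumed above:

```python
# Cartridge count: catalog + index bytes divided by cartridge capacity.
# Figures are the ones assumed in the post, not vendor-verified specs.
catalog_index_bytes = 2.1e21
lto6_cartridge_bytes = 1.6e12

print(f"LTO-6 cartridges: {catalog_index_bytes / lto6_cartridge_bytes:.1e}")  # ~1.3e9
```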

To back up this amount of data once per month would take a gaggle of tape drives. There are ~2.6×10**6 seconds/month and each LTO-6 drive can transfer 5.4×10**8 bytes/sec, or ~1.4×10**15 bytes/drive-month, but we need to back up 2.1×10**21 bytes of data, so we need ~1.5×10**6 tape transports. Now tapes do not operate 100% of the time, because when a cartridge becomes full it has to be swapped for an empty one, but at these numbers that amounts to a rounding error.
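And the transport count, again just arithmetic on the figures above:

```python
# Transport count: total bytes over what one drive can move in a month.
seconds_per_month = 2.6e6
lto6_bytes_per_sec = 5.4e8        # assumed LTO-6 streaming rate from above

bytes_per_drive_month = lto6_bytes_per_sec * seconds_per_month    # ~1.4e15
print(f"tape transports: {2.1e21 / bytes_per_drive_month:.1e}")   # ~1.5e6
```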

To figure out the tape robotics needed to service 1.5×10**6 transports, we could use the latest T-Finity tape library just announced by Spectra Logic. The T-Finity supports 500 tape drives and 122,000 tape cartridges, so we would need 3.0×10**3 libraries to handle the drive workload and about 1.1×10**4 libraries to store the cartridge set, so 11,000 T-Finity libraries would suffice. Presumably, using LTO-7 these numbers could be cut roughly in half: ~5,500 libraries, ~7.5×10**5 transports, and 6.6×10**8 cartridges.
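The library counts follow from the T-Finity drive and slot capacities cited above:

```python
# Library count: limited by either drive slots or cartridge slots.
drives_per_library = 500        # T-Finity drives, as cited above
slots_per_library = 122_000     # T-Finity cartridge slots, as cited above

print(f"libraries (drive-limited):     {1.5e6 / drives_per_library:.1e}")  # ~3.0e3
print(f"libraries (cartridge-limited): {1.3e9 / slots_per_library:.1e}")   # ~1.1e4
# The cartridge count dominates, hence ~11,000 libraries.
```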

Other removable media exist, most notably ProStor's RDX. However, RDX roadmap information for the next generation is not readily available, and high-end robotics do not currently support RDX. So for the moment, tape seems the only viable removable backup medium for the 1YB archive's catalog and index.

Mirroring the data

Another approach to protecting the data is to mirror the catalog and index data, i.e., copy it to another online storage repository. This doubles the storage required (to 4.2×10**21 bytes). Replication doesn't easily protect against user error, but it is an option worthy of consideration.

Networking infrastructure needed

Whether mirroring or backing up to tape, moving this amount of data will require substantial networking infrastructure. Assume that in 2015 we have 32GFC (32Gb/sec Fibre Channel) interfaces; each interface could potentially transfer 3.2GB/s, or 3.2×10**9 bytes/sec. Mirroring or backing up 2.1×10**21 bytes over one month would then take ~2.5×10**5 32GFC interfaces. We should probably have twice this amount of networking so that no one link becomes a bottleneck, so ~5×10**5 32GFC interfaces should work.
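The interface count works out as follows, using the link speed and month length assumed above:

```python
# 32GFC interface count: bytes to move over a month's worth of one link.
bytes_to_move = 2.1e21
seconds_per_month = 2.6e6
gfc32_bytes_per_sec = 3.2e9     # one 32GFC link, as assumed above

interfaces = bytes_to_move / (gfc32_bytes_per_sec * seconds_per_month)
print(f"32GFC interfaces: {interfaces:.1e}")   # ~2.5e5; double for headroom
```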

As for switches, the current Brocade DCX supports 768 8GFC ports, and presumably similar port counts will be available in 2015 to support 32GFC. Assuming at least 2 ports per link, we would need ~650 fully populated DCX switches. This doesn't account for multi-layer switches and other sophisticated switch topologies, which could be accommodated with another factor of 2, or ~1,300 switches.
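And the switch count, scaled from the redundant interface count above:

```python
# Switch count: redundant interface count spread over DCX-class port counts.
ports_per_switch = 768          # Brocade DCX ports, as cited above
interfaces = 5e5                # doubled 32GFC interface count from above

print(f"fully populated switches: {interfaces / ports_per_switch:.0f}")  # ~650
# Another factor of 2 for multi-layer topologies gives ~1,300 switches.
```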

Hot backups require journals

This all assumes we can do catalog and index backups once per month and take the whole month to do them. Today, storage normally has to be quiesced (via snapshot or some other mechanism) to be backed up in a consistent state. While it's not impossible to back up data that is concurrently being updated, it is more difficult. In that case, one needs to maintain a journal of the updates occurring while the data is being backed up and be able to apply the journaled changes to the backup copy afterwards.
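Here's a minimal sketch of that journal-and-apply idea; the catalog structure and function names are purely illustrative, not anything from a real archive catalog:

```python
# Hot-backup sketch: journal updates made during the backup window,
# then replay them against the backup copy to reach a consistent state.
journal = []

def update_catalog(catalog, key, value):
    """Apply an update to the live catalog and log it to the journal."""
    journal.append((key, value))
    catalog[key] = value

def apply_journal(backup_copy):
    """Replay journaled changes, in order, onto the backed-up catalog."""
    for key, value in journal:
        backup_copy[key] = value
    return backup_copy

live = {"file001": "tier3/tape/cart0001"}
backup = dict(live)                                   # point-in-time copy
update_catalog(live, "file002", "tier1/disk/lun07")   # update arrives mid-backup
print(apply_journal(backup) == live)                  # True: backup now consistent
```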

For the moment I am not going to estimate the storage required for a month's worth of catalog journal, since that depends on the change rate of the catalog data. It will necessarily be a function of the indexing and ingest rates of the 1YB archive, to be covered in a future post.

Stay tuned, I am just having too much fun to stop.