Primary storage compression can work

Dans la nuit des images (Grand Palais) by dalbera (cc) (from flickr)

Since IBM announced its intent to purchase StorWize, there has been much discussion about whether primary storage data compression can be made to work.  As far as I know, StorWize only offered primary storage compression for file data, but there is nothing that prohibits doing something similar for block storage as long as you have some control over how blocks are laid down on disk.

Although secondary block data compression has been around for years in enterprise tape, and more recently in some deduplication appliances, primary storage compression actually pre-dates it.  STK delivered primary storage data compression with Iceberg in the early '90s, but it wasn't until a couple of years later that they introduced compression on tape.

In both primary and secondary storage, data compression works to reduce the space needed to store data.  Of course, not all data compresses well, most notably image data (as it's already compressed), but compression ratios of 2:1 were common for primary storage of that era and are normal for today's secondary storage.  I see no reason why such ratios couldn't be achieved for current primary storage block data.

Implementing primary block storage data compression

There is significant interest in implementing deduplication for primary storage, as NetApp has done, but supporting data compression is not much harder.  I believe much of the effort to deduplicate primary storage lies in creating a method to address partial blocks out of order, which I would call data block virtual addressing and which requires some sort of storage pool.  The remaining effort to deduplicate data involves implementing the chosen (dedupe) algorithm, indexing/hashing, and other administrative activities.  These latter activities aren't readily transferable to data compression, but the virtual addressing and space pooling should be usable by data compression.
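To make the idea of data block virtual addressing more concrete, here's a minimal sketch of a mapping layer over a storage pool; it's my own illustration of the concept, not any vendor's implementation, and all of the names are hypothetical:

```python
# Sketch: a logical block address (LBA) maps to a (pool offset, length)
# extent, so variable-length (compressed or deduplicated) blocks can be
# laid down anywhere in the pool rather than at a fixed location.

class StoragePool:
    def __init__(self):
        self.pool = bytearray()   # backing store, grossly simplified
        self.map = {}             # LBA -> (offset, length)

    def write(self, lba, data):
        """Append the (possibly compressed) block and remember where it landed."""
        offset = len(self.pool)
        self.pool.extend(data)
        self.map[lba] = (offset, len(data))

    def read(self, lba):
        offset, length = self.map[lba]
        return bytes(self.pool[offset:offset + length])
```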

Furthermore, block storage thin provisioning requires some sort of virtual addressing, as does automated storage tiering.  So in my view, once a vendor has implemented some of these advanced capabilities, implementing data compression is not that big a deal.

The one question that remains is whether to implement compression in hardware or software (see Better storage through hardware for more). Considering that most deduplication is done in software today, data compression in software should be doable.  The compression phase could run in the background sometime after the data has been stored.  Real-time decompression in software might take some work, but would cost considerably less than any hardware solution, although the intensive bit fiddling required to compress and decompress data may argue for some sort of hardware assist.
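As a rough illustration of what a software-only, background compression phase might look like, the toy sketch below stores blocks uncompressed at write time, compresses them later with zlib, and decompresses on read; it's a sketch of the approach, not any product's design:

```python
import zlib

blocks = {}   # lba -> (data, is_compressed)

def write(lba, data):
    blocks[lba] = (data, False)            # store raw at ingest time for speed

def background_compress():
    """Post-process pass: compress anything not yet compressed."""
    for lba, (data, compressed) in blocks.items():
        if not compressed:
            packed = zlib.compress(data, 6)
            if len(packed) < len(data):    # keep only if it actually saves space
                blocks[lba] = (packed, True)

def read(lba):
    data, compressed = blocks[lba]
    return zlib.decompress(data) if compressed else data
```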

Data compression complements deduplication

The problem with deduplication is that it needs duplicate data.  This is why it works so well for secondary storage (backing up the same data over and over) and for VDI/VMware primary storage (with duplicated O/S data).

But data compression is an orthogonal, complementary technique that uses the inherent redundancy in information to reduce storage requirements.  For instance, Huffman-style entropy encoding (used alongside LZ compression in most common compressors) takes advantage of the fact that in text some letters occur more often than others (see letter frequency). In English, 'e', 't', 'a', 'o', 'i', and 'n' represent over 50% of the characters in most text documents.  By using shorter bit combinations to encode these letters, one can reduce the bit-length of any (English) text string substantially.  Another example is run length encoding, which substitutes a trigger character, the repeated character itself, and a count of the number of repetitions for any repeated string of characters (a toy encoder is sketched below).
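Here's a toy run length encoder along those lines; the trigger byte, the run threshold, and the two-digit count format are arbitrary choices for illustration, not the format any real implementation uses:

```python
# Toy run length encoder: a run of 4 or more identical characters becomes
# TRIGGER + character + 2-digit count.  Real implementations differ.
TRIGGER = '\x1b'

def rle_encode(text):
    out, i = [], 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i] and j - i < 99:
            j += 1
        run = j - i
        if run >= 4:
            out.append(f"{TRIGGER}{text[i]}{run:02d}")
        else:
            out.append(text[i:j])
        i = j
    return ''.join(out)

print(rle_encode("aaaaaaaabcd"))   # -> '\x1ba08bcd'
```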

Moreover, the nice thing about data compression is that all these techniques can be readily combined to generate even better compression rates.  And of course compression could be applied after deduplication to reduce storage footprint even more.

Why would any vendor compress data?

For a few reasons:

  • Compression not only reduces storage footprint but, with hardware assist, it can also increase storage throughput. For example, if 10GB of data compresses down to 5GB, it should take ~1/2 the time to read (see the back-of-the-envelope sketch after this list).
  • Compression reduces the time it takes to clone, mirror or replicate data.
  • Compression increases the amount of data that can be stored on a given system, which should incentivize customers to pay more for that storage.
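To put numbers on that first bullet, here's a back-of-the-envelope sketch; the 2:1 ratio and 100MB/s disk throughput are assumptions for illustration, and it presumes hardware-assisted compression that can keep up with the media:

```python
# Back-of-the-envelope: if compression halves the data on the media, reading
# the same logical data back moves half the bytes off disk.
logical_gb    = 10      # data the host wants to read
ratio         = 2.0     # assumed compression ratio
disk_mb_per_s = 100     # assumed raw disk throughput

physical_gb = logical_gb / ratio
read_secs   = physical_gb * 1024 / disk_mb_per_s
print(f"{logical_gb}GB logical -> {physical_gb:.0f}GB on disk, ~{read_secs:.0f}s to read")
```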

In contrast, with data compression vendors may sell less storage.  But the advantage of enterprise storage lies in the advanced functionality/features and the higher reliability/availability/performance that it provides.  I see data compression as just another advantage of enterprise class storage, and as a feature the user could enable or disable it and see how well it works for their data.

What do you think?

Describing Dedupe

Hard Disk 4 by Alpha six (cc) (from flickr)

Deduplication is a mechanism to reduce the amount of data stored on disk for backup, archive or even primary storage.  For any storage, data is often duplicated, and any system that eliminates storing duplicate data will utilize storage more efficiently.

Essentially, deduplication systems identify duplicate data and only store one copy of such data.  They use pointers to incorporate the duplicate data at the right point in the data stream. Such services can be provided at the source, at the target, or even at the storage subsystem/NAS system level.

The easiest way to understand deduplication is to view a data stream as a book, which consists of two parts: a table of contents and the actual chapters of text (or data).  The stream's table of contents provides chapter titles but, more importantly (to us), identifies a page number for each chapter.  A deduplicated data stream looks like a book where chapters can be duplicated within the same book or even across books, and the table of contents can point to any book's chapter when duplicated. A deduplication service inputs the data stream, searches for duplicate chapters and deletes them, and updates the table of contents accordingly.

There's more to this, of course.  For example, chapters or duplicate data segments must be tagged with how often they are duplicated so that such data is not lost when modified.  Also, one way to determine if data is duplicated is to take one or more hashes of it and compare them to other data hashes, but to work quickly, data hashes must be kept in a searchable index.
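To make the book analogy concrete, here's a minimal deduplicating chunk store sketch of my own, with a hash index, reference counts, and a per-stream "table of contents"; real products chunk, hash, and index far more cleverly:

```python
import hashlib

CHUNK_SIZE = 4096
chunks, refcounts = {}, {}      # hash -> unique chunk, hash -> reference count

def store_stream(data):
    """Store a data stream, keeping only one copy of each unique chunk."""
    toc = []                                     # the stream's table of contents
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in chunks:                 # only unique chunks are stored
            chunks[digest] = chunk
        refcounts[digest] = refcounts.get(digest, 0) + 1
        toc.append(digest)
    return toc

def read_stream(toc):
    """Re-constitute the stream by following the table of contents pointers."""
    return b''.join(chunks[digest] for digest in toc)
```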

Types of deduplication

  • Source deduplication involves a repository, a client application, and an operation which copies client data to the repository.  Client software chunks the data, hashes the data chunks, and sends these hashes over to the repository.  On the receiving end, the repository determines which hashes are duplicates and then tells the client to send only the unique data.  The repository stores the unique data chunks and the data stream's table of contents.  (A rough sketch of this exchange follows the list.)
  • Target deduplication involves performing deduplication inline, in-parallel, or post-processing by chunking the data stream as it's received, hashing the chunks, determining which chunks are unique, and storing only the unique data.  Inline refers to doing such processing while receiving data at the target system, before the data is stored on disk.  In-parallel refers to doing a portion of this processing while receiving data, i.e., portions of the data stream will be deduplicated while other portions are being received.  Post-processing refers to data that is completely staged to disk before being deduplicated later.
  • Storage subsystem/NAS system deduplication looks a lot like post-processing, target deduplication.  For NAS systems, deduplication looks at a file of data after it is closed. For general storage subsystems the process looks at blocks of data after they are written.  Whether either system detects duplicate data below these levels is implementation dependent.
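A rough sketch of the source-deduplication exchange described in the first bullet; the client/repository split and the function names are hypothetical, chosen only to illustrate that hashes cross the wire before any data does:

```python
import hashlib

repository = {}                          # hash -> chunk, held at the target

def repository_wants(hashes):
    """Repository side: report which hashes it has never seen."""
    return {h for h in hashes if h not in repository}

def client_backup(chunks):
    """Client side: hash everything, but send only the unique chunks."""
    hashes = [hashlib.sha256(c).hexdigest() for c in chunks]
    missing = repository_wants(hashes)   # one exchange of (small) hashes
    for h, c in zip(hashes, chunks):
        if h in missing:
            repository[h] = c            # only unique data goes over the wire
    return hashes                        # doubles as the stream's table of contents
```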

Deduplication overhead

Deduplication processes generate most of their overhead while deduplicating the data stream, essentially during or after the data is written, which is the reason target deduplication has so many options: some optimize ingestion while others optimize storage use. There is very little additional overhead for re-constituting (or un-deduplicating) the data for read back, as retrieving the unique and/or duplicated data segments can be done quickly.  There may be some minor performance loss because of a lack of sequentiality, but that only impacts data throughput and not by much.

Where dedupe makes sense

Deduplication was first implemented for backup data streams, because any regimen that takes full backups on a monthly or even weekly basis duplicates lots of data.  For example, if one takes a full backup of 100TB every week and, let's say, new unique data created each week is ~15%, then at week 0, 100TB of data is stored for both the deduplicated and un-deduplicated versions; at week 1 it takes 115TB to store the deduplicated data but 200TB for the non-deduplicated data; at week 2 it takes ~132TB to store deduplicated data but 300TB for the non-deduplicated data, etc.  As each full backup completes, it takes another 100TB of un-deduplicated storage but significantly less deduplicated storage.  After 8 full backups the un-deduplicated storage would require 800TB but the deduplicated storage only ~265TB.
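The arithmetic behind those numbers, assuming a constant ~15% weekly change rate:

```python
# Weekly 100TB full backups with ~15% new unique data each week: deduplicated
# storage grows only by the new data, un-deduplicated storage by a full 100TB.
full_tb, change_rate, weeks = 100, 0.15, 8

deduped = full_tb
for week in range(weeks):
    undeduped = full_tb * (week + 1)
    print(f"week {week}: ~{deduped:.0f}TB deduplicated vs {undeduped}TB un-deduplicated")
    deduped *= 1 + change_rate
```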

Deduplication can also work for secondary or even primary storage.  Most IT shops with thousands of users duplicate lots of data.  For example, interim files are sent from one employee to another for review, reports are sent out en masse to teams, emails are blasted to all employees, etc.  Consequently, any storage (sub)system that can deduplicate data would utilize backend storage more efficiently.

Full disclosure: I have worked for many deduplication vendors in the past.

Quantum OEMs esXpress VM Backup SW

Quantum announced today that they are OEMing esXpress software (from PHD Virtual) to better support VMware VM backups (see press release). This software schedules VMware snapshots of VMs and can then transfer the VM snapshot (backup) data directly to a Quantum DXI storage device.

One free “Professional” esXpress license will ship with each DXI appliance, which allows up to 4 esXpress virtual backup appliance (VBA) virtual machines to run on a single VMware physical server. An “Enterprise” license can be purchased for $1850, which allows up to 16 esXpress VBA virtual machines to run on a single VMware physical server. Additional Professional licenses can be purchased for $950 each. The free Professional license also comes with free installation services from Quantum.

Additional esXpress VBAs can be used to support more backup data throughput from a single physical server. The VBA backup activity is a scheduled process and, as such, when completed the VBA can be “powered” down to save VMware server resources. Also, as VBAs are just VMs, they fully support the VMware VMotion, DRS, and HA capabilities available from VMware. However, using any of these facilities to move a VBA to another physical server may require additional licensing.

The esXpress software eliminates the need for a separate VCB (VMware Consolidated Backup) proxy server and provides a direct interface to support Quantum DXI deduplicated storage for VM backups. This should simplify backup processing for VMware VMs using DXI archive storage.

Quantum also announced today a new key manager, the Scalar Key Manager, for Quantum LTO tape encryption, which has a GUI integrated with Quantum’s tape automation products. This gives a tape automation manager a single user interface for both tape automation and tape security/encryption. A single point of management should simplify the use of Quantum LTO tape encryption.

Data Domain bidding war

It’s unclear to me what EMC would want with Data Domain (DD) other than to lock up deduplication technology across the enterprise. EMC has Avamar for source dedupe, DL for target dedupe, and Celerra Dedupe; the only ones missing are V-Max, Symm & Clariion dedupe.

My guess is that EMC sees Data Domain’s market share as the primary target. It doesn’t take a lot of imagination to figure that once Data Domain is part of EMC, EMC’s Disk Library (DL) offerings will move over to DD technology, which probably leaves the FalconStor/Quantum technology used in DL today on the outside.

EMC’s $100M loan to Quantum last month was probably just insurance to keep a business partner afloat until something better came along or they could make it on their own. The DD deal would leave the Quantum partnership supporting EMC with just Quantum’s tape offerings.

Quantum’s deduplication technology doesn’t have nearly the market share that DD has in the enterprise, but they have won a number of OEM deals, not the least of which is EMC’s, and they were looking to expand. But if EMC buys DD, this OEM agreement will end soon.

I wonder, if DD is worth $1.8B in cash, what Sepaton could be worth. They seem to be the only pure-play dedupe appliance vendor left standing out there.

Not sure whether NetApp will up their bid, but they always seem to enjoy competing with EMC. It’s also unclear how much of this bid is EMC wanting DD versus EMC just wanting to hurt NetApp; either way, DD stockholders win out in the end.