Jan 14, 2013
 

EMC® Enterprise Storage Division recently introduced a new VMAX® 10K, the entry point of their VMAX storage product line, with new hardware and added functionality that significantly closes the gap with their VMAX 40K and 20K storage systems.

New VMAX 10K hardware

The new VMAX 10K adds more compute hardware and faster internal buses to supply better storage performance.  The new VMAX 10K engines use 2.8GHz Intel microprocessor cores, with up to six cores per director (12 cores per engine).  Also, the new VMAX 10K comes with Gen 3 PCIe internal buses and the next-generation RapidIO® fabric interconnect, which add substantial backend throughput.  The VMAX 10K can be ordered with two to four engines and up to 512GB of cache.

In addition, the VMAX 10K now offers 2.5” form factor drives that come in a 25-drive/2U enclosure and can be intermixed with 3.5” drives for configuration flexibility.  The small form factor drives take one-third the power/cooling and weigh one-third as much as the large form factor drives.  The new VMAX 10K configuration now starts with as few as 24 drives and can scale up to 1560 SFF drives.  The VMAX 10K continues to offer a maximum of 1.5PB of usable storage capacity.

Finally, the new VMAX 10K hardware can now be installed in customer racks. That is, the new VMAX 10K hardware (engines and drive enclosures) can be ordered in EMC-supplied racks or as rack-mountable hardware for installation in approved, customer-supplied racks.

New VMAX 10K functionality

EMC now offers just about all the functionality of the higher-end VMAX 40K and 20K systems (except mainframe and iSeries attach) on their entry-level VMAX 10K.  New functionality includes:

  • Data at Rest Encryption (D@RE) – with D@RE-enabled VMAX 10K engines, data security is provided by a storage-system-implemented AES-256 encryption algorithm with either an external RSA Key Manager or an embedded key manager.
  • Federated Tiered Storage (FTS) – with FTS, the new VMAX 10K supports external storage as well as internal storage and offers the use of external storage as any tier in FAST VP storage tiering.
  • 3- and 4-way SRDF® – with 3- and 4-way SRDF topologies, the new VMAX 10K can fully participate with other VMAX storage systems to protect against disasters using in-region and out-of-region replication services.
  • Host I/O Limits – with I/O Limits, the VMAX can be configured on a storage pool or port group basis to limit the IOPS or MB/sec that can be consumed, to better support multi-tenant environments (see the rate-limiting sketch after this list).
  • Data compaction – with data compression for idle data, VMAX 10K can store double the data in the same storage capacity with minimal performance impact.
  • Unisphere for VMAX – with the latest version of Unisphere, VMAX 10K can now be fully managed from Unisphere or Symmetrix® Management Console.
  • T10 Protection Information standard – with support for Oracle’s Data Integrity Extension (DIX) and DIX-compliant O/S software and HBAs, the new VMAX 10K offers host-to-storage-and-back data integrity validation.
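To picture how the Host I/O Limits feature works, here is a minimal token-bucket rate limiter in Python that caps the IOPS one storage group may consume.  It is a conceptual sketch only; the class, method names and the 5,000 IOPS figure are illustrative assumptions, not EMC's implementation.

    import time

    class IORateLimiter:
        """Conceptual token-bucket limiter capping I/Os per second for one storage group."""

        def __init__(self, max_iops):
            self.max_iops = max_iops            # refill rate: tokens (I/Os) per second
            self.tokens = float(max_iops)       # bucket starts full
            self.last_refill = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            self.tokens = min(self.max_iops,
                              self.tokens + (now - self.last_refill) * self.max_iops)
            self.last_refill = now

        def admit(self):
            """Return True if one I/O may proceed now, False if it must wait."""
            self._refill()
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # Example: throttle a hypothetical multi-tenant storage group to 5,000 IOPS
    limiter = IORateLimiter(max_iops=5000)
    if not limiter.admit():
        pass  # queue or delay this I/O until tokens are available

The same bucket idea applies to an MB/sec cap by refilling tokens in bytes instead of I/Os.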

In addition to the new features, VMAX 10K offers much more Symmetrix core functionality, including Virtual Pools, FAST VP, TimeFinder®, Federated Live Migration and base SRDF.  The VMAX 10K also supplies full VMware vSphere 5.1 support, which includes vCenter plug-ins together with VAAI, VASA and SRM support for increased VMAX 10K interoperability.  Moreover, the VMAX 10K and EMC host software support tight integration with Microsoft Windows 2012, Hyper-V and Microsoft SharePoint environments.

New VMAX 10K performance

The new VMAX 10K also increases I/O performance substantially.  It provides 100% more online transaction processing (OLTP) I/O operations per second than the previous generation and, with all the new hardware, 30% more backend bandwidth for data-intensive applications.  Given all this, Oracle on VMware with virtualized OLTP-like applications should execute up to 90% faster on the new VMAX 10K storage.

Significance

Enterprise storage is heating up again. Early last year HDS introduced their HUS VM entry-level enterprise storage and late last year IBM introduced their new DS8870.  Also, last year in Las Vegas, EMC introduced an entirely new VMAX storage product line with the 40K, 20K and 10K.  With this most recent release of another new VMAX 10K, EMC has once again revised their entry-level enterprise storage system.

There seems to be a significant market for entry-level enterprise storage, and the latest vendor actions signal continued customer interest here. Indeed, EMC stated that the previous-generation 10K system had the fastest ramp of any VMAX storage in history and was responsible for bringing a significant number of new storage customers to EMC.  The new VMAX 10K should do even better.

~~~~

[This storage announcement dispatch was originally sent out to our newsletter subscribers in January of 2013.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports.]

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.

 

Apr 14, 2010
 

New EMC Data Domain Global Deduplication Array and other enhancements

EMC recently announced updates to their DD880 appliance and appliance software, as well as a new dual-controller deduplication storage system called the Global Deduplication Array.

Global Deduplication Array (GDA)

EMC’s GDA pairs two DD880 appliances to offer twice the throughput and capacity of the newly enhanced DD880 appliance.  This product currently comes only with Symantec’s OST support for NetBackup and Backup Exec.  Later this year, EMC will offer similar capabilities for their NetWorker backup product.  Note that this system does not support NAS or VTL configurations, as it requires special software at the backup server.

The GDA uses a new version of EMC’s OST plugin that moves some of the deduplication processing to the backup server.  This improves backup server throughput while at the same time improving GDA data ingestion.  Backup server throughput is improved by copying less data to the GDA backup target.  The new OST plugin pre-digests the data for deduplication and the new process looks like:

  • The OST plugin starts by breaking the backup data into super-chunks (~1MB), hashes the super-chunks, and then sends this list of hashes over to the GDA.
  • The GDA takes the content list, identifies which super-chunks are new or unique, and returns to the OST plugin a list of only the new super-chunks to be sent across.
  • The OST plugin then sends only the new super-chunk data across to the proper GDA controller.

This is not quite deduplication at the backup server, because the hard work of data lookup is done at the GDA.  Once the super-chunk data is transmitted to a GDA DD880 controller, the appliance breaks it up into deduplication chunks (~8KB), identifies which chunks are unique and which are duplicates, and saves only the unique data.
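A minimal Python sketch of this handshake follows, assuming SHA-1 hashes over ~1MB super-chunks; the function names and the in-memory stand-in for the GDA are illustrative assumptions, not the actual OST plugin API.

    import hashlib

    SUPER_CHUNK = 1024 * 1024   # ~1MB super-chunks, per the process described above

    def split_super_chunks(data):
        return [data[i:i + SUPER_CHUNK] for i in range(0, len(data), SUPER_CHUNK)]

    # --- backup server (OST plugin) side --------------------------------------
    def backup(data, gda):
        chunks = split_super_chunks(data)
        hashes = [hashlib.sha1(c).hexdigest() for c in chunks]   # 1: hash the chunks
        needed = gda.filter_unknown(hashes)                      # 2: ask which are new
        for h, c in zip(hashes, chunks):                         # 3: send only the new
            if h in needed:                                      #    super-chunk data
                gda.store(h, c)

    # --- deduplication target side (simplified stand-in for a GDA controller) --
    class FakeGDA:
        def __init__(self):
            self.store_by_hash = {}          # hash -> super-chunk already held

        def filter_unknown(self, hashes):
            return {h for h in hashes if h not in self.store_by_hash}

        def store(self, h, chunk):
            # A real controller would further split the super-chunk into ~8KB
            # deduplication chunks and keep only the unique ones.
            self.store_by_hash[h] = chunk

    gda = FakeGDA()
    backup(b"example backup stream " * 100000, gda)
    backup(b"example backup stream " * 100000, gda)   # repeat run sends no chunk data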

The new OST plugin also supports new application level load-balancing across Ethernet links to improve link throughput.  By doing this at the application level, it no longer depends on Ethernet link trunking, which was impossible to use with different Ethernet hardware and had other configuration limitations.

As the GDA has multiple controllers supporting a single backup stream, performance can now scale out more easily than before.  The GDA currently supports one- or two-controller configurations.  With two controllers, the deduplication processing and storage are split to provide load-balanced operations.  The two controllers in a GDA use a share-nothing type of clustering.  Such capabilities easily lend themselves to growing beyond dual-controller configurations to support true multi-controller performance.

The GDA supports up to two DD880 appliances with a maximum of 285TB raw capacity and a ~12TB/hour ingestion rate using up to 270 simultaneous backup streams.

Other Enhancements

EMC also announced new hardware and software functionality enhancements to their Data Domain appliances. Specifically,

  • Doubled DD880 capacity – supports up to 12 shelves of drives, doubling raw capacity to 142.5TB and logical capacity to more than 7PB of backup data.
  • Data encryption – supports software encryption for the data store for all Data Domain appliances, which encrypts the data after deduplication and compression as a final step before it’s stored on disk.
  • One to Many replication – supports replicating data from one Data Domain appliance to multiple appliances, which, when combined with the current many-to-one and cascaded replication, increases the number of configurations that can be served.
  • Low bandwidth replication – supports a more compute intensive but bandwidth saving replication option for remote sites with limited bandwidth.

Data encryption is a separately licensable, software-only option that can be configured to support stronger or weaker security and uses a single key per Data Domain appliance.  Encryption and replication interoperate to encrypt replicated data with the key of the replication target, so that data is automatically recoverable at the target site with the target key.  No performance numbers were provided for encrypted data throughput, but by being able to select which encryption algorithm to use, one can trade off throughput against security.

The new replication options provide new topologies for the enterprise customer, e.g., many-to-one-to-many replication, where multiple remote offices are replicated to a central hub which, with cascaded replication, can then be mirrored to multiple DR sites. EMC said that replication is a very popular option and, as such, adding more configuration flexibility and bandwidth features should make it even more attractive.

Low bandwidth replication is for remote office environments in many locations that lack high bandwidth network connectivity. One example cited was for oil platforms in very remote areas around the globe.

Announcement significance

EMC’s Data Domain team continues to advance their technology and feature set.  The GDA seems to be a way to increase performance using mostly software changes. But it has the potential to create a whole new multi-controller configuration of products that could be used to dramatically improve performance by adding more hardware to a single configuration.

A PDF version of this can be found at

EMC 2010 April 14 Announcement of Data Domain GDA and other enhancements

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.

Jan 27, 2009
 

Announcing Brocade DCX-4S

This Silverton Consulting (SCI) Storage Intelligence (StorInt™) Dispatch provides a summary of Brocade’s recent announcement of their DCX-4S and other items.

DCX-4S

Brocade is enjoying significant success with their 8Gb Fibre Channel DCX Backbone, first released in January 2008 and proving to be the fastest-ramping modular switch in Brocade’s history. The newest addition to this product family is the DCX-4S, a four-horizontal-slot version of the bigger DCX model with full backbone-class performance, energy efficiency, and advanced functionality. The DCX-4S also offers high-speed Inter Chassis Links (ICLs), which can cross-link a DCX-4S to its bigger brother, the DCX, or to another DCX-4S.  This can be used to provide more scalable and flexible backbone configurations at the network core and edge.

The new DCX-4S provides half the ports (192 FC/FICON ports) and half the switch throughput (1.536Tb/s) of the larger 384-port DCX model.  Each DCX-4S slot can support a Fibre Channel blade with 16 to 48 8GFC ports, or an application blade such as the Fabric Encryption blade or the Brocade Application blade for EMC RecoverPoint, both of which are also supported by the DCX model.  The DCX-4S can connect to Brocade B- and M-Series SAN fabrics without disruption and with common management. And its multiprotocol architecture is designed to support emerging Converged Enhanced Ethernet (CEE) and Fibre Channel over Ethernet (FCoE) protocols through the addition of a future blade.

Fabric Operating System Virtual Fabrics

Brocade is also introducing Virtual Fabric support in its Fabric Operating System (FOS), which provides for partitioning a physical switch into multiple logical switches.  Virtual Fabrics are very useful for isolating SAN traffic via independent logical switches managed as separate entities.  Some use cases for Virtual Fabrics include: for mainframes, isolating FICON from FC traffic; for SAN consolidation, retaining segregated management; and for multi-fabric environments, managing growth from a pool of physical ports.

Physical ports can be dynamically allocated to a logical switch offering flexible scalability.  Brocade also mentioned that support for Virtual Fabrics was provided in their 8G FC ASIC and as such, any switch with this ASIC could support Virtual Fabrics. Today, the DCX, DCX-4S, 5300 and 5100 support the feature.  Older products and/or other switches not using the feature can interoperate with Virtual Fabrics by connecting to a single Virtual Fabric.

Fabric level encryption

As for encryption, Brocade also announced support for the HP Secure Key Manager appliance and for IPv6.  Also announced was support for the NetVault/BakBone 8.1 and HP Data Protector 6.0 backup applications.  More interesting perhaps, Brocade also announced support for encrypting tape with compression.

HBA announcements

Brocade is announcing support for quality of service (QoS) configurations and SAN boot auto-configuration capabilities for their HBA product line.  Brocade’s HBA is now qualified by EMC, HDS, LSI Corp., and Xiotech and can now be sold by most of them.

HBA port/NPIV-id QoS can be specified as High, Medium, or Low and is maintained throughout a QoS-enabled Brocade switching fabric.  As such, VMs using NPIV-ids can have their port QoS move with the VM as it is vMotioned throughout a VMware data center.   Such QoS support can limit the cross-VM/system performance impact of logical or physical link problems.

SAN boot auto-configuration supports a switch-defined automatic configuration of an HBA port at boot time.  Historically, this was maintained in HBA non-volatile memory and as such was somewhat hard to change and error prone. With this new capability, the HBA port boot characteristics are defined at the fabric level and are downloaded whenever a boot request is issued to the HBA.  Current OSs supported by SAN boot auto-configuration include Windows, Linux, and VMware servers.

Announcement significance

The DCX-4S takes Brocade’s latest switch technology to the mid-market and in the process makes for a much more flexible fabric configuration.  More interesting is the relative success and high adoption rate of the DCX series of products.  It is unclear whether this reflects the adoption of 8GFC in the enterprise, the relative need for more switch ports/bandwidth, or some need for DCX advanced features.  Most likely a combination of all the above is driving adoption.  How this will play out in the mid-market is TBD.

As for the HBA business, one reason Brocade cited for getting into the HBA business was that advancing HBA features would require tighter integration with the switching fabric.  However, SCI feels this is more a statement of a lack of standards than a real constraint.

Finally, data-at-rest encryption for tape and disk is now available everywhere, i.e., from the host/server, standalone appliance, fabric switch, storage subsystem or disk/tape device.

A PDF version of this can be found at

Brocade 2009 January 27 Announcement on new DCX-4S switch

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.

Sep 22, 2008
 

This Silverton Consulting (SCI) Storage Intelligence (StorInt™) Dispatch provides a summary of Brocade’s recent introduction of their fabric level encryption (FS8-18) blade and 2U standalone switch encryption system.

Fabric level encryption

Brocade is the first to introduce a fabric encryption blade and switch made especially for securing disk “data-at-rest”.  Both the new blade and switch support from 48Gb/s to 96Gb/s of encryption throughput. The FS8-18 blade supports 16 8GFC ports, and up to four blades can be supported in one DCX backbone.  The switch supports 32 8GFC ports.  Both devices have a smart card reader for key initialization and two GigE ports for key synchronization and management.

Historically, key management was a crucial missing link to the use of encryption but nowadays key management systems are available from many IT vendors.  Consequently, Brocade’s new encryption offerings utilize standard key management systems supplied by other vendors and currently support NetApp’s Lifetime Key Manager (LKM) and EMC’s RSA key manager (RKM) with HP’s Secure Key Manager (SKM) to follow in a subsequent release.

Data-at-rest encryption protects data as it is stored.  Currently, Brocade’s fabric encryption only supports disk data-at-rest but a follow-on release will support tape data as well. This is in contrast to Cisco’s SME which only supports tape media or virtual tape device encryption.  Later this year Brocade will support five backup products for tape and VTL operations – IBM TSM 5.4, HP Data Protector, EMC Networker 7.3, Symantec NetBackup 6.5, and CommVault Galaxy 7.0.

Other solutions for disk data-at-rest encryption are already available from vendors such as Seagate for disk drives, EMC for storage subsystems and NetApp/Decru for network appliances.  Predating all this was tape encryption from most major tape vendors.

Disk data-at-rest encryption provides a significant addition to tape encryption.  Although tape encryption protects media that is often taken offsite and out of data center control, less well understood is that service people can easily walk away with a good working disk, and that disk data is not protected unless encrypted.  Also, once data is encrypted, anyone listening on the fabric downstream of the encryptor will be unable to read the data.

How it works

Specific LUNs are configured for encryption; once configured, a fabric encryptor starts a background task that reads each LUN block, encrypts the data, and writes it back.  This background activity uses block maps to indicate whether each LUN block has been encrypted yet.  When a host write happens during this process, the data is automatically encrypted and the block is flagged as encrypted in the block map.  Also, each LUN has a unique key.
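A conceptual Python sketch of that background conversion and its block map follows; the in-memory LUN, the 512-byte block size and the XOR stand-in for a real cipher are illustrative assumptions only, not Brocade's implementation.

    class InMemoryLUN:
        """Stand-in for a LUN: fixed-size blocks held in memory (illustrative only)."""
        def __init__(self, num_blocks, block_size=512):
            self.num_blocks = num_blocks
            self.blocks = [bytes(block_size) for _ in range(num_blocks)]
        def read_block(self, i):
            return self.blocks[i]
        def write_block(self, i, data):
            self.blocks[i] = data

    class LUNEncryptor:
        """Conceptual in-place LUN encryption driven by a per-block map."""
        def __init__(self, lun, key):
            self.lun = lun
            self.key = key                                # unique per-LUN key
            self.encrypted = [False] * lun.num_blocks     # the block map

        def background_pass(self):
            """Walk the LUN, converting any still-plaintext block to ciphertext."""
            for blk in range(self.lun.num_blocks):
                if not self.encrypted[blk]:
                    self.lun.write_block(blk, self._encrypt(self.lun.read_block(blk)))
                    self.encrypted[blk] = True

        def host_write(self, blk, data):
            """Writes arriving during conversion are encrypted on the way in."""
            self.lun.write_block(blk, self._encrypt(data))
            self.encrypted[blk] = True

        def host_read(self, blk):
            data = self.lun.read_block(blk)
            return self._decrypt(data) if self.encrypted[blk] else data

        def _encrypt(self, data):    # placeholder for a real cipher such as AES
            return bytes(b ^ 0xFF for b in data)

        def _decrypt(self, data):
            return bytes(b ^ 0xFF for b in data)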

Fabric level encryption concerns

Deduplication and data compression appliances depend on data redundancy to be effective.  Encrypting data before it reaches these devices removes that redundancy and negates their effectiveness, so always place any encryption behind the deduplication or data compression appliance in the data path.
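A quick Python demonstration of why this matters: repetitive plaintext compresses well, while ciphertext, simulated here with random bytes since well-encrypted data is statistically random, barely compresses at all.  This illustrates the principle only and is not a test of any particular appliance.

    import os
    import zlib

    plaintext = b"customer_record,0000000000,ACME Corp,PO Box 42\n" * 20000
    ciphertext_like = os.urandom(len(plaintext))    # stands in for encrypted data

    print(len(zlib.compress(plaintext)) / len(plaintext))        # tiny fraction
    print(len(zlib.compress(ciphertext_like)) / len(plaintext))  # ~1.0, no savings

The same reasoning applies to deduplication: two identical plaintext blocks deduplicate to one, but once encrypted (especially with per-LUN keys) they no longer match.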

Disk mirroring is another problem area.  Fabric-level encryption encrypts data at the primary site; a subsystem will then mirror this (already encrypted) data to a secondary site.  To use such data effectively will require another encryptor at the secondary site with access to a duplicate or the original set of keys used at the primary site for encrypting the LUN.

Also, thin-provisioned storage subsystems depend on uninitialized blocks never being written until needed.  If fabric-level encryption is required to rewrite pre-existing blocks as ciphertext, all blocks in a thinly provisioned volume would immediately be written, thereby using up storage space.

Finally, while it is good to have data flowing around your SAN in encrypted form, protection only exists once the data is past the encryptor.  Data from the HBA to the SAN and from the SAN entry point to the encryptor blade/switch is still in plaintext and thus vulnerable.  As such, follow-on products are needed to support HBA encryption, an area that Brocade intends to address in the future.

Announcement significance

Many security risks exist inside a data center.  Tape encryption eliminated one risk, to media outside the data center.  Disk data-at-rest encryption deals with the other remaining hole for data outside the data center, but also starts to address securing data within the data center as well.  There are some issues with using this new fabric encryption, but for many it’s a significant step in the right direction.

A PDF version of this can be found at

Brocade 2008 September 22 Announcement on new fabric level encryption

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.

Apr 8, 2008
 

EMC announced new encryption features for PowerPath and Connectrix that use RSA Key Manager for the Datacenter for encryption key management.  PowerPath supplies encryption support for disks and Connectrix supplies encryption for tape and disk library applications.

PowerPath Encryption

PowerPath, EMC’s family of host-based applications for multipathing and data migration, now supports AES 128- and 256-bit encryption for disk data.  Host-based encryption like PowerPath protects data end-to-end as it is transmitted from the host to the storage and back again.

Currently PowerPath encryption supports EMC Symmetrix and CLARiiON storage arrays, and support is planned for other vendors’ storage as part of a phased release.  The encryption feature is available on Windows and Solaris versions of PowerPath starting in May, and support for additional operating systems will follow.  PowerPath encryption secures host data at the volume level and does not change the size of the data on disk, i.e., it does not compress data, which provides for continued byte-level addressing of encrypted data.

Deployment of EMC’s PowerPath host-based encryption will have very little storage performance (I/O throughput) impact, provided host CPU utilization is kept at reasonable levels prior to encryption.  The recommended CPU idle time threshold is 15%.  Encryption is performed at the kernel level both for performance and security reasons.

RSA Key Manager for the Datacenter creates and secures encryption keys and associates an object identifier with a host volume being encrypted.  See below for more on RSA Key Manager.

Data backed up using PowerPath encryption is read from storage and decrypted before being fed to the backup software, which creates two considerations:

  • Any data backed up is in the clear and therefore, unsecured.
  • “LAN-free” backup products would need PowerPath encryption services on their servers plus access to RSA key management in order to back up encrypted data.

The data needs to be in the clear (unencrypted) so that it can be restored incrementally or restored to a different LUN.

Connectrix Encryption

For tape and other data streaming applications like disk libraries, EMC has released Connectrix support for Cisco Storage Media Encryption (SME).  Connectrix will support encrypting data streams for backup/restore purposes.  EMC recommends RSA Key Manager for the Datacenter to manage keys used for encryption/decryption.

The original version supports only Cisco SME; Brocade support will come in a future release.  Connectrix encryption has so far been qualified to support only the EMC NetWorker and Veritas NetBackup backup software packages.  Future versions will be released to support IBM TSM and CommVault.

RSA Key Manager for the Datacenter

RSA’s centralized key management offering is provided in a pre-imaged appliance form factor, configured to support database failover using trusted Oracle technology.  The appliance is configured in redundant pairs for production environments to eliminate single points of failure.  The RSA Key Manager Server appliance supports clustered operations for high availability, and the key database can be remotely replicated for additional protection, unattended restart, and disaster recovery of all encryption keys.  The appliance supports other cryptographic systems in addition to PowerPath and Connectrix.

Announcement significance

Data security never goes out of style.  With this announcement EMC has placed a strong product in the host-based encryption space and future rollouts for their Brocade partner will complete their network encryption product line.  The only thing missing is subsystem and device level encryption but we are certain that EMC is actively looking at these two solutions as well.

A PDF version of this is available at

EMC 2008 April 08 Announces new encryption options for PowerPath and Connectrix

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.

Oct 15, 2007
 

This Silverton Consulting (SCI) Storage Intelligence (StorInt™) Dispatch provides a summary of Seagate, LSI, and IBM’s collaboration on new options for securing disk data on LSI mega-RAID and control-unit-based storage.

Seagate’s strategy

Just about once a month we see news of another data security breach releasing thousands to millions of credit card and/or social security numbers.  Up until now these have mostly been due to laptop or tape cartridge loss.  However, data security is not limited to laptops and tape.  All that data residing on tapes originated on data center disk storage.

Consequently, one problem is that all disk drives are ultimately removed from data center protection for resale, refurbishing, or servicing.  Today, once a drive leaves the protection of a data center, the data on that disk can be read at will.  Tomorrow, if the data is on an encrypted drive with protection enabled, the data cannot be read at all.

Other options

There are many points where data being stored could be encrypted – at the host, in the network/appliance, at the controller, or at the drive.  There are advantages and disadvantages to each of them:

  • Host encryption burns host MIPS, and encrypted data cannot be compressed or de-duplicated.  Indexing and searching only work if the data is indexed/searched from hosts that have access to the encryption keys/software.  Host encryption protects not only data-at-rest but also data-in-flight.  Also, to support DR requirements, the host software and the encryption keys would need to be available at the secondary site.  Finally, having lots of ciphertext available to anyone listening allows certain cryptographic attacks that can be used to crack the keys being used.
  • Network/appliance encryption burns network/appliance cycles, and data being stored cannot be compressed or de-duplicated past the network/appliance doing the encryption.  Searching and indexing may be OK if the data is accessed over network/appliance hardware that has access to the encryption keys/software.  Network/appliance encryption provides limited data-in-flight protection (from the point the data is encrypted) but full data-at-rest security.  Also, for DR you would need a copy of the network/appliance hardware plus access to the encryption keys.  These products also allow cryptographic attacks used to crack keys, because both the plaintext and the ciphertext are available.
  • Control unit encryption burns control unit cycles and, depending on where encryption is done, may or may not impact data compression and de-duplication.  Searching and indexing are not impacted.  Control-unit encryption offers no protection of data-in-flight but does protect data-at-rest.  Also, for DR purposes you may or may not require copies of the control unit and the encryption keys at the remote site.   One issue is that when drives are shared across control units, the cryptographic keys would also need to be shared.  Also, the ciphertext is available once a drive is removed from the system and can once again be used in cryptographic attacks to guess keys.
  • Drive-based encryption burns drive cycles and has no impact on controller data compression or de-duplication.  Searching and indexing are not impacted.  Similarly, drive encryption offers no protection for data-in-flight but does protect data-at-rest.  For DR purposes there is no intrinsic requirement for keys or encryption at the remote site (but see below for LSI constraints).  Finally, as the ciphertext is not externally available, this option is relatively immune to cryptographic attacks.

A key question is which threats you are addressing with encryption:

  • One threat is having clear data outside the data center’s protected arena.  To answer this threat you must protect data-at-rest, and any of the above ways of encrypting data will suffice.
  • Another threat is having clear data accessible within your data center to unauthorized users/applications.  To answer this threat you must protect data-in-flight, and you must use host or network/appliance based encryption.

How drive encryption works

Encryption hardware is available on the drive, and encryption firmware and keys are added in a manufacturing feature-personalization step.  At first power-up the drive is able to read and write without authentication, but it is always encrypting data.  A control unit can use a special authentication key to lock a drive down so that it requires an authentication key before it will return clear data to the subsystem.  From that point on, at every power-up the drive does not respond to normal I/O commands until it receives proper authentication.  If authentication fails enough times, the drive destroys its encryption keys and renders all old encrypted data undecipherable.  This also happens when the drive is told to do a secure erasure.   Drive encryption keys are kept on the drive in all zones and on all heads, so any individual defect will not cause loss of encryption keys.
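A toy Python model of that lock-down behavior follows; the retry limit, key handling and method names are made up for illustration, since real drives implement this in firmware.

    import os

    class SelfEncryptingDrive:
        """Toy model of a self-encrypting drive's lock-down behavior (illustrative only)."""
        MAX_AUTH_FAILURES = 5                    # assumed retry limit, for illustration

        def __init__(self):
            self.media_key = os.urandom(16)      # drive always encrypts with this key
            self.auth_key = None                 # protection not yet enabled
            self.unlocked = True
            self.failures = 0

        def lock_down(self, auth_key):
            """Control unit enables protection; authentication now required at power-up."""
            self.auth_key = auth_key

        def power_up(self):
            self.unlocked = self.auth_key is None    # locked drives ignore normal I/O

        def authenticate(self, key):
            if self.media_key is None:
                return False                         # keys destroyed; data is gone for good
            if key == self.auth_key:
                self.unlocked, self.failures = True, 0
                return True
            self.failures += 1
            if self.failures >= self.MAX_AUTH_FAILURES:
                self.secure_erase()
            return False

        def secure_erase(self):
            """Destroy the media key; all previously written ciphertext becomes undecipherable."""
            self.media_key = None
            self.unlocked = False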

LSI controllers support cryptographic authentication in one of two ways:

  • With IBM’s enterprise key manager supplying authentication keys
  • Internally with the LSI controller supplying authentication keys

Either approach is secure and provides appropriate mechanisms for the drive to ensure it is talking to the right control unit.  If the drive needs to be moved to another control unit, its authentication key can be provided to the other control unit.  Drive data recovery can also be accomplished with the proper authentication key.  LSI and IBM EKM drive authentication keys can be backed up securely using yet another encryption key together with a pass phrase.

One advantage of drive encryption is that the drive supports a very quick and secure data erase by securely destroying its encryption keys.  The encryption is AES-128 and can be changed in the future without impacting system architecture.  Also, encrypted data is never available outside the drive, which makes many potential cryptographic attacks virtually impossible.

Problems overcome

Obviously, just having encrypted drives in your environment does not protect data-at-rest.  One must enable drive authentication, and whenever you replicate the data you must also use encrypted drives.  LSI has the concepts of a “security domain” and a “secure volume group”.  A secure volume group requires all drives in a RAID group to support drive encryption and to have encryption enabled.  A security domain is one or more secure volume groups holding data to be secured.  LSI constrains local and remote copies to stay within a security domain, i.e., encrypted drive data cannot be snapped, cloned, or remotely replicated to non-encrypted drives.

What’s the risk

Historically, data backup was the highest security risk.  Getting a backup copy of data was relatively easy and, once obtained, all the data was in the clear.  IBM, HP, Sun, and others have begun to encrypt tape data, which is quickly closing this exposure.

As a result, disk drives are the next weak point.  Security-conscious government agencies have typically not allowed disk drives to leave their premises – voiding device warranties.  Some security-conscious commercial entities have shredded or crushed disk drives in the past after running hours- or day-long secure-erasure passes.

Enterprise-class drives have an MTBF of around 1.3 million hours and low-end drives have an MTBF of 600,000 hours or more.  With 8,766 hours per year, this seems like a long time to wait for a drive to fail.  But it’s not unusual to have an enterprise data center with a PB of data on thousands of disk drives. With 2,000 enterprise disks you would lose one disk every 650 hours, roughly one drive a month, and with 2,000 low-end disks you would lose one every couple of weeks.   Most shops have a mixture of enterprise and low-end disks, so it’s not unusual to see a disk leave the data center every month.
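The arithmetic behind those intervals, assuming failures are spread evenly across the drive population (a rough steady-state approximation):

    HOURS_PER_YEAR = 8766
    drives = 2000

    for name, mtbf in [("enterprise", 1_300_000), ("low-end", 600_000)]:
        hours_between = mtbf / drives                       # 650 h and 300 h
        per_year = drives * HOURS_PER_YEAR / mtbf           # expected failures per year
        print(f"{name}: one failure every {hours_between:.0f} hours "
              f"(~{hours_between / 24:.1f} days), ~{per_year:.0f} per year")

    # enterprise: one failure every 650 hours (~27.1 days), ~13 per year
    # low-end:    one failure every 300 hours (~12.5 days), ~29 per year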

However, what’s actually on a drive leaving the shop, and how easy it is to extract sensitive data off the drive, is subject to some debate.  It’s not unusual for over 50% of the drives returned to a factory to have no defect found on them – meaning that one out of every two replaced drives reads and writes flawlessly.  These (non-encrypting) drives could easily be plugged into any compatible interface and all the data could be read out, indexed and searched.

In contrast, reading drive data and knowing what that data represents is somewhat difficult.  Most enterprise disk storage is configured in RAID groups, where any one drive is but one of four or more drives in a group.  Which RAID block addresses map to which drives in the RAID group is non-trivial, and knowing which RAID group blocks represent which LUN blocks is also non-trivial.  Also, both of these are vendor-specific and may depend on which features are enabled.  Not easy to crack, but not that hard for someone familiar with vendor internals.

Fortunately for hackers (and unfortunately for data centers), all this may not be necessary: once one can access the data, one could easily scan for likely text strings such as “Payroll”, “SSN”, “Credit Card Number”, etc.  This sort of “brute force” approach works well if you have the time, and (un)fortunately, once a drive is outside the protection of the data center, a hacker has the time.

If this is so easy, why don’t we see it on the news already?  It’s fairly easy for a data center to identify the data on a lost backup tape or laptop, but it’s much more difficult to know what data was on a drive leaving their shop.   Also, legislation requires disclosure for lost data tapes and laptops but has yet to catch up and require disclosure for lost disk drives.  Furthermore, resellers, refurbishers, and/or service organizations are not currently required to report missing disk drives.  Someday soon legislation will catch up and require such disclosure and tracking, and then this all will be commonplace.

Announcement significance

It won’t take long for drive and storage vendors to standardize on drive encryption to protect data-at-rest – for the cost of cryptographic authentication and encrypting drives, this problem can be solved.  Most large storage vendors already have key management and the rest can quickly come up to speed. Look for other drive vendors and storage subsystem suppliers to quickly come onboard.

Security is a never-ending journey.  Once you close one loophole another pops up quickly.  Drive encryption is a straightforward, inexpensive and secure defense against drives falling into the wrong hands outside the data center.  However, the next logical threat involves unauthorized access to clear data within the data center – to close this loophole protecting data-in-flight is needed.  Infrastructure put in place to support drive encryption should be easily extensible to this as well – stay tuned.

A PDF version of this can be found at

Seagate 2007 October 15 announcing new secure disk storage

Jul 24, 2007
 

This Silverton Consulting (SCI) Storage Intelligence (StorInt™) Dispatch provides a summary of Cisco’s recent SAN OS3.2 announcement, which introduces new fabric functionality – Storage Media Encryption (SME), Data Mobility Manager (DMM), and N-Port Virtualizer (NPV) – as well as new hardware: the MDS 9222i, a new version of their 9216i multiservice modular switch; the MDS 18/4-Port multiservice module; and the MDS 9134 multilayer fabric switch, a new version of the MDS 9124.

Fabric functionality franchise

In an affront to all the security and storage virtualization appliances, Cisco announced support for encryption and data migration as part of the fabric.  The debate on where data migration functionality should reside (fabric, separate appliance, or storage subsystem) has been going on for over two years now.  Cisco is adding fuel, and encryption, to this fire.

In this debate, to Cisco’s credit, fabric based migration and encryption can easily handle heterogeneous storage subsystems, can make better use of your current investment in fabric hardware, may provide a more redundant solution than a single point product can provide, and Cisco is not the first to offer fabric level functionality.

To the opposition’s benefit, non-fabric solutions have historically touted heterogeneous storage support, Cisco did not announce pricing so the TCO of the various solutions is hard to analyze, the current products are available in redundant, high-availability configurations, the current products have been out much longer, and finally, the current products support much more functionality.

Nonetheless, a case can be made that some of the functionality provided by current products is overkill, the install base of Cisco switches is much larger than that of most of the appliance solutions, and, as the migration option appears to be software, pricing can be one of Cisco’s competitive advantages.  So, where this debate ends up is mostly a function of market adoption, and it’s not too late to have another option on the table.

As far as where the functionality should really reside, there is no clear winner.  Yes, additional hardware for the other solutions costs additional money, but there are other solutions in the storage subsystem and the fabric that can negate this advantage.  Regarding the performance impacts of fabric vs. appliance vs. storage subsystem, the only thing that can be said is that the performance of the overall storage network must be retained, and any of these solutions can adhere to that.  TCO issues are mostly driven by the market and the aspirations of the vendors touting their solutions, so they are hard to nail down.

Storage Media Encryption (SME)

Cisco is introducing SAN-based encryption to provide for heterogeneous disk and tape device encryption.  It can be configured to encrypt traffic for anywhere from a single port up to all the ports in a VSAN.  SME comes with the MDS 9222i and MDS 18/4.  The MDS 18/4 can be installed in the MDS 9126i/A, MDS 9506, MDS 9509, or MDS 9513 and can provide encryption to any port in the director.  Each MDS 9222i or MDS 18/4 can sustain encryption throughput of up to 10Gb/s.

There are a number of point products for tape drive encryption on the market, and a few encryption appliances that can encrypt all traffic in or out, but this is the first encryption capability in the fabric itself.

Full 10Gb/s encryption should suffice for today’s tape drives, but by doing encryption in the fabric we negate tape drive compression and the VTL de-duplication functionality that is emerging.   Nonetheless, SME can provide a significant advantage to customers not using these features.

Data Mobility Manager (DMM)

Cisco is introducing DMM throughout their SAN-OS family of products.  Essentially, they offer a LUN-to-LUN migration capability with QoS-constrained operations.  Migration can take place while the source LUN is accessed/updated, and the target LUN will be synched up with the source before migration completes.  They also support Secure Erase of the source data.  Multiple LUNs can be migrated concurrently.  Migration is used for technology change-out, workload balancing, and storage consolidation.  Migration can be done between unequal-sized LUNs as long as the target is bigger than the source, and it can be done across fabrics.  Finally, migration only moves data up to the switch and back down, without using host connections or server cycles.
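A simplified Python sketch of such an online LUN-to-LUN migration using a dirty-block bitmap appears below; the block size, the in-memory volumes and the repeat-until-clean loop are illustrative assumptions, not Cisco's DMM internals.

    class Volume:
        """In-memory stand-in for a LUN (illustrative only)."""
        def __init__(self, num_blocks, block_size=512):
            self.num_blocks = num_blocks
            self.blocks = [bytes(block_size) for _ in range(num_blocks)]
        def read_block(self, i):
            return self.blocks[i]
        def write_block(self, i, data):
            self.blocks[i] = data

    class OnlineLUNMigration:
        """Copy source to target while the source stays in use (conceptual sketch)."""
        def __init__(self, source, target):
            assert target.num_blocks >= source.num_blocks   # target must be >= source
            self.source, self.target = source, target
            self.dirty = [True] * source.num_blocks         # everything needs copying

        def host_write(self, blk, data):
            """Hosts keep writing to the source; mark the block for (re)copying."""
            self.source.write_block(blk, data)
            self.dirty[blk] = True

        def copy_pass(self):
            copied = 0
            for blk in range(self.source.num_blocks):
                if self.dirty[blk]:
                    self.dirty[blk] = False                 # clear first so a concurrent
                    data = self.source.read_block(blk)      # write re-dirties the block
                    self.target.write_block(blk, data)
                    copied += 1
            return copied

        def migrate(self):
            while self.copy_pass():      # repeat until a pass copies nothing,
                pass                     # i.e. source and target are in sync

    src, dst = Volume(1024), Volume(2048)
    job = OnlineLUNMigration(src, dst)
    job.host_write(7, b"\x01" * 512)     # host updates continue during migration
    job.migrate()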

N-Port Virtualization (NPV)

In order to simplify management of the proliferation of blade servers, Cisco offers NPV.  This allows all the ports from a blade switch to be managed as a single port.  This helps simplify blade switch deployment by reducing the number of Domain IDs, minimizing interoperability issues with core SAN switches, and minimizing server and SAN admin coordination.  Cisco also intends to broaden NPV to support port channels and other features over time.

MDS 9222i Multiservice Modular Switch & MDS 18/4-Port Multiservice Module

As discussed above, SME is being rolled out with the MDS 9222i and MDS 18/4-Port hardware.  The MDS 18/4-Port module is a blade version of the MDS 9222i.  This hardware supports FC, FICON, FCIP, and iSCSI with 18 4Gb/s FC ports and four 1Gb/s Ethernet ports, with SAN routing on each port.  This is the first 4Gb/s support for this equipment and it also includes hardware-based IPsec and FCIP compression as well as FCIP disk and tape I/O acceleration support.

MDS 9134 Multilayer Fabric Switch

Cisco has introduced another version of their fabric switch, the MDS 9134, suitable for standalone applications as well as enterprise core-edge scenarios.  Two MDS 9134s can be stacked to support up to 64 4Gb/s ports.  The MDS 9134 has two 10Gb/s FC ports for inter-switch links.  It also comes with on-demand ports: 24 base 4Gb/s ports with an 8-port extension, and the two 10Gb/s ports can also be licensed.

Announcement significance

Cisco is throwing down the gauntlet by offering services in the fabric.  It makes sense if you are a fabric vendor to move functionality to the fabric but there is much debate as to where the functionality should truly reside and the market is the place to answer this question definitively.

As for the hardware announcements, the hardware encryption engine combined with the multiservice modules is a good idea and one that can be expanded over time.  The other hardware announcements are consistent with the march of technology over time.

A PDF version of this can be found at:

Cisco 2007 July 24 SAN OS3.2 announcement