EMCWorld day 2

Day 2 saw the release of new VMAX and VPLEX capabilities hinted at in Joe’s keynote yesterday. Namely:

VMAX announcements

VMAX now supports:

  • Native FCoE with 10GbE support – VMAX now directly supports FCoE, 10GbE iSCSI and SRDF
  • Enhanced Federated Live Migration – now supports additional multi-pathing software, specifically adding MPIO alongside PowerPath, with more multi-pathing solutions to come
  • Support for RSA’s external key management (RSA DPM) for their internal VMAX data security/encryption capability.

It was mentioned more than once that the latest Enginuity release, 5875, is being adopted at almost 4x the rate of the prior generation code. The latest release came out earlier this year and provided a number of key enhancements to VMAX capabilities, not the least of which was FAST VP, sub-LUN migration across up to 3 storage tiers.

Another item of interest was that FAST VP is driving a lot of flash sales; it seems it’s leading to another level of flash adoption. According to EMC, almost 80-90% of customers can get by with just 3% of their capacity in flash and still gain all the benefits of flash performance at significantly less cost.
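
Why such a small flash tier can work comes down to I/O skew. Here’s a minimal sketch, assuming a heavily skewed (Pareto-distributed) workload; the extent count, skew parameter and even the use of 1GB extents are illustrative assumptions, not EMC data:

```python
# Illustrative only: under an assumed heavily skewed (Pareto) workload, a
# small flash tier can absorb a large share of the I/O.  Extent count, skew
# parameter and the 3% budget are assumptions, not EMC figures.
import random

random.seed(0)
NUM_EXTENTS = 10_000                       # hypothetical 1GB extents in a pool
io_counts = [random.paretovariate(1.5) for _ in range(NUM_EXTENTS)]

flash_slots = int(0.03 * NUM_EXTENTS)      # 3% of capacity placed on flash
hottest = sorted(range(NUM_EXTENTS), key=lambda i: io_counts[i], reverse=True)
covered = sum(io_counts[i] for i in hottest[:flash_slots]) / sum(io_counts)
print(f"3% of extents on flash absorb {covered:.0%} of the simulated I/O")
```

The exact share absorbed depends entirely on how skewed the real workload is, which is presumably what EMC’s 80-90%-of-customers figure is getting at.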

VPLEX announcements

VPLEX announcements included:

  • VPLEX Geo – a new asynchronous VPLEX cluster-to-cluster communications methodology which allows the alternate active VPLEX cluster to be up to 50msec of latency away
  • VPLEX Witness – a virtual machine which provides adjudication between the two VPLEX clusters in case they suffer some sort of communications breakdown. Witness can run anywhere with access to both VPLEX clusters and is intended to sit outside the two fault domains where the clusters reside.
  • New VPLEX hardware – using the latest Intel microprocessors
  • VPLEX now supports NetApp ALUA storage – the latest generation of NetApp storage.
  • VPLEX now supports thin-to-thin volume migration – previously VPLEX had to re-inflate thinly provisioned volumes, but with this release there is no need to re-inflate prior to migration.

VPLEX Geo

The new Geo product, in conjunction with VMware and Hyper-V, allows for quick migration of VMs across distances that support up to 50msec of latency. There are some current limitations with respect to the specific VMware VM migration types that can be supported, but Microsoft Hyper-V Live Migration support is readily available at the full 50msec latency. Note that it is not distance but latency that limits how far apart the VPLEX clusters can be.
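
Since the limit is latency rather than distance, a rough conversion is useful. The sketch below assumes light propagates through optical fiber at roughly 200 km per millisecond and treats the quoted figures as one-way latency budgets; real networks add switching, queuing and protocol overhead, so achievable separations are considerably shorter:

```python
# Rough ceiling on cluster separation for a given one-way latency budget.
# Assumes ~200 km per millisecond of propagation in optical fiber and ignores
# switching, queuing and protocol overhead, so real distances will be shorter.
FIBER_KM_PER_MS = 200.0

def max_separation_km(one_way_latency_ms: float) -> float:
    return one_way_latency_ms * FIBER_KM_PER_MS

print(max_separation_km(10))   # VPLEX Metro class (~10 ms) -> 2,000 km ceiling
print(max_separation_km(50))   # VPLEX Geo class   (~50 ms) -> 10,000 km ceiling
```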

Recall that VPLEX has three distinct use cases:

  • Infrastructure availability which provides fault tolerance for your storage and system infrastructure
  • Application and data mobility which means that applications can move from data center to data center and still access the same data/LUNs from both sites.  VPLEX maintains cache and storage coherency across the two clusters automatically.
  • Distributed data collaboration which means that data can be shared and accessed across vast distances. I have discussed this extensively in my Data-at-a-Distance (DaaD) post, VPLEX surfaces at EMCWorld.

Geo is the third product version for VPLEX: VPLEX Local supports virtualization within a data center; VPLEX Metro supports two VPLEX clusters up to 10msec of latency apart, which generally covers metropolitan-wide distances; and Geo moves to asynchronous cache coherence technologies. Finally, coming sometime later is VPLEX Global, which eliminates the restriction of two VPLEX clusters or data centers and can support 3-way or more VPLEX clusters.

Along with Geo, EMC showed some new partnerships, such as with SilverPeak, Ciena and others, used to reduce bandwidth requirements and cost for the Geo asynchronous solution. Also announced at the show were some new VPLEX partnerships with Quantum StorNext and others that address DaaD solutions.

Other announcements today

  • Cloud tiering appliance – The new appliance is a renewed RainFinity solution which provides policy-based migration to and from the cloud for unstructured data. Presumably the user identifies file aging criteria which can be used to trigger cloud migration to Atmos-supported cloud storage (see the sketch after this list). The new appliance can also archive file data to the Data Domain Archiver product.
  • Google enterprise search connector to VNX – EMC showed a Google Search Appliance (GSA) indexing VNX-stored data, bringing enterprise-class, scalable search capabilities to VNX storage.
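
On the cloud tiering appliance: the product’s actual policy engine isn’t described here, but age-based cloud migration is easy to sketch. Everything below (the directory path, the 180-day threshold, the migrate_to_cloud helper) is hypothetical, purely to illustrate the kind of criteria a user might define:

```python
# Hypothetical sketch of an age-based file migration policy, loosely modeled
# on what a cloud tiering appliance might do; not the actual product logic.
import os
import time

AGE_THRESHOLD_DAYS = 180          # assumed aging criterion set by the admin

def migrate_to_cloud(path: str) -> None:
    # Placeholder: the real appliance would push the file to Atmos-backed
    # cloud storage (or the Data Domain Archiver) and leave a stub behind.
    print(f"would migrate {path}")

def sweep(directory: str) -> None:
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getatime(path) < cutoff:   # not accessed recently
                migrate_to_cloud(path)

sweep("/exports/unstructured")    # hypothetical NAS export
```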

There were a bunch of other announcements today at EMCWorld, but these seemed most important to me.

Comments?

Hitachi’s VSP vs. VMAX

Today’s announcement of Hitachi’s VSP brings another round to the competition between EMC and Hitachi/HDS in the enterprise. VSP’s introduction, GA and orderable today, takes the rivalry to a whole new level.

I was on SiliconANGLE’s live TV feed earlier today discussing the merits of the two architectures with David Floyer and Dave Vellante from Wikibon. In essence, there seems to be a religious war going on between the two.

Examining VMAX, it’s obviously built around a concept of standalone nodes which all have cache, frontend, backend and processing components built in. Scaling the VMAX, aside from storage and perhaps cache, involves adding more VMAX nodes to the system. VMAX nodes talk to one another via an external switching fabric (RapidIO currently). The hardware, sophisticated packaging, IO connection technology and other internals aside, looks very much like a 2U server one could purchase from any number of vendors.

On the other hand, Hitachi’s VSP is a purpose-built storage engine (or storage computer, as Hu Yoshida says). While the architecture is not a radical revision of USP-V, it’s a major upleveling of all component technology, from the 5th generation cross bar switch to the new ASIC-driven front-end and back-end directors, the shared control L2 cache memory and the use of quad-core Intel Xeon processors. Much of this hardware is unique, sophistication abounds, and it looks very much like a blade system for the storage controller community.

The VSP and VMAX comparison is sort of like an open source vs. closed source discussion. VMAX plays the role of open source champion, largely depending on commodity hardware and sophisticated packaging with minimal ASIC technology. As evidence of the commodity hardware approach, VPLEX, EMC’s storage virtualization engine, reportedly runs on VMAX hardware. Commodity hardware lets EMC ride the technology curve as it advances for other applications.

Hitachi VSP plays the role of closed source champion. Its functionality is locked inside a proprietary hardware architecture, ASICs and interfaces. The functionality it provides is tightly coupled with its internal architecture, and Hitachi probably believes that by doing so it can provide better performance and more tightly integrated functionality to the enterprise.

Perhaps this doesn’t do justice to either development team. There is plenty of unique proprietary hardware and sophisticated packaging in VMAX, but EMC has taken the approach of separate but equal nodes. Hitachi, in contrast, has distributed this functionality out to various components like front-end directors (FEDs), back-end directors (BEDs), cache adaptors (CAs) and virtual storage directors (VSDs), each of which can scale independently, i.e., it doesn’t require more BEDs to add FEDs or CAs. Ditto for VSDs. Each can be scaled separately up to the maximum that fits inside a controller chassis, and then if needed, you can add another controller chassis.
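
A toy model may make the scaling difference concrete. The component names come from the discussion above, but every count and chassis limit below is a placeholder rather than a real configuration maximum:

```python
# Toy model of the two scaling approaches; component names come from the post,
# but every count and limit below is a placeholder, not a real config maximum.

# VMAX-style: "separate but equal" nodes, so you grow by adding whole nodes.
vmax_nodes = 4
vmax_nodes += 1                    # need more front-end ports? add a whole node

# VSP-style: FEDs, BEDs, CAs and VSDs each scale independently up to a
# per-chassis limit; past that you add another controller chassis.
vsp = {"FED": 2, "BED": 2, "CA": 2, "VSD": 2}
chassis_limit = {"FED": 6, "BED": 4, "CA": 8, "VSD": 4}   # placeholder limits

def add(kind: str) -> None:
    if vsp[kind] < chassis_limit[kind]:
        vsp[kind] += 1             # grow just this component type
    else:
        print(f"{kind} at chassis limit; add another controller chassis")

add("FED")                         # more host ports without touching BEDs/CAs
print(vmax_nodes, vsp)
```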

One has an internal switching infrastructure (the VSP cross bar switch) and the other uses external switching infrastructure (the VMAX RapidIO fabric). The promise of external switching, like that of commodity hardware, is that you can share the R&D funding to enhance this technology with other users. The disadvantage is that architecturally you may incur more latency to propagate an IO to other nodes for handling.

With VSP’s cross bar switch, you may still need to move IO activity between VSDs, but this can be done much faster, and any VSD can access any CA, BED or FED resource required to perform the IO, so the need to move IO is reduced considerably. This provides a global pool of resources that any IO can take advantage of.

In the end, blade systems like VSP and separate server systems like VMAX can all work their magic. Both systems have their place today and in the foreseeable future. Where blade systems shine is in dense packaging, power and cooling efficiency, and bringing a lot of horsepower to a small package. On the other hand, server systems are simple to deploy and connect together, with minimal limitations on the number of servers that can be brought together.

Blade systems can probably bring more compute (storage IO) power to bear within the same volume than multiple server systems, but the hardware is much more proprietary and costs lots of R&D dollars to maintain leading-edge capabilities.

Typed this out after the show, hopefully I characterized the two products properly. If I am missing anything please let me know.

[Edited for readability, grammar and numerous misspellings – last time I do this on an iPhone. Thanks to Jay Livens (@SEPATONJay) and others for catching my errors.]

SSD shipments start to take off

3 rack V-Max storage subsystem from EMC

I was on an analyst call today where Bob Wambach of EMC was discussing their recent success with V-Max, the newest version of their highly successful Symmetrix storage subsystem. But what was more interesting was their announcement of having sold 1PB of enterprise flash storage on Symmetrix and almost 2PB total across all EMC product lines in 1H09. Symmetrix SSD shipments include both DMX and V-Max installs. During 1H09, EMC shipped both 146GB and 400GB SSDs, so it’s hard to put an exact drive count on this capacity, but at 146GB per drive, 1PB of Symmetrix SSD would be around 6.9K SSDs, and the ~2PB total a maximum of ~14K drives.
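
The drive-count estimate is simple division; spelled out below (decimal units, and assuming every drive is 146GB, which gives the upper bound on drive count):

```python
# Converting shipped flash capacity to a rough drive count (decimal units).
# The 146GB/400GB mix is unknown, so assuming all 146GB drives gives the
# maximum possible drive count.
GB_PER_PB = 1_000_000

print(1 * GB_PER_PB / 146)    # Symmetrix: ~6,849 drives if all were 146GB
print(2 * GB_PER_PB / 146)    # All EMC:   ~13,699 drives, i.e. ~14K maximum
```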

SSD drive shipments vs. hard drives

To put this in perspective, ~540M hard drives were shipped in 2008, and with a ~7% decline in 2009 this should equate to around 502M drive shipments in 2009. But this includes all drives, and if 15-20% of these are data center storage, then ~75 to ~100M data center hard drives will be shipped in ’09. Looking at just the first half, probably close to 40% of the whole year, ~30-40M data center hard drives were shipped across the industry in 1H09. In Q2’09 EMC had a 22.4% storage revenue market share; using this market share for all of 1H09, they probably shipped ~7.8M data center hard drives during 1H09 (assuming revenue correlates with drive shipments). Hence, 14K SSDs represents a very small but growing proportion (<0.2%) of all drives sold by EMC.
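
Laid out step by step, the estimate looks like this; every factor is an assumption stated above, and EMC’s revenue share stands in as a proxy for unit share:

```python
# The same estimate, step by step; every factor is an assumption from the
# text, and EMC's revenue share is used as a proxy for unit share.
total_2009 = 540e6 * (1 - 0.07)                          # ~502M drives in 2009
dc_2009 = (total_2009 * 0.15 + total_2009 * 0.20) / 2    # ~88M data center drives
dc_1h09 = dc_2009 * 0.40                                 # ~35M shipped in 1H09
emc_1h09 = dc_1h09 * 0.224            # roughly the ~7.8M EMC drives cited above
print(emc_1h09, 14_000 / emc_1h09)    # 14K SSDs are well under 0.2% of that
```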

Of course this is just the start

On the analyst call today EMC provided a couple of examples of recent SSD installations. In one example, a customer was looking at a US$3M mainframe upgrade but instead went with a $500K SSD upgrade. EMC was able to examine their current storage, identify their hottest, most active LUNs and convert these to SSDs. Once this was done, EMC was able to solve the customer’s performance problems, which allowed them to defer the mainframe upgrade.

Data center access patterns

Some statistics from a year-long EMC data center study, analyzing detailed IO traces from around 600 data centers, show that over 60% of data center data is infrequently accessed. EMC believes such data can best be left on high capacity SATA drives. As for the rest, it wouldn’t surprise me if 15-20% is accessed frequently enough to reside on SSD/flash drives for improved performance, and the remaining 20-25% is probably best served today left on FC drives.

Nowadays, EMC goes through a manual analysis to identify which data to place on SSDs, but in the near future their FAST (Fully Automated Storage Tiering) software will be able to migrate data to the right-performing storage tier automatically. With FAST in place, supposedly one only needs to upgrade to SSDs and turn FAST on; after that it will analyze your data reference patterns and automatically move your performance-critical data to SSDs.
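
The details of FAST aren’t public here, but the basic idea of placing data by access frequency can be sketched in a few lines. The extent IDs, thresholds and tier assignments below are illustrative assumptions, not EMC’s actual algorithm:

```python
# Hypothetical sketch of an automated tiering pass in the spirit of FAST;
# not EMC's actual algorithm.  Extent IDs, thresholds and tier names are
# illustrative assumptions.
def retier(extent_io_counts, hot_threshold, cold_threshold):
    """Assign each extent to a storage tier based on how often it was accessed."""
    placement = {}
    for extent, io_count in extent_io_counts.items():
        if io_count >= hot_threshold:
            placement[extent] = "SSD"      # promote frequently accessed data
        elif io_count <= cold_threshold:
            placement[extent] = "SATA"     # demote rarely accessed data
        else:
            placement[extent] = "FC"       # the middle ground stays on FC disk
    return placement

print(retier({"e1": 50_000, "e2": 120, "e3": 9_000},
             hot_threshold=10_000, cold_threshold=500))
# -> {'e1': 'SSD', 'e2': 'SATA', 'e3': 'FC'}
```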

The coming SSD world

So, SSDs are starting to be adopted by organizations both large and small. Perhaps current SSD drive shipments are insignificant compared to hard drives, but given today’s realities of data use there seems no reason that SSD adoption can’t accelerate and someday claim 10% or more of all data center drive shipments. Hence, at today’s numbers, this means almost 10 million SSDs being shipped each year.
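
For reference, that projection is just 10% of the earlier data center drive ballpark:

```python
# Projection from the earlier ballpark: 10% of ~75-100M data center drives.
print([0.10 * n for n in (75e6, 100e6)])   # roughly 7.5M to 10M SSDs per year
```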

EMCWorld News

At EMC World this past week, all the news was about VMware, Cisco and EMC and how they are hooking up to address the needs of the new data center user.

VMware introduced vSphere, the latest release of their software, which contained significant tweaks to improve storage performance.

Cisco was not announcing much at the show other than to say that they support vSphere with their Nexus 1000v software switch.

EMC discussed their latest V-Max hardware, PowerPath/VMware, an upcoming release of NaviSphere for VM, and some other plugins that allow the VMware admin to see EMC storage views (EMC viewer).

On another note, it seems that EMC is selling all the SSDs they can, and they recently introduced their next generation SSD drive with 400GB of storage.