Day 2 saw releases for new VMAX and VPLEX capabilities hinted at yesterday in Joe’s keynote. Namely,
Native FCoE with 10GbE support – VMAX now directly supports FCoE, as well as 10GbE iSCSI and SRDF
Enhanced Federated Live Migration now supports other multi-pathing software, specifically adding MPIO alongside PowerPath, with more multi-pathing solutions to come
Support for RSA’s external key management (RSA DPM) for VMAX’s internal data security/encryption capability.
It was mentioned more than once that the latest Enginuity release, 5875, is being adopted at almost 4X the rate of the prior-generation code. The release came out earlier this year and provided a number of key enhancements to VMAX, not the least of which was FAST VP, sub-LUN migration across up to three storage tiers.
Another item of interest was that FAST VP is driving a lot of flash sales; it seems it’s leading to another level of flash adoption. EMC feels that 80-90% of customers can get by with just 3% of their capacity in flash and still gain the benefits of flash performance at significantly less cost.
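A quick back-of-envelope check of that 3% claim. The $/GB figures below are my own illustrative assumptions, not EMC pricing, but they show why a small flash tier is attractive:

```python
# Hypothetical prices, purely for illustration (not EMC's numbers).
FLASH_PER_GB = 20.0   # assumed enterprise flash $/GB
DISK_PER_GB = 1.0     # assumed FC/SAS disk $/GB

def blended_cost_per_gb(flash_fraction: float) -> float:
    """Cost per GB of a tiered pool with the given fraction of flash."""
    return flash_fraction * FLASH_PER_GB + (1 - flash_fraction) * DISK_PER_GB

all_disk = blended_cost_per_gb(0.0)    # ~1.00 $/GB
tiered = blended_cost_per_gb(0.03)     # 0.03*20 + 0.97*1 ~= 1.57 $/GB
all_flash = blended_cost_per_gb(1.0)   # ~20.00 $/GB

# A 3% flash tier costs a fraction of an all-flash pool, yet (per FAST VP's
# premise) the hot sub-LUN extents land on flash and capture most of the
# performance benefit.
print(f"tiered pool is {tiered / all_disk:.2f}x all-disk cost, "
      f"{tiered / all_flash:.2%} of all-flash cost")
```

With these assumed prices, 3% flash adds roughly 57% to an all-disk pool’s cost, versus 20X for going all flash.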
VPLEX announcements included:
VPLEX Geo – a new asynchronous cluster-to-cluster communications methodology that allows the alternate active VPLEX cluster to be up to 50msec of latency away
VPLEX Witness – a virtual machine which provides adjudication between the two VPLEX clusters just in case the two clusters had some sort of communications breakdown. Witness can run anywhere with access to both VPLEX clusters and is intended to be outside the two fault domains where the VPLEX clusters reside.
VPLEX new hardware – now using the latest Intel microprocessors
VPLEX now supports NetApp ALUA storage – the latest generation of NetApp storage.
VPLEX now supports thin-to-thin volume migration – previously VPLEX had to re-inflate thinly provisioned volumes, but with this release there is no need to re-inflate prior to migration.
The new Geo product in conjunction with VMware and Hyper-V allows for quick migration of VMs across distances that support up to 50msec of latency. There are some current limitations on the specific VMware VM migration types that can be supported, but Microsoft Hyper-V Live Migration support is readily available at the full 50msec latency. Note, we are not talking about distance here but latency as the limiting factor for how far apart the VPLEX clusters can be.
Recall that VPLEX has three distinct use cases:
Infrastructure availability, which provides fault tolerance for your storage and system infrastructure
Application and data mobility which means that applications can move from data center to data center and still access the same data/LUNs from both sites. VPLEX maintains cache and storage coherency across the two clusters automatically.
Distributed data collaboration, which means that data can be shared and accessed across vast distances. I have discussed this extensively in my Data-at-a-Distance (DaaD) post, VPLEX surfaces at EMCWorld.
Geo is the third product version for VPLEX: VPLEX Local supports virtualization within a data center; VPLEX Metro supports two VPLEX clusters up to 10msec of latency apart, generally metropolitan-wide distances; and Geo moves to asynchronous cache-coherence technology. Finally, coming sometime later is VPLEX Global, which eliminates the restriction of two VPLEX clusters or data centers and can support 3-way or more VPLEX clusters.
Along with Geo, EMC showed some new partnerships, such as with Silver Peak, Ciena and others, used to reduce bandwidth requirements and cost for the Geo asynchronous solution. Also announced at the show were some new VPLEX partnerships with Quantum StorNext and others that address DaaD solutions.
Other announcements today:
Cloud tiering appliance – the new appliance is a renewed Rainfinity solution that provides policy-based migration of unstructured data to and from the cloud. Presumably the user identifies file-aging criteria that can be used to trigger cloud migration to Atmos-supported cloud storage. The new appliance can also archive file data to the Data Domain Archiver product.
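To make the idea concrete, a file-aging policy of this kind might look like the sketch below. The 90-day threshold and function names are my assumptions, not the appliance’s actual interface:

```python
import os
import time

# Assumed policy: files untouched for 90 days become migration candidates.
AGE_THRESHOLD_DAYS = 90

def is_migration_candidate(path, now=None):
    """True if the file's last access time exceeds the aging threshold."""
    now = time.time() if now is None else now
    age_days = (now - os.stat(path).st_atime) / 86400
    return age_days > AGE_THRESHOLD_DAYS

def scan(root):
    """Walk a file tree and collect candidates for cloud migration."""
    candidates = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if is_migration_candidate(path):
                candidates.append(path)
    return candidates
```

The real appliance presumably applies richer criteria (size, type, owner) and then performs the actual move to Atmos or Data Domain Archiver; this only shows the aging test.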
Google enterprise search connector to VNX – a Google Search Appliance (GSA) indexing VNX-stored data, bringing enterprise-class, scalable search capabilities to VNX storage.
A bunch of other announcements today at EMCWorld but these seemed most important to me.
EMC announced today a couple of new twists on the flash/SSD storage end of the product spectrum. Specifically,
They now support all-flash/no-disk storage systems. Apparently they have been getting requests to eliminate disk storage altogether – probably government IT, but maybe also some high-end enterprise customers with low-power, high-performance requirements.
They are going to roll out enterprise MLC flash. It’s unclear when it will be released, but it’s coming soon, with a different price curve and (maybe) different longevity, and it brings down the cost of flash by ~2X.
EMC is going to start selling server-side flash, using FAST-like caching algorithms to knit the storage to the server-side flash. It’s unclear what server flash they will be using, but it sounds a lot like a Fusion-io type of product. How well the server cache and the storage cache talk to each other is another matter. Chuck Hollis said EMC decided to redraw the boundary between storage and server: there is now a dotted line that spans the SAN/NAS boundary and carves out a piece of the server, which amounts to on-server caching.
Interesting, to say the least. How well it’s tied to the rest of the FAST suite is critical. What happens when one or the other loses power? As flash is non-volatile, no data would be lost, but the currency of the data for shared storage may be another question. Also, having multiple servers in the environment may require cache coherence across the servers and storage participating in this data network!?
Some teaser announcements from Joe’s keynote:
VPLEX asynchronous, active-active, supporting two-data-center access to the same data over 1700km apart, Pittsburgh to Dallas.
New Isilon record scalability and capacity with the NL appliance, which can now support a 15PB file system with trillions of files in it. One gene-sequencing customer says a typical assay generates 500M objects/files…
Embracing open-source Hadoop: EMC will support a Hadoop distro in an appliance or software-only solution
Pat G also showed an EMC Greenplum appliance searching an 8B-row database to find out how many products had been shipped to a specific zip code…
At EMCWorld today Pat Gelsinger had a pair of VPLEXes flanking him on stage and actively moving VMs from “Boston” to “Hopkinton” data centers. They showed a demo of moving a bunch of VMs from one to the other with all of them actively performing transaction processing. I have written about EMC’s vision in a prior blog post called Caching DaaD for Federated Data Centers.
I talked to a vSpecialist at the blogging lounge afterwards and asked him where the data actually resided for the VMs that were moved. He said the data was synchronously replicated and actively being updated at both locations. They proceeded to long-distance teleport (vMotion) 500 VMs from Boston to Hopkinton. After that completed, Chad Sakac powered down the ‘Boston’ VPLEX and everything in ‘Hopkinton’ continued to operate. All this was done on stage, so the Boston and Hopkinton data centers were possibly both located in the convention center, but it was interesting nonetheless.
I asked the vSpecialist how they moved the IP address between the sites, and he said they shared the same IP domain. I am no networking expert, but I felt that moving network addresses seemed like the last problem to solve for long-distance vMotion. However, he said Cisco had solved this with their OTV (Overlay Transport Virtualization) for the Nexus 7000, which could move IP addresses from one data center to another.
Later at the Expo, I talked with a Cisco rep who said they do this by encapsulating Layer 2 protocol messages in a Layer 3 packet. Once encapsulated, it can be routed over anyone’s gear to the other site, and as long as there is another Nexus 7K switch at the other site, within the proper IP domain shared with the server targets for vMotion, it all works fine. I didn’t ask what happens if the primary Nexus 7K switch/site goes down, but my guess is that the IP address movement would cease to work. For active VM migration between two operational data centers, though, it all seems to hang together. I asked Cisco if OTV was a formal standard TCP/IP protocol extension, and he said he didn’t know – which probably means that other switch vendors won’t support OTV.
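The encapsulation idea the Cisco rep described can be sketched in miniature. This is a toy header for illustration only, not Cisco’s actual OTV wire format:

```python
import struct

# Toy MAC-in-IP encapsulation: wrap an Ethernet (L2) frame inside a
# minimal fake L3 header so it can be routed between sites, then unwrap
# it at the far-side switch. Header: src IP (4B), dst IP (4B), length (2B).
HEADER_FMT = "!4s4sH"
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 10 bytes

def encapsulate(l2_frame: bytes, src_ip: str, dst_ip: str) -> bytes:
    """Prepend the fake L3 header so the frame travels as ordinary payload."""
    header = struct.pack(HEADER_FMT,
                         bytes(int(o) for o in src_ip.split(".")),
                         bytes(int(o) for o in dst_ip.split(".")),
                         len(l2_frame))
    return header + l2_frame

def decapsulate(packet: bytes) -> bytes:
    """Strip the header, recovering the original frame for local delivery."""
    _src, _dst, length = struct.unpack(HEADER_FMT, packet[:HEADER_LEN])
    return packet[HEADER_LEN:HEADER_LEN + length]

# Round trip: dst MAC, src MAC, payload survive the L3 hop unchanged,
# which is what lets the VM keep its L2 identity across sites.
frame = b"\x02\x00\x00\x00\x00\x01" + b"\x02\x00\x00\x00\x00\x02" + b"payload"
assert decapsulate(encapsulate(frame, "10.0.0.1", "10.0.0.2")) == frame
```

Real OTV adds much more (control-plane MAC advertisement, multicast handling), but the core trick is exactly this: the frame crosses the WAN as opaque L3 payload.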
There was a lot of other stuff at EMCWorld today and at the Expo.
EMC’s Content Management & Archiving group was renamed Information Intelligence.
EMC’s Backup Recovery Systems group was in force on the Expo floor, with a big pavilion featuring Avamar, NetWorker and Data Domain.
EMC keynotes were mostly about the journey to the private cloud. VPLEX seemed to be crucial to this journey as EMC sees it.
EMCWorld’s show floor was impressive. Lots of major partners were there: RSA, VMware, Iomega, Atmos, VCE, Cisco, Microsoft, Brocade, Dell, CSC, STEC, Forsythe, QLogic, Emulex and many others. I talked at length with Microsoft about SharePoint 2010 – still trying to figure that one out.