EMC World 2012 part 1 – VNX/VNXe

Plenty of announcements this morning from EMC World 2012. I’ll try to group them into different posts. Today’s post covers the Unified Storage Division’s VNX/VNXe announcements:

  • New VNXe3150, which fills out the lower end of the VNXe line and replaces the VNXe3100 (though the two will coexist for a while). The new storage system supports 2.5″ drives, has quad-core processing, now supports SSDs as a static storage tier (no FAST yet), has a 100-drive capacity, supports 3.5″ 3TB drives, and has dual 10GbE front-end interface cards. The new system provides a 50% performance and capacity increase in the same rack.
  • New VNX software now supports 256 read-writeable snapshots per LUN; the previous limit was 8 (I think). EMC has also improved storage pooling for both VNX and VNXe, which now allows multiple types of RAID groups per pool (previously they all had to be the same RAID level), rebalancing across RAID groups for better performance, and new larger RAID 5 & 6 groups (why?). They now offer better storage analytics with VMware vCenter, providing impressive integration with FAST VP and FAST Cache and supplying performance and capacity information, allowing faster diagnosis and resolution of storage issues under VMware.

Stay tuned, more to come I’m sure.


VMware’s CEO at EMCWorld

At EMCWorld earlier this week, Paul Maritz, CEO of VMware, took center stage and talked about what VMware is doing to help customers in their journey to the cloud. From his perspective, there seem to be three phases to the journey, but it all starts with virtualization.

When companies start down this path they often begin by virtualizing things they don’t have to ask permission to virtualize. Services such as file and print are first to go, and virtualization generally caps out at around 20% of the servers in the data center. This ends phase 1.

After that, it gets harder

The trouble comes when IT is considered a tax on the rest of the business.  As such, there is little incentive from any application owner/business unit to make IT services more efficient.

The catalyst for further virtualization is often the failure of physical infrastructure. IT quickly responds to such problems with virtualization as a temporary solution to get the service back online. But as the application owner sees the speed of response and provisioning with no concurrent loss in SLAs, applications are left on virtualized infrastructure.

Once one business unit takes the plunge and the advantages are readily apparent, the rest follow. The data center then quickly goes from 20% to 60% virtualized. This ends phase 2.

There wasn’t a lot of discussion on what it takes to go from 60% to 100% virtualized, which he calls phase 3. But everything VMware is rolling out these days is intended to make that final transformation easier.

VMware solutions to get data center 100% virtualized

It all starts with vSphere, the management interface for the multitude of VMs that now populate the data center, providing resource pooling and scheduling of virtual machines on physical server environments.

Next comes vShield, which surrounds these virtual machines with a logical security perimeter that can migrate along with the VM as it moves from server to server, or from the data center to the cloud and back again.

Finally, there is vCloud Director, which provides for seamless movement of VMs from private to public cloud and back again.

As proof of all this becoming more important to the data center, Paul showed a slide indicating that more and more of VMware’s automated services are being used in their biggest customer environments. For example, since 2008:

  • the use of vMotion (VM migration) has gone from 53% to 86%
  • the use of HA (high availability) has gone from 41% to 74%
  • the use of DRS (dynamic resource scheduling) has gone from 37% to 64%
  • the use of storage vMotion (data migration) has gone from 27% to 65%
  • the use of Fault Tolerance has gone from N/A to 23%

Evidently, automation is becoming more important to many VMware customers.

Application development changes as well

But the transformation of applications will be even more significant. In the past, developers had to concern themselves with the complexity of O/S interfaces, physical hardware, networking and storage infrastructure and end-user interfaces.

But today developers have moved beyond these concerns, reducing the complexity of application development by using technologies such as SpringSource, Ruby on Rails, and other development frameworks. These frameworks are optimized for development and de-emphasize the compilation and physical optimizations that were needed in the past but that today’s hardware no longer requires.

To this approach VMware now adds Cloud Foundry, an open source project they have been working on for some time. The idea is that, to target the cloud O/S of the future, one need only code to Cloud Foundry services and can then operate anywhere on any compatible cloud infrastructure. Cloud Foundry is released under the Apache 2 open source license and is available for anyone to use. [How does this differ from OpenStack?]

End-user computing changes

The final transformation is in the end-user computing delivery platform. As Paul said, the post-PC world has arrived. Mobile environments are becoming more pervasive, and thus deploying computing/application services to these environments is taking higher priority than the more traditional desktop deployments.

To this end, VMware is working on two projects, Horizon and MVP, both of which attempt to make end-device deployment easier. One capability he discussed was a virtual smart phone that can be secured and deployed on any number of mobile devices, providing a standard set of smart phone services that application developers can use to create one app for all smart phones [at least that’s the vision].


I had taken notes at Paul’s keynote session but held off blogging about it until now, as there didn’t seem to be anything specific about EMC announcements in it.


EMCWorld day 3 …

Sometime this week EMC announced a new generation of Isilon NearLine storage, which now includes HGST 3TB SATA disk drives. With the new capacity, a multi-node (144) Isilon cluster using 108NL nodes can support 15PB of file data in a single file system.

Some of the booths along the walk to the solutions pavilion highlight EMC innovation winners. Two that caught my interest included:

  • Constellation computing – not quite sure how to define this, but it’s distributed computing along with distributed data creation. The intent is to move the data processing to the source of the data creation and keep the data there. This might be very useful for applications that have many data sources and where data processing capabilities can be moved out to the nodes where the data was created. It seems highly scalable but may depend on the ability to carve up the processing to work on the local data. I can see where compression, encryption, indexing, and some statistical summarization could be done at the data creation site before anything is sent elsewhere. It’s sort of like a sensor mesh with processing nodes attached to the sensors, configured as a sensor-processing grid. Only one thing concerned me: there didn’t seem to be any central repository or control in this computing environment. That’s probably what they intended, as the distributed solution is more adaptable and more scalable than a centrally controlled environment.
  • Developing world healthcare cloud – seemed to be all about delivering healthcare to the bottom of the pyramid. They won EMC’s social innovation award and are working with a group in Rwanda to try to provide better healthcare to remote villages. It’s built around OpenMRS as a backend medical record archive, hosted on EMC DC-powered Iomega NAS storage, and uses Google’s OpenDataKit to work with the data on mobile and laptop devices. They showed a mobile phone that could be used to create, record, and retrieve healthcare information (OpenMRS records) remotely and upload it sometime later when in range of a cell tower. The solution also supports downloading a portion of the medical center’s health record database (e.g., a “cohort” slice, think a village’s healthcare records) onto a laptop, usable offline by a healthcare provider to update and record patient health changes onsite and remotely. The challenges lie in pulling all the technology together and delivering it as an application stack usable on mobile and laptop devices with minimal IT sophistication, storage, and remote/mobile access.
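The record-offline-and-upload-later pattern described above can be sketched very simply: records created out of cell range are queued locally and flushed when connectivity returns. A minimal illustration (the names `LocalQueue`, `record`, and `sync` are my own, not part of OpenMRS or OpenDataKit):

```python
import json

class LocalQueue:
    """Hold health records created offline until connectivity returns."""
    def __init__(self):
        self.pending = []  # records created while out of cell range

    def record(self, patient_id, data):
        # Queued rather than sent immediately, since we may be offline.
        self.pending.append({"patient": patient_id, "data": data})

    def sync(self, upload):
        """Flush queued records through an upload callable; keep failures queued."""
        still_pending = []
        for rec in self.pending:
            try:
                upload(json.dumps(rec))
            except ConnectionError:
                still_pending.append(rec)  # retry on the next sync attempt
        self.pending = still_pending

# Usage: record two village visits offline, then sync when in cell range.
q = LocalQueue()
q.record("pt-001", {"weight_kg": 54})
q.record("pt-002", {"bp": "120/80"})
sent = []
q.sync(sent.append)  # both records flushed, queue now empty
```

The same idea scales down to a laptop holding a cohort slice: the queue is just per-device, and failed uploads stay queued until the next pass.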

Went to Sanjay’s (EMC’s CIO) keynote on EMC IT’s journey to IT-as-a-Service. As you can imagine, it makes extensive use of VMware’s vSphere, vCloud, and vShield capabilities, primarily in a private cloud infrastructure, though they seem agnostic to a build-it or buy-it approach. EMC is about 75% virtualized today and is starting to see significant and tangible OpEx and energy savings. They designed their North Carolina data center around the vCloud architecture and now offer business users self-service portals to provision VMs and business services…

Only caught the first section of BJ’s (President of BRS) keynote, but he said recent analyst data (IDC, I think) showed EMC to be the overall leader (>64% market share) in purpose-built backup appliances (Data Domain, Disk Library, Avamar data stores, etc.). Too bad I had to step out, but he looked like he was on a roll.


EMCWorld day 2

Day 2 saw releases of new VMAX and VPLEX capabilities hinted at yesterday in Joe’s keynote. Namely:

VMAX announcements

VMAX now supports:

  • Native FCoE with 10GbE support – VMAX now directly supports FCoE, 10GbE iSCSI, and SRDF
  • Enhanced Federated Live Migration, which supports other multi-pathing software – specifically, it now adds MPIO to PowerPath, with more multi-pathing solutions soon to come
  • Support for RSA’s external key management (RSA DPM) for VMAX’s internal data security/encryption capability.

It was mentioned more than once that the latest Enginuity release, 5875, is being adopted at almost 4x the rate of the prior generation of code. The latest release came out earlier this year and provided a number of key enhancements to VMAX capabilities, not the least of which was FAST VP, sub-LUN migration across up to three storage tiers.

Another item of interest was that FAST VP is driving a lot of flash sales; it seems it’s leading to another level of flash adoption. EMC feels that 80-90% of customers can get by with 3% of their capacity in flash and still gain all the benefits of flash performance at significantly less cost.
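That 3%-of-capacity claim turns into an easy back-of-envelope calculation (the numbers below are illustrative, not from EMC):

```python
def flash_tier_gb(total_capacity_gb, flash_fraction=0.03):
    """Flash needed if ~3% of capacity holds the hot data, per EMC's claim."""
    return total_capacity_gb * flash_fraction

# A 100TB (100,000 GB) pool would need roughly 3TB of flash
# for the hot tier under FAST VP.
print(flash_tier_gb(100_000))  # 3000.0
```

The interesting consequence is that the flash bill grows linearly with capacity but stays a small constant fraction of it, which is presumably why FAST VP makes flash an easier sell.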

VPLEX announcements

VPLEX announcements included:

  • VPLEX Geo – a new asynchronous VPLEX cluster-to-cluster communications methodology which can have the alternate active VPLEX cluster up to 50msec latency away
  • VPLEX Witness – a virtual machine which provides adjudication between the two VPLEX clusters in case they suffer some sort of communications breakdown. Witness can run anywhere with access to both VPLEX clusters and is intended to sit outside the two fault domains where the VPLEX clusters reside.
  • VPLEX new hardware – using the latest Intel microprocessors
  • VPLEX now supports NetApp ALUA storage – the latest generation of NetApp storage.
  • VPLEX now supports thin-to-thin volume migration – previously VPLEX had to re-inflate thinly provisioned volumes, but with this release there is no need to re-inflate prior to migration.


The new Geo product, in conjunction with VMware and Hyper-V, allows for quick migration of VMs across distances that support up to 50msec of latency. There are some current limitations with respect to the specific VMware VM migration types that can be supported, but Microsoft Hyper-V Live Migration support is readily available at the full 50msec latency. Note that we are not talking about distance here but latency as the limiting factor in how far apart the VPLEX clusters can be.
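Since latency, not distance, is the limit, it’s worth sketching the relationship. Light in fiber travels at roughly 200 km per millisecond (about 2/3 of c), so the raw one-way propagation delay is distance/200 ms. The sketch below ignores switching and queuing delay, and assumes the 50msec budget is one-way, which the announcement doesn’t actually specify:

```python
FIBER_KM_PER_MS = 200.0  # ~2/3 the speed of light, typical for glass fiber

def one_way_latency_ms(distance_km):
    """Raw fiber propagation delay, ignoring equipment and queuing delays."""
    return distance_km / FIBER_KM_PER_MS

def max_distance_km(latency_budget_ms):
    """Farthest two sites can be apart within a one-way latency budget."""
    return latency_budget_ms * FIBER_KM_PER_MS

# The 1700 km Pittsburgh-to-Dallas demo costs only ~8.5 ms of propagation
# delay, leaving plenty of headroom inside a 50 ms budget.
print(one_way_latency_ms(1700))  # 8.5
print(max_distance_km(50))       # 10000.0
```

In practice real-world latency over a 1700 km path will be well above the raw 8.5 ms, which is exactly why the spec is stated as latency rather than distance.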

Recall that VPLEX has three distinct use cases:

  • Infrastructure availability, which provides fault tolerance for your storage and system infrastructure
  • Application and data mobility, which means that applications can move from data center to data center and still access the same data/LUNs from both sites. VPLEX maintains cache and storage coherency across the two clusters automatically.
  • Distributed data collaboration, which means that data can be shared and accessed across vast distances. I discussed this extensively in my Data-at-a-Distance (DaaD) post, VPLEX surfaces at EMCWorld.

Geo is the third product version for VPLEX: VPLEX Local supports virtualization within a data center; VPLEX Metro supports two VPLEX clusters up to 10msec of latency apart, generally metropolitan-wide distances; and Geo moves to asynchronous cache coherence technologies. Finally, coming sometime later is VPLEX Global, which eliminates the restriction of two VPLEX clusters or data centers and can support 3-way or more VPLEX clusters.

Along with Geo, EMC showed some new partnerships, such as with Silver Peak, Ciena, and others, used to reduce bandwidth requirements and cost for the Geo asynchronous solution. Also announced at the show were some new VPLEX partnerships, with Quantum StorNext and others, which address DaaD solutions.

Other announcements today

  • Cloud tiering appliance – the new appliance is a renewed Rainfinity solution which provides policy-based migration of unstructured data to and from the cloud. Presumably the user identifies file-aging criteria which can be used to trigger cloud migration to Atmos-supported cloud storage. The new appliance can also archive file data to the Data Domain Archiver product.
  • Google enterprise search connector to VNX – showing a Google Search Appliance (GSA) indexing VNX-stored data, thus bringing enterprise-class, scalable search capabilities to VNX storage.

A bunch of other announcements today at EMCWorld but these seemed most important to me.


EMCWorld news Day1 1st half

EMC World keynote stage, storage, vblocks, and cloud...

EMC announced today a couple of new twists on the flash/SSD storage end of the product spectrum.  Specifically,

  • They now support all flash/no-disk storage systems. Apparently they have been getting requests to eliminate disk storage altogether. Probably government IT but maybe some high-end enterprise customers with low-power, high performance requirements.
  • They are going to roll out enterprise MLC flash. It’s unclear when it will be released, but it’s coming soon, with a different price curve and (maybe) different longevity, and it brings down the cost of flash by ~2X.
  • EMC is going to start selling server-side Flash, using FAST-like caching algorithms to knit the storage to the server-side Flash. It’s unclear what server Flash they will be using, but it sounds a lot like a Fusion-io type of product. How well the server cache and the storage cache talk to each other is another matter. Chuck Hollis said EMC decided to redraw the boundary between storage and server; now there is a dotted line that spans the SAN/NAS boundary and carves out a piece of the server for what is essentially on-server caching.

Interesting, to say the least. How well it’s tied to the rest of the FAST suite is critical. What happens when one or the other loses power? As Flash is non-volatile, no data would be lost, but the currency of the data for shared storage may be another question. Also, having multiple servers in the environment may require cache coherence across the servers and storage participating in this data network!?
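The data-currency worry largely comes down to write policy. A write-through server-side cache never holds the only copy of a write, so a power loss costs nothing but warm cache contents; the residual problem is stale reads on *other* servers. A toy sketch of the general technique (my illustration, not EMC’s actual design):

```python
class WriteThroughCache:
    """Server-side flash cache that writes through to shared storage,
    so the array always holds the current copy of every block."""
    def __init__(self, backend):
        self.backend = backend   # dict standing in for the shared array
        self.cache = {}          # dict standing in for server-side flash

    def write(self, block, data):
        self.backend[block] = data   # array updated on every write: never stale
        self.cache[block] = data

    def read(self, block):
        if block in self.cache:      # cache hit: no SAN round trip
            return self.cache[block]
        data = self.backend[block]   # miss: fetch from the array and populate
        self.cache[block] = data
        return data

array = {}
server_a = WriteThroughCache(array)
server_b = WriteThroughCache(array)
server_a.write(7, b"v1")
print(server_b.read(7))   # server_b reads the current copy from the array
server_a.write(7, b"v2")  # array is current, but server_b's cache is now
                          # stale: the multi-server coherence problem above
```

Without an invalidation protocol between the server caches, server_b would keep serving b"v1" from its cache, which is exactly the coherence question the paragraph raises.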

Some teaser announcements from Joe’s keynote:

  • VPLEX asynchronous, active-active, supporting two-data-center access to the same data up to 1700 km apart (Pittsburgh to Dallas).
  • New Isilon record scalability and capacity with the NL appliance, which can now support a 15PB file system with trillions of files in it. One gene sequencer says a typical assay generates 500M objects/files…
  • Embracing open source Hadoop, such that EMC will support a Hadoop distribution in an appliance or software-only solution

Pat G also showed an EMC Greenplum appliance searching an 8B-row database to find out how many products had been shipped to a specific zip code…



FAST, Cache & Boost – Day2@EMCWorld 2010

Between sessions, view from the front @EMCWorld 2010

Well, EMCWorld 2010’s a wrap for me. Some interesting Day 2 announcements included:

  • Advanced FAST and FAST Cache for CLARiiON were announced, both of which pertain to how SSD storage is used in the subsystem. Advanced FAST provides sub-LUN storage tiering between SATA disk, performance disk, and SSD storage, with automated data movement between tiers managed by policy automation. FAST Cache converts SSD storage into expanded cache memory for the storage subsystem. It is not quite a NetApp PAM equivalent because of its location at the end of an internal FC link, but it’s close.
  • Unisphere for CLARiiON and Celerra was announced. This is a combined management/administration interface that replaces CLARiiON Navisphere and Celerra Manager and supplies a unified view (single pane of glass) to administer the two products.
  • Boost for Data Domain was announced. Last month EMC’s Data Domain rolled out their Global Deduplication Appliance (GDA), which depends on Symantec’s OST API to support a shared or split deduplication process. This capability now extends to all other Data Domain appliances and increases their ingest rate. Essentially, Boost moves some deduplication processing over to Symantec’s media server, which reduces media server processing and bandwidth requirements (as only unique data need be shipped to the DD appliance) and also increases ingest speed by sharing the deduplication load and reducing input data.
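The split-deduplication idea, fingerprint on the media server and ship only unique chunks to the appliance, can be sketched as follows (a conceptual illustration of the technique only, not the actual OST/Boost protocol):

```python
import hashlib

def chunk(data, size=4096):
    """Split a backup stream into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

class DedupAppliance:
    """Stands in for a Data Domain appliance: stores chunks by fingerprint."""
    def __init__(self):
        self.store = {}

    def missing(self, fingerprints):
        # Tell the media server which fingerprints it has never seen.
        return [fp for fp in fingerprints if fp not in self.store]

    def put(self, fp, data):
        self.store[fp] = data

def boosted_backup(data, appliance):
    """Media-server side: fingerprint locally, ship only unique chunks."""
    chunks = {hashlib.sha256(c).hexdigest(): c for c in chunk(data)}
    needed = appliance.missing(list(chunks))
    for fp in needed:
        appliance.put(fp, chunks[fp])
    return len(needed)   # chunks actually sent over the wire

appliance = DedupAppliance()
first = boosted_backup(b"A" * 8192 + b"B" * 4096, appliance)
second = boosted_backup(b"A" * 8192 + b"B" * 4096, appliance)
print(first, second)   # 2 0 -- the repeat backup ships nothing
```

The bandwidth saving falls out directly: the second, identical backup sends zero chunks because the media server learns from the appliance that every fingerprint is already stored.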

We discussed GDA and its OST split deduplication processing at length in an EMC announcement summary in last month’s newsletter. We will also place this on our website later this month if you’re interested in more information.

There were plenty of other announcements while I was there, and I heard the technical sessions were great too. We already discussed Day 1 activities in our post on VPLEX surfacing. But I had to leave, so I missed all of Wednesday and Thursday.

This year EMC showed many video vignettes on stage to illustrate IT problems and some of their solutions. In past EMCWorlds this was done mostly by customers talking about technology problems, and there was some of that as well. However, the addition of the video segments seemed to help get the point across. How well this succeeded is anyone’s guess, but I would say most of it was very entertaining.

Social media seemed even more present this year than last. Twitter was very active, and blogging was too. I read one post summarizing Mark Lewis’s keynote session just after it finished – not quite real time, but close. Video was being taken on the exhibit floor, live at the blogger’s lounge via “the Cube,” and other places as well, most of which is probably still being edited for release. I found the bloggers lounge crowded but serviceable. WiFi on the floor could have been better, but that’s a nit.

Overall the show was put on very well and I look forward to EMCWorld 2011 in Las Vegas next year.  See you there.

VPLEX surfaces at EMCWorld

Pat Gelsinger introducing VPLEXes on stage at EMCWorld

At EMCWorld today, Pat Gelsinger had a pair of VPLEXes flanking him on stage, actively moving VMs from a “Boston” to a “Hopkinton” data center. They showed a demo of moving a bunch of VMs from one to the other while all of them were actively performing transaction processing. I have written about EMC’s vision in a prior blog post called Caching DaaD for Federated Data Centers.

I talked to a vSpecialist at the blogging lounge afterwards and asked him where the data actually resided for the VMs that were moved. He said the data was synchronously replicated and actively being updated at both locations. They proceeded to long-distance teleport (vMotion) 500 VMs from Boston to Hopkinton. After that completed, Chad Sakac powered down the “Boston” VPLEX and everything in “Hopkinton” continued to operate. All this was done on stage, so the Boston and Hopkinton data centers were possibly both located in the convention center, but it was interesting nonetheless.

I asked the vSpecialist how they moved the IP addresses between the sites, and he said the sites shared the same IP domain. I am no networking expert, but moving the network addresses seemed like the last problem to solve for long-distance vMotion. He said Cisco had solved this with OTV (Overlay Transport Virtualization) for the Nexus 7000, which can move IP addresses from one data center to another.

1 Engine VPLEX back view

Later at the Expo, I talked with a Cisco rep who said they do this by encapsulating Layer 2 protocol messages into a Layer 3 packet. Once encapsulated, it can be routed over anyone’s gear to the other site; as long as there is another Nexus 7K switch at the other site, within the proper IP domain shared with the server targets for vMotion, everything works fine. I didn’t ask what happens if the primary Nexus 7K switch/site goes down, but my guess is that the IP address movement would cease to work. For active VM migration between two operational data centers, though, it all seems to hang together. I asked Cisco if OTV was a formal standard TCP/IP protocol extension, and he said he didn’t know. Which probably means that other switch vendors won’t support OTV.
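The encapsulation trick the Cisco rep described, wrapping a Layer 2 Ethernet frame inside a routable Layer 3 packet, can be sketched generically. This is a toy header of my own invention to illustrate the concept; it is not the actual OTV wire format:

```python
import struct

def encapsulate(ether_frame: bytes, src_ip: str, dst_ip: str) -> bytes:
    """Wrap an L2 frame in a toy L3 header so it can be routed between sites."""
    def ip_bytes(ip):
        return bytes(int(octet) for octet in ip.split("."))
    # Toy header: 4-byte source IP, 4-byte destination IP, 2-byte length.
    header = ip_bytes(src_ip) + ip_bytes(dst_ip) + struct.pack("!H", len(ether_frame))
    return header + ether_frame

def decapsulate(packet: bytes) -> bytes:
    """Strip the toy L3 header at the far-side switch, recovering the frame."""
    (length,) = struct.unpack("!H", packet[8:10])
    return packet[10:10 + length]

# A frame leaves the "Boston" switch, crosses any routed network, and pops
# out unchanged in "Hopkinton", so the VM keeps its L2 identity across sites.
frame = b"\x01\x02" * 32  # stand-in Ethernet frame
pkt = encapsulate(frame, "10.0.0.1", "10.0.1.1")
assert decapsulate(pkt) == frame
```

The key property is that the inner frame is opaque payload to every router in between, which is why the rep could say it runs "over anyone’s gear" as long as a decapsulating switch sits at each end.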

4 Engine VPLEX back view

There was a lot of other stuff at EMCWorld today and at the Expo.

  • EMC’s Content Management & Archiving group was renamed Information Intelligence.
  • EMC’s Backup Recovery Systems group was in force on the Expo floor with a big pavilion with Avamar, Networker and Data Domain present.
  • EMC keynotes were mostly about the journey to the private cloud.  VPLEX seemed to be crucial to this journey as EMC sees it.
  • EMCWorld’s show floor was impressive. Lots of major partners were there: RSA, VMware, Iomega, Atmos, VCE, Cisco, Microsoft, Brocade, Dell, CSC, STEC, Forsythe, QLogic, Emulex, and many others. Talked at length with Microsoft about SharePoint 2010. Still trying to figure that one out.
One table at bloggers lounge, StorageNerve & BasRaayman in the foreground hard at work

I would say the bloggers lounge was pretty busy for most of the day. Met a lot of bloggers there, including StorageNerve (Devang Panchigar), BasRaayman (Bas Raayman), Kiwi_Si (Simon Seagrave), DeepStorage (Howard Marks), Wikibon (Dave Vellante), and a whole bunch of others.

Well not sure what EMC has in store for day 2, but from my perspective it will be hard to beat day 1.

Full disclosure, I have written a white paper discussing VPLEX for EMC and work with EMC on a number of other projects as well.