OpenFlow part 2, Cisco’s response

 

organic growth by jurvetson

Cisco’s CTO, Padmasree Warrior, was interviewed today by NetworkWorld, discussing the company’s response to all the recent press on OpenFlow coming out of the Open Networking Summit (see my OpenFlow the next wave in networking post).  Apparently, Cisco is funding a new spin-in company to implement new networking technology congruent with Cisco’s current and future switches and routers.

Spin-in to the rescue

We have seen this act before: Andiamo was an earlier Cisco spin-in company (bought back in ~2002), in that case focused on FC or SAN switching technology.  Andiamo was successful in that it created the FC switch technology that allowed Cisco to go after the storage networking market and probably even helped them design and implement FCoE.

This time it’s a little different, however. It’s in Cisco’s backyard, so to speak.  The new spin-in is called Insieme and will be focused on “OpenStack switch hardware and distributed data storage”.

Distributed data storage sounds a lot like cloud storage to me, and OpenStack is an open source approach to defining cloud computing systems. What all that has to do with software defined networking escapes me.

Nonetheless, Cisco has invested $100M in the startup and has capped its acquisition cost at $750M if it succeeds.

But is it SDN?

Ms. Warrior did go on to say that software-programmable switches will be integrated across Cisco’s product line sometime in the near future, but noted that OpenFlow and OpenStack are only two ways to do that. Other ways exist, such as adding new features to NX-OS today or modifying the Nexus 1000v (the software-only, VMware-based virtual switch they have been shipping since 2009).

As for OpenFlow commoditizing networking technology, Ms. Warrior doesn’t believe that any single technology is going to change the leadership in networking.  Programmability is certainly of interest to one segment of users with massive infrastructure, but most data centers have no desire to program their own switches.  And in the end, networking success depends as much on channels and go-to-market programs as it does on great technology.

Cisco’s CTO was reluctant to claim that Insieme was their response to SDN, but it seems patently evident to the rest of us that it’s at least one of its objectives.  Something like this is a double-edged sword: on the one hand it helps Cisco go after and help define the new technology, on the other hand it legitimizes the current players.

~~~~

Nicira is probably rejoicing today what with all the news coming out of the Summit and the creation of Insieme.  Probably yet another reason not to label it SDN…

VPLEX surfaces at EMCWorld

Pat Gelsinger introducing VPLEXes on stage at EMCWorld

At EMCWorld today Pat Gelsinger had a pair of VPLEXes flanking him on stage, actively moving VMs from “Boston” to “Hopkinton” data centers.  They showed a demo of moving a bunch of VMs from one to the other, with all of them actively performing transaction processing.  I have written about EMC’s vision in a prior blog post called Caching DaaD for Federated Data Centers.

I talked to a vSpecialist at the blogging lounge afterwards and asked him where the data actually resided for the VMs that were moved.  He said the data was synchronously replicated and actively being updated at both locations. They proceeded to long-distance teleport (Vmotion) 500 VMs from Boston to Hopkinton.  After that completed, Chad Sakac powered down the ‘Boston’ VPLEX and everything in ‘Hopkinton’ continued to operate.  All this was done on stage, so the Boston and Hopkinton data centers were possibly both located in the convention center, but it was interesting nonetheless.

I asked the vSpecialist how they moved the IP address between the sites and he said they shared the same IP domain.  I am no networking expert, but moving the network addresses seemed to me the last problem to solve for long distance Vmotion.  But he said Cisco had solved this with their OTV (Overlay Transport Virtualization) for the Nexus 7000, which can move IP addresses from one data center to another.

1 Engine VPLEX back view

Later at the Expo, I talked with a Cisco rep who said they do this by encapsulating Layer 2 protocol messages into Layer 3 packets. Once encapsulated, the traffic can be routed over anyone’s gear to the other site, and as long as there is another Nexus 7K switch at the other site within the proper IP domain shared with the server targets for Vmotion, it all works fine.  I didn’t ask what happens if the primary Nexus 7K switch/site goes down, but my guess is that the IP address movement would cease to work. For active VM migration between two operational data centers, though, it all seems to hang together.  I asked Cisco if OTV was a formal standard TCP/IP protocol extension and he said he didn’t know.  Which probably means that other switch vendors won’t support OTV.
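The encapsulation idea the rep described can be sketched in a few lines. To be clear, this is a toy illustration, not OTV’s actual MAC-in-IP wire format; the 6-byte header layout here is purely hypothetical:

```python
import struct

def encapsulate_l2_frame(eth_frame: bytes, overlay_id: int) -> bytes:
    # Hypothetical header: 4-byte overlay ID + 2-byte frame length.
    # The point is just that the Layer 2 frame becomes opaque payload
    # that any Layer 3 network can route between sites.
    header = struct.pack("!IH", overlay_id, len(eth_frame))
    return header + eth_frame

def decapsulate_l2_frame(packet: bytes) -> tuple:
    # The far-side switch strips the header and hands the original
    # Ethernet frame back to the local VLAN.
    overlay_id, length = struct.unpack("!IH", packet[:6])
    return overlay_id, packet[6:6 + length]

# A toy Ethernet frame: dst MAC, src MAC, EtherType, payload
frame = (bytes.fromhex("ffffffffffff") + bytes.fromhex("020000000001")
         + b"\x08\x00" + b"hello")
packet = encapsulate_l2_frame(frame, overlay_id=42)
oid, recovered = decapsulate_l2_frame(packet)
assert oid == 42 and recovered == frame
```

Once the frame is wrapped this way, the intermediate routers only ever see ordinary IP packets, which is why the traffic “can be routed over anyone’s gear” between the two Nexus 7K endpoints.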

4 Engine VPLEX back view

There was a lot of other stuff at EMCWorld today and at the Expo.

  • EMC’s Content Management & Archiving group was renamed Information Intelligence.
  • EMC’s Backup Recovery Systems group was in force on the Expo floor with a big pavilion with Avamar, Networker and Data Domain present.
  • EMC keynotes were mostly about the journey to the private cloud.  VPLEX seemed to be crucial to this journey as EMC sees it.
  • EMCWorld’s show floor was impressive. Lots of major partners were there: RSA, VMware, Iomega, Atmos, VCE, Cisco, Microsoft, Brocade, Dell, CSC, STEC, Forsythe, QLogic, Emulex and many others.  Talked at length with Microsoft about SharePoint 2010. Still trying to figure that one out.
One table at the bloggers lounge: StorageNerve & BasRaayman in the foreground, hard at work

I would say the bloggers lounge was pretty busy for most of the day.  Met a lot of bloggers there including StorageNerve (Devang Panchigar), BasRaayman (Bas Raayman), Kiwi_Si (Simon Seagrave), DeepStorage (Howard Marks), Wikibon (Dave Vellante), and a whole bunch of others.

Well not sure what EMC has in store for day 2, but from my perspective it will be hard to beat day 1.

Full disclosure: I have written a white paper discussing VPLEX for EMC and work with EMC on a number of other projects as well.

VMworld and long distance Vmotion

Moving a VM from one data center to another

In all the blog posts/tweets about VMworld this week I didn’t see much about long distance Vmotion. At Cisco’s booth there was a presentation on how they partnered with VMware to perform Vmotion over 200 (simulated) miles.

I can’t recall when I first heard about this capability, but many of us have heard about it before. What was new was that Cisco wasn’t the only one talking about it. I met with a company called NetEx whose product HyperIP was being used to perform long distance Vmotion between sites over 2000 miles apart. They had at least three sites actually running their systems doing this. Now I am sure you won’t find NetEx on VMware’s long HCL list, but what they have managed to do is impressive.

As I understand it, they have an optimized appliance (also available as a virtual [VM] appliance) that terminates the TCP session (used by Vmotion) at the primary site and then transfers the data payload using their own UDP protocol over to the target appliance, which re-constitutes (?) the TCP session and sends it back up the stack as if everything were local. According to NetEx CEO Craig Gust, their product typically achieves a data payload efficiency of around 90%, compared to around 30% for standard TCP/IP over long distances, which automatically gives them a 3X advantage (although he claimed a 6X speed or distance advantage; I can’t seem to follow the logic).
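The 3X figure follows directly from the ratio of the two payload efficiencies. A quick back-of-the-envelope check (the 800 Mb/s link speed below is just an example, not a NetEx spec):

```python
def effective_throughput(link_mbps: float, payload_efficiency: float) -> float:
    """Goodput in Mb/s: the fraction of the raw link rate carrying payload."""
    return link_mbps * payload_efficiency

link = 800  # Mb/s, an example WAN link speed
hyperip   = effective_throughput(link, 0.90)  # ~90% payload, per the CEO's claim
plain_tcp = effective_throughput(link, 0.30)  # ~30% payload for TCP at distance

print(hyperip / plain_tcp)  # the 3X advantage, independent of link speed
```

Note that 0.90/0.30 = 3 regardless of link speed, so a 6X claim would have to rest on something beyond payload efficiency alone (compression, perhaps), which may be why the logic is hard to follow.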

How all this works with vCenter, DRS and HA I can only guess, but my guess is that this long distance Vmotion appears to VMware as a local Vmotion. That way DRS and/or HA can control it all. How the networking is set up to support this is beyond me.

Nevertheless, all of this proves that it’s not just one high-end networking company coming away with a proof of concept anymore; at least two companies exist, one of which has customers doing it today.

The Storage problem

In any event, accessing the storage at the remote site is another problem. It’s one thing to transfer server memory and state information over 10-1000 miles; it’s quite another to transfer TBs of data storage over the same distance. The Cisco team suggested some alternatives to handle the storage side of long distance Vmotion:

  • Let the storage stay in the original location. This would be supported by having the VM in the remote site access the storage across a network
  • Move the storage via long distance Storage Vmotion. The problem with this is that transferring TBs of data (even at 90% data payload over 800 Mb/s) would take hours. And 800 Mb/s networking isn’t cheap.
  • Replicate the storage via active-passive replication. Here the storage subsystem(s) concurrently replicate the data from the primary site to the secondary site
  • Replicate the storage via active-active replication where both the primary and secondary site replicate data to one another and any write to either location is replicated to the other
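The “would take hours” estimate in the Storage Vmotion option above is easy to verify. A rough calculation, using decimal TB and the 800 Mb/s / 90% payload figures from that bullet:

```python
def transfer_hours(terabytes: float, link_mbps: float, efficiency: float) -> float:
    """Estimated hours to move `terabytes` over a link at a given payload efficiency."""
    bits = terabytes * 1e12 * 8              # decimal TB -> bits
    goodput = link_mbps * 1e6 * efficiency   # effective bits per second
    return bits / goodput / 3600

# One terabyte over an 800 Mb/s link at 90% payload efficiency:
print(round(transfer_hours(1, 800, 0.90), 1))  # ~3.1 hours per TB
```

At roughly three hours per TB, a multi-TB VM image quickly runs into the better part of a day, which is why the replication-based approaches look attractive despite their expense.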

Now I have to admit, active-active replication, where the same LUN or file system can be replicated in both directions and updated at both locations simultaneously, seems like unobtainium to me, though I can be convinced otherwise. Nevertheless, the other approaches exist today and effectively deal with the issue, albeit with commensurate increases in expense.

The Networking problem

So now that we have the storage problem solved, what about the networking problem? When a VM is Vmotioned to another ESX server, it retains its IP addressing so as to retain all its current network connections. Cisco has some techniques here whereby they can extend the VLAN (or subnet) from the primary site to the secondary site and leave the VM with the same network IP address as at the primary site. Cisco has a couple of different ways to extend the VLAN, optimized for HA, load balancing, scalability, or protocol isolation and broadcast avoidance (all of which are described further in their white paper on the subject). Cisco did mention that their VLAN extension technology currently would not support sites more than 500 miles apart.

Presumably NetEx’s product solves all this by leaving the IP addresses/TCP port at the primary site and just transferring the data to the secondary site. In any event multiple solutions to the networking problem exist as well.

Now that long distance Vmotion can be accomplished, is it a DR tool, a mobility tool, a load balancing tool, or all of the above? That will need to wait for another post.

HDS upgrades AMS2000

Today, HDS refreshed their AMS2000 product line with a new high density drive expansion tray with 48 drives and up to a maximum capacity of 48TB, 8Gbps FC (8GFC) ports for the AMS2300 and AMS2500 systems, and a new NEBS Level-3 compliant and DC-powered version, the AMS2500DC.

HDS also re-iterated their stance that Dynamic Provisioning will be available on AMS2000 in the 2nd half of this year. (See my prior post on this subject for more information).

HDS also mentioned that the AMS2000 now supports external authentication infrastructure for storage managers and will support Common Criteria Certification for more stringent data security needs. The external authentication will be available in the second half of the year.

I find the DC version pretty interesting; it signals a renewed interest in telecom OEM applications for this mid-range storage subsystem. It’s unclear to me whether this is a significant market for HDS. The 2500DC only supports 4Gbps FC and is packaged with a Cisco MDS 9124 SAN switch. DC-powered storage is also more energy efficient than AC-powered storage.

Other than that, the Common Criteria Certification can be a big thing for those companies or government entities with significant interest in secure data centers. There was no specific time frame for this certification, but presumably they have started the process.

As for the rest of this, it’s a pretty straightforward refresh.

EMCWorld News

At EMC World this past week, all the news was about VMware, Cisco and EMC and how they are hooking up to address the needs of the new data center user.

VMware introduced vSphere, the latest release of their virtualization software, which contained significant tweaks to improve storage performance.

Cisco was not announcing much at the show other than to say that they support vSphere with their Nexus 1000v software switch.

EMC discussed their latest V-Max hardware, PowerPath/VMware, an upcoming release of NaviSphere for VM, and some other plugins that allow the VMware admin to see EMC storage views (EMC viewer).

On another note, it seems that EMC is selling all the SSDs they can make, and they recently introduced their next generation SSD drive with 400GB of storage.