The promise of software defined storage

Data hypervisor, software defined storage, data plane, control plane
(c) 2012 Silverton Consulting, Inc. All rights reserved

Not sure why, but all the hype around software defined storage seems to be reaching a crescendo. Possibly it's due to conference season coming up, but it started earlier this year. I attended an SNW analyst session on software defined storage that had technical people from HDS, IBM, DataCore and VMware on its panel. It seems the distinction between storage virtualization and software defined storage gets slimmer every time we talk about it. I have written before about software defined storage (see my Data Hypervisor post).

Server, networking and storage virtualization today

Server virtualization makes an awful lot of sense, has made lots of money and has arguably been around for decades now, especially in mainframe systems. Servers have so much power today that dedicating one to a single workload just doesn't make any sense anymore.

Network virtualization from OpenFlow and others also makes a lot of sense (see my OpenFlow the next wave in networking and OpenFlow part 2, Cisco's response posts). Here we aren't necessarily boosting network utilization so much as changing resource allocation to deal with altered traffic flows. That, and the fact that provisioning, monitoring and other management characteristics can now be under programmatic control by the user, makes these systems very appealing, especially to organizations whose network activity varies over time.

Storage virtualization has been around for a long time too and essentially places a storage system abstraction layer on top of a group of other, heterogeneous storage systems. This provides a number of capabilities, such as allowing data to be migrated from one storage system to another without host knowledge or intervention. Other storage virtualization features include centralized management, common storage features, different storage personalities (protocols), etc. But just being able to migrate data from one storage system to another without host intervention or knowledge provides an awful lot of value, especially to large data centers that refresh technology frequently.

Software defined storage compared to server virtualization

Software defined storage seems to imply some ability to marry storage virtualization services to RESTful and other APIs, which would allow programmatic storage provisioning, monitoring and management. This would allow data centers to manage and control their storage without involving storage administrators in day-to-day activities.
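
To make that a bit more concrete, here is a minimal sketch of what programmatic provisioning and monitoring could look like. Everything below (the endpoint, the payload fields, the helper functions) is a hypothetical illustration, not any vendor's actual API.

```python
# Hypothetical sketch: provisioning and monitoring a volume through a storage
# system's RESTful API. The endpoint, payload fields and token handling are
# assumptions for illustration, not any particular vendor's interface.
import requests

STORAGE_API = "https://storage-array.example.com/api/v1"  # hypothetical endpoint
TOKEN = "example-token"  # auth token obtained out of band

def provision_volume(name: str, size_gb: int, tier: str = "gold") -> dict:
    """Ask the array to carve out a new volume and return its description."""
    resp = requests.post(
        f"{STORAGE_API}/volumes",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "size_gb": size_gb, "service_tier": tier},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def volume_metrics(volume_id: str) -> dict:
    """Pull per-volume IOPS/latency stats for monitoring."""
    resp = requests.get(
        f"{STORAGE_API}/volumes/{volume_id}/metrics",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```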

When I compare this to server virtualization, the capabilities described above really don't increase storage utilization much. Yes, by automating provisioning or even running thin provisioning one can potentially boost storage capacity utilization, but you really haven't increased IO utilization much by doing this.

Looking under the covers of most storage systems, one might find that CPU cores are pretty idle, but data paths and storage devices are typically running flat out. One problem is that today's enterprise storage subsystems are already highly shared across applications and users, so there is really no barrier to sharing these resources as widely as possible. As such, storage system IOPS and/or bandwidth utilization is already pretty high. I would say a typical enterprise application environment's storage subsystem usually runs above 30% utilization, reaching 50% or more during peak periods. Increasing IOPS utilization much beyond that risks seriously impacting performance during those peak periods.

Now, if one could somehow migrate data around a complex, moving slower data to lower-performing storage when there's no need for high performance and more performance-sensitive data to higher-performing storage when there is, then that could help increase performance utilization considerably. But many storage systems already do this internally through automated storage tiering, and some can even do this across storage systems using storage virtualization.

But the underlying problem here is that it takes a lot of time, resources and effort to move TBs of data around a data center, especially when it's doing other work. So, other than something akin to storage tiering across storage systems, we are unlikely to see much increase in storage performance utilization across a gaggle of multiple storage systems. I suppose in the future moving TBs of data may take much less time and fewer resources than today, but then the problem becomes moving PBs of data around.

Software defined storage compared to network virtualization

When I compare the above capabilities to network virtualization, it doesn't look very similar. There's really no way to change storage performance to optimize it for one direction (or application) at this instant and then move that performance to another application a couple of hours later. Yes, again, automated storage tiering can do this, and yes, some of these systems can tier across storage systems using storage virtualization, but in general, barring storage tiering, there's nothing like this available today.

Maybe, inside a storage system, the data paths could somehow be programmatically reconfigured to offer, say, more internal bandwidth to the Device-to-Cache path vs. the Cache-to-Frontend path. Changing or reconfiguring data path resources like this could certainly optimize the internal performance of a storage system, and this would be a worthwhile feature of any software defined storage. Knowing which path is more important to one application and less important to all the others will take some smarts across the storage system and host O/S, but it's certainly feasible. So, with RESTful interfaces, APIs or application hints, data paths could be reconfigured on demand to support applications that are all vying for IO activity.
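
Here's a toy sketch of the sort of hint-driven data path reallocation I have in mind. The DataPathController class, its two internal paths and its weightings are all invented for illustration; no storage system exposes anything quite like this today.

```python
# Hypothetical sketch of reallocating internal data path bandwidth in response
# to an application hint. The controller object and its methods are invented
# for illustration only.

PATHS = ("device_to_cache", "cache_to_frontend")

class DataPathController:
    def __init__(self, total_gbps: float):
        self.total_gbps = total_gbps
        # start with an even split between the two internal paths
        self.allocation = {p: total_gbps / len(PATHS) for p in PATHS}

    def apply_hint(self, hint: str) -> dict:
        """Shift bandwidth toward whichever path the application hint favors."""
        if hint == "sequential_read_heavy":
            weights = {"device_to_cache": 0.7, "cache_to_frontend": 0.3}
        elif hint == "cache_friendly_oltp":
            weights = {"device_to_cache": 0.3, "cache_to_frontend": 0.7}
        else:
            weights = {p: 0.5 for p in PATHS}
        self.allocation = {p: self.total_gbps * w for p, w in weights.items()}
        return self.allocation

ctrl = DataPathController(total_gbps=96.0)
print(ctrl.apply_hint("sequential_read_heavy"))
# {'device_to_cache': 67.2, 'cache_to_frontend': 28.8}
```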

With these sorts of capabilities, software defined storage starts to look a little more like software defined networking.

Software defined storage on its own

But in the end we always reach a fundamental limit on IO capabilities in today's storage systems, which is the devices. Yes, you can have 2000 or more devices in high-end storage today, and yes, you can have all-flash arrays. However, most storage systems are configured to keep whatever devices they have pretty busy as much of the time as possible.

Until we create some sort of storage device that can provide more performance than most applications can ever use, even when shared via a storage system, software defined storage capabilities will be limited. Today's SSDs have certainly boosted performance considerably, but this just means that most applications that warrant all-flash arrays are performing faster. It just so happens that some applications can take all the performance you throw at them and still want more.

I suppose if SSD costs were to come down to match NL-SAS storage prices while still maintaining a 100X faster IOPS rate, then maybe a storage system built on such devices could be more “software defined” than others. And maybe that's where everyone is headed, believing NAND/SSD price trends will drive costs down so much that everyone can have all the IOPS performance they will ever need out of a single storage system.

Yet, this still just looks like the shared storage we have today, only more of it. So we return to our roots and see that software defined storage is just another way to add more storage sharing. Storage virtualization is nice, newer, more programmable storage systems are even better, but faster, cheaper storage devices are best of all.

So what we really need is much cheaper SSDs to realize the full promise of software defined storage. In the meantime, opening up APIs and providing RESTful interfaces for programmatic provisioning, monitoring, managing and tuning of storage system data paths and other performance characteristics are all we can hope for.

Comments?

OpenFlow part 2, Cisco’s response

organic growth by jurvetson

Cisco's CTO, Padmasree Warrior, was interviewed today by NetworkWorld, discussing Cisco's response to all the recent press on OpenFlow coming out of the Open Networking Summit (see my OpenFlow the next wave in networking post). Apparently, Cisco is funding a new spin-in company to implement new networking technology congruent with Cisco's current and future switches and routers.

Spin-in to the rescue

We have seen this act before: Andiamo was another Cisco spin-in company (brought back into Cisco in ~2002), that one focused on FC or SAN switching technology. Andiamo was successful in that it created FC switch technology which allowed Cisco to go after the storage networking market and probably even helped them design and implement FCoE.

This time it's a little different, however. It's in Cisco's backyard, so to speak. The new spin-in is called Insieme and will be focused on "OpenStack switch hardware and distributed data storage".

Distributed data storage sounds a lot like cloud storage to me. OpenStack seems to be an open source approach to defining cloud computing systems. What all that has to do with software defined networking, I am unable to understand.

Nonetheless, Cisco has invested $100M in the startup and has capped its acquisition cost at $750M if it succeeds.

But is it SDN?

Ms. Warrior does go on to say that software-programmable switches will be integrated across Cisco's product line sometime in the near future, but notes that OpenFlow and OpenStack are only two ways to do that. Other ways exist, such as adding new features to NX-OS today or modifying the Nexus 1000v (a software-only, VMware-based virtual switch) they have been shipping since 2009.

As for OpenFlow commoditizing networking technology, Ms. Warrior doesn't believe that any single technology is going to change the leadership in networking. Programmability is certainly of interest to one segment of users with massive infrastructure, but most data centers have no desire to program their own switches. And in the end, networking success depends as much on channels and go-to-market programs as it does on great technology.

Cisco's CTO was reluctant to claim that Insieme was their response to SDN, but it seems patently evident to the rest of us that it's at least one of its objectives. Something like this is a two-edged sword: on the one hand, it helps Cisco go after and help define the new technology; on the other hand, it legitimizes the current players.

~~~~

Nicira is probably rejoicing today what with all the news coming out of the Summit and the creation of Insieme.  Probably yet another reason not to label it SDN…

OpenFlow, the next wave in networking

OpenFlow Logo (from http://www.OpenFlow.org)

Read two articles recently about how OpenFlow's Software Defined Networking is going to take over the networking world, just like VMware and its brethren have taken over the server world.

Essentially, OpenFlow is a network protocol that separates the control management of a networking switch or router (the control plane) from its data path activities (the data plane). For most current switches, control management consists of vendor-supplied, special-purpose software which differs for each and every vendor and sometimes even varies across vendor product lines.

In contrast, data path activities are fairly similar for most of today's switches and are generally implemented in custom hardware so as to be lightning fast.

However, the main problem with today's routers and switches is that there is no standard way to talk to, or even modify, the control management software in order to change its data plane activities.

OpenFlow to the rescue

OpenFlow changes all that. First, it specifies a protocol or interface between a switch's control plane and its data plane. This allows the control plane to run on any server and still manage a router's or switch's data path activities. By doing this, OpenFlow provides Software Defined Networking (SDN).

Once OpenFlow switches and control software are in place, the SDN can better control and manage networking activity to optimize for performance, utilization or any other number of parameters.
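
To illustrate the control plane/data plane split, here's a toy example of the kind of flow rule a controller might push down to a switch. The fields echo the flavor of OpenFlow match/action entries but are simplified, and push_flow() is just a hypothetical stand-in, not any real controller framework's API.

```python
# Illustrative sketch only: a controller-to-switch flow rule, conceptually.
# The dictionary fields mirror the flavor of OpenFlow match/action entries
# but are simplified; push_flow() is a hypothetical stand-in.

flow_rule = {
    "match": {                      # packets this rule applies to
        "in_port": 1,
        "eth_type": 0x0800,         # IPv4
        "ipv4_dst": "10.0.2.0/24",
    },
    "actions": [                    # what the data plane should do with them
        {"type": "set_queue", "queue_id": 3},
        {"type": "output", "port": 4},
    ],
    "priority": 100,
    "idle_timeout": 60,             # seconds of inactivity before the rule expires
}

def push_flow(switch_addr: str, rule: dict) -> None:
    """Hypothetical stand-in for the controller's southbound protocol message."""
    print(f"FLOW_MOD -> {switch_addr}: {rule}")

# The control plane (running on any server) programs the switch's data plane:
push_flow("sw1.example.net:6633", flow_rule)
```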

Products are starting to come out which support OpenFlow protocols. For example, a new OpenFlow-compatible Ethernet switch is available from IBM (their RackSwitch G8264 & G8264T) and HP has recently released OpenFlow software for their Ethernet switches (see OpenFlow blog post). At least some in the industry are starting to see the light.

Google implements OpenFlow

The surprising thing is that one article I read recently is about Google running an OpenFlow network on its data center backbone (see Wired's Google goes with the Flow article). The article discusses how a top Google scientist, speaking at the Open Networking Summit yesterday, described how they implemented OpenFlow for their internal network architecture.

Google's internal network connects its multiple data centers together to provide Google Apps and other web services. Apparently, Google has been secretly creating/buying OpenFlow networking equipment and creating its own OpenFlow software. This new SDN they have constructed has given them the ability to change their internal network backbone in minutes, where it would have taken days, weeks or even months before. OpenFlow has also given Google the ability to simulate network changes ahead of time, allowing them to see what potential changes will do for them.

One key metric is that Google now runs their backbone network close to 100% utilized at all times whereas before they worked hard to get it to 30-40% utilization.

Nicira revolutionizes networking

The other article I read was about a startup called Nicira out of Palo Alto, CA, which is taking OpenFlow to the next level by defining a Network Virtualization Platform (NVP) and Open vSwitches (OVS).

  • An NVP is a network virtualization platform controller: a cluster of x86 servers running the network virtualization control software, which exposes a RESTful web services API and defines/manages virtual networks (see the sketch after this list).
  • An OVS is an Open vSwitch, software designed for remote control that either runs as a software-only service in various hypervisors or as gateway software connecting VLANs running on proprietary vendor hardware to the SDN.
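
Here's a rough sketch of what driving such a controller from the northbound side might look like. The URLs, payloads and helper functions are invented for illustration and don't reflect Nicira's actual NVP API.

```python
# Hypothetical sketch of driving a network virtualization controller's RESTful
# API to define a virtual network and attach a VM port. URLs and payloads are
# invented for illustration only.
import requests

NVP_API = "https://nvp-controller.example.com/api/v1"  # hypothetical controller address

def create_virtual_network(name: str, tenant_id: str) -> str:
    """Define a new virtual network and return its identifier."""
    resp = requests.post(
        f"{NVP_API}/virtual-networks",
        json={"name": name, "tenant_id": tenant_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

def attach_port(network_id: str, vm_interface: str) -> None:
    """Bind a VM's virtual interface (on an Open vSwitch) to the virtual network."""
    requests.post(
        f"{NVP_API}/virtual-networks/{network_id}/ports",
        json={"attachment": vm_interface},
        timeout=30,
    ).raise_for_status()

net_id = create_virtual_network("web-tier", tenant_id="tenant-42")
attach_port(net_id, vm_interface="vm-123-eth0")
```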

OVS gateway services can be used with current-generation switches/routers or with high-performing, simple L3 switches specifically designed for OpenFlow management.

Nonetheless, with NVP and OVS deployed over your networking hardware, many of the limitations inherent in current networking services are removed. For example, Nicira's network virtualization allows the movement of application workloads across subnets while maintaining L2 adjacency, scalable multi-tenant isolation and the ability to repurpose physical infrastructure on demand.

By virtualizing the network, the network switching/routing hardware becomes a pool of IP-switching services, available to be repurposed and/or reprogrammed at a moment's notice. Not unlike what VMware did with servers through virtualization.

Customers for Nicira include eBay, RackSpace and AT&T, to name just a few. It seems that network virtualization is especially valuable to big web services and cloud services companies.

~~~~

Virtualization takes on another industry, this time networking and changes it forever.

We really need something like OpenFlow for storage: taking storage administration out of vendor hands and placing it elsewhere, and defining an open storage management protocol that all storage vendors would honor.

The main problem with storage virtualization today is that it's kind of like VLANs: all vendor-specific. Without something like a standard protocol that prescribes a storage management plane's capabilities and a storage data plane's capabilities, we cannot really have storage virtualization.
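
Just to make the ask concrete, here's a purely speculative sketch, by analogy with an OpenFlow flow rule, of what a standardized storage control plane message could look like. No such protocol (and no storage_ctl() call) exists today.

```python
# Purely speculative sketch: a vendor-neutral storage control-plane message,
# by analogy with an OpenFlow flow rule. The fields and the storage_ctl()
# call are invented for illustration; no such standard exists today.

storage_rule = {
    "match": {                          # which I/O this policy applies to
        "volume": "vol-007",
        "workload_tag": "oltp-prod",
    },
    "actions": [                        # what the storage data plane should do
        {"type": "place_on_tier", "tier": "ssd"},
        {"type": "limit_iops", "max_iops": 50000},
        {"type": "replicate", "target": "array-b.example.com"},
    ],
    "priority": 10,
}

def storage_ctl(array_addr: str, rule: dict) -> None:
    """Hypothetical stand-in for an open storage control-plane call."""
    print(f"STORAGE_MOD -> {array_addr}: {rule}")

storage_ctl("array-a.example.com", storage_rule)
```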