StorPool, fast storage for fast times

At Storage Field Day 18 (SFD18) a couple of weeks ago, we heard from a new company, StorPool, that provides ultra fast software defined storage for MSPs and other cloud providers. You can watch the videos of their sessions here.

I didn't know what to make of them at first, but when they started demoing their performance, we all woke up. They ran all-read and mixed read-write IO workloads that all but blew away any other storage, proprietary or not, that I've seen before.

[Updated 12Mar2019] What they were trying to achieve was to match the performance of a Windows Server 2019 Hyper-V benchmark which hit 13.8M IOPS using 12 nodes, each with 384GB of DRAM, 1.5TB of Optane DC persistent memory, 32TB (4x8TB) of NVMe SSDs and Mellanox 25Gbps RDMA Ethernet, with each VM running on the server that held its VHDX file.

Their demo ran a 70:30 R:W random 4KB mixed workload and achieved 1M IOPS with a read latency of 140 microseconds and a write latency of 100 microseconds (end to end, at the VM level). [Updated 12Mar2019] They were able to match the performance of a published benchmark without the 1.5TB of Optane memory, without the 25Gbps RDMA Ethernet and without having each VM and its storage running on the same node. They showed this performance running StorPool, KVM and CentOS 7 across 12 nodes, each running both VMs and storage services.
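
For reference, this is roughly the kind of load-generator job that produces a 70:30 random 4KB workload, using fio. It's only a sketch of the workload profile, not StorPool's actual test harness; the queue depth, job count, runtime and target device path are illustrative assumptions.

```python
import subprocess

# Roughly the kind of fio job that generates a 70:30 random read/write 4KB
# workload. Queue depth, job count, runtime and the target device are
# illustrative assumptions, not StorPool's actual test parameters.
fio_cmd = [
    "fio",
    "--name=mixed-70-30",
    "--rw=randrw",          # random mixed read/write
    "--rwmixread=70",       # 70% reads, 30% writes
    "--bs=4k",              # 4KB block size
    "--ioengine=libaio",
    "--direct=1",           # bypass the page cache
    "--iodepth=32",
    "--numjobs=8",
    "--time_based=1",
    "--runtime=60",
    "--filename=/dev/vdb",  # assumed path of the StorPool-backed virtual disk inside the VM
]
print(subprocess.run(fio_cmd, capture_output=True, text=True).stdout)
```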

They also showed information on a pgbench benchmark, a tool I was not familiar with (it turns out to be PostgreSQL's bundled, TPC-B-like database benchmark). The chart had response time on the horizontal axis and TPS (transactions per second) on the vertical axis.
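
For anyone else unfamiliar with it, a pgbench run looks something like the sketch below (wrapped in Python only to keep the examples consistent). The scale factor, client count and duration are illustrative guesses, not the values StorPool used.

```python
import subprocess

# pgbench ships with PostgreSQL and is loosely based on TPC-B.
# Initialize a test database at scale factor 100 (roughly 10M account rows)...
subprocess.run(["pgbench", "-i", "-s", "100", "bench_db"], check=True)

# ...then drive it with 32 concurrent clients across 8 worker threads for 5 minutes.
# pgbench reports TPS and per-transaction latency, the two axes on StorPool's chart.
subprocess.run(["pgbench", "-c", "32", "-j", "8", "-T", "300", "bench_db"], check=True)
```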

What's even more impressive is that alongside all this performance they still offer a reasonable set of data services, such as CoW snapshots, asynchronous replication (with changed-block tracking), thin provisioning, end-to-end data integrity and iSCSI support.

Their target market is mostly MSPs and large customers moving to private cloud configurations. They mentioned deep support for OpenNebula, [Updated 12Mar2019] OpenStack, OnApp and Kubernetes, which means each virtual disk is a volume/LUN. They support VMware and Windows Server/Hyper-V through iSCSI.

~~~~

The fact that they use a proprietary protocol is not great, but if they can deliver the IOPS and response times they showed here along with snapshots, thin provisioning and asynchronous replication, I'm OK with it. [Updated 12Mar2019] The fact that they were able to match the performance of the more expensive system with standard Ethernet, no Optane memory and all VMs running remote from their storage made a significant impression on me.

Want to learn more? Check out these other discussions on StorPool (and other SFD18 vendors):

SFD18 – as intense as it gets by Max Mortillaro (@DarkAvenger), and

Podcast #3 review the SFD18 presenters by Chris M. Evans (@ChrisMEvans) and Matt Leib (@MBLeib).

[Updated 12Mar2019: Boyan Krosnov sent me an email pointing out some mistakes in the post, which have been corrected via the updates above. Editors]

Hedvig storage system, Docker support & data protection that spans data centers

We talked with Hedvig (@HedvigInc) at Storage Field Day 10 (SFD10) a month or so ago and had a detailed deep dive into their technology. (Check out the videos of their sessions here.)

Hedvig implements a software defined storage solution that runs on X86 or ARM processors and depends on a storage proxy operating in a hypervisor host (as a VM) and storage service nodes. Their proxy and the storage services can execute as separate VMs on the same host in a hyper-converged fashion or on different nodes as a separate storage cluster with hosts doing IO to the storage cluster.

Hedvig’s management team comes from hyper-scale environments (Amazon Dynamo/Facebook Cassandra) so they have lots of experience implementing distributed software defined storage at (hyper-)scale.
Continue reading “Hedvig storage system, Docker support & data protection that spans data centers”

Springpath SDS springs forth

Springpath presented at SFD7 and has a new Software Defined Storage (SDS) solution that attempts to provide the richness of enterprise storage in an SDS running on commodity hardware. I would encourage you to watch the SFD7 video stream if you want to learn more about them.

HALO software

Their core storage architecture is called HALO, which stands for Hardware Agnostic Log-structured Object store. We have discussed log-structured file systems before. They are essentially sequential files that are written sequentially but can be read randomly. Springpath HALO was written from scratch, operates in user space and, unlike many SDS solutions, has no dependencies on Linux file systems.
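
As a toy illustration of the log-structured idea (not Springpath's code), here's a minimal append-only object store: every put is appended sequentially to a single log file, while an in-memory index lets reads land anywhere.

```python
import os

class TinyLogStore:
    """Minimal sketch of a log-structured object store: all writes are appended
    sequentially to one log file; an in-memory index maps keys to (offset, length)
    so reads can be random."""

    def __init__(self, path):
        self.log = open(path, "ab+")
        self.index = {}                              # key -> (offset, length)

    def put(self, key, data: bytes):
        offset = self.log.seek(0, os.SEEK_END)       # always append at the tail
        self.log.write(data)
        self.log.flush()
        self.index[key] = (offset, len(data))        # an overwrite just remaps the key

    def get(self, key) -> bytes:
        offset, length = self.index[key]
        self.log.seek(offset)                        # random read anywhere in the log
        return self.log.read(length)
```

A real implementation like HALO obviously adds index persistence, checksums, replication and garbage collection of overwritten data, but the write-sequential/read-random shape is the same.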

HALO supports both data deduplication and compression to reduce storage footprint. The other unusual feature  is that they support both blade servers and standalone (rack) servers as storage/compute nodes.

Tiers of storage

Each storage node can optionally have SSDs as a persistent cache, holding write data and the metadata log. Storage nodes can also hold disk drives used as a persistent, final tier of storage. For blade servers, with their limited drive slots, one can configure blades as part of the caching tier by using SSDs or PCIe flash.

All data is written to the (replicated) caching tier before the host is signaled that the operation is complete. Write data is destaged from the caching tier to the capacity tier over time, as the caching tier fills up. Data reduction (compression/deduplication) is done at destage.

The caching tier also holds frequently read data, and it includes a non-persistent segment in server RAM.

Write data is distributed across caching nodes via a hashing mechanism which allocates portions of an address space across nodes. But during cache destage, the data can be independently spread and replicated across any capacity node, based on the free space available on each node. This is made possible by their file system metadata information.
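
A sketch of what hash-based cache placement could look like is below. Springpath hasn't published the actual algorithm, so the node names, slice size and hash choice are all assumptions, used only to show deterministic placement by (volume, address range).

```python
import hashlib

CACHE_NODES = ["cache-node-0", "cache-node-1", "cache-node-2"]  # hypothetical node names
SLICE_SIZE = 1 << 20    # assume 1MB address-space slices; HALO's real value is unknown

def cache_node_for(volume_id: str, byte_offset: int) -> str:
    """Map a slice of a volume's address space to a caching node deterministically."""
    slice_no = byte_offset // SLICE_SIZE
    digest = hashlib.md5(f"{volume_id}:{slice_no}".encode()).hexdigest()
    return CACHE_NODES[int(digest, 16) % len(CACHE_NODES)]

# At destage time, by contrast, replicas can land on whichever capacity nodes have
# the most free space; that placement is recorded in file system metadata rather
# than recomputed from a hash.
print(cache_node_for("vm-042-disk-1", byte_offset=7 * 1024 * 1024))
```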

The capacity tier is split up into data and metadata partitions. Metadata is also present in the caching tier. Data is deduplicated and compressed at destage, but when read back into cache it is only decompressed (it stays deduplicated). Both capacity tier and caching tier nodes can have different capacities.

HALO has some specific optimizations for flash writes, which include always writing a full SSD/NAND page and using TRIM commands to free up flash pages that are no longer in use.
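
To illustrate the first of those optimizations, here's a minimal sketch of padding a write buffer out to a full NAND page before issuing it. The page size is an assumption, since real devices (and whatever value HALO actually uses) vary.

```python
NAND_PAGE = 16 * 1024   # assumed page size; real NAND pages run roughly 4KB-16KB

def pad_to_full_page(buf: bytes) -> bytes:
    """Pad an outgoing write so the SSD always programs whole pages,
    avoiding partial-page programming and read-modify-write overhead."""
    remainder = len(buf) % NAND_PAGE
    if remainder:
        buf += b"\x00" * (NAND_PAGE - remainder)
    return buf

# Once a log segment's data has all been rewritten elsewhere, its LBA range would
# be handed back to the device with a TRIM/discard (e.g. the BLKDISCARD ioctl on
# Linux) so the SSD can erase and reuse those pages; the exact mechanism HALO
# uses isn't documented.
```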

HALO SDS packaging under different Hypervisors

In Linux & OpenStack environments they run the whole storage stack in Docker containers primarily for image management/deployment, including rolling upgrade management.

In VMware and Hyper-V, Springpath runs as a VM and uses direct-path IO to access the storage. To VMware, Springpath looks like an NFSv3 datastore with VAAI and VVOL support. In Hyper-V, Springpath's SDS appears as an SMB storage device.

For KVM it's NFS storage; for OpenStack one can use NFS or their Cinder plugin for volume support.

The nice thing about Springpath is that you can build a single cluster of storage nodes consisting of VMware, Hyper-V and bare-metal Linux nodes that supports all of them. (Does this mean it's multi-protocol, supporting SMB for Hyper-V and NFSv3 for VMware?)

HALO internals

Springpath supports (mostly) file, block (via the Cinder driver) and object access protocols. The backend caching and capacity tiers both use a log-structured layout internally to stripe data across all the caching and capacity nodes. Data compression works very well with log-structured file systems.

All customer data is stored internally as objects. HALO has a write-log which is spread across the caching tier and a capacity-log which is spread across the capacity tier.

Data is automatically re-balanced across nodes when new nodes are added or old nodes deleted from the cluster.

Data is protected via replication. The system requires a minimum of 3 SSD (caching) nodes and 3 drive (capacity) nodes to be fully operational, but these can reside on the same servers. However, the replication factor can be configured to be less than 3 if you're willing to live with the potential loss of data.

Their system supports both snapshots (up to 2**64 per object) and storage clones for test/dev and backup requirements.

Springpath seems to have quite a lot of functionality for an SDS, although native FC & iSCSI support is lacking. For a file-based SDS for hypervisors, it seems to have a lot of the bases covered.

Comments?

Other SFD7 blogger posts on Springpath:

Picture credit(s): Architectural layout (from SpringpathInc.com) 

Free & frictionless and sometimes open sourced

I was at EMCWorld2015 (see my posts on the day 1 news and day 2&3 news) and IBMEdge2015 this past month and there was a lot of news on software defined storage. And it turns out I was at an HP Storage Deep Dive the previous month and they also spoke on the topic.

One key aspect of software defined storage is how customers can consume the product. I’m not talking about licensing but rather about product trial-ability. One approach championed by HP, EMC, IBM and others is to offer their software defined storage in a new way.

Free & frictionless?

Howard Marks (@DeepStorageNet, DeepStorage.net) and I had Chad Sakac (@sakacc, VirtualGeek) on a recent GreyBeards on Storage podcast to discuss the news coming out of EMCWorld2015, and he used the term free & frictionless to describe a new approach to offering emerging-technology, software-only storage solutions.

  • Frictionless refers to not having to encounter a sales person and not having to provide a lot of information to gain access to a software download. Frictionless is a matter of degree: at one extreme all you have is a direct link to a software download and it fires up without any registration requirements whatsoever; and at the other end, you have to fill out a couple of pages about your company and your plans for the product.
  • Free refers to the ability to use the product for free in limited situations (e.g., test & development), with a full paid-for license and support contract required when it's used outside these limitations.

For example:

  • Microsoft Windows Storage Server 2012 is available for a free 180-day evaluation and can be downloaded directly. I was able to download it without having to supply any information whatsoever. Unfortunately, I don't have any Windows Server hardware floating around that I could use to see if there were any further registration requirements.
  • HP StoreVirtual VSA and StoreOnce VSA are both available as 60-day, free trial offers, downloadable from the StoreVirtual VSA and StoreOnce VSA websites. StoreVirtual VSA is also available with a free, 1TB/3-year license. You have to register for this last option, and all three options require an HP Passport account to download the software. I didn't have an HP Passport account, so I don't know what else was required.
  • VMware Virtual SAN is available for a 60-day, free trial offer (with no capacity or other use restrictions). You will need a 3-server vSphere cluster, so you also get vSphere and vCenter Server software for free at the download website. You will need a VMware account in order to download the software; beyond that, it's unclear to me what's required.
  • EMC ScaleIO will be available for free when used for test and development, by the end of this month. There is no limit on the time you can use the product, no limit on the amount of storage that can be defined and no limit on the number of servers it’s deployed on. Although the website for EMC’s ScaleIO download was up, there was no download link active on the page yet. So I can’t say what’s required to access the download.
  • IBM Spectrum Accelerate (the software-only version of XIV) is going to be available for a 90-day, free trial offer. As far as I know you can do what you like with it for 90 days. I couldn't find any links on their website for the download, but it was just announced last week at IBMEdge2015.

I couldn't find any information on a Hitachi or a NetApp software defined storage solution free trial offer, but I could have missed them in my searches.

There are plenty of other software defined storage solutions out there including Maxta, Nexenta, SpringPath and probably a dozen others, many of which provide free trial offers. Not to mention software defined object/file systems such as Ceph, Gluster, Lustre, etc.

… And sometimes Open Source

One other item of interest out of EMCWorld2015 this month was that ViPR Controller is being open sourced as Project CoprHD (on GitHub). Its source code is scheduled to be loaded around June.

EMC, IBM, HDS, NetApp, VMware and others have all been very active in open source in the past, in areas such as storage support in Linux, OpenStack and other projects. But outside of Pivotal (an EMC Federation company), most of them have not open sourced a real product.

I believe it was Paul Maritz, CEO of Pivotal, who said on stage that one reason to open source a project is to help create an ecosystem around it.

EMC open sourced ViPR Controller primarily to add even more development resources to enhance the solution. The other consideration was that customers adopting ViPR Controller in their data centers were concerned about vendor lock-in. Open sourcing ViPR Controller addresses both of these issues.

My understanding is that Project CoprHD will be licensed under the Mozilla Public License (MPL 2.0) as a standalone project. Customers can now add any storage system support they want, and anyone that's afraid of lock-in can download the software and modify it themselves. MPL 2.0 is a copyleft-style license, which essentially means anyone can modify the source code but changes to the MPL-licensed files must be released under the MPL as well.

My understanding is that ViPR Controller will still be available as a commercial product.

~~~~

From my perspective it all seems to make a lot of sense. Customers creating new applications that could use software defined storage want access to the product for free to try it out to see what it can and can’t do.

EMC's taken the lead in offering theirs for free in test and dev situations; we'll see if the others go along with them.

Comments?

EMCWorld2015 Day 2&3 news

Some additional news from EMCWorld2015 this week:

EMC announced directed availability for DSSD, their rack-scale shared flash storage solution, using a PCIe3 (switched) fabric with 36 dual-ported flash modules which together hold 512 NAND chips for 144TB of NAND flash storage. On the stage floor they had a demonstration pitting a 40-node Hadoop cluster with DAS against a 15-node Hadoop cluster using DSSD, both running Hive and working on the same query. By the time the 40-node/DAS solution got to about 2% of query completion, the 15-node/DSSD-based cluster had finished the query without breaking a sweat. They then ran an even more complex query and it, too, took almost no time at all.

They also simulated a copy of a 4TB file (~32K-128K IOs) from memory to memory, which took literally seconds; then copied it to SSD, which took considerably longer (I didn't catch how long, but much longer than memory); and then showed the same file copy to DSSD, which again took only seconds and looked just a smidgen slower than the memory-to-memory copy.

They said the PCIe fabric (no indication what the driver was) provided so much more parallelism to the dual-ported flash storage that the system was almost able to complete the 4TB copy at memory-to-memory speeds. It was all pretty impressive, albeit a simulation of the real thing.

EMC indicated that they designed the flash modules themselves and expect to double the capacity of DSSD to 288TB shortly. They showed the controller board, which had a mezzanine board over part of it; together the two carried 12 major chips, which I assume had something to do with the PCIe fabric. They said there were two controllers in the system for high availability and that the 144TB DSSD was deployed in 5U of space.

I can see how this would play well for real-time analytics, high-frequency trading and HPC environments, but there's more to shared storage than just speed. Cost wasn't mentioned, and neither was the software driver, but given the ease with which it worked on the Hive query, I can only assume that at some level it must look something like a DAS device but with memory access times… NVMe anyone?

Project CoprHD was announced, which open sources EMC's ViPR Controller software. Many ViPR customers were asking EMC to open source ViPR Controller; apparently they're listening. Hopefully this will enable some participation from non-EMC storage vendors, allowing their storage to be brought under the management of ViPR Controller. I believe the intent is to have an EMC hardened/supported version of Project CoprHD or ViPR Controller coexist with the open source project version, which anyone can download and modify for themselves.

A non-production, downloadable version of ScaleIO was also announced. The test/dev version is a free download with unlimited capacity and full functionality, available for an unlimited time but only for non-production use. Another of the demos on stage this morning was Chad configuring storage across a ScaleIO cluster and using its QoS services to limit the impact of a specific workload. There was talk that ScaleIO had been available previously as a free download, but it took a bunch of effort to find and download. They have removed all these prior hindrances and soon, if not today, it will be freely available to anyone. ScaleIO runs on VMware and other hypervisors (maybe bare metal as well). So if you want to get your feet wet with software defined storage, this sounds like the perfect opportunity.

ECS is being added to EMC's Data Lake foundation. I'm not exactly sure what all the components of the Data Lake solution are, but previously the only Data Lake storage was Isilon based. This week EMC added Elastic Cloud Storage to the picture. Recall that Elastic Cloud Storage comes as either a software-only or a hardware appliance deployment and provides object storage.

I missed Project Liberty before, but it's a virtual VNX appliance, a software-only version. I assume this is intended for ROBO deployments or very low-end business environments. Presumably it runs on VMware and has some sort of storage limitations. It seems more and more EMC products are coming out in virtual appliance versions.

Project Falcon was also announced, which is a virtual Data Domain appliance, a software-only solution targeted at ROBO environments and other small enterprises. The intent is to have an on-ramp for Data Domain backup storage. I assume it runs under VMware.

Project Caspian – rolling out CloudScaling orchestration/automation for OpenStack deployments. On the big stage today, Chad and Jeremy demonstrated Project Caspian on a VCE VxRACK, deploying racks of servers under OpenStack control. Within a couple of clicks they were able to define and deploy OpenStack on bare-metal hardware and deploy applications to the OpenStack servers. They had a monitoring screen which showed the OpenStack server activity (transactions) in real time, showed an over-commit of the rack, and showed how easy it was to add a new rack with more servers. All this seemed to take but a few clicks. The intent is not to create another OpenStack distribution but to provide an orchestration/automation/monitoring layer of software on top of OpenStack to “industrialize OpenStack” for enterprise users. Looked pretty impressive to me.

I would have to say the DSSD box was the most impressive. It would have been interesting to get an up-close look at the box with some more specifications, but they didn't have one on the Expo floor.

EMC ViPR virtues & vexations, but no virtualization

EMC ViPR went GA (general availability) last week. You may recall that EMC announced ViPR at EMCWorld2013 (see my blog post on the EMCWorld2013 day 1). At that time details were a bit sketchy but more information was provided earlier last week as a preview to going GA.

ViPR is first and foremost a framework for managing heterogeneous storage systems. With ViPR in place you can do all your storage provisioning, monitoring and management through ViPR operating panels and policy automation.

At GA ViPR supports EMC VMAX, VNX, VPLEX, Isilon, RecoverPoint and NetApp storage systems. Over time EMC will add more storage systems and commodity storage solutions as they are requested by their customer base.

ViPR Virtues

Essentially ViPR has split open the control and data planes of enterprise storage. The first iteration works mostly at the control plane level but the framework of control and data plane isolation should work just as well here as it does for software defined networking.  ViPR is releasing just one data plane service at GA, that being an object storage interface to storage under management.

I am always surprised by how far storage management has advanced over the past decade or so. The last time I defined volumes was using an op-panel on a storage system with soft keys. Today everything can be done via GUIs or CLI scripts. But most customers say it’s still not enough. I was on a podcast just last week with Howard Marks, DeepStorage Founder and Chief Scientist, and  Dheeraj Pandey, CEO of Nutanix. Dheeraj mentioned that “the enemy of IT is time.” By that he meant that the time to deploy services is always too long and needs to shrink considerably.

Well with ViPR the hope is that you no longer have to understand five or more different operational environments. Rather you can get by with just an understanding of ViPR storage management, leaving the complexities of dealing with individual underlying storage operations consoles to ViPR from now on.

ViPR can handle the reporting, configuring/provisioning, snapshotting (I believe), setting up/tearing down of replication activities and most of the other mundane tasks of managing storage solutions today. In doing this it reduces the requirement for special-purpose storage admins (NetApp, VMAX, VNX) and transitions them to being ViPR admins, which makes them more universally applicable. I suppose you may still need to configure new storage systems with a minimal interface and a single LUN/file system the old-fashioned way, and then ViPR can take over from there.

When ViPR supports commodity storage the intent is that ViPR control and data plane services will support snapshot and replication services using commodity storage alone.

Replication services and BC/DR capabilities are being supplied through RecoverPoint and VPLEX for the underlying storage for the moment, but eventually ViPR should be able to handle the native storage system services for these facilities as well.

ViPR is written in Java and EMC expects to release additional functionality in a more incremental fashion than they have for past products.

As for the data plane, ViPR starts out with support for an object storage interface which can be used with VMAX, VNX and NetApp storage solutions. The object storage data service is just the first of many planned data services, with HDFS scheduled to come out before the end of the year and a file service planned after that.

ViPR ships as a VMware vAPP. It’s a 100% software solution and EMC will provide a test (non-production) version free of charge to anybody and Academic organizations can have ViPR for production use for free as well.

EMC is also planning to publish all of ViPR’s API information and will foster an Open Platform approach to extending ViPR functionality. They don’t intend to be the only developers on this platform.

They also plan to offer a ViPR online service for ViPR developers, with all of the currently supported storage sitting behind a ViPR cluster, which can be accessed to test ViPR enhancements in real time against real hardware.

The base ViPR control plane is licensed on a GB/month basis with price breaks for more capacity under management. Pricing for the base Controller Platform will start at $0.01/GB/month and will go down to $0.005/GB/month at the highest capacity levels. If you want the Object Data Service as well as the Controller Platform, the pricing starts at $0.02/GB/month and goes down to $0.01/GB/month. To get a true picture of TCO one needs to add the cost of the underlying storage to ViPR's price.
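
As a rough illustration of what those rates work out to (EMC hasn't published the capacity break points, so this just applies the quoted entry and floor rates, assuming decimal GB):

```python
GB_PER_TB = 1000   # assuming decimal units; EMC didn't specify

def vipr_monthly_cost(capacity_tb, rate_per_gb_month):
    """Monthly ViPR license cost for a given capacity under management."""
    return capacity_tb * GB_PER_TB * rate_per_gb_month

# Controller Platform only:
print(vipr_monthly_cost(100, 0.01))     # 100TB at the entry rate      -> $1,000/month
print(vipr_monthly_cost(1000, 0.005))   # 1PB at the deepest discount  -> $5,000/month

# Controller Platform + Object Data Service:
print(vipr_monthly_cost(100, 0.02))     # 100TB at the entry rate      -> $2,000/month
```

Plus, as noted above, the cost of the underlying storage itself.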

ViPR Vexations

Java is very flexible and easier to develop functionality in, but using it for data services seems a bit of a stretch. While object and perhaps HDFS access may not be hurt much by longer response latencies, it's unclear which other data services could support the responsiveness they need. Yes, flash everywhere can help, and maybe that becomes the solution if you want better responsiveness, but ViPR data services today don't seem to support server-side flash.

CDMI and SMI-S are not mentioned anywhere here. These industry-standard approaches to managing storage or cloud data should be a critical early deliverable. EMC says their customers aren't asking for them, which is why they're not being planned for. But ViPR represents yet another set of APIs that developers will need to code to in order to support storage management. I suppose the upside is that EMC will take the lower half of this API on for themselves, to support enterprise-class storage systems.

Is EMC the right company to create and support ViPR in the future?

As ViPR moves beyond VMware into OpenStack, Hyper-V and other virtualization solutions, VMware becomes a less likely company to host this effort, and EMC is probably the only place in the EMC family of companies where it continues to make sense. Nonetheless, it seems EMC would not be the best place to host an open storage management framework. For example, the lack of Dell, HDS, HP and IBM storage support under ViPR probably has more to do with EMC's current install base than with those vendors' popularity in hypervisor installations.

ViPR seems to take a somewhat constrained approach to defining software defined storage. I think splitting the control and data plane for storage makes a great deal of sense but until you get into the data path, some of what you want to do is much harder than it needs to be.

EMC mentioned data gravity as being the thing that differentiates storage from networking and server virtualization solutions. By gravity I assume they mean the difficulty of moving TBs of data around a data center. It looks like EMC is going to try to support data migrations like this using a sort of control and data plane hybrid solution, but it's hard for me to see how this could be done non-disruptively without being more intimately involved in the data path/data plane.

ViPR is not storage virtualization

When I think of software defined storage I think of more automation, more heterogeneous management and extreme storage virtualization. ViPR has the first two but expressly ignores the third. I suppose if you want storage virtualization you can use NetApp, VMAX or even VPLEX to do it underneath the covers of ViPR, but it seems to me that an essential data service would be (virtualized) block storage and, for that matter, virtualized file services.

Yes, it's harder to do, and yes, it's ultimately a potential performance bottleneck and functionality choke point, but what we all want is storage virtualization plus better management automation. ViPR really only delivers the second half of that.

~~~~

ViPR is an interesting approach. EMC seems to be of the opinion that better management is what most customers want, and they may be right. Better, more universal storage management can help storage admins be more productive, reducing their time and effort. In my mind, better management also brings more automation, and ViPR promises more of it by publishing its APIs and providing more standardized CLI support across more storage. This will help too.

The lack of storage virtualization is another matter. I suppose when you get down to it, storage virtualization provides better centralized management and non-disruptive migration of data. ViPR already provides the better management piece of the puzzle. If they can also supply highly optimized, non-disruptive data migration, without encountering all the problems of storage virtualization maybe this is the way to go.

Stay tuned as EMC ViPR rolls out more functionality.

Photo Credit(s): Dodge viper seen from the front by Martina Rathgens

Storage changes in vSphere 5.5 announced at VMworld 2013

[Photo: Pat Gelsinger, VMworld2013 keynote, vSphere 5.5 storage changes]

VMworld2013 is going on in San Francisco this week. The big news is the roll out of network virtualization in NSX and vCloud Hybrid Service (vCHS), but there were a few tidbits in the storage arena worth discussing.

  • Virtual SAN public beta – VSAN was released as a public beta and customers can now download a copy of VSAN from www.vsanbeta.com. VSAN constructs a pool of storage out of locally attached disks and flash across two or more hosts. It uses the flash as a read-write cache for the local disks. With VSAN, customers can elect to have multiple tiers of storage supported within a single VSAN pool, as well as support different availability (replication) levels and some other, select characteristics. VSAN can easily scale in performance and capacity by just adding more hosts that have local storage. Now all that stranded local storage and flash at the server level can be used as a VM storage pool. VMware stated that they see VSAN as useful for tier 2/tier 3 application storage and/or backup-archive storage uses. However, they showed one chart with a View Planner application simulation using a 3-host VSAN (presumably with lots of SSD and disk storage) compared against an all-flash array (vendor unknown). In this benchmark the VSAN exactly matched the all-flash external storage in performance (VMs supported). [late update] Lots of debate on what VSAN means to enterprise storage, but it appears to be limited in scope and mainly focused on SMB applications. Chad Sakac did a (really) lengthy post on EMC's perspective on VSAN and Software Defined Storage; if you want to know more, check it out.
  • Virsto – VMware announced GA of Virsto, which uses any external storage and creates a new global storage pool out of it. Apparently, it maps a log-structured file system across the external SAN storage. By doing this it sequentializes all the random write IO coming off of ESX hosts. It supports thin provisioning, snapshots and read-write clones. One could see this as almost a write cache for VM IO activity, but read IOs are also, by definition, spread (extremely wide striped) across the storage pool, which should improve read performance as well. You configure external storage as normal and present those LUNs to Virsto, which then converts that storage pool into “vDisks” which can be configured as VM storage. Probably more to see here, but it's available today. Before the acquisition one had to install Virsto into each physical host that was going to define VMs using Virsto vDisks. It's unclear how much Virsto has been integrated into the hypervisor, but over time one would assume that, like VSAN, it would be buried underneath the hypervisor and be available to any vSphere host.
  • vSphere Flash Read Cache – customers with PCIe flash cards and vCenter Ops Manager can now use them to support a read cache for data access. vSphere Flash Read Cache is apparently vMotion aware, such that as you move VMs from one ESX host to another, the read cache buffer moves with them. Flash Read Cache is transparent to the VMs and can be assigned on a per-VMDK basis.
  • vSphere 5.5 low-latency support – it's unclear what VMware actually did, but they now claim vSphere 5.5 supports low-latency applications, like FinServ apps. They claim to have reduced the “jitter”, or variability, in IO latency that was present in previous versions of vSphere. Presumably they shortened the IO and networking paths through the hypervisor, which should help. I suppose if you have a VMDK which ends up on SSD storage someplace, one can have a more predictable response time. But the critical question is how much overhead the hypervisor IO path adds to the base O/S. With all-flash arrays now sporting latencies under 100 µsec, adding another 10 or 100 µsec can make a big difference. In VMware's quest to virtualize any and all mission-critical apps, low-latency apps are one of the last bastions of physical server apps left to conquer. Consider this a step to accommodate them.
  • vVols – VMware keeps talking about vVols as an attempt to extend their VSAN “policy driven control plane” functionality out to networked storage, but there's still no GA yet. The (VASA 2 or vVol) specs seem to have been out for a while now, and I have heard from at least two “major” vendors that they have support in place today, but VMware still isn't announcing formal availability. It's unclear what the holdup is, but maybe the specs are more in a state of flux than what's depicted externally.

Most of this week was spent talking about NSX, VMware’ network virtualization and vCloud Hybrid Services. When they flashed the list of NSX partners on the screen Cisco was absent. Not sure what this means but perhaps there’s some concern that NSX will take revenue away from Cisco.

As for vCHS, apparently this is a VMware-run public cloud, with two (now expanding to three) data centers in the US, that customers can use to support their own hybrid cloud services. VMware announced that SAVVIS is now offering vCHS services as well as VMware, with data centers in NY and Chicago. There was some talk about vCHS offering object storage services like Amazon's S3, but there was nothing specific about when. [Late update] Pat did mention that a future offering will provide DR-as-a-Service using vCHS as a target for SRM. That seems to match what Microsoft seems to be planning for Azure and Hyper-V DR.

That’s about it as far as I can tell. Didn’t hear any other news on storage changes in vSphere 5.5. But this is the year of network virtualization. Can’t wait to see what they roll out next year.

The promise of software defined storage

[Image: Data hypervisor, software defined storage, data plane, control plane. (c) 2012 Silverton Consulting, Inc. All rights reserved]

Not sure why, but all the hype around software defined storage seems to be reaching a crescendo. Possibly it's due to conference season coming up, but it started earlier this year. I attended an SNW analyst session on software defined storage that had on its panel technical people from HDS, IBM, DataCore and VMware. It seems the distinction between storage virtualization and software defined storage gets slimmer every time we talk about it. I have written before about software defined storage (see my Data Hypervisor post).

Server, networking and storage virtualization today

Server virtualization makes an awful lot of sense, has made lots of money and has arguably been around for decades now, especially on mainframe systems. Servers have so much power today that dedicating one to a single workload just doesn't make any sense anymore.

Network virtualization from OpenFlow and others also makes a lot of sense (see OpenFlow the next wave in networking and OpenFlow part 2, Cisco's response posts). Here we aren't necessarily boosting network utilization as much as changing resource allocation to deal with altered traffic flows. That, and the fact that provisioning, monitoring and other management characteristics can now be under programmatic control by the user, makes these systems very appealing, especially to organizations that exhibit varying network activity over time.

Storage virtualization has been around for a long time too and essentially places a storage system abstraction layer on top of a group of other, heterogeneous storage systems. This provides a number of capabilities, such as allowing data to be migrated from one storage system to another without host knowledge or intervention. Other storage virtualization features include centralized management, common storage features, different storage personalities (protocols), etc. But just being able to migrate data from one storage system to another without host intervention or knowledge provides an awful lot of value, especially to large data centers which refresh technology frequently.

Software defined storage compared to server virtualization

Software defined storage seems to imply some ability to marry storage virtualization services to RESTful and other APIs which would allow programmatic storage provisioning, monitoring and management. This would allow data centers to manage and control their storage without involving storage administrators in day-to-day activities.
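
In concrete terms, the goal is something like the sketch below: an orchestration layer or self-service portal creates a volume through a REST call rather than through a ticket to a storage admin. The endpoint, payload, volume names and authentication here are invented for illustration and don't correspond to any particular product's API.

```python
import requests

API = "https://storage-mgmt.example.com/api/v1"   # hypothetical management endpoint
HEADERS = {"Authorization": "Bearer <token>"}      # placeholder credentials

def provision_volume(name: str, size_gb: int, service_tier: str = "gold") -> str:
    """Create a volume programmatically and return its ID -- no storage admin in the loop."""
    resp = requests.post(
        f"{API}/volumes",
        json={"name": name, "size_gb": size_gb, "service_tier": service_tier},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

vol_id = provision_volume("analytics-scratch-01", size_gb=512)
print(f"provisioned volume {vol_id}")
```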

When I compare this to server virtualization the above described capabilities really don’t increase storage utilization much.  Yes, by automating provisioning or even running thin provisioning one can potentially boost storage capacity utilization but you really haven’t increased the IO utilization much by doing this.

Looking under the covers of most storage systems one might find that CPU cores are pretty idle, but data paths and storage devices are typically running flat out. One problem is that today's enterprise storage subsystems are already highly shared across applications and users, so there is really no barrier to sharing these resources as widely as possible. As such, storage system IOPS and/or bandwidth utilization is already pretty high. I would say a typical enterprise application environment's storage subsystem usually runs above 30% utilization, reaching 50% or more during peak periods. Increasing IOPS utilization much beyond that risks seriously impacting peak performance periods.

Now if somehow one could migrate data that doesn't need high performance to lower-performing storage, and data that does need it to higher-performing storage, that could help increase performance utilization considerably. But many storage systems already do this internally through automated storage tiering, and some can even do it across storage systems using storage virtualization.

But the underlying problem here is that it takes a lot of time, resources and effort to move TBs of data around a data center, especially when it's doing other work. So, other than something akin to storage tiering across storage systems, we are unlikely to see much increase in storage performance utilization with a gaggle of multiple storage systems. I suppose in the future moving TBs of data may take much less time and fewer resources than today, but then the problem becomes moving PBs of data around.

Software defined storage compared to network virtualization

When I compare the above capabilities to network virtualization it doesn’t look very similar.   There’s really no way to change the storage performance to optimize it for one direction (or application) at this instant and then move storage performance around to another application a couple of hours later.  Yes, again automated storage tiering can do this, and yes some of these systems can tier across storage systems using storage virtualization but in general barring storage tiering there’s nothing like this available today.  

Maybe, inside a storage system, the data paths could somehow be programmatically reconfigured to offer, say, more internal bandwidth to the device-to-cache path vs. the cache-to-frontend path. Changing or reconfiguring data path resources like this could certainly optimize the internal performance of a storage system, and this would be a worthwhile feature of any software defined storage. Knowing which path is more important to one application and less important to all the others will take some smarts, across the storage system and host O/S, but it's certainly feasible. So, with RESTful interfaces, APIs or application hints, data paths could be reconfigured on demand to support applications that are all vying for IO activity.

With these sorts of capabilities software defined storage starts to look a little more like software defined networking.

Software defined storage on its own

But in the end we always reach a fundamental limit of IO capability in today's storage systems: the devices themselves. Yes, you can have 2000 or more devices in high-end storage today, and yes, you can have all-flash arrays. However, most storage systems are configured to keep whatever devices they have pretty busy as much of the time as possible.

Until we create some sort of storage device that can provide more performance than most applications can ever use, even when they are shared via a storage system, software defined storage capabilities will be limited.  Today’s SSDs have certainly boosted performance considerably but this just means that most applications that warrant all flash arrays are performing faster.  It just so happens that some applications can take all the performance you throw at them and still want more.

I suppose if SSD costs were to come down to match NL-SAS storage prices while still maintaining their 100X faster IOP rate, then maybe a storage system built on such devices could be more “software defined” than others. And maybe that's where everyone is headed, believing NAND/SSD price trends will drive costs down so much that everyone can have all the IOPS performance they will ever need out of a single storage system.

Yet this still just looks like the shared storage we have today, only more of it. So we return to our roots and see that software defined storage is just another way to add more storage sharing. Storage virtualization is nice, newer, more programmable storage systems are even better, but faster, cheaper storage devices are best of all.

So what we really need is much cheaper SSDs to realize the full promise of software defined storage. In the meantime, opening up APIs and providing RESTful, programmatic interfaces for provisioning, monitoring, managing and tuning storage system data paths and other performance characteristics is all we can hope for.

Comments?