SCISFS121227-010(001) (c) 2013 Silverton Consulting, Inc. All Rights Reserved
We return to our perennial quest to understand file storage system performance and our views on NFS vs. CIFS performance. As you may recall, SPECsfs2008 believes that there is no way to compare the two protocols because:
CIFS/SMB is “stateful” and NFS is “stateless”
The two protocols issue different requests.
Nonetheless, I feel it’s important to go beyond these concerns and see if there is any way to assess the relative performance of the two protocols. But first a couple of caveats on the above chart:
There are 25 CIFS/SMB submissions, most of them for SMB (small and medium business) class systems, vs. 64 NFS submissions, which are all over the map
There are about 12 systems that have submitted the exact same configurations for both the CIFS/SMB and NFS SPECsfs2008 benchmarks.
This chart does not include any SSD or FlashCache systems, just disk-drive-only file storage.
All that being said, let us now see what the plot has to tell us. First, the regression lines were computed by Excel using linear regression. The regression coefficient for CIFS/SMB is much better, at 0.98 vs. NFS's 0.80. But this just means that there is a better correlation between CIFS/SMB throughput operations per second and the number of disk drives in a benchmark submission than is seen for NFS.
Second, the equation and slope of the two lines are a clear indicator that CIFS/SMB provides more throughput operations per second per disk than NFS. What this tells me is that, given the same hardware and all things being equal, the CIFS/SMB protocol should perform better than the NFS protocol for file storage access.
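For anyone who wants to reproduce this sort of fit outside of Excel, here's a minimal Python sketch of the same calculation. The (disks, ops/sec) pairs are made up for illustration only; the real numbers come from the published SPECsfs2008 submissions.

```python
# Minimal sketch: fit throughput ops/sec against disk count, the same linear
# regression Excel computes for the chart. The data points below are
# illustrative only -- substitute real SPECsfs2008 submission values.
import numpy as np
from scipy import stats

disks = np.array([24, 48, 96, 144, 288, 432])                  # drives per submission
ops = np.array([11000, 23000, 45000, 66000, 135000, 198000])   # throughput ops/sec

fit = stats.linregress(disks, ops)
print(f"ops/sec per disk (slope): {fit.slope:.1f}")
print(f"intercept: {fit.intercept:.1f}")
print(f"R^2 (regression coefficient): {fit.rvalue**2:.2f}")
```

Running the same fit separately over the CIFS/SMB and the NFS submissions and comparing the two slopes gives the per-disk throughput difference discussed above.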
Just for the record, the CIFS/SMB version used by SPECsfs2008 is currently SMB2 and the NFS version is NFSv3. SMB3 was just released last year by Microsoft; there aren't that many vendors (other than Windows Server 2012) that support it in the field yet, and SPECsfs2008 has yet to adopt it as well. NFSv4 has been out since 2000, but SPECsfs2008 and most vendors never adopted it. NFSv4.1 came out in 2010 and has still seen little adoption.
So these results are based on older, but still current, versions of both protocols available in the market today.
So, given all that, if I had the option I would run the CIFS/SMB protocol for my file storage.
Comments?
More information on SPECsfs2008 performance results as well as our NFS and CIFS/SMB ChampionsCharts™ for file storage systems can be found in our NAS Buying Guide available for purchase on our web site.
~~~~
The complete SPECsfs2008 performance report went out in SCI’s December newsletter. But a copy of the report will be posted on our dispatches page sometime this month (if all goes well). However, you can get the latest storage performance analysis now and subscribe to future free newsletters by just using the signup form above right.
As always, we welcome any suggestions or comments on how to improve our SPECsfs2008 performance reports or any of our other storage performance analyses.
[Full disclosure: I helped develop the underlying hardware for VSM 1-3 and also, way back, worked on HSC for StorageTek libraries.]
Virtual Storage Manager System 6 (VSM6) is here. I'm not exactly sure when VSM5 or VSM5E were released, but it seems like an awfully long time ago in Internet years. The new VSM6 migrates the platform to Solaris software and hardware while expanding capacity and improving performance.
What’s VSM?
Oracle StorageTek VSM is a virtual tape system for mainframe System z environments. It provides a multi-tiered storage system that includes both physical disk and (optional) tape storage for the long-term big data requirements of z/OS applications.
VSM6 emulates up to 256 virtual IBM tape transports but actually moves data to and from VSM Virtual Tape Storage Subsystem (VTSS) disk storage and backend real tape transports housed in automated tape libraries. As VSM data ages, it can be migrated out to physical tape such as a StorageTek SL8500 Modular [Tape] Library system that is attached behind the VSM6 VTSS or system controller.
VSM6 offers a number of replication solutions for DR to keep data in multiple sites in sync and to copy data to offsite locations. In addition, real tape channel extension can be used to extend VSM storage to span onsite and offsite repositories.
One can cluster together up to 256 VSM VTSSs into a tapeplex, which is then managed under one pane of glass as a single large data repository using HSC software.
What’s new with VSM6?
The new VSM6 hardware increases volatile cache to 128GB, up from 32GB in VSM5. Non-volatile cache goes up as well, now supporting up to ~440MB, up from 256MB in the previous version. Power, cooling and weight all seem to have gone up too (the wrong direction??) vis-à-vis VSM5.
The new VSM6 removes the ESCON option of previous generations and moves to 8 FICON and 8 GbE Virtual Library Extension (VLE) links. FICON channels are used for both host access (frontend) and real tape drive access (backend). VLE was introduced in VSM5 and offers a ZFS-based commodity disk tier behind the VSM VTSS for storing data that requires longer residency on disk. Also, VSM supports a tapeless or disk-only solution for high-performance requirements.
System capacity moves from 90TB (gosh, that was a while ago) to now support up to 1.2PB of data. I believe much of this comes from supporting the new T10000C tape cartridge and drive (5TB uncompressed). With the ability of VSM to cluster more VSM systems into the tapeplex, system capacity can now reach over 300PB (presumably 256 VTSSs at ~1.2PB apiece, which works out to roughly 300PB).
Somewhere along the way VSM started supporting triple redundancy for the VTSS disk storage, which provides better availability than RAID 6. I'm not sure why they thought this was important, but it does deal with the growing exposure to disk failures.
Oracle stated that VSM6 supports up to 1.5GB/sec of throughput. Presumably this is landing data on disk or transferring the data to backend tape, but not both. There doesn't appear to be any standard benchmarking for these sorts of systems, so we'll take their word for it.
Why would anyone want one?
Well, it turns out plenty of mainframe systems use tape for a number of things, such as data backup, HSM and big data batch applications. Once you get past the sunk costs for tape transports, automation, cartridges and VSMs, VSM storage can be a pretty competitive data storage solution for the mainframe environment.
The fact that most mainframe environments grew up with tape and long ago invested in transports, automation and new cartridges probably makes VSM6 an even better buy. But tape is also making a comeback in open systems, with LTO-5 and now LTO-6 coming out, and with Oracle's 5TB T10000C cartridge and IBM's 4TB 3592 JC cartridge.
Not to mention Linear Tape File System (LTFS), a new tape format that provides a file system for tape data and has brought renewed interest in all sorts of tape storage applications.
Competition not standing still
EMC introduced their Disk Library for Mainframe 6000 (DLm6000) product, which supports two different backends to deal with the diversity of tape use in the mainframe environment. Moreover, IBM has continuously enhanced their virtual tape server, the TS7700, but I would have to say it doesn't come close to these capacities.
Lately, when I've talked with long-time StorageTek mainframe tape customers they have all said the same thing: when is VSM6 coming out, and when will Oracle get its act in gear and start supporting us again? Hopefully this signals a new emphasis on this market. Although who is losing and who is winning in the mainframe tape market is the subject of much debate, there is no doubt that the lack of any update to VSM has hurt Oracle's StorageTek tape business.
Something tells me that Oracle may have fixed this problem. We hope to start seeing more timely VSM enhancements in the future, for their sake and especially for their customers.
Attended SNWUSA this week in San Jose. It's hard to see the show gradually change when you attend each one, but it does seem that end-user content and attendance are increasing proportionally. This should bode well for future SNWs. There was always a good number of end users at the show, but in the past the bulk of the attendees were from storage vendors.
Another large storage vendor dropped their sponsorship: HDS no longer sponsors the show, and the last large vendor still standing at the show is HP. Some of this is cyclical; perhaps the large vendors will come back for the spring show, next year in Orlando, FL. But EMC, NetApp and IBM seem to have pretty much dropped sponsorship for the last couple of shows at least.
SSD startup of the show
Skyhawk hardware (c) 2012 Skyera, all rights reserved (from their website)
The best new SSD startup had to be Skyera: a 48TB raw flash, dual-controller system supporting the iSCSI block protocol and using real commercial-grade MLC. The team at Skyera seems to be all ex-SandForce executives and technical people.
Skyera's team has designed a 1U box called the Skyhawk, with a phalanx of NAND chips, their own controller(s) and other logic as well. They support software compression and deduplication, as well as specially designed RAID logic that they claim reduces extraneous writes to something just over 1 for RAID 6, dual-drive-failure-equivalent protection.
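Skyera hasn't published how their RAID logic works, so the following is just a back-of-the-envelope sketch (not their algorithm) of why full-stripe, log-structured writes can push RAID 6 write overhead down toward 1, versus the classic read-modify-write penalty:

```python
# Illustrative arithmetic only -- NOT Skyera's actual RAID design.
# Classic RAID-6 small writes suffer read-modify-write overhead, while a
# log-structured layout that only writes full stripes pays just the parity tax.

def rmw_raid6_ios_per_block():
    # Small random update: read old data + old P + old Q,
    # then write new data + new P + new Q = 6 IOs per updated block.
    return 6

def full_stripe_write_amplification(data_devices=10, parity_devices=2):
    # Full-stripe write: every device is written once, so physical writes
    # per user data block = (data + parity) / data.
    return (data_devices + parity_devices) / data_devices

print("RAID-6 read-modify-write IOs per block:", rmw_raid6_ios_per_block())
print("Full-stripe (10+2) write amplification:",
      full_stripe_write_amplification())   # -> 1.2, i.e. "just over 1"
```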
Skyera's underlying belief is that, just as consumer HDAs took over from the big, monster 14″ and 11″ disk drives in the '90s, sooner or later commercial NAND will take over from eMLC and SLC. And if one elects to stay with eMLC and SLC technology, you are destined to be one to two technology nodes behind. That is, commercial MLC (in USB sticks, SD cards, etc.) is currently manufactured with 19nm technology, while eMLC and SLC NAND technology is back at 24 or 25nm. But 80-90% of the NAND market is being driven by commercial MLC NAND. Skyera came out this past August.
Coming in second place was Arkologic, an all-flash NAS box using SSD drives from multiple vendors. In their case, a fully populated rack holds about 192TB (raw?) with an active-passive controller configuration. The main concern I have with this product is that all their metadata is held in UPS-backed DRAM (??), and they have up to 128GB of DRAM in the controller.
Arkologic's main differentiation is supporting QoS on a file system basis and having some connection with a NIC vendor that can provide end-to-end QoS. The other thing they have is a new RAID-AS, which is specially designed for flash.
I just hope their UPS is pretty hefty and they don't sell it someplace where power is very flaky, because when that UPS gives out, kiss your data goodbye, as your metadata is held nowhere else – at least that's what they told me.
Cloud storage startup of the show
There was more cloud stuff going on at the show; I talked to at least three or four cloud gateway providers. But the cloud startup of the show had to be Egnyte. They supply storage services that span cloud storage and on-premises storage with an in-band or out-of-band solution, and provide file synchronization services for file sharing across multiple locations. They have some hooks into NetApp and other major storage vendor products that allow them to be out-of-band for these environments, but they would need to be in-band for other storage systems. It seems an interesting solution that, if successful, may help accelerate the adoption of cloud storage in the enterprise, as it makes it transparent whether the storage you access is local or in the cloud. How they deal with the response time differences is another question.
Different idea startup of the show
The new technology showcase had a bunch of vendors, some of which I had never heard of before, but one that caught my eye was Actifio. They were at VMworld but I never got time to stop by. They seem to be taking another shot at storage virtualization. Only in this case, rather than focusing on non-disruptive file migration, they are taking on the task of doing a better job of point-in-time copies for iSCSI- and FC-attached storage.
I assume they are in the middle of the data path in order to do this and they seem to be using copy-on-write technology for point-in-time snapshots. Not sure where this fits, but I suspect SME and maybe up to mid-range.
Most enterprise vendors solved these problems a long time ago, but at the low end it's a little more variable. I wish them luck, but although most customers use snapshots if their storage has them, those that don't seem unable to understand what they are missing. And then there's the matter of being in the data path?!
~~~~
If there was a hybrid startup at the show I must have missed them. Did talk with Nimble Storage and they seem to be firing on all cylinders. Maybe someday we can do a deep dive on their technology. Tintri was there as well in the new technology showcase and we talked with them earlier this year at Storage Tech Field Day.
The big news at the show was Microsoft purchasing StorSimple, a cloud storage gateway/cache. Apparently StorSimple did a majority of their business with Microsoft's Azure cloud storage, and it seemed to make sense to everyone.
The SNIA suite was hopping as usual and the venue seemed to work well, although I would say the exhibit floor and lab area were a bit too big. But everything else seemed to work out fine.
On Wednesday, the CIO from Dish talked about what it took to completely transform their IT environment from a management and leadership perspective. Seemed like an awful big risk but they were able to pull it off.
All in all, SNW is still a great show to learn about storage technology at least from an end-user perspective. I just wish some more large vendors would return once again, but alas that seems to be a dream for now.
The above chart was from our July newsletter's Exchange Solution Reviewed Program (ESRP) performance analysis for 1,000-and-under mailbox submissions. I have always liked response times, as they seem to be mostly the result of tight engineering, coding and/or system architecture. Exchange response times represent a composite of how long it takes to do a database transaction (whether a read, write or log write). Latencies are measured at the application (Jetstress) level.
On the chart we show the top 10 database read response times for this class of storage. We assume that DB reads are a bit more important than writes or log activity, but they are all probably important. As such, we also show the response times for DB writes and log writes, but the ranking is based on DB reads alone.
In the chart above, I am struck by the variability in write and log write performance. Writes range anywhere from ~8.6 down to almost 1 msec. The extreme variability here raises a bunch of questions. My guess is that the wide variability probably signals something about caching; whether it's cache size, cache sophistication or drive destage effectiveness is hard to say.
Why EMC seems to dominate DB read latency in this class of storage is also interesting. EMC’s Celerra NX4, VNXe3100, CLARiiON CX4-120, CLARiiON AX4-5i, Iomega ix12-300 and VNXe3300 placed in the top 6 slots, respectively. They all had a handful of disks (4 to 8), mostly 600GB or larger and used iSCSI to access the storage. It’s possible that EMC has a great iSCSI stack, better NICs or just better IO scheduling. In any case, they have done well here at least with read database latencies. However, their write and log latency was not nearly as good.
We like ESRP because it simulates a real application that’s pervasive in the enterprise today, i.e., email. As such, it’s less subject to gaming, and typically shows a truer picture of multi-faceted storage performance.
~~~~
The complete ESRP performance report with more top 10 charts went out in SCI’s July newsletter. But a copy of the report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the ESRP performance analysis now and subscribe to future free newsletters by just using the signup form above right.
For a more extensive discussion of current SAN block system storage performance covering SPC (Top 30) results as well as ESRP results with our new ChampionsChart™ for SAN storage systems, please see SCI’s SAN Storage Buying Guide available from our website.
As always, we welcome any suggestions or comments on how to improve our analysis of ESRP results or any of our other storage performance analyses.
(SCIESRP120429-001) 2012 (c) Silverton Consulting, All Rights Reserved
The above chart is from our April newsletter on Microsoft Exchange 2010 Solution Reviewed Program (ESRP) results for the 1,000 (actually 1001) to 5,000 mailbox category. We have taken the database transfers per second, normalized them for the number of disk spindles used in the run and plotted the top 10 in the chart above.
A couple of caveats first: we chart only disk-based systems in this and similar charts of per-spindle performance. Although it probably doesn't matter as much at this mid-range level, in other categories SSDs or flash cache can be used to support much higher performance on a per-spindle measure like the above. As such, submissions with SSDs or flash cache are strictly eliminated from these spindle-level performance analyses.
Another caveat, specific to this chart, is that ESRP database transaction rates are somewhat driven by the Jetstress parameters (specifically the simulated IO rate) used during the run. For this mid-level category, this parameter can range from a low of 0.10 to a high of 0.60 simulated IO operations per second per mailbox, with a median of ~0.19. But what I find very interesting is that in the plot above we have both the lowest rate (0.10 for #6, the Dell PowerEdge R510 at 1.5K mailboxes) and the highest (0.60 for #9, the HP P2000 G3 10GbE iSCSI MSA at 3.2K mailboxes). So that doesn't seem to matter much on this per-spindle metric.
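To make the metric (and the Jetstress knob) concrete, here's a small Python sketch with made-up submissions; the real figures are in the published ESRP reports. The per-spindle metric is just DB transfers per second divided by spindle count, and the simulated IO rate times the mailbox count gives the rough IO load each run asks for.

```python
# Illustrative only -- submission names and numbers below are invented.
submissions = [
    # (name, db_transfers_per_sec, spindles, mailboxes, simulated_io_per_mailbox)
    ("Example array A", 2200, 24, 1500, 0.10),
    ("Example array B", 3800, 36, 3200, 0.60),
]

for name, xfers, spindles, mailboxes, io_rate in submissions:
    per_spindle = xfers / spindles            # the metric plotted in the chart
    requested_iops = mailboxes * io_rate      # rough Jetstress-driven IO target
    print(f"{name}: {per_spindle:.0f} DB transfers/sec/spindle, "
          f"~{requested_iops:.0f} simulated IOPS requested")
```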
That being said, I always find it interesting that the database transactions per second per disk spindle varies so widely in ESRP results. To me this says that storage subsystem technology, firmware and other characteristics can still make a significant difference in storage performance, at least in Exchange 2010 solutions.
Often we see spindle count and storage performance as highly correlated. That is definitely not the case for mid-range ESRP (although that's a different chart than the one above).
Next, disk speed (RPM) can often have a high impact on storage performance, especially for OLTP-type workloads that look somewhat like Exchange. However, in the above chart the middle four and the last one (#4-7 & #10) used 10Krpm (#4, #5) or slower disks. So disk speed doesn't seem to drive Exchange database transactions per second per spindle either.
Thus, I am left with my original thesis that storage subsystem design and functionality can make a big difference in storage performance, especially for ESRP in this mid-level category. The range in the top 10 contenders, spanning from ~35 (Dell PowerEdge R510) to ~110 (Dell EqualLogic PS Server), speaks volumes on this issue: a multiple of over 3X from bottom to top performance on this measure. In fact, the overall range (not shown in the chart above) spans from ~3 to ~110, a factor of almost 37X from worst to best performer.
Comments?
~~~~
The full ESRP 1K-5K mailbox performance report went out in SCI's April newsletter. But a copy of the full report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the full ESRP performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.
For a more extensive discussion of current SAN or block storage performance covering SPC-1 (top 30), SPC-2 (top 30) and all three levels of ESRP (top 20) results please see SCI’s SAN Storage Buying Guide available on our website.
As always, we welcome any suggestions or comments on how to improve our analysis of ESRP results or any of our other storage performance analyses.
On the second day of Dell Storage Forum in Boston, they announced:
New FluidFS (Exanet) FS8600 front-end NAS gateway for Dell Compellent storage. The new gateway can be scaled from one to four dual-controller configurations and can support a single file system/namespace of up to 1PB in size. The FS8600 is available with 1GbE or 10GbE options and supports 8Gbps FC attachment to backend storage.
New Dell Compellent SC8000 controllers based on Dell's 2U, 12th-generation server hardware that can now be cooled with ambient air (115F?) and consume less power than the previous Series 40 whitebox server controllers. Also, the new hardware comes with dual 6-core processors and supports 16 to 64GB of DRAM per controller, or up to 128GB with dual controllers. The new controllers GA this month and support PCIe slots for backend 6Gbps SAS and frontend connectivity of 1GbE or 10GbE iSCSI, 10GbE FCoE or 8Gbps FC, with 16Gbps FC coming in 2H2012.
New Dell Compellent SC200 and SC220 drive enclosures, available in a 2U 24-SFF-drive or a 2U 12-LFF-drive configuration, supporting 6Gbps SAS connectivity.
New Dell Compellent SC6.0 operating software supporting a 64-bit O/S for larger memory and dual-/multi-core processing.
New FluidFS FS7600 (1GbE)/FS7610 (10GbE) 12th-generation server front-end NAS gateways for Dell EqualLogic storage, which support asynchronous replication at the virtual file system level. The new gateways also support 10GbE iSCSI and can be scaled up to 507TB in a single namespace.
New FluidFS NX3600 (1GbE)/NX3610 (10GbE) 12th-generation server front-end NAS gateways for PowerVault storage systems, which can support up to 576TB of raw capacity for a single gateway or scale to two gateways for up to 1PB of raw storage in a single namespace/file system.
AppAssure 5, which includes better performance based on a new backend object store to protect even larger datasets. At the moment AppAssure is a Windows-only solution, but with block deduplication/compression and changed-block tracking it is already WAN optimized. Dell announced that Linux support will be available later this year.
Probably more interesting was the talk about, and demo of, a prototype from their RNA Networks acquisition, which supports cache-coherent PCIe SSD cards in Dell servers. The new capability is still on the drawing board but is intended to connect to Dell Compellent storage and move tier 1 out to the server. Lots more to come on this. They call it Project Hermes, for the Greek messenger god. Not sure, but something about having lightning bolts on his shoes comes to mind…
Coraid EtherDrive mounted in rack by redjar (cc) (From flickr)
Held a Storage Field Day briefing yesterday with Coraid, the creators of EtherDrive, an all-Ethernet SAN storage system.
The advantages of EtherDrive are significant, not the least of which is that it is very cheap storage. It scales independently, as each storage server/node is a separate storage system. Also, it's very easy to set up, as each storage drive has its own MAC address.
I suppose the downside is that it uses an internal storage access protocol to supply access across the Ethernet. This requires a special host device driver, and they have to modify the firmware of a standard Intel NIC to support their internal protocol. After all this is in place, their storage LUNs appear as a parallel SCSI-like service to Linux and Windows hosts, but the system is actually using the Ethernet protocol at a low level to attach their shared storage to the hosts. They call the protocol ATA over Ethernet (AoE), but they could just as easily have called it the Coraid storage protocol.
It is a connectionless protocol that uses Ethernet Layer 2 switching to supply a datagram storage service. Data is packaged into 64KB blocks, then broken up into jumbo frames and sent to the MAC address for the storage drive, which somehow maps to a storage server and disk LUN. Data is RAID-protected within a storage server.
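Coraid didn't walk us through their wire format, so purely as an illustration of the flow described above, here's a rough Python sketch of splitting a 64KB block into jumbo-frame-sized datagrams addressed to a drive's MAC address. The payload size and the MAC are assumptions for the example, not Coraid specifics.

```python
# Rough sketch only -- not Coraid's actual AoE frame layout.
# A 64KB block won't fit in a single Ethernet frame, so it is carved into
# jumbo-frame-sized pieces, each sent to the target drive's MAC address.
BLOCK_SIZE = 64 * 1024        # bytes per storage block
FRAME_PAYLOAD = 8 * 1024      # assumed usable payload per jumbo frame

def build_datagrams(drive_mac, lba, data):
    """Split one block write into (MAC, LBA, offset, payload) datagrams."""
    return [(drive_mac, lba, off, data[off:off + FRAME_PAYLOAD])
            for off in range(0, len(data), FRAME_PAYLOAD)]

datagrams = build_datagrams("52:54:00:ab:cd:01", lba=4096, data=bytes(BLOCK_SIZE))
print(f"{len(datagrams)} jumbo frames per 64KB block")   # -> 8 with these sizes
```

Since each datagram stands on its own, different frames can take different paths through the Layer 2 fabric, which is where the multipath behavior discussed next comes from.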
The advantage of the connectionless protocol is that it is very robust in the face of errors and it can take advantage of any number of parallel paths that are available between the storage and the servers that are using it. (As an aside, another session in Storage Field Day was at Brocade and we got to see some of their SAN fabric switching gear – subject for another post someday).
They showed one animation of an iSCSI-with-MPIO transfer of a 64KB block, but it ended up using only one of the MPIO paths because the other was only used in failure scenarios. They then showed their approach, and it used all the paths between the server and the storage.
Apparently, in native mode (whatever that is), LUNs are limited in size to something less than a disk device, but they seem to have an LVM/virtualization server, which sits somewhat out of band and provides multi-disk LUNs, replication, snapshots and other advanced services. We didn't talk about this capability much.
Coraid said they have 1,500 customers, plenty with more than a PB. In fact, they handed out tokens which granted us honorary memberships in the PB club. They mentioned Ford, NASA, Sony and a bunch of other well-known G100 companies around the world that have lots of Coraid storage. Also, it appears there is quite a lot of Coraid in DoD installations around the world. In fact, one of the execs at the session was a former IT exec for the Marines who liked it so much he now works for Coraid.
This was the second-to-last stop of the day, so by this time the Storage Field Day team was somewhat dragging, but we managed to ask a bunch of pertinent questions. If you want to see what it looks like, I suggest you watch the video (which can be seen here; I'm the handsome guy in the brown sports coat close to the front of the room).
Don't know why I had never heard of them before, but they are unlike anything else I am aware of in the storage industry. They certainly are a block storage system, but they seem to have taken DAS and somehow put it out as shared storage, at the other end of an Ethernet plug…
In my opinion, I would say Coraid has an awareness gap. Although with the customer sizes they were mentioning, it seems that word of mouth is somehow working ok for them. Maybe a more aggressive sales/marketing team could take it to the next level. But I don’t know if they want it that much and if they did it might bring another level of competition to their market.
~~~~
Anybody out there a current EtherDrive user? If so what do you think about their storage??
Read two articles recently about how OpenFlow's Software Defined Networking is going to take over the networking world, just like VMware and its brethren have taken over the server world.
Essentially, OpenFlow is a network protocol that separates the control management of a networking switch or router (the control plane) from its data path activities (the data plane). For most current switches, control management consists of vendor-supplied, special-purpose software, which differs for each and every vendor and sometimes even varies across a vendor's product lines.
In contrast, data path activities are fairly similar for most of today's switches and are generally implemented in custom hardware so as to be lightning fast.
However, the main problem with today's routers and switches is that there is no standard way to talk to, or even modify, the control management software in order to change its data plane activities.
OpenFlow to the rescue
OpenFlow changes all that. First, it specifies a protocol or interface between a switch's control plane and its data plane. This allows the control plane to run on any server and still manage a router's or switch's data path activities. By doing this, OpenFlow provides Software Defined Networking (SDN).
Once OpenFlow switches and control software are in place, the SDN can better control and manage networking activity to optimize for performance, utilization or any other number of parameters.
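As a toy illustration of that split (not any particular controller's or switch's actual API), an OpenFlow-style flow table boils down to match-and-action entries that a control program, running on an ordinary server, pushes down to its switches:

```python
# Toy model of the control-plane / data-plane split -- illustrative only,
# not a real OpenFlow controller API. The controller computes match->action
# rules; the switch data plane just does fast lookups against them.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict        # e.g. {"in_port": 1, "ip_dst": "10.0.0.5"}
    actions: list      # e.g. ["output:3"]
    priority: int = 100

class Controller:
    def __init__(self):
        self.flow_tables = {}                # switch id -> list of flow entries

    def install(self, switch_id, entry):
        # In real OpenFlow this would be a flow-mod message sent to the switch.
        self.flow_tables.setdefault(switch_id, []).append(entry)

ctl = Controller()
ctl.install(1, FlowEntry(match={"in_port": 1, "ip_dst": "10.0.0.5"},
                         actions=["output:3"]))
print(ctl.flow_tables[1])
```

Because the rules live in software on the controller, they can be recomputed for performance, utilization or whatever else matters and re-pushed without touching the switch hardware.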
Products are starting to come out that support the OpenFlow protocol. For example, a new OpenFlow-compatible Ethernet switch is available from IBM (their RackSwitch G8264 & G8264T), and HP has recently released OpenFlow software for their Ethernet switches (see OpenFlow blog post). At least some in the industry are starting to see the light.
Google implements OpenFlow
The surprising thing is that one article I read recently is about Google running an OpenFlow network on its data center backbone (see Wired's Google goes with the Flow article). The article discusses how a top Google scientist talked at the Open Networking Summit yesterday about how they implemented OpenFlow for their internal network architecture.
Google's internal network connects its multiple data centers together to provide Google Apps and other web services. Apparently, Google has been secretly creating/buying OpenFlow networking equipment and creating its own OpenFlow software. This new SDN they have constructed has given them the ability to change their internal network backbone in minutes, something that would have taken days, weeks or even months before. Also, OpenFlow has given Google the ability to simulate network changes ahead of time, allowing them to see what potential changes will do for them.
One key metric is that Google now runs their backbone network close to 100% utilized at all times, whereas before they worked hard to get it to 30-40% utilization.
Nicira revolutionizes networking
The other article I read was about a startup called Nicira out of Palo Alto, CA, which is taking OpenFlow to the next level by defining a Network Virtualization Platform (NVP) and Open vSwitches (OVS).
An NVP is a network virtualization platform controller consisting of a cluster of x86 servers running the network virtualization control software; it provides a RESTful web services API and defines/manages virtual networks.
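Just to make the "RESTful web services API" part concrete, here's a hypothetical sketch of what driving such a controller might look like; the endpoint, payload fields and hostname are invented for illustration and are not Nicira's actual NVP API:

```python
# Hypothetical sketch -- endpoint names, payload fields and host are made up,
# not Nicira's published NVP API. It only shows the general REST pattern.
import json
import urllib.request

CONTROLLER = "https://nvp-controller.example.com/api/v1"   # hypothetical cluster VIP

def create_virtual_network(name, subnet):
    payload = json.dumps({"name": name, "subnet": subnet}).encode()
    req = urllib.request.Request(f"{CONTROLLER}/virtual-networks",
                                 data=payload,
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    with urllib.request.urlopen(req) as resp:        # returns the created network
        return json.load(resp)

# e.g. create_virtual_network("tenant-42-web", "172.16.0.0/24")
```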
An OVS is Open vSwitch software designed for remote control, which either runs as a complete software-only service in various hypervisors or as gateway software connecting VLANs running on proprietary vendor hardware to the SDN.
OVS gateway services can be used with current-generation switches/routers or with high-performing, simple L3 switches specifically designed for OpenFlow management.
Nonetheless, NVP and OVS deployed over your networking hardware remove many of the limitations inherent in current networking services. For example, Nicira network virtualization allows the movement of application workloads across subnets while maintaining L2 adjacency, provides scalable multi-tenant isolation and offers the ability to repurpose physical infrastructure on demand.
By virtualizing the network, the network switching/routing hardware becomes a pool of IP-switching services, available to be repurposed and/or reprogrammed at a moment's notice, not unlike what VMware did with servers through virtualization.
Customers for Nicira include eBay, RackSpace and AT&T to name just a few. It seems that networking virtualization is especially valuable to big web services and cloud services companies.
~~~~
Virtualization takes on another industry, this time networking and changes it forever.
We really need something like OpenFlow for storage: taking storage administration out of the vendors' hands and placing it elsewhere, and defining an open storage management protocol that all storage vendors would honor.
The main problem with storage virtualization today is that it's kind of like VLANs: all vendor-specific. Without something like a standard protocol that prescribes a storage management plane's capabilities and a storage data plane's capabilities, we cannot really have storage virtualization.