One can only be perplexed by the seemingly overwhelming adoption of server virtualization when contrasted with the ho-hum, almost underwhelming adoption of storage virtualization. Why is there such a significant difference?
I think the problem is partly due to the lack of a common understanding of storage performance utilization.
Why server virtualization succeeded
One significant driver of server virtualization was the precipitous drop in server utilization that occurred over the last decade when running a single application on a physical server. It was common to see real processor utilization of less than 10%, so it was easy to envision running 5-10 applications on a single server. What's more, each new generation of server kept getting more powerful, roughly doubling in MIPS every 18 months or so, driven by Moore's law.
The other factor was that application workloads weren't increasing that much. Yes, new applications would come online, but they seldom consumed an inordinate amount of MIPS and were often similar to what was already present. So application processing demand, while not flatlining, was growing relatively slowly.
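The consolidation arithmetic behind this can be sketched with a few hypothetical numbers; the ~10% utilization figure is from the discussion above, while the per-server capacity and headroom values are assumptions for illustration:

```python
# Rough server-consolidation math. Only the ~10% per-app utilization
# figure comes from the text; the rest are illustrative assumptions.
server_capacity = 100   # arbitrary capacity units per physical server
per_app_load = 10       # ~10% utilization per single-app server
headroom = 0.8          # keep 20% headroom after consolidation (assumption)

# How many such applications fit on one virtualized server:
apps_per_server = int(server_capacity * headroom // per_app_load)
print(apps_per_server)  # 8 -- squarely in the 5-10 range
```

With each server generation doubling in power, that consolidation ratio only improves, which is what made the economics of server virtualization so compelling.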
Why storage virtualization has failed
Data, on the other hand, continues its never-ending exponential growth, doubling every 3-5 years or less. And more data almost always requires more storage hardware to support the IOPS needed to access it.
In the past, storage IOP rates were intrinsically tied to the number of disk heads available to service the load. Although disk performance grew, it wasn't doubling every 18 months, and effective per-disk performance, measured as IOPS per GB, was actually declining over time.
This drove a proliferation of disk spindles, and with them storage subsystems, in the data center. Storage virtualization couldn't reduce the number of spindles required to support the workload.
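A minimal sketch of why spindle counts proliferated, using illustrative drive figures (the specific capacity and IOPS numbers are assumptions roughly in line with historical disk trends, not measured data):

```python
# Per-disk capacity grew far faster than per-disk IOPS, so IOPS/GB fell,
# and spindle counts were driven by performance, not capacity.
# All figures below are illustrative assumptions.
old_disk = {"capacity_gb": 36, "iops": 120}    # an older-generation drive
new_disk = {"capacity_gb": 600, "iops": 180}   # a later-generation drive

for d in (old_disk, new_disk):
    d["iops_per_gb"] = d["iops"] / d["capacity_gb"]

print(round(old_disk["iops_per_gb"], 2))  # 3.33 IOPS/GB
print(round(new_disk["iops_per_gb"], 2))  # 0.3 IOPS/GB -- a 10x drop

# So a hypothetical 50,000-IOPS workload still needs many spindles,
# regardless of how little capacity it consumes:
workload_iops = 50_000
spindles_needed = -(-workload_iops // new_disk["iops"])  # ceiling division
print(spindles_needed)  # 278 spindles
```

Since virtualization can pool capacity but cannot conjure up head movements, it couldn't shrink that spindle count, which is why it offered so little leverage in the disk era.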
Thus, if you look at storage performance utilization, the percentage of available per-disk IOPS actually being consumed, most sophisticated systems were running anywhere from 75% to 150% (exceeding 100% thanks to DRAM caching).
Paradigm shift ahead
But SSDs can change this dynamic considerably. A typical SSD can sustain 10-100K IOPS, and there is some likelihood that this will increase with each new generation, while application requirements will not grow as fast. Hence, there is a high likelihood that normal data center utilization of SSD storage performance will start to drop below 50%. When that happens, storage virtualization may start to make a lot more sense.
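The same arithmetic, rerun with SSDs, shows how utilization collapses; the device IOPS figure is taken from the 10-100K range cited above, while the workload and device count are illustrative assumptions:

```python
# With SSDs, per-device IOPS jumps by orders of magnitude while workloads
# grow slowly, so utilization of storage performance plummets.
# Workload and device count are illustrative assumptions.
workload_iops = 50_000
ssd_iops = 50_000   # one mid-range SSD, within the 10-100K range cited

# Even a small array bought for capacity and redundancy overshoots badly:
ssd_count = 4
utilization = workload_iops / (ssd_count * ssd_iops)
print(f"{utilization:.0%}")  # 25% -- well under the 50% threshold
```

Idle IOPS headroom like this is exactly what virtualization is good at pooling and sharing across workloads, which is the paradigm shift being argued for here.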
Maybe when (SSD) data storage starts moving more in line with Moore’s law, storage virtualization will become a more dominant paradigm for data center storage use.
Any bets on who the VMware of storage virtualization will be?