Storage utilization is broke

Storage virtualization is a great concept, but it has some serious limitations. One that comes to mind is the way we measure storage utilization.

Just look at what made VMware so successful.  As far as I can see, it was mainly the fact that x86 servers were vastly underutilized, sometimes achieving only single-digit utilization with normal applications.  This meant you could potentially run 10 or more of these workloads on a single server.  Then dual-core, quad-core, and eight-core processor chips started showing up, which just made non-virtualized systems seem even more of a waste.

We need a better way to measure storage utilization in order to show customers one of the ways storage virtualization can help.

Although the storage industry talks a lot about storage utilization, it really means capacity utilization.  There is no sense, no measurement, no idea of what the performance utilization of a storage system is.

There is one storage startup, Nexgen (a hybrid SSD-disk storage system), that is looking at performance utilization, but from the standpoint of partitioning system performance across different applications, not as a better way to measure system utilization.

Historical problems with storage performance utilization

I think one problem may be that it’s much harder to measure storage performance utilization.  With a server processor it’s relatively easy to measure idle time: one just needs some place in the O/S to start clocking idle time whenever the server has nothing else to do.

But it’s not so easy in storage systems. Yes, there are still plenty of idle loops, but they can be used to wait until a device delivers or accepts some data.  In that case the storage system is not “technically” idle.  On the other hand, when a storage system is actually waiting for work, that is “true” idle time.

Potential performance utilization metrics

From a storage performance utilization perspective, I see at least three different metrics:

  • Idle IO time – this is probably the closest to what standard server utilization looks like.  It could be accumulated during intervals when no IO is active on the system. Its complement, Busy IO time, would be accumulated every time IO activity is present in the storage (from the storage server[s] perspective).  The sad fact is that plenty of storage systems measure something akin to Idle IO time, but seldom report on it in any sophisticated manner (see the sketch after this list).
  • Idle IOP time – this could be some theoretical IOPS rate the system could achieve in its present configuration, and anytime it was below that level it would accumulate Idle IOP time. It doesn’t have to be 100% of its rated IOPS performance; it could be targeted at 75%, 50% or even 25% of its configuration-dependent theoretical maximum. But whenever IOPS dropped below this rate, the system would start counting Idle IOP time.  Its complement, Busy IOP time, would be counted anytime the system exceeded that targeted IOPS rate.
  • Idle Throughput time – this could be some theoretical data transfer rate the system, in its current configuration, was capable of sustaining, and anytime it was less than this rate it would accumulate Idle Throughput time.  Again, this doesn’t have to be the maximum throughput for the storage system, but it needs to be some representative sustainable level. Its counterpart, Busy Throughput time, would be accumulated anytime the system reached the targeted throughput level.
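
For illustration, here is a minimal sketch of how Idle IO time and Busy IO time might be accumulated from periodic activity samples. The 1-second polling interval and the sample format are assumptions for the sake of the example, not any vendor's actual instrumentation.

```python
# Hypothetical sketch: accumulate Idle IO time vs. Busy IO time from
# periodic samples of how many IOs are in flight. The polling interval
# and sample format are assumptions, not real controller internals.

SAMPLE_SECS = 1.0  # assumed polling interval

def accumulate_io_time(samples):
    """samples: iterable of IO-in-flight counts, one per poll.
    Returns (idle_io_secs, busy_io_secs)."""
    idle = busy = 0.0
    for ios_in_flight in samples:
        if ios_in_flight == 0:
            idle += SAMPLE_SECS   # no IO active anywhere in the system
        else:
            busy += SAMPLE_SECS   # at least one IO in progress
    return idle, busy

# Example: 10 polls, IO active in only 3 of them
idle, busy = accumulate_io_time([0, 0, 2, 5, 0, 0, 0, 1, 0, 0])
print(f"Idle IO time: {idle}s, Busy IO time: {busy}s")  # 7.0s / 3.0s
```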

Either of the last two measures could be a continuous value rather than some absolute quantity. For example, if the targeted IOPS rate was 100K and the system achieved 50K IOPS for some time interval, then the Idle IOPS time would be the time interval times 50% ((100K targeted minus 50K achieved)/100K targeted).

To calculate storage performance utilization, one would take the Idle IO, IOPS and/or Throughput time over a wall-clock interval (say 15 minutes) and average this across multiple time periods.  Storage systems could chart these values to show end users the periodicity of their activity.
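
To make the arithmetic concrete, here is a minimal sketch of the continuous Idle IOPS calculation and the resulting utilization figure, assuming a hypothetical 100K-IOPS target and made-up per-interval counters; a real system would pull these numbers from its own instrumentation.

```python
# Sketch of the proposed Idle IOPS calculation over 15-minute wall-clock
# intervals. The target rate and the sample data are hypothetical.

TARGET_IOPS = 100_000      # configuration-dependent target (25-100% of max)
INTERVAL_SECS = 15 * 60    # 15-minute wall-clock interval

def idle_iops_time(achieved_iops):
    """Idle IOPS time for one interval: interval length times the
    fraction of the targeted rate that went unused."""
    unused_fraction = max(0.0, 1.0 - achieved_iops / TARGET_IOPS)
    return INTERVAL_SECS * unused_fraction

def iops_utilization(per_interval_iops):
    """Average Busy IOPS fraction across several 15-minute intervals."""
    idle = sum(idle_iops_time(iops) for iops in per_interval_iops)
    total = INTERVAL_SECS * len(per_interval_iops)
    return 1.0 - idle / total

# Example: four intervals, all running below the 100K target
samples = [50_000, 5_000, 12_000, 80_000]
print(f"IOPS utilization: {iops_utilization(samples):.0%}")  # ~37%
```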

This way, on a 15-minute basis, we could understand the busyness of a storage system.  And if we found that the storage was running at 5% IOPS or Throughput utilization most of the time, then implementing storage virtualization on that storage system would make a lot of sense.

Problems with proposed metrics

One problem with the foregoing is that IOPS and throughput rates vary tremendously depending on storage system configuration as well as the type of workload the system is encountering; e.g., 256KB blocks vs. 512-byte blocks can have a significant bearing on the IOPS and throughput rates attainable by any storage system.
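
As a rough back-of-the-envelope illustration (the 1 GB/s sustained bandwidth figure below is purely an assumption), the IOPS ceiling implied by 512-byte versus 256KB transfers differs by orders of magnitude, which is why a single targeted IOPS rate is hard to pick:

```python
# Back-of-the-envelope illustration of how block size bounds attainable
# IOPS for the same sustained bandwidth. The 1 GB/s figure is an
# arbitrary assumption for illustration only.

SUSTAINED_BANDWIDTH = 1_000_000_000  # bytes/sec, assumed

for block_size in (512, 256 * 1024):
    max_iops = SUSTAINED_BANDWIDTH / block_size
    print(f"{block_size:>7} B blocks -> at most ~{max_iops:,.0f} IOPS")
# 512 B blocks  -> at most ~1,953,125 IOPS
# 256KB blocks  -> at most ~3,815 IOPS
```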

There are solutions to these issues, but they all require more work in development, testing, and performance modeling.   This may argue for the simpler Idle IO time metric, but I prefer the other measures as they provide more accurate and continuous data.

~~~~

I believe metrics such as the above would be a great start to supplying the information that IT staff need to understand how storage virtualization would be beneficial to an organization. 

There are other problems with today's storage virtualization capabilities, but those must be subjects for future posts.

Image: Biblioteca José Vasconcelos / Vasconcelos Library by * CliNKer *

2 thoughts on “Storage utilization is broke”

  1. Ray, great post that highlights the need for strong storage performance measurement tools. From NetApp's perspective, there are many ways to analyze the performance of our storage arrays. The easiest is probably the Performance Advisor built into our AutoSupport feature and available to all users. There are 2 useful graphs in this tool:

    1) System Utilization: this represents the % of usable CPU currently being used on the array. This graph is based on hourly data and considers both CPU utilization and individual domain utilizations, and reports the overall average system utilization for each hourly period.

    2) Disk utilization: indicates how busy the disk subsystem is. Values consistently over 80% indicate potential disk or loop bottlenecks which may be resolved by adding more disks or loops.

    The Performance Advisor also graphs raw throughput by protocol (SAN and NAS) and displays average read and write latencies hour-by-hour. Click the link below for more information and screenshots; the article is a bit dated but still has some good info.
    http://partners.netapp.com/go/techontap/matl/moni

    Larry

    1. Larry, thanks for your comment and link. I wasn't aware of the Performance Advisor, and System Utilization sounds like one of the metrics I was looking for in storage system performance. Ray
