
Lost in much of the discussion of storage system performance is the need for both throughput and response time measurements.
- By IO throughput I generally mean data transfer speed in megabytes per second (MB/s or MBPS), though another common definition of throughput is IO operations per second (IO/s or IOPS). I prefer the MB/s designation for storage system throughput because it is cleanly complementary to response time, whereas IO/s can often be confounded with response time. Nevertheless, both metrics qualify as storage system throughput (see the sketch just after this list for how the two relate).
- By IO response time I mean the time it takes a storage system to perform an IO operation from start to finish, usually measured in milliseconds, although lately some subsystems have dropped below the 1 msec threshold. (See my post from last year on SPC LRT results for information on some of the top response time results.)
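To make the relationship between the two throughput metrics concrete, here's a minimal sketch in Python (the IO rates and transfer sizes are hypothetical, not drawn from any benchmark result): MB/s is simply IO/s multiplied by the average transfer size, which is why the two metrics can tell very different stories about the same system.

```python
def mbps(iops, avg_xfer_kb):
    """Convert an IO rate and average transfer size into MB/s."""
    return iops * avg_xfer_kb / 1024.0  # KB per second -> MB per second

# Hypothetical examples: a small-block, IOPS-heavy workload versus a
# large-block sequential one. Same "throughput" label, very different MB/s.
print(mbps(iops=50_000, avg_xfer_kb=8))    # 50K 8KB IOs ~= 390 MB/s
print(mbps(iops=2_000, avg_xfer_kb=1024))  # 2K 1MB IOs ~= 2000 MB/s
```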
Benchmark measurements of response time and throughput
Both the Standard Performance Evaluation Corporation's SPECsfs2008 and the Storage Performance Council's SPC-1 provide response time measurements, although they measure substantially different quantities. The problem with SPECsfs2008's measurement of ORT (overall response time) is that it's calculated as a mean across the whole benchmark run rather than as a strict measurement of least response time at low file request rates. I believe any response time metric should measure the minimum response time achievable from a storage system, although I can understand SPECsfs2008's point of view.
On the other hand, SPC-1's measurement of LRT (least response time) is just what I would like to see in a response time measurement: it reports the time it takes to complete an IO operation at very low request rates.
As for throughput, once again SPECsfs2008's measurement leaves something to be desired, as it's strictly a measurement of NFS or CIFS operations per second. This includes a large number (>40%) of non-data-transfer requests as well as data transfers, which confounds any measurement of how much data can be transferred per second. But from their perspective a file system needs to do more than just read and write data, which is why they mix these other requests into their measurement of NAS throughput.
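As a rough illustration of why an operations-per-second figure can hide the data transfer rate, here's a hedged sketch with made-up numbers (the op mix and transfer sizes below are hypothetical and not SPECsfs2008's actual workload definition): only the fraction of ops that actually read or write data moves bytes, so two systems with identical ops/sec can deliver quite different MB/s.

```python
def effective_mbps(ops_per_sec, data_op_fraction, avg_xfer_kb):
    """Estimate MB/s when only some operations transfer data.

    ops_per_sec      -- benchmark-reported operations per second
    data_op_fraction -- share of ops that actually read or write data
    avg_xfer_kb      -- average transfer size of those data ops, in KB
    """
    return ops_per_sec * data_op_fraction * avg_xfer_kb / 1024.0

# Hypothetical: the same 100K ops/sec with a data-heavy mix at 16KB per
# data op versus a metadata-heavy mix at 4KB per data op.
print(effective_mbps(100_000, 0.60, 16))  # ~937 MB/s
print(effective_mbps(100_000, 0.40, 4))   # ~156 MB/s
```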
The Storage Performance Council's SPC-1 reports throughput results as IOPS and provides no direct measure of MB/s; for that one has to look to SPC-2 benchmark results. SPC-2 reports a direct measure of MBPS, which is an average across three data-intensive workloads: large file access, video-on-demand, and a large database query workload.
Why response time and throughput matter
Historically, we used to say that OLTP (online transaction processing) performance was entirely dependent on response time: the better a storage system's response time, the better your OLTP systems performed. Nowadays it's a bit more complex, as some of today's database queries can depend as much on sequential database transfers (i.e., throughput) as on individual IO response time. Nonetheless, I feel there is still a large set of response-time-critical workloads out there that perform much better with shorter response times.
On the other hand, high throughput has its growing gaggle of adherents as well. When it comes to high sequential data transfer workloads such as data warehouse queries, video or audio editing/download or large file data transfers, throughput as measured by MB/s reigns supreme – higher MB/s can lead to much faster workloads.
The only question that remains is who needs higher throughput as measured by IO/s rather than MB/s. I would contend that mixed workloads, which contain random as well as sequential IOs and typically smaller data transfers, can benefit from high-IO/s storage systems. The confounding factor is that these workloads obviously benefit from better response times as well. That's why throughput measured in IO/s is a much more difficult number to interpret than a pure MB/s figure.
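One way to see why IO/s and response time are so entangled is Little's Law: concurrency equals arrival rate times time in the system, or, for storage, outstanding IOs = IO/s × response time. The sketch below is my own illustration (not part of any benchmark specification); it shows that the same IO/s figure can come from a few outstanding IOs at short response times or from many outstanding IOs at long ones, so an IO/s number alone doesn't tell you which you're getting.

```python
def iops_from_littles_law(outstanding_ios, response_time_ms):
    """Little's Law for storage: IO/s = outstanding IOs / response time."""
    return outstanding_ios / (response_time_ms / 1000.0)

# Two hypothetical systems, both delivering 50K IO/s:
# one holds 50 IOs in flight at 1ms each, the other 500 IOs at 10ms each.
print(iops_from_littles_law(outstanding_ios=50, response_time_ms=1.0))    # 50000.0
print(iops_from_littles_law(outstanding_ios=500, response_time_ms=10.0))  # 50000.0
```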
----
Now there is a contingent of performance gurus today who believe that IO response times no longer matter. In fact, if one looks at SPC-1 results, it takes some effort to find the LRT measurement; it's not included in the summary report.
Also, in the post mentioned above there appears to be a definite bifurcation of storage subsystems with respect to response time, i.e., some subsystems are focused on response time while others are not. I would have liked to see more of the top enterprise storage subsystems represented among the top LRT performers, but alas, they are missing.

Call me old-fashioned, but I feel that response time represents a very important performance measure, orthogonal to throughput, for any storage subsystem and, as such, should be much more widely disseminated than it is today.
For example, there is a substantive difference between a fighter jet's or race car's top speed and its maneuverability. I would compare top speed to storage throughput and maneuverability to IO response time. Perhaps this doesn't matter as much for a jet liner or family car, but it can matter a lot in the right domain.
Now, do you want your storage subsystem to be a jet fighter or a jet liner? You decide.