Storage throughput vs. IO response time and why it matters

[Image: Fighter Jets at CNE by lifecreation (cc) (from Flickr)]

Lost in much of the discussion on storage system performance is the need for both throughput and response time measurements.

  • By IO throughput I generally mean data transfer speed in megabytes per second (MB/s or MBPS), although another common definition of throughput is IO operations per second (IO/s or IOPS).  I prefer the MB/s designation for storage system throughput because it's complementary to response time, whereas IO/s is often confounded with response time (see the sketch just after this list).  Nevertheless, both metrics qualify as storage system throughput.
  • By IO response time I mean the time it takes a storage system to perform an IO operation from start to finish, usually measured in milliseconds, although lately some subsystems have dropped below the 1 msec. threshold.  (See last year's post on SPC LRT results for information on some top response time results.)
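
To make the two throughput metrics concrete, the relationship between them is just arithmetic: MB/s equals IO/s times the average transfer size per IO. Here's a minimal sketch (Python, with invented numbers purely for illustration) showing how the same system can post wildly different IO/s figures depending on transfer size:

```python
# Relationship between the two throughput metrics:
#   MB/s = IO/s * average transfer size per IO
# The same storage system can post very different IO/s numbers
# depending on the transfer size the workload uses.

def mb_per_sec(iops: float, xfer_kb: float) -> float:
    """Convert an IO/s rate at a given transfer size (KB) to MB/s."""
    return iops * xfer_kb / 1024.0

# Hypothetical numbers, purely for illustration:
print(mb_per_sec(100_000, 4))   # 100K IO/s at 4KB   -> ~391 MB/s
print(mb_per_sec(2_000, 256))   # 2K IO/s at 256KB   -> 500 MB/s
```

Small random IOs produce impressive IO/s numbers at modest MB/s, while large sequential transfers do the reverse, which is why the two metrics need to be read together.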

Benchmark measurements of response time and throughput

Both the Standard Performance Evaluation Corporation's SPECsfs2008 and the Storage Performance Council's SPC-1 provide response time measurements, although they measure substantially different quantities.  The problem with SPECsfs2008's ORT (overall response time) metric is that it's calculated as a mean across the whole benchmark run rather than as a strict measurement of least response time at low file request rates.  I believe any response time metric should measure the minimum response time achievable from a storage system, although I can understand SPECsfs2008's point of view.

On the other hand, SPC-1's measurement of LRT (least response time) is just what I would like to see in a response time measurement: SPC-1 reports the time it takes to complete an IO operation at very low request rates.
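
To make the ORT/LRT distinction concrete, imagine the raw data behind either metric as a series of (request rate, response time) load points from a benchmark ramp. The sketch below uses hypothetical load points and simplified math (the real SPECsfs2008 calculation weights by delivered operations; a plain mean is shown here to match the description above):

```python
# Hypothetical (request_rate_ops, avg_response_time_ms) load points
# from a benchmark run, ramping from light to heavy load.
load_points = [
    (1_000,  1.2),   # light load
    (20_000, 1.8),
    (40_000, 2.9),
    (60_000, 5.6),   # near saturation
]

# SPECsfs2008-style ORT: a mean across the whole run
# (simplified; the real benchmark weights by operations).
ort = sum(rt for _, rt in load_points) / len(load_points)

# SPC-1-style LRT: response time at the lowest request rate.
lrt = min(load_points)[1]

print(f"ORT ~ {ort:.2f} ms, LRT ~ {lrt:.2f} ms")
```

Under load the mean gets dragged up by near-saturation points, which is why an ORT-style number can look much worse than an LRT-style one for the very same subsystem.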

As for throughput, SPECsfs2008's measurement again leaves something to be desired, as it's strictly a measurement of NFS or CIFS operations per second.  This includes a substantial number (>40%) of non-data-transfer requests as well as data transfers, and so confounds any measurement of how much data can be transferred per second.  But from their perspective a file system needs to do more than just read and write data, which is why they mix these other requests into their measurement of NAS throughput.

The Storage Performance Council's SPC-1 reports throughput results as IOPS and provides no direct measure of MB/s; for that one must look to SPC-2 benchmark results.  SPC-2 reports a direct measure of MBPS, an average across three data-intensive workloads: large file access, video-on-demand, and a large database query workload.
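
Taking the description above at face value, the composite SPC-2 MBPS works out as a simple average of the three workloads' data rates. A quick sketch, with invented numbers purely for illustration:

```python
# SPC-2 reports a data rate for each of its three workloads; per the
# text above, the headline SPC-2 MBPS averages the three.
# All numbers below are invented, purely for illustration.
workload_mbps = {
    "large_file_processing": 1_800.0,
    "video_on_demand":       1_500.0,
    "large_database_query":  2_100.0,
}

spc2_mbps = sum(workload_mbps.values()) / len(workload_mbps)
print(f"Composite SPC-2 MBPS ~ {spc2_mbps:.0f}")
```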

Why response time and throughput matter

Historically, we used to say that OLTP (online transaction processing) performance was entirely dependent on response time – the better the storage system's response time, the better your OLTP systems performed.  Nowadays it's a bit more complex, as some of today's database queries can depend as much on sequential database transfers (i.e., throughput) as on individual IO response time.  Nonetheless, I feel there is still a large population of response-time-critical workloads out there that perform much better with shorter response times.

On the other hand, high throughput has its growing gaggle of adherents as well.  When it comes to highly sequential data transfer workloads such as data warehouse queries, video or audio editing/download, or large file data transfers, throughput as measured by MB/s reigns supreme – higher MB/s leads to much faster workloads.

The only question that remains is who needs higher throughput as measured by IO/s rather than MB/s.  I would contend that mixed workloads, containing random as well as sequential IOs and typically smaller data transfers, can benefit from high-IO/s storage systems.  The only confounding matter is that these workloads obviously benefit from better response times as well.  That's why throughput as measured by IO/s is a much more difficult number to interpret than any pure MB/s number.
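
One way to see the confounding is through Little's Law, which ties IO/s, response time, and the number of outstanding IOs together: throughput equals concurrency divided by response time. A quick sketch:

```python
# Little's Law: throughput = concurrency / response_time
# A workload with a fixed number of outstanding IOs gets more IO/s
# only if response time drops (or concurrency rises).

def achievable_iops(outstanding_ios: int, resp_time_ms: float) -> float:
    """IO/s a workload can sustain at a given queue depth and latency."""
    return outstanding_ios / (resp_time_ms / 1000.0)

print(achievable_iops(32, 4.0))   # 32 IOs at 4ms   ->  8,000 IO/s
print(achievable_iops(32, 0.5))   # 32 IOs at 0.5ms -> 64,000 IO/s
```

So a big IO/s number may reflect short response times, or just a benchmark driving enormous parallelism; the metric alone can't tell you which.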

----

Now there is a contingent of performance gurus today who believe that IO response times no longer matter.  In fact, if one looks at SPC-1 results, it takes some effort to find the LRT measurement; it's not included in the summary report.

Also, in the post mentioned above there appears to be a definite bifurcation of storage subsystems with respect to response time: some subsystems are focused on response time while others are not.  I would have liked to see more of the top enterprise storage subsystems represented among the top LRT subsystems, but alas, they are missing.

[Image: 1954 French Grand Prix - Those Were The Days by Nigel Smuckatelli (cc) (from Flickr)]

Call me old fashioned, but I feel that response time represents an important performance measure, orthogonal to throughput, for any storage subsystem and as such should be much more widely disseminated than it is today.

For example, there is a substantive difference between a fighter jet's or race car's top speed and its maneuverability.  I would compare top speed to storage throughput and maneuverability to IO response time.  Perhaps this doesn't matter as much for a jet liner or family car, but it can matter a lot in the right domain.

Now, do you want your storage subsystem to be a jet fighter or a jet liner?  You decide.





Latest SPECsfs2008 results – chart of the month

[Chart: Top 10 SPEC(R) sfs2008 NFS throughput results as of 25Sep2009]

The adjacent chart is from our September newsletter and shows the top 10 NFSv3 throughput results from the latest SPEC(R) sfs2008 benchmark runs published as of 25 September 2009.

There have been a number of announcements of newer SPECsfs2008 results in the news of late, namely Symantec's FileStore and Avere Systems releases, but those results are not covered here. In this chart, the winner is the NetApp FAS6080 with FCAL disks behind it, clocking in at 120K NFSv3 operations/second. This was accomplished with 324 disk drives using two 10GbE links.

PAM comes out

All that's interesting, of course, but what is even more interesting is NetApp's results with their PAM II (Performance Acceleration Module) cards. The number 3, 4 and 5 results were all from the same system (FAS3160) with different configurations of disks and PAM II cards. Specifically,

  • The #3 result had a FAS3160 running 56 FCAL disks with PAM II cards, for 532GB of combined PAM II and DRAM cache. The system attained 60.5K NFSv3 operations per second.
  • The #4 result had a FAS3160 running 224 FCAL disks with no PAM II cards and only 20GB of DRAM cache. This system attained 60.4K NFSv3 ops/second.
  • The #5 result had a FAS3160 running 96 SATA disks with PAM II cards, again with 532GB of combined cache. This system also attained 60.4K NFSv3 ops/second.

Similar results can be seen with the FAS3140 systems at #8, 9 and 10. In this case the FAS3140 systems used PAM I (non-NAND) cards with 41GB of cache for results #9 and #10, while the #8 result had no PAM and only 9GB of cache. The #8 result used 224 FCAL disks, #9 used 112 FCAL disks, and #10 used 112 SATA disks. They achieved 40.1K, 40.1K and 40.0K NFSv3 ops/second respectively.

I don't know how much PAM II cards cost versus FCAL or SATA disks, but there is an obvious trade-off here: you can use fewer FCAL or cheaper SATA disks and attain the same NFSv3 ops/second performance.
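
One way to frame that trade-off is as a simple cost comparison using the drive counts from the FAS3160 results above. Again, I don't have actual pricing for PAM II cards or the drives, so every dollar figure below is a placeholder:

```python
# Drive counts from the FAS3160 results above; every configuration
# achieved roughly the same ~60K NFSv3 ops/second.
# All prices are placeholders, NOT actual NetApp list prices.
FCAL_PRICE, SATA_PRICE, PAM_II_PRICE = 800, 300, 15_000  # hypothetical $

configs = {
    "#3: 56 FCAL + PAM II":  56 * FCAL_PRICE + 2 * PAM_II_PRICE,
    "#4: 224 FCAL, no PAM": 224 * FCAL_PRICE,
    "#5: 96 SATA + PAM II":  96 * SATA_PRICE + 2 * PAM_II_PRICE,
}  # two cards assumed, consistent with the 532GB cache figure above

for name, cost in sorted(configs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,}")
```

Plug in real prices and the break-even point between more spindles and more cache falls out directly.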

As I understand it, PAM II cards come in 256GB configurations and you can have one or two cards in a FAS system configuration. PAM cards act as an extension of FAS system cache, and all IO workloads can benefit from their performance.

As with all NAND flash, write access is significantly slower than read, and NAND chip reliability has to be actively managed through wear leveling and other mechanisms to create a reliable storage environment. We assume NetApp has either implemented the appropriate logic to support reliable NAND storage or purchased NAND cache with that logic already onboard. In any case, NAND reliability concerns center on write activity more than read, and by managing the PAM cache to minimize writes, those concerns could largely be avoided.
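
One plausible way to manage a NAND cache so as to minimize writes is to run it as a read-only (write-around) cache: reads get cached, writes go straight to disk and merely invalidate any stale cached copy. I don't know that this is what NetApp actually does with PAM II; the sketch below is just one such policy:

```python
# A minimal read-only (write-around) cache policy sketch: reads are
# cached in NAND, writes go straight to disk and only invalidate any
# stale cached copy, so host writes never land in NAND.
from collections import OrderedDict

class ReadOnlyNandCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block -> data, kept in LRU order

    def read(self, block, backend_read):
        if block in self.cache:             # hit: serve from NAND
            self.cache.move_to_end(block)
            return self.cache[block]
        data = backend_read(block)          # miss: fill from disk
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict LRU block
        return data

    def write(self, block, data, backend_write):
        self.cache.pop(block, None)         # invalidate, don't cache
        backend_write(block, data)          # write-around to disk
```

Under such a policy the only NAND writes are read-miss fills, which is exactly the write-minimizing behavior the paragraph above describes.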

The full report on the latest SPECsfs2008 results will be up on my website later this week but if you want to get this information earlier and receive your own copy of our newsletter – email me at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.

Full disclosure: I currently have a contract with NetApp on another facet of their storage but it is not on PAM or NFSv3 performance.