Latest SPC-1 results – IOPS vs drive counts – chart-of-the-month

Scatter plot of SPC-1 IOPS against spindle count, with linear regression line showing Y = 186.18X + 10227 and R**2 = 0.96064
(SCISPC111122-004) (c) 2011 Silverton Consulting, All Rights Reserved

[As promised, I am trying to get up-to-date on my performance charts from our monthly newsletters. This one brings us up to date through November.]

The above chart plots Storage Performance Council SPC-1 IOPS against spindle count.  On this chart, we have eliminated any SSD systems, systems with drives smaller than 140 GB and any systems with multiple drive sizes.

Alas, the coefficient of determination (R**2) of 0.96 tells us that SPC-1 IOPS performance is mainly driven by drive count.  But what’s more interesting here is that as drive counts climb above, say, 1000, the variance surrounding the linear regression line widens – implying that system sophistication starts to matter more.
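For readers who want to reproduce this kind of analysis, below is a minimal sketch of how the regression line and R**2 could be computed with Python/NumPy. The (spindle count, IOPS) pairs are hypothetical placeholders, not actual SPC-1 submissions; in practice they would be pulled from the full disclosure reports after applying the SSD, small-drive and mixed-drive-size filters described above.

```python
import numpy as np

# Hypothetical (spindle count, SPC-1 IOPS) pairs -- placeholders only.
# Real values would come from SPC-1 full disclosure reports after filtering
# out SSD systems, drives under 140 GB and mixed-drive-size configurations.
spindles = np.array([160, 320, 480, 960, 1152, 1280, 1536, 1920])
iops     = np.array([40000, 72000, 99000, 191000, 248000, 252000, 245000, 380000])

# Least-squares fit: IOPS = slope * spindles + intercept
slope, intercept = np.polyfit(spindles, iops, 1)

# Coefficient of determination (R**2)
predicted = slope * spindles + intercept
ss_res = np.sum((iops - predicted) ** 2)
ss_tot = np.sum((iops - iops.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"IOPS = {slope:.2f} * spindles + {intercept:.0f}, R**2 = {r_squared:.4f}")

# Residual spread below vs. above 1000 spindles -- a wider spread at high
# drive counts is what suggests system sophistication starts to matter more.
residuals = iops - predicted
print("residual std dev, <=1000 drives:", residuals[spindles <= 1000].std())
print("residual std dev,  >1000 drives:", residuals[spindles > 1000].std())
```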

Processing power matters

For instance, if you look at the three systems centered around 2000 drives, they are (from lowest to highest IOPS) a 4-node IBM SVC 5.1, a 6-node IBM SVC 5.1 and an 8-node HP 3PAR V800 storage system.  This tells us that the more processing power (nodes) you throw at an IOPS workload, given similar spindle counts, the more efficiently those spindles can be driven.

System sophistication can matter too

The other interesting facet on this chart comes from examining the three systems centered around 250K IOPS that span from ~1150 to ~1500 drives.

  • The 1156 drive system is the latest HDS VSP 8-VSD (virtual storage directors, or processing nodes) running with dynamically (thinly) provisioned volumes – which is the first and only SPC-1 submission using thin provisioning.
  • The 1280 drive system is a (now HP) 3PAR T800 8-node system.
  • The 1536 drive system is an IBM SVC 4.3 8-node storage system.

One would think that thin provisioning would degrade storage performance, and maybe it did, but without a non-dynamically provisioned HDS VSP benchmark to compare against, it’s hard to tell.  However, the fact that the HDS VSP performed as well as the other systems did with a much lower drive count seems to tell us that thin provisioning potentially uses hard drives more efficiently than fat provisioning, that the 8-VSD HDS VSP is more effective than an 8-node IBM SVC 4.3 or an 8-node (HP) 3PAR T800 system, or perhaps some combination of these.
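To put rough numbers on that efficiency argument, here is a back-of-the-envelope IOPS-per-drive comparison. It assumes each of the three systems hit roughly 250K SPC-1 IOPS (a figure read off the chart, not taken from the audited results), so treat the output as illustrative only.

```python
# Rough IOPS-per-drive for the three ~250K IOPS systems discussed above.
# 250,000 is an approximation from the chart, not the audited SPC-1 numbers.
approx_iops = 250_000
systems = {
    "HDS VSP, 8 VSDs, thin provisioned": 1156,
    "HP 3PAR T800, 8 nodes": 1280,
    "IBM SVC 4.3, 8 nodes": 1536,
}
for name, drive_count in systems.items():
    print(f"{name}: ~{approx_iops / drive_count:.0f} IOPS/drive")
```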

~~~~

The full SPC performance report went out to our newsletter subscribers last November.  [The one change to this chart from the full report is that the date in the chart’s title was wrong and has been corrected here].  A copy of the full report will be up on the dispatches page of our website sometime this month (if all goes well). However, you can get performance information now and subscribe to future newsletters to receive these reports even earlier by just sending us an email or using the signup form above right.

For a more extensive discussion of block or SAN storage performance covering SPC-1 & SPC-2 (top 30) and ESRP (top 20) results, please consider purchasing our recently updated SAN Storage Buying Guide available on our website.

As always, we welcome any suggestions on how to improve our analysis of SPC results or any of our other storage system performance discussions.

Comments?

18 thoughts on “Latest SPC-1 results – IOPS vs drive counts – chart-of-the-month”

  1. Great post on an interesting angle to SPC-1 benchmark results.

    SPC-1 was designed from the outset to "avoid" cache effects by uniformly and randomly distributing the IO workload across all of the storage made available to it – what is called the "ASU" or Application Storage Unit within the SPC-1 spec.

    On this point I wanted to add some color if I may to your statement on the VSP being "more effective" than systems with a lower drive count.

    First, I do work in the HP Storage Division (APJ) in a technology evangelist role, and closely follow SPC-1 submissions etc… Hence my comments below are from that perspective!

    [cont'd]

  2. [cont'd from above]

    If we compare the recent Hitachi VSP result [A00110] with the older (Sep 2008) 3PAR T800 result [A00069] – the stand-out difference is in the efficiency of utilised storage…

    Looking at the "Unused Storage Ratio" of each configuration (how much of the physical storage space was *not* made available to the SPC-1 workload engine) – we have:

    T800 – 14.03% Unused Storage Ratio (ASU size of 77.8TB)
    VSP – 39.42% Unused Storage Ratio (ASU size of 49.5TB)

    I think that the "more effective" you refer to is more likely a factor of the more current HDD technology (VSP: SFF SAS vs. T800: LFF FC) and the big difference in storage efficiency, rather than the thin provisioning potentially using hard drives more efficiently than fat provisioning.

    Cheers, Paul Haverfield
    HP Storage APJ.

  3. …and one more aspect to the Hitachi VSP result…

    The other interesting aspect to the VSP result is why Hitachi sought to implement a relatively complicated host-based RAID-0 striping configuration. Is this saying that the virtualisation and data layout within the VSP thin-provisioning pools is not effective enough at eliminating hot-spots, and that to achieve satisfactory performance one must host stripe? Host [wide] striping is overly cumbersome, and I think these days a function best left to the storage array to manage.

    1. Paul, thanks for all the comments. A couple of points on your series of comments:

      1) Yes, better or newer disks can help performance, but one reason I eliminate anything less than 140GB disks is that the older the disk, typically the smaller the capacity, which provides a performance advantage by having more spindles. This chart tries to eliminate the spindle count advantage as much as I can. Without knowing the actual OEM vendor of the disks used in each SPC-1 benchmark, it would be very hard for me to try to level the playing field with similar disk performance. That being said, the HDS VSP did have SFF drives and as far as I can tell the T800 did not.

      2) Dynamic provisioning, thin provisioning, etc. should have some benefits and some performance costs. The benefits are probably a subject for a different post but the performance costs should be evident from a sufficient set of benchmark results. There are plenty of storage systems that offer thin provisioning but this is the first time I have seen one supply a benchmark result with it active. Kudos to HDS for being the first one to do so. All that being said, theoretically, thin provisioning should provide more data storage over fewer disks. Given that, I believe that on a performance per disk spindle basis it should perform better than a non-thinly provisioned system. To test this we would need equivalent systems, one with thin provisioning active and one without. Alas, we don't have such a comparison available just yet.

      3) Thanks for providing the direct links to the two SPC-1 reports.

      4) As for unused capacity, it's a pretty complex issue and plays out in the number of “extra” disks being used to support the workload, subsystem cost and $/IOPS. The purpose of the chart is to try to level the playing field, at least with respect to the number of disks; whether the capacity is “used” or “unused” plays no part in this chart.

      5) I guess I don't see the host-based RAID-0 striping configuration unless you're talking about the TSC configuration section (~p.68). At best I see this as mapping the VSP's RAID-10 to the Windows host LUNs. While it does appear to be striping the host data across the VSP RAID-10 LUNs, it's unclear whether this helps or hurts the performance. Although to be honest, I am no Windows configuration expert.

      Once again, thanks for the thoughtful comments.

      Ray

    1. Storagebuddhist, thanks for the comment. I see where the SVC 4.3 used space-efficient disks but then it populated all the blocks by initializing them. Not sure how I should consider this. However, it's unlikely that the backend storage (DS4700s) at the time were thin provisioned. So it's sort of a mixture between thinly provisioned at the SVC level and not at the storage subsystem level. Nonetheless, it was a good catch.

      Ray

      1. If you understand SVC then saying "it's unlikely that the backend storage (DS4700s) at the time were thin provisioned. So it's sort of a mixture between thinly provisioned at the SVC level and not at the storage subsystem level." doesn't really make sense. That's a bit like saying the hard drives on the VSP weren't thin provisioned, only the volumes.

        It's hard to compare the details of the two thin provisioned results – there isn't a lot of info on the VSP's actual use of thin provisioning in the benchmark disclosure that I can see, and the benchmark seems to make heavy use of striping at the Windows HostOS layer with diskpart and dynamic disks, which I suspect most admins would be nervous about using in real life.

        I guess that's the nature of benchmarks. 10 years ago controllers tended to be the bottleneck but I think the industry has long since fixed that. SPC-1 seems designed to show up controller bottlenecks; otherwise the drive count is the choke point. With SSDs maybe the controller choke points will return to relevance soon.

        1. Thanks again for your comment. I probably don't understand SVC as well as I should. It is only recently that SVC supported thinly provisioned storage behind it. The space-efficient volumes ended up all being initialized before the test was begun. It's unclear to me whether at that time the whole Vdisk space was allocated and written or not. Most of today's thinly provisioned storage wouldn't write out an all-zeros block to the storage, just faking it if it was ever read. Which means that “real” allocation of storage to a thinly provisioned LUN would wait until actual data was written to it. So I still believe the SVC 4.3's use of space-efficient volumes and pre-formatting/pre-allocating all the space is not “true” thin provisioning but is more a mixed version of thin and fat provisioning. I guess we will have to disagree on this issue.

          However, with SSDs becoming more available they should boost performance considerably. Witness TMS results, especially in LRT. But even so, disk-based systems have 7 of the top 10 IOPS results (see my detailed report in my newsletter). However, storage performance is hidden behind the system controllers. In order to tease out controller effectiveness with SSDs we would need some sort of IOPS/SSD level analysis. But with TMS having its own SSDs, FusionIO or PCIe flash-based systems showing up all over the place, and mixed SSD-disk drive systems becoming ever more prominent, any IOPS/SSD analysis becomes awfully complicated. I am still not convinced that caching makes no difference to SPC-1 results. But that will need to wait for another post.

          Ray

  4. I had blogged about this in late 2007 with the results then, with the same conclusion – that SPC-1 was effectively a cache-hostile benchmark that counted the number of disk drives. There was much discussion at that time among many vendors (see comments in the blog posts):
    http://dotconnector.typepad.com/dotconnectors_blo

    It does seem to have picked up some discriminatory power at high drive counts of late. If one looks closely, two distinct trajectories appear in the scatter plot above 500 drives.

    Cheers, K.

    1. K, thanks for the comment. I believe that SPC-1 is not that cache hostile anymore, or perhaps that subsystems are getting better at understanding its “randomness”. But that being said, there does appear to be some more dispersion over 500 drive counts.

      Ray

      1. The SPC-1 has not changed – it is still cache-hostile. What I believe is likely happening is that some systems are showing signs of component (front-end, CPU, cache, drive loop,…) saturation before the HDDs get saturated. I'll need to dig deeper to determine what it is as it is architecture dependent.

        Cheers, K.

        1. K, thanks again for your comments. I guess we will need to disagree here because I believe that sophisticated caching can help subsystems perform better on SPC-1. I suppose it's not too much of a stretch to say that SPC-1 is “caching challenged”. ;)

          Ray
