SCI 2011Nov22 Latest SPC-1 storage performance results analysis


We return now to one of our favorite block storage benchmarks, the Storage Performance Council SPC results*.  There have been three new SPC-1 submissions: one from HP on their latest 3PAR V800 storage, one from HDS on their Virtual Storage Platform (VSP) and one from Oracle for their Sun ZFS 7420 storage system. There have been no new SPC-2 results since our last SPC report.

SPC-1* results

We start our discussion with the top 10 IOPS performance metric for SPC-1.

Column chart ranking the top 10 SPC-1 IOPS results

(SCISPC111122-001) (c) 2011 Silverton Consulting, All Rights Reserved

Figure 1 Top 10 SPC-1* IO/Sec

Higher is better on the IO/Sec or IOPS™ results.  Here we can see the new HP 3PAR V800 storage with 8 nodes coming in at #1 and the HDS VSP with 8 virtual storage processors showing up at #10.  One thing not evident in this comparison is that the 3PAR system used 1920 drives while the HDS VSP only had 1152. This difference would normally show up better in IOPS/spindle, but as none of the new system results broke into that metric’s top 10, it is not shown in this report.
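For readers who want to do that normalization themselves, below is a minimal sketch of the IOPS/spindle calculation; the IOPS figures are hypothetical placeholders for illustration, not the published SPC-1 numbers, and only the drive counts (1920 vs. 1152) come from the dispatch text.

```python
# Minimal sketch of the IOPS/spindle normalization discussed above.
# The IOPS values are hypothetical placeholders, not published SPC-1 results.
configs = {
    "8-node system, 1920 drives": {"iops": 400_000, "drives": 1920},
    "8-VSP system, 1152 drives":  {"iops": 270_000, "drives": 1152},
}

for name, cfg in configs.items():
    per_spindle = cfg["iops"] / cfg["drives"]
    print(f"{name}: {per_spindle:,.1f} IOPS/spindle")
```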

Another thing not readily apparent is that the HDS VSP system LUNs were thinly (dynamically) provisioned.  I believe this is the first SPC-1 report to use thinly provisioned volumes, and one would expect thin provisioning to degrade performance somewhat.

Next let’s turn to IOPS/$/GB.

Column chart showing SPC-1 IOPS per storage cost per GB

(SCISPC111122-002) (c) 2011 Silverton Consulting, All Rights Reserved

Figure 2 IOPS/$/GB

Higher is better on IOPS/$/GB.  Here we can see all three new systems: the #1 HP 3PAR V800 storage, the #7 Oracle Sun ZFS Storage 7420c Appliance and the #10 HDS VSP.  I prefer this approach to measuring the cost of storage IO over SPC-1’s reported IOPS/$™ metric because it incorporates the size of the backend storage.
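To make the metric concrete, here is a minimal sketch of how IOPS/$/GB can be computed under my reading of the metric (IOPS divided by cost per GB); the numbers are hypothetical and not taken from any SPC-1 submission.

```python
# Hypothetical worked example of IOPS/$/GB (IOPS divided by $/GB).
# Numbers are illustrative only, not actual SPC-1 submissions.
def iops_per_dollar_per_gb(iops: float, total_price_usd: float, capacity_gb: float) -> float:
    """IOPS / ($/GB): larger usable capacity at the same price improves the score."""
    cost_per_gb = total_price_usd / capacity_gb
    return iops / cost_per_gb

# Two systems with the same IOPS and price but different backend capacity:
print(iops_per_dollar_per_gb(100_000, 1_000_000, 50_000))   # 5000.0
print(iops_per_dollar_per_gb(100_000, 1_000_000, 100_000))  # 10000.0
```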

One other thing not discernible in the above is that the Oracle Sun ZFS Storage 7420c used SSDs: a combination of 8x73GB SSD write flash accelerators, 8x512GB SSD read flash accelerators and 280x300GB SAS drives.  Most of the other systems here used only disk drives, except for the TMS RamSan-630, which used all SSDs.


Bubble chart of a scatter plot, bubble size is storage system cost, vertical is IOPS rate and horizontal is Least Response Time (or access latency)

(SCISPC111122-003) (c) 2011 Silverton Consulting, All Rights Reserved

Figure 3 SPC-1 IOPS vs. LRT scatter plot (bubble size = system cost)

Next we have a favorite SPC-1 chart that shows a scatter plot of IOPS vs. LRT with bubble size equal to system cost ($K).  Here one can see all three new system results, i.e., the HP 3PAR V800 system at the top of the chart, the HDS VSP somewhere in the middle of the pack around 270K IOPS, and the Oracle Sun ZFS just below the 150K IOPS line.

Scatter plot showing IOPS against number of disk spindles with linear regression line and formula

(SCISPC111129-004) (c) 2011 Silverton Consulting, All Rights Reserved

Figure 4 Scatter plot: IOPS vs. number of disk drives

Finally, we include our occasional IOPS vs. drive count scatter plot.  We like to show this to help determine whether subsystem IOPS performance is driven purely by disk count.  Alas, as shown above, an R**2 of 0.96 clearly indicates that the SPC-1 IOPS rate is driven by spindle count.

A couple of caveats for this chart: we threw out any result with SSDs or multiple drive capacities so as not to skew the results, and it only contains results for drives larger than 140GB (SPC-1 results go all the way back to 36GB disks).
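For those curious how such a fit is produced, here is a minimal sketch of an ordinary least-squares fit of IOPS against drive count with an R**2 calculation; the data points are hypothetical stand-ins, not the actual SPC-1 results behind Figure 4.

```python
# Minimal sketch: least-squares fit of IOPS vs. drive count and R**2.
# The (drives, iops) pairs are hypothetical stand-ins for the SPC-1 data.
import numpy as np

drives = np.array([192, 384, 576, 960, 1152, 1920], dtype=float)
iops = np.array([40_000, 78_000, 120_000, 195_000, 270_000, 450_000], dtype=float)

slope, intercept = np.polyfit(drives, iops, 1)   # linear fit: iops ~ slope*drives + intercept
predicted = slope * drives + intercept
ss_res = np.sum((iops - predicted) ** 2)         # residual sum of squares
ss_tot = np.sum((iops - iops.mean()) ** 2)       # total sum of squares
r_squared = 1.0 - ss_res / ss_tot

print(f"IOPS ~= {slope:.1f} * drives + {intercept:,.1f}, R**2 = {r_squared:.3f}")
```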

I like to think that the wider variance at higher IOPS rates is telling me something.  Consider the column of three results centered around 2000 disk drives.  The lowest performing system was a 4-node IBM SVC 5.1 system, the next higher performer was a 6-node IBM SVC 5.1 configuration, and the top system was the 8-node HP 3PAR V800 storage system.  It may seem obvious now, but at a constant number of disk drives, the more processing power you throw at SPC-1 the more IOPS you can attain.

This may explain the relatively narrow grouping at fewer than 500 drives. Most of these results probably just used a dual controller configuration for their storage systems.

Significance

It’s good to see some new SPC-1 results especially at the high-end, enterprise class level.  With these systems the IOPS rate just continues to climb and there doesn’t appear to be any real physical limit to storage performance.

In addition, at least when it comes to IOPS, disk-drive-only systems seem to be holding their own against hybrid SSD-disk systems and all-SSD systems.  However, IO latency, or LRT™ (not shown in today’s dispatch), is where all-SSD systems shine: they hold the top 5 spots and have yet to be usurped by anything other than other SSD systems.

As always, suggestions on how to improve any of our performance analyses are welcomed.  Additionally, if you are interested in more details, we now provide a fuller version of all these charts in SCI’s SAN Storage Briefing which can be purchased separately from our website[1].

[This performance dispatch was originally sent out to our newsletter subscribers in November of 2011.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of SAN storage system features and performance please take the time to examine our recently revised (May 2019) SAN Storage Buying Guide available for purchase from our website.]

~~~~

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company based in the USA, offering products and services to the data storage community.



* All results from www.storageperformance.org as of 21Nov2011
