SCI 2011Aug22 Latest SPC-2 performance results analysis


We return now to our favorite block storage benchmark, Storage Performance Council (SPC) results*.  There have been no new SPC-1 submissions since our last report and only one new SPC-2 result, specifically the Fujitsu DX80 S2.  Must be something about the summertime in the states.

SPC-2* results

We start our discussion with the top 10 $/MBPS, a sort of price performance metric for SPC-2 throughput results.

SCISPC110822-001 (c) 2011 Silverton Consulting, Inc., All Rights Reserved


Figure 1 Top 10 SPC-2* $/MBPS

Lower is better on the $/MBPS results.  Here we can see the new Fujitsu DX80 S2 coming in at #7 on this chart.  The series 2 version of the DX80 used 10Krpm, 2.5” disk drives for its submission, and although it doubled the MBPS of the previous generation DX80 system, it didn’t beat it in price performance, measured here as $/MBPS.  The first generation DX80 subsystem ran 15Krpm disks in a much smaller (read: less expensive) configuration and, as such, did slightly better, coming in at #5 on this price performance metric.
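For readers who want to reproduce this sort of ranking from published full disclosure reports, the metric is just total system price divided by reported MBPS, sorted ascending.  The sketch below shows the computation; the system names, prices, and throughput figures are hypothetical placeholders, not actual SPC-2 submission numbers.

```python
# Hypothetical submissions: price and SPC-2 throughput (MBPS).
# These numbers are illustrative only, not real SPC-2 results.
submissions = {
    "System A": {"price_usd": 100_000, "mbps": 4_000},
    "System B": {"price_usd": 250_000, "mbps": 8_000},
    "System C": {"price_usd": 60_000, "mbps": 3_000},
}

# Lower $/MBPS is better, so sort ascending by the ratio.
ranked = sorted(
    ((name, s["price_usd"] / s["mbps"]) for name, s in submissions.items()),
    key=lambda pair: pair[1],
)

for rank, (name, dollars_per_mbps) in enumerate(ranked, start=1):
    print(f"#{rank}: {name} at ${dollars_per_mbps:.2f}/MBPS")
```

With these placeholder numbers, the cheapest-per-MBPS system ranks first even though it has the lowest absolute throughput, which is exactly the dynamic seen with the first generation DX80 above.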

Next let’s turn to storage performance versus number of disks.

SCISPC110822-002 (c) 2011 Silverton Consulting, Inc., All Rights Reserved


Figure 2 Scatter plot LDQ MBPS vs. number of drives

This is our first attempt at showing SPC-2 throughput vs. number of drives.  As most of you who follow our prior dispatches or RayOnStorage blog know, there has been a lot of discussion around the theme that IO benchmark results are mostly determined by the number of disk drives used in the submission.  We have started to produce performance vs. drive spindle scatter plots to ascertain the truth of this viewpoint.

Above, we show the correlation between number of disk drives and SPC-2’s Large Database Query (LDQ) MBPS achieved, which at an R**2 of ~0.41, does not help that case.  We chose LDQ because it had the best correlation of the three workloads used in SPC-2: large database query (LDQ), large file processing (LFP) and video on demand (VOD).

Given what we see here, we would have to conclude that the number of disk spindles does not entirely drive SPC-2 MBPS results.
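For those who want to run the same check on their own data, the R**2 quoted above is, for a simple linear fit, just the squared Pearson correlation between drive count and throughput.  A minimal sketch follows; the drive counts and MBPS values are hypothetical illustrations, not figures from any SPC-2 submission.

```python
def r_squared(xs, ys):
    """R**2 of a simple linear (least-squares) fit of ys on xs.

    For one predictor, this equals the squared Pearson correlation.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    syy = sum((y - mean_y) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical scatter-plot points: (drive count, MBPS).
drives = [48, 96, 192, 384]
mbps = [1200, 2100, 4500, 8800]

print(round(r_squared(drives, mbps), 3))
```

An R**2 near 1.0 would say drive count explains almost all of the throughput variation; the ~0.41 seen for LDQ says it explains well under half.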

SPC-1 results

Although there has been no new data in SPC-1 results we thought it worthwhile to show another scatter plot.

SCISPC110822-003 (c) 2011 Silverton Consulting, Inc., All Rights Reserved

Figure 3 SPC-1 IOPS vs. drive spindles scatter plot

In contrast to the SPC-2 results above, the same type of scatter plot for SPC-1 shows a pretty decent correlation, with an R**2 of ~0.93, indicating that the number of disk spindles drives much of the SPC-1 IOPS results seen in the testing.  Why SPC-1 IOPS would be determined by drive counts while SPC-2 MBPS would not is worth considering.

One thought is that SPC-1 IOPS activity might be better distributed across the drives in a system while SPC-2 workloads are not.  This could be an easy explanation for the results seen in the scatter plots shown here.

Significance

It’s good to see a recent SPC-2 submission but we still would like to see more.  Similarly, SPC-1 could use a few new submissions now that the recent spate of acquisitions has worked itself out.  But it is summer; perhaps by the fall SPC-1 activity will pick up.

We always learn something from looking at benchmark results.  The scatter plot analysis of SPC-2 and SPC-1 shows how those two workloads differ, at least with respect to the impact of drive counts.  It’s too bad that SPC-1 submissions are not required to also provide an SPC-2 submission and vice versa, although our vendor friends would probably not be happy to do both, considering the cost and time involved in running each benchmark.  In our opinion, just as IOPS and LRT are two distinct, complementary dimensions of storage horsepower, throughput provides another unique, orthogonal measure of IO performance.  Taking all three together can provide a more well-rounded assessment of storage system capabilities.

As always, any suggestions on how to improve our performance analyses are welcomed.  Additionally, if you are interested in more details, we now provide a fuller version of all these charts in SCI’s SAN Storage Buying Guide which can be purchased separately from our website[1].

[This performance dispatch was originally sent out to our newsletter subscribers in August of 2011.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of SAN storage system features and performance please take the time to examine our SAN Storage Buying Guide available for purchase from our website.]

—–

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA, offering products and services to the data storage community.


* All results from www.storageperformance.org as of 22Aug2011

 


