SCI 2014Feb25 SPC-1 & SPC-2 performance report


This Storage Intelligence (StorInt™) dispatch covers Storage Performance Council (SPC) results[1]. There have been four new SPC-1 and two new SPC-2 submissions. The new SPC-1 results are from Huawei OceanStor™ 18800, HP XP P9500, Fujitsu ETERNUS DX200 S3 and dual-node NetApp FAS8040 storage systems, and the new SPC-2 results come from an HDS Hitachi Unified Storage (HUS) VM and a Kaminario K2 storage system. We start this report with SPC-1 results.

SPC-1 results

We begin our discussion with the Top 10 IOPS for SPC-1 as shown in Figure 1.

Figure 1 Top 10 SPC-1 IOPS (bar chart of IO operations per second)

As can be seen in Figure 1, the new Huawei (at ~1M IOPS) and the new HP XP P9500 (at ~602K IOPS) came in third and sixth, respectively, in top IOPS. The Huawei OceanStor 18800 was a hybrid storage system with ~38TB of SSD (192-200GB SLC drives) and ~806TB of disk (1344-600GB, 15KRPM drives), while the HP XP P9500 storage system had only ~38TB of SSD (24-1.6TB flash modules). It's a bit surprising to see a hybrid storage system achieve third place in IOPS, but it did have a sizeable amount of flash. The only disk-only storage systems left on this top ten IOPS chart hold the last three places (IBM SVC/Storwize, IBM DS8870 and HP P10000).

Next we turn to a favorite of ours, the top ten least response time (LRT™) results in Figure 2.

Figure 2 Top ten SPC-1 LRT results (bar chart of least response times)

The new submissions here are the Fujitsu ETERNUS DX200 S3 and the HP XP P9500. The new DX200 S3 is configured as all-flash storage (29-400GB MLC drives) and received an LRT of 0.27 msec. I had to reduce the chart's scale here, as all recent top ten LRT results are under 0.40 msec.

Recall that SPC-1 LRT is measured at a 10% IO stress level, but looking at the remaining response times for these systems at other IO activity levels, the curve is almost flat. That is, response time doesn't rise that much at maximum IO activity. For example, the response time for the DX200 S3 at 100% IO activity (200K IOPS) is 0.63 msec, just a little over double what it was at 10% IO activity (20K IOPS). For disk-only storage, the difference between the response times at the 10% and 100% IO levels can be a factor of ~10X.
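The load sensitivity above can be sketched in a few lines; the two DX200 S3 response times are the ones quoted in this report, while the disk-only figures are illustrative placeholders, not reported results.

```python
# Comparing SPC-1 response time at 10% vs 100% IO load.
# DX200 S3 numbers are the ones quoted above; disk-only
# numbers are illustrative, not from an actual submission.
lrt_msec = 0.27          # DX200 S3 response time at 10% load (the SPC-1 LRT)
full_load_msec = 0.63    # DX200 S3 response time at 100% load (~200K IOPS)

flash_ratio = full_load_msec / lrt_msec
print(f"All-flash, 100% vs 10% load: {flash_ratio:.1f}X")   # ~2.3X

# A disk-only system with, say, a 0.9 msec LRT might rise to ~9 msec
# at full load -- roughly the ~10X factor mentioned above.
disk_ratio = 9.0 / 0.9
print(f"Disk-only (illustrative): {disk_ratio:.1f}X")
```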

Next we turn to our measure of SPC-1 price-performance, the top ten IOPS/$/GB, in Figure 3.

Figure 3 Top ten SPC-1 IOPS/$/GB (bar chart of IO operations per second per $/GB)

We prefer our measure to the standard SPC-1 reported $/IOPS metric, as IOPS/$/GB doesn't award higher marks to small-capacity systems capable of generating high IOPS performance. The only new showing here is the Huawei OceanStor 18800 with ~99K IOPS/$/GB. This is due to its hybrid configuration supplying ~275TB of usable RAID 1 storage capacity for ~$2.8 million while supporting just over a million IOPS.
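The metric arithmetic can be sketched from the rounded Huawei figures quoted above (because the inputs are rounded, the result lands slightly under the ~99K computed from the exact submission numbers):

```python
# Sketch of the IOPS/$/GB price-performance metric, using the
# approximate Huawei OceanStor 18800 figures quoted above.
iops = 1_000_000         # ~1M SPC-1 IOPS
price_usd = 2_800_000    # ~$2.8 million
usable_gb = 275_000      # ~275TB of usable RAID 1 capacity

cost_per_gb = price_usd / usable_gb          # ~$10.18/GB
iops_per_dollar_per_gb = iops / cost_per_gb  # higher is better
print(f"{iops_per_dollar_per_gb:,.0f} IOPS/$/GB")
```

Dividing IOPS by $/GB rewards systems that deliver both high performance and low-cost capacity, rather than small, expensive configurations tuned purely for IOPS.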

SPC-2 results

We turn now to SPC-2 results and its Top 10 megabytes per second (MBPS™).

Figure 4 SPC-2 Top 10 MBPS™ (spider chart showing individual VOD, LDQ & LFP performance)

The new Kaminario K2 blew away the top ten MBPS with a score of over 33,450 MBPS. The new HDS HUS VM came in at number nine with ~11,275 MBPS. To give some feel for the gap between the Kaminario K2 and the rest of this chart, all the other top systems are under 17,500 MBPS.

The K2 had ~1.1TB of DRAM and ~179TB of flash (224-800GB drives) in its storage system, and the only disks to be found were 8-1TB disks in its management node (we're not sure these are involved in IO operations, but it seems a lot just for OS storage). On the other hand, none of the other top ten systems on this chart had any flash storage whatsoever. The new HDS HUS VM had 384-600GB disk drives in its configuration.

Remember that MBPS is a composite (average) of the data transfer speeds of three workloads: Large Database Query (LDQ), Large File Processing (LFP) and Video On Demand (VOD). The K2 did very well in LDQ activity, topping out at over 37,000 MBPS on this workload.
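The composite can be sketched as follows; the LDQ value is the K2's ~37,000 MBPS quoted above, while the LFP and VOD values are illustrative placeholders (chosen so the average lands near the K2's reported ~33,450 composite), not reported per-workload figures.

```python
# Sketch of how the SPC-2 MBPS composite is formed: the average of the
# three workload data rates. Only the LDQ figure comes from this report;
# LFP and VOD below are illustrative, not reported values.
workloads = {"LDQ": 37_000.0, "LFP": 32_000.0, "VOD": 31_350.0}

composite_mbps = sum(workloads.values()) / len(workloads)
print(f"Composite MBPS: {composite_mbps:,.0f}")
```

Because the composite is a simple average, a system must sustain high throughput across all three workloads, not just one, to top this chart.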

Significance

It’s great to see new SPC-1 and new SPC-2 results that impact our top 10 rankings.

Contrary to the lengthy discussion in our last SPC report, we now have a clearly superior flash system winning the top spot in SPC-2 MBPS. I was beginning to think that all-flash systems could not do as well on sequential throughput as disk-only systems, but I now stand corrected by the Kaminario K2's SPC-2 results.

Flash-only systems continue to dominate SPC-1 LRT results, and together with hybrid SSD-disk storage they are having a significant impact on SPC-1 IOPS results as well. The impact of flash on OLTP and now sequential throughput workloads is becoming crystal clear.

As always, suggestions on how to improve any of our performance analyses are welcomed. Additionally, if you are interested in more block performance details, we now provide a fuller version (Top 30 results) of some of these charts and a set of new OLTP, Throughput and Email ChampionsCharts™ for enterprise, mid-range and SMB SAN storage in our recently updated (May 2019), SAN Storage Buying Guide available from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in February of 2014.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]  

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.

[1] All SPC results available from http://www.storageperformance.org/home/ as of 25Feb2014

