SCI 26May2015 Latest SPC-1 & SPC-2 performance analysis


This Storage Intelligence (StorInt™) dispatch covers Storage Performance Council (SPC) results[1].  There have been four new SPC-1 and one new SPC-2 submissions since our last report in November. The new SPC-1 results are for Dell SC4020, NetApp FAS8080 EX & X-IO ISE 820 G3 all flash arrays, and Huawei OceanStor™ 5500v3 disk-only storage. The new SPC-2 submission was the HP XP7.  [Ed. italicized text updated/added after the report was sent out]

SPC-1 results

We begin our discussion with the top ten IOPS™ (IO operations per second) results shown in Figure 1.

Figure 1 Top 10 SPC-1 IOPS results

First, all of our top ten IOPS results are either flash/SSD only or hybrid storage systems. Disk-only solutions just can't compete on this metric. The new submission from NetApp, the FAS8080 EX all flash array, came in at #6 on this chart with ~685K IOPS. The FAS8080 EX result is a Clustered Data ONTAP configuration using 4 HA controller pairs (8 controllers total), with 1TB of DRAM cache and 384-200GB eMLC SSDs across its clustered controllers. It doesn't appear that the NetApp FAS8080 EX solution had any additional FlashCache, so the SSDs were the only flash storage in the solution.

The other two all flash submissions didn't do as well and failed to make the top ten IOPS list, but they are mid-range storage arrays.

Next, we turn to Top 10 $/IOPS, an SPC-1 reported metric, in Figure 2.

Figure 2 SPC-1 Top 10 $/IOPS™

In Figure 2, we can see two of the new, mid-range systems coming in at #1 ($0.32 $/IOPS) for the X-IO ISE 820 G3 and #2 ($0.37 $/IOPS) for the Dell SC4020. Recall that $/IOPS is an SPC-1 reported price performance metric. All of our top $/IOPS results were from all flash/SSD storage systems.

Both systems utilized SSDs for their flash storage, the Dell having 6-480GB SSDs and the X-IO system using 20-200GB SSDs. Both systems were configured with a relatively small amount of storage (~2TB for the X-IO system and ~1TB for the Dell system) and as a result were relatively inexpensive, at ~$82K and ~$42K, respectively. For that price they performed relatively well, at ~250K IOPS and ~110K IOPS, respectively.
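As a quick sanity check on this math, here is a minimal sketch (in Python, our own illustration rather than anything SPC publishes) showing how the $/IOPS metric falls out of the rounded prices and IOPS figures above; the small differences from the reported $0.32 and $0.37 are just rounding.

```python
# Rough check of the SPC-1 $/IOPS price-performance metric using the
# rounded figures quoted above (total system price divided by SPC-1 IOPS).
systems = {
    "X-IO ISE 820 G3": {"price_usd": 82_000, "iops": 250_000},  # ~$82K, ~250K IOPS
    "Dell SC4020":     {"price_usd": 42_000, "iops": 110_000},  # ~$42K, ~110K IOPS
}

for name, s in systems.items():
    print(f"{name}: ${s['price_usd'] / s['iops']:.2f}/IOPS")

# Approximate output:
#   X-IO ISE 820 G3: $0.33/IOPS   (SPC-reported: $0.32)
#   Dell SC4020: $0.38/IOPS       (SPC-reported: $0.37)
```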

Note that most high-end, top ten IOPS systems don't do well on $/IOPS; it seems there is still a high price for high levels of performance. The lone exceptions are the two Kaminario storage systems.

The last SPC-1 chart is SCI's IOPS/$/GB top ten rankings, shown in Figure 3.

Figure 3 SPC-1 Top 10 IOPS/$/GB

In Figure 3, we can see the new, disk-only version of the Huawei OceanStor 5500v3 came in at 5th place. The Huawei OceanStor 5500v3 solution had 360-600GB disk drives as its only storage.

Hybrid and disk-only storage systems still do well on SCI's IOPS/$/GB price-performance chart, as they can still supply relatively good performance for the cost of capacity. Indeed, five of these systems are disk only (HP P10000 3PAR V800, NEC Storage M700, Huawei OceanStor 5500v3, S6800T & S8100 storage) and two were hybrid (Huawei 6800v3 & 18800), with the remainder being all flash/SSD (Kaminario K2, HDS G1000 and IBM FlashSystem 840). There's a decent mix of mid-range and high-end storage here as well.
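For readers unfamiliar with this metric, the sketch below illustrates the arithmetic, assuming IOPS/$/GB is simply SPC-1 IOPS divided by the system's price per gigabyte; a low cost per gigabyte in the denominator is why disk-heavy configurations can rank well here despite more modest raw IOPS.

```python
# Illustration only: assumes IOPS/$/GB means SPC-1 IOPS divided by the
# system's price per gigabyte of storage (price / capacity).
# The numbers reuse the rounded Dell SC4020 figures from the $/IOPS
# discussion above and are NOT the Figure 3 chart values.
price_usd = 42_000      # ~$42K total system price
capacity_gb = 1_000     # ~1TB of configured storage
iops = 110_000          # ~110K SPC-1 IOPS

dollars_per_gb = price_usd / capacity_gb          # ~$42/GB
iops_per_dollar_per_gb = iops / dollars_per_gb    # higher is better
print(f"~{iops_per_dollar_per_gb:,.0f} IOPS per $/GB")
```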

SPC-2 results

We are not sure how we missed the HP XP7 SPC-2 submission in our original version of this report, but we did – sorry about that. The new HP XP7 (a version of Hitachi's VSP G1000) did very well in MBPS and is our new top system. In Figure 4 we show the SPC-2 top ten MBPS results in spider chart format.

Figure 4 Top 10 SPC-2 MBPS results, spider chart

The HP XP7 used 768-300GB disk drives and no flash, in a RAID 1 configuration, to blow out the top end. With an MBPS of over 43GB/sec, we had to change the scale on this chart once again. Another item of interest is that the HP XP7 did very well in both VoD (~46GB/sec) and LDQ (~49GB/sec), two very different workloads, and did just fine in LFP (~36GB/sec). We would think that a little more work on better caching of the LFP workload could have yielded another 5GB/sec of MBPS. The HP XP7 was configured with 8 MP blades and 1TB of DRAM.
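As a back-of-the-envelope check, the composite MBPS figure is consistent with the three workload data rates quoted above, assuming the SPC-2 composite is their unweighted average:

```python
# Back-of-the-envelope check: assuming the SPC-2 MBPS composite is the
# unweighted average of the three workload data rates, the rounded
# GB/sec figures quoted above are consistent with "over 43GB/sec".
lfp_gbps = 36   # Large File Processing
ldq_gbps = 49   # Large Database Query
vod_gbps = 46   # Video on Demand

composite_gbps = (lfp_gbps + ldq_gbps + vod_gbps) / 3
print(f"Composite: ~{composite_gbps:.1f} GB/sec")   # ~43.7 GB/sec
```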

As for the competition, the Kaminario K2 (all flash) system was the only one that even came close, and it was ~10GB/sec slower in MBPS. Following that, the two Oracle systems reached only ~17GB/sec and ~16GB/sec, respectively. So it looks like we have established a whole new tier of throughput with the HP XP7.

Significance

We are starting to see what we consider more mid-range, all-flash/SSD storage systems being benchmarked and introduced to the market these days. The ones discussed here generated some pretty impressive IOPS performance and LRT response times, but not good enough to crack those top ten charts. On the other hand, they both did well on SPC's price-performance metric ($/IOPS).

It was good to see some new high-end SPC-2 results, as there hasn't been much activity here. Once again, we are sorry for not picking up the HP XP7 in the original version of this report. We promise to do better next time. It's interesting that HDS elected to go after the SPC-1 record with all flash storage and HP elected to go after SPC-2 with an all disk system, with both using essentially the same storage system, the Hitachi VSP G1000. To our knowledge, except for the Kaminario K2, no other system has been #1 in both SPC-1 and SPC-2 before.

The new clustered NetApp FAS8080 EX all flash storage generated some impressive IOPS numbers using off-the-shelf SSDs. Some storage vendors are starting to engineer their own flash modules (IBM, HDS, Kaminario, and others). We believe that as this becomes more commonplace, it will be harder for a pure SSD-only solution to reach the top ten in IOPS.

As discussed in our last report, most new all flash arrays depend on data reduction (e.g., EMC XtremIO, Pure Systems, SolidFire, etc.) and are being left out of SPC benchmarks because there's no way to universally control for their specific data reduction technologies. Even if you could somehow control data reduction to equalize its advantages across all storage systems, benchmark comparisons would still not be very fair, as there's less data being transferred across the backend and through cache for systems using data reduction. This leaves a widening gap in SPC coverage, and we see no easy way to address it within the current SPC-1 & SPC-2 benchmark framework.

As always, suggestions on how to improve any of our performance analyses are welcomed.

[Also we offer more block storage performance information plus our OLTP, Email and Throughput ChampionsCharts™ charts in our recently updated (February 2019) SAN Storage Buying Guide, or for more information on some select ESRP performance results please see our recently updated (May 2019) SAN-NAS Storage Buying Guide, both of which are available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in May of 2015.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]  

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.

[1] All SPC results available from http://www.storageperformance.org/home/ as of 25May2015
