SCI’s latest SPC-1 & SPC-2 performance results analysis as of November ’17

This Storage Intelligence (StorInt™) dispatch covers Storage Performance Council (SPC) results[1]. There have been two new SPC-1 v3 submissions, the Fujitsu ETERNUS AF250 and the NetApp EF570 AFA, and only one new SPC-2 submission, the NetApp EF570 AFA. As a result, a few of our SPC-1 and SPC-2 top ten performance charts have changed.

SPC-1 results

We begin our discussion with the top ten SPC-1 measured LRT™ (Least Response Time) results, shown in Figure 1.

Figure 1 Top 10 SPC-1 LRT

In Figure 1, we can see that the new NetApp EF570 AFA system came in at #4 with a 0.12 msec LRT. Recall that LRT is the average response time at 10% load; the EF570 beat the older NetApp EF560 by ~0.05 msec (~50 μsec).

It’s interesting to note that of the two NetApp EF systems, the older (and slower) EF560 used 16 Gbps FC connections to the client servers while the newer (and faster) EF570 used 12 Gbps SAS connections to the hosts. Also, the newer EF570 had 12 Gbps SAS connections to its SSDs while the older EF560 used 6 Gbps SAS. It’s unclear whether it was the slower SAS frontend (host) connections or the faster SAS backend connections that delivered the faster LRT, but our bet is on the frontend SAS connections.
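To make the units above explicit, here is a minimal sketch of that LRT gap; the EF560’s ~0.17 msec LRT is inferred from the ~50 μsec difference noted above rather than taken from its full disclosure report:

```python
# LRT (Least Response Time) gap between the two NetApp EF systems.
# The EF560 value is inferred from the ~50 usec gap noted above,
# not taken from its full disclosure report.
ef570_lrt_msec = 0.12
ef560_lrt_msec = 0.17  # assumed: ef570_lrt_msec + 0.05

gap_usec = (ef560_lrt_msec - ef570_lrt_msec) * 1000
print(f"EF570 beat EF560 by ~{gap_usec:.0f} usec")  # -> ~50 usec
```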

In Figure 2 we examine SPC-1 Price-Performance™, or $/IOPS, as both reported by SPC and calculated by SCI.

Figure 2 Top 10 SPC-1 $/IOPS

In Figure 2, the new Fujitsu ETERNUS AF250 and NetApp EF570 AFA came in at #2 and #4, respectively. As noted previously, all of our top 10 $/IOPS systems are AFAs.

The Fujitsu AF250 used a relatively inexpensive solution ($33.9K) with relatively high IOPS (~360K) to come in at $0.09/IOPS. We are not sure why SPC reported the Fujitsu AF250’s Price-Performance as $0.10/IOPS. Nowadays, a $0.01 difference matters for Top 10 placement in $/IOPS. If we were Fujitsu, we would ask for a revision.

The NetApp EF570 also used a relatively inexpensive solution ($64.2K) with high IOPS (~500K) to come in at $0.13/IOPS. At least here, SCI and SPC agree on what the $/IOPS should be.
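As a sanity check on those figures, here is a minimal sketch of the $/IOPS arithmetic using the rounded prices and IOPS quoted above (the exact values in the SPC full disclosure reports will differ slightly):

```python
# $/IOPS is simply total solution price divided by SPC-1 IOPS.
# Prices and IOPS below are the rounded figures quoted above.
systems = {
    "Fujitsu ETERNUS AF250": (33_900, 360_000),  # (price $, SPC-1 IOPS)
    "NetApp EF570 AFA": (64_200, 500_000),
}

for name, (price, iops) in systems.items():
    print(f"{name}: ${price / iops:.2f}/IOPS")
# Fujitsu ETERNUS AF250: $0.09/IOPS
# NetApp EF570 AFA: $0.13/IOPS
```

At that rounding, the AF250 works out to ~$0.094/IOPS, which rounds to $0.09 rather than SPC’s reported $0.10.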

Figure 3 shows our SCI-calculated SPC-1 IOPS™/SSD metric.

Figure 3 Top 10 SPC-1 IOPS/SSD

In Figure 3, the new NetApp EF570 AFA came in at #10 with 20.8K IOPS/SSD across its 24 SSDs. We have used both IOPS/GB-NAND and IOPS/SSD to measure flash effectiveness in the past, and both have their merits. On IOPS/GB-NAND, the NetApp EF570 AFA came in at #20.
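The IOPS/SSD figure follows directly from the numbers above; a minimal sketch of the calculation:

```python
# IOPS/SSD: SPC-1 IOPS divided by the number of SSDs in the solution.
spc1_iops = 500_000  # NetApp EF570 AFA, rounded figure quoted earlier
ssd_count = 24

print(f"~{spc1_iops / ssd_count / 1000:.1f}K IOPS/SSD")  # -> ~20.8K IOPS/SSD
```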

The IBM FlashSystems and some of the HDS and HP[E] systems use flash modules rather than standard SSDs, so they are not technically SSDs, but we count them as SSDs for the purposes of this chart.

Why the hybrid (54 SSDs, 10 HDDs) DataCore Parallel Server did so well here is a mystery; we have to assume the HDDs were not used at all during the benchmark run. DataCore claims its advantage comes from the use of parallel IO processing, but we believe all these systems execute parallel IO processing. It’s possible DataCore’s 1.25TB of DRAM cache or its 36 cores made a significant difference. And the fact that the two DataCore Parallel Servers achieved this level of performance (5M IOPS) while running under Windows 2012 R2 is amazing.

SPC-2 results

We show the SPC-reported SPC-2 Price-Performance, or $/MBPS, in Figure 4.

Figure 4 Top 10 SPC-2 $/MBPS

In Figure 4, the new NetApp EF570 AFA came in as the new #1 with $3.69/MBPS Price-Performance. The NetApp EF570’s great throughput performance (~17.3GB/sec) and relatively low cost ($63.9K) helped it achieve this #1 level in $/MBPS.
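Again, a minimal sketch of the $/MBPS arithmetic using the rounded figures quoted above:

```python
# $/MBPS: total solution price divided by SPC-2 throughput in MB/sec.
price_usd = 63_900
throughput_mbps = 17.3 * 1000  # ~17.3 GB/sec expressed as MB/sec

print(f"${price_usd / throughput_mbps:.2f}/MBPS")  # -> $3.69/MBPS
```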

Significance

As reported previously, we combine SPC-1 v3 and v1 results in one chart, although SPC reports them separately. From our perspective, the differences between the two versions are not that significant; they lie more in the block sizes used than in the workloads being generated.

Nowadays we are only seeing new AFAs in SPC-1 results. There are newer hybrid storage systems out there with excellent performance, but so far none have appeared in SPC-1 submissions. It’s also great to see new SPC-2 results.

With this report, we have revised the format of all our charts to be more readable. Hopefully, our readers will agree. As always, suggestions on how to improve any of our performance analyses are welcome.

[Also, we offer more block storage performance information plus our OLTP, Email and Throughput ChampionsCharts™ in our recently updated (August 2018) SAN Storage Buying Guide[2], or for more information on protocol performance results please see our recently updated (October 2018) SAN-NAS Storage Buying Guide, both of which are available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in November of 2017. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website a quarter or more after they are sent to our subscribers, so if you are interested in current results, please consider signing up for our newsletter.]

[1] All SPC results available from http://www.storageperformance.org/home/ as of 27Nov2017

[2] http://silvertonconsulting.com/cms1/product/san-storage-buying-guide/
