SCI’s latest SPC performance report as of August ’19


This Storage Intelligence (StorInt™) dispatch covers Storage Performance Council (SPC) results[1]. Over the last quarter there has been one new SPC-1 v3 submission, the MacroSAN MS5580G2. Two of our usual top 10 SPC-1 charts have changed and are shown below. Once again, there were no new SPC-2 submissions since our last report.

SPC-1 results

We begin our discussion with top 10 SPC-1 IOPS™ performance in Figure 1.

Figure 1 Top 10 SPC-1 IOPS 

The new MacroSAN MS5580G2 submission came in as our new #3 with just over 6M IOPS. The MacroSAN system had 416 SAS-connected SSDs (32 480GB and 384 800GB), used 16 controllers (8 dual-controller pairs) and had ~6TB (15 x 384GB) of DRAM. It also used two 25GbE links as its inter-cluster communications path.

We discussed the Fujitsu (#1) and Huawei (#2) cluster storage systems in our last report. The Fujitsu storage had 24 controllers (12 dual-controller pairs) and the Huawei had 8 controllers (4 dual-controller pairs). It seems clustered systems are the only way to crack the 5M IOPS barrier (unless, of course, you’re using NVMe-oF).

In Figure 2 we present SCI’s computed economic performance metric, the top 10 IOPS/($/GB).

Figure 2 Top 10 SPC-1 IOPS/($/GB)

The new MacroSAN solution came in as our new #2 with ~353K IOPS/($/GB). The MacroSAN MS5580G2 cost $2.4M, had an ASU capacity of 141.2TB, and as noted above achieved over 6M IOPS. 
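For readers unfamiliar with how we derive this metric, here is a minimal sketch using the rounded MacroSAN figures cited above (total price, ASU capacity and IOPS); the exact SPC-reported values will differ slightly:

# Sketch of SCI's IOPS/($/GB) economic metric, using the rounded
# MacroSAN MS5580G2 figures cited in this dispatch (not exact SPC values).
total_price_usd = 2_400_000      # ~$2.4M total system price
asu_capacity_gb = 141_200        # ~141.2TB ASU capacity, in GB
spc1_iops = 6_000_000            # just over 6M SPC-1 IOPS

price_per_gb = total_price_usd / asu_capacity_gb    # ~$17.00/GB
metric = spc1_iops / price_per_gb                   # ~353K IOPS/($/GB)

print(f"${price_per_gb:.2f}/GB -> {metric:,.0f} IOPS/($/GB)")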

As we have discussed in prior reports, we are considering using physical capacity rather than ASU capacity for this metric (we may have to rename it). The new MacroSAN system had 322TB of physical SSD capacity but only 141TB of ASU capacity. However, it’s unclear whether such a change would alter the rankings above, as most systems in SPC-1 use RAID 1 or RAID 10 for their ASU capacity, which halves physical capacity to come up with mirrored storage.
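To illustrate why the rankings might not move much, here is a rough sketch, again using the rounded MacroSAN figures from this dispatch, of how the metric shifts when physical capacity replaces ASU capacity, and how close a simple RAID 1 halving comes to the reported ASU capacity:

# Same rounded MacroSAN figures, comparing ASU-based and physical-based
# versions of the metric; RAID 1/10 mirroring roughly halves physical capacity.
total_price_usd = 2_400_000
spc1_iops = 6_000_000
asu_capacity_gb = 141_200        # ~141.2TB ASU capacity
physical_capacity_gb = 322_000   # ~322TB physical SSD capacity

metric_asu = spc1_iops / (total_price_usd / asu_capacity_gb)            # ~353K
metric_physical = spc1_iops / (total_price_usd / physical_capacity_gb)  # ~805K
mirrored_estimate_gb = physical_capacity_gb / 2                         # ~161TB

print(f"ASU-based: {metric_asu:,.0f}, physical-based: {metric_physical:,.0f}, "
      f"RAID 1 estimate of usable capacity: {mirrored_estimate_gb/1000:.0f}TB")

If every system in the chart is mirrored, the physical-capacity version of the metric would roughly double across the board, which is why we suspect the relative ordering would not change much.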

Significance

It’s great to see new SPC-1 submissions, especially when they rank in our top ten charts. It does seem that clustered systems are taking over the top rankings in our IOPS chart, just as AFAs have taken over all of the top 10 LRT and top 10 IOPS rankings.

In the past, for our SPEC SFS performance reports, we have occasionally reported a top 10 NFS ops per cluster node metric, but we have never done that for SPC. Perhaps it’s time to do so for SPC results as well.
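As a rough illustration of what such a per-node metric might look like for SPC-1 (a sketch only; we assume here that a “node” means a controller, though one could just as easily count dual-controller pairs), the MacroSAN figures above work out as follows:

# Hypothetical per-node SPC-1 metric, assuming "node" == controller.
# MacroSAN MS5580G2 rounded figures from this dispatch.
spc1_iops = 6_000_000
controllers = 16                 # 8 dual-controller pairs

iops_per_controller = spc1_iops / controllers   # ~375K IOPS per controller

print(f"{iops_per_controller:,.0f} IOPS per controller")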

It’s unfortunate that there haven’t been more SPC-1 submissions or any new SPC-2 submissions, but IO benchmarks like SPC seem to be going out of favor with storage vendors. We’re not sure whether any similar alternative exists, but if you would like us to analyze other performance results, please let us know.

As always, suggestions on how to improve any of our performance analyses are welcomed.

[This storage announcement dispatch was originally sent out to our newsletter subscribers in August of 2019. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers. Also, we have recently updated (May 2019) our SAN Storage Buying Guide[2], so if you are interested in purchasing block primary storage, please check out our Buying Guide, available for sale on our website.]


[1] All SPC results available from http://www.storageperformance.org/home/ as of 25 Aug 2019

[2] http://silvertonconsulting.com/cms1/product/san-storage-buying-guide/
