This Storage Intelligence (StorInt™) dispatch covers SPEC sfs2014 benchmark results. There have been two new independent vendor SPEC sfs2014 submissions and a number of SPEC SFS reference submissions since our last report. Both vendor submissions were for SWBUILD: an E8 Storage D4 with (IBM) Spectrum Scale 5.0 and another submission from WekaIO, this time WekaIO 3.1 using Supermicro Big Twin servers (4 nodes/2U).
The SPEC SFS subcommittee reference submissions used all-SSD storage rather than the hybrid storage of their earlier submissions. We will report on these reference submissions as we cover the individual workloads.
SPEC SFS2014_swbuild (software build) results
The SWBUILD workload simulates software release builds; essentially, one build is a Unix “make” process run against tens of thousands of source files, making this a metadata-intensive workload. Figure 1 shows the SWBUILD concurrent build counts.
In Figure 1, the new #1 is the WekaIO 3.1 solution using the Supermicro Big Twin servers mentioned above, with 1200 concurrent builds. The WekaIO-Supermicro submission was a 16-node cluster, with each node using 4 U.2 1.2TB NVMe SSDs (64 in total). Networking for the WekaIO-Supermicro solution was 50GbE from clients to switch and 100GbE from switch to storage nodes.
The new E8 Storage-Spectrum Scale system came in 2nd place with 600 concurrent software builds. The E8 Storage used a dual-controller configuration with 24 1.6TB NVMe SSDs. The IBM Spectrum Scale cluster had 16 (client) nodes accessing this storage. The networking for the E8 Storage-Spectrum Scale solution was all 100GbE.
Finally, the new SPEC SFS Subcommittee (reference) submission achieved 100 concurrent software builds and used 2 Dell PowerEdge R630 servers, running SLES Linux with NFSv3 using 12 1.6TB SAS SSDs as the backing storage. The networking used for the SFS reference solution was 10GbE.
Next, we show the Minimum and Overall Response Time for SWBUILD submissions in Figure 2.
In Figure 2, we can see that the E8 Storage D4-Spectrum Scale came in at #1 with the lowest ORT of 0.69 msec and is tied for the #2 lowest minimum RT at 0.1 msec. E8 Storage uses NVMe-oF protocols between clients and storage nodes to achieve best-in-class IO latencies.
Surprisingly, the SPEC SFS reference submission is now #2 in ORT with 0.82 msec and has the 4th lowest minimum RT at 0.3 msec. The WekaIO didn’t do as well here, with an ORT of 1.02 msec and a minimum RT of 0.6 msec.
As SWBUILD is a metadata-intensive activity, RT is an important metric to track. SWBUILD activity spends a lot of time statusing files to determine which need to be built. RT for these small statusing operations is a critical factor in achieving better build speed. However, it’s not the only factor that matters.
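To see why metadata RT dominates this phase, consider a minimal sketch (not the SPEC SFS harness) of a make-style dependency scan: it issues one stat() call per source file, so total scan time is roughly the per-call metadata latency times the file count.

```python
import os
import tempfile
import time

# Minimal illustration (hypothetical, not the SPEC SFS2014 workload):
# a make-style scan stats every source file to check modification times,
# so per-call metadata latency scales directly into build-scan time.
with tempfile.TemporaryDirectory() as d:
    paths = []
    for i in range(1000):
        p = os.path.join(d, f"src_{i}.c")
        open(p, "w").close()  # create an empty "source" file
        paths.append(p)

    t0 = time.perf_counter()
    mtimes = [os.stat(p).st_mtime for p in paths]  # "statusing" each file
    elapsed = time.perf_counter() - t0

    print(f"{len(paths)} stat() calls took {elapsed * 1e3:.1f} ms "
          f"({elapsed / len(paths) * 1e6:.1f} usec per call)")
```

On networked file storage each of those stat() calls becomes a round trip, which is why sub-millisecond RT matters so much for SWBUILD.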
Finally, we show the SWBUILD MB/sec metric as reported by SPEC SFS in Figure 3.
In Figure 3 we can see the top (maximum) SWBUILD MB/sec. WekaIO came in at #1 with 8.9 GB/sec, with the E8 Storage D4-Spectrum Scale system coming in at #2 with 4.8 GB/sec. It’s pretty impressive that the E8 solution was able to deliver almost half the WekaIO throughput, given that E8 used only a dual controller while the WekaIO system used 16 cluster nodes.
There should be a tight correlation between top concurrent builds and the top MB/sec metric. Across the vendor and SPEC SFS reference submissions, the ratio averages 7.3 MB/sec/build but ranges from 5.7 to 7.9 MB/sec/build. It’s unclear why there’s so much variation in this number. For instance, the latest SPEC SFS reference solution averaged 6.5 MB/sec/build, the latest WekaIO submission did 7.9 MB/sec/build, and the E8 Storage D4-Spectrum Scale did 7.4 MB/sec/build.
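The ratio itself is simple arithmetic: peak throughput divided by concurrent build count. The sketch below recomputes it from the rounded chart figures above; note that the precise values in the actual submissions differ somewhat from these rounded figures, which is why the computed ratios do not exactly match those quoted in the text.

```python
# Recompute MB/sec/build from the rounded figures cited above.
# These inputs are the rounded chart values, not the precise submission
# numbers, so the resulting ratios are approximations only.
results = {
    "WekaIO 3.1 / Supermicro": (1200, 8900.0),        # builds, MB/sec
    "E8 Storage D4 / Spectrum Scale": (600, 4800.0),  # builds, MB/sec
}

for name, (builds, mb_per_sec) in results.items():
    ratio = mb_per_sec / builds
    print(f"{name}: {ratio:.1f} MB/sec/build")
```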
Our best guess is that the SPEC sfs2014 SWBUILD workload randomizes the files to be built for each run, and that the distribution of those files’ sizes is probably based on some sample builds that SPEC monitored. However, in order to better compare vendor submissions, we feel these file sizes need to be tightly controlled, and that random samples of these files should result in very nearly the same throughput requirements for all submissions, not the large variance we see today.
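A toy simulation illustrates the guess above: if each run draws its build files at random from a skewed size distribution, the total bytes moved (and hence the implied MB/sec/build) can vary noticeably run to run. The distribution and its parameters below are invented for illustration only, not taken from the SPEC SFS2014 specification.

```python
import random

# Hypothetical illustration: sample "source file" sizes from a skewed
# (lognormal) distribution, as real builds have many small files and a
# few large ones. Parameters are invented, not from SPEC SFS2014.
random.seed(1)

def one_run(n_files=5000):
    # Total KB of files selected for one simulated build run.
    return sum(random.lognormvariate(3.0, 1.5) for _ in range(n_files))

totals = [one_run() for _ in range(10)]
mean = sum(totals) / len(totals)
spread = (max(totals) - min(totals)) / mean
print(f"run-to-run spread in total bytes built: {spread:.1%}")
```

Even with thousands of files per run, the heavy tail of the size distribution leaves a measurable run-to-run spread, which is consistent with (though does not prove) our explanation for the MB/sec/build variance.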
We are pleased to see the new submissions, especially new storage like the E8 Storage D4 with (IBM) Spectrum Scale; even a new WekaIO submission, this time with dedicated servers rather than AWS compute, is great to see.
We are a bit concerned about the lack of a tighter correlation (lower variance) between the SWBUILD top MB/sec and concurrent builds metrics. We would suggest that the SPEC SFS committee investigate this and determine why it is happening. If they want to discuss this further, please feel free to give us a call (our contact information is below).
There were no new VDA vendor submissions (just a new SPEC SFS reference submission). Further, there are only SPEC SFS Subcommittee (reference) solutions for the DATABASE, EDA and VDI workloads, which means no independent vendors have submitted results for these workloads.
We are still trying to determine the best way to report SPEC sfs2014 results. At some point, when there are enough submissions, we plan to show top-ten charts like those we use for other performance reports. In the meantime, we may experiment with a few variants of the above charts. If you have ideas on other metrics of interest to report, please do let us know.
Furthermore, suggestions on how to improve any of our performance analyses are always welcomed.
[Also, we offer more file storage performance information, plus our NFS and CIFS/SMB ChampionsCharts™, in our recently updated (April 2019) NAS Storage Buying Guide, available for purchase on our website.]
[This performance dispatch was originally sent out to our newsletter subscribers in March of 2018. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results, please consider signing up for our newsletter.]
Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.