SCI’s latest SPEC sfs2014 performance report as of March’17


This Storage Intelligence (StorInt™) dispatch covers only SPECsfs2014 benchmark[1] results. There have been two new SPECsfs2014 submissions since our last report, both IBM Spectrum Scale with Cisco UCS servers and IBM FlashSystem 900 storage, one for the SWBUILD workload and the other for the VDA workload.

SPEC SFS2014_swbuild (SWBUILD) results

The SWBUILD workload simulates software release builds; each build is essentially a Unix “make” process run against tens of thousands of source files, which makes this a metadata-intensive workload. Figure 1 shows the SWBUILD ORT (overall response time) and minimum response time.

Figure 1 SPEC SFS2014_swbuild ORT and minimum RT results

Figure 1 shows the minimum and overall response times for the current four submissions, with the latest IBM Spectrum Scale submission all the way to the right. This is the first time we can recall seeing a minimum response time of 0.0msec on any (block, file, email, HPC, etc.) storage system benchmark submission. Yes, it’s obviously a rounded metric, but it says the average response time for that workload (@10% of maximum load) was under 0.05msec. That is impressive, especially since the first three recorded RTs were all listed as 0.0msec while their workloads ranged from ~12K to ~36K SWBUILD ops/sec.
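To be clear about what a reported 0.0msec implies: a figure rounded to one decimal place shows 0.0 only when the underlying average was below 0.05msec. A quick illustration (the sample values below are ours, not from the submission):

```python
# A response time reported to one decimal place displays as 0.0
# only when the true average is under the 0.05msec rounding threshold.
def reported_rt(true_rt_msec: float) -> float:
    """Round a true response time to one decimal, as reported."""
    return round(true_rt_msec, 1)

print(reported_rt(0.049))  # displays as 0.0msec
print(reported_rt(0.051))  # displays as 0.1msec
```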

The new IBM solution’s ORT of 1.32msec came in at #2. Considering the IBM Spectrum Scale solution was doing about 90K SWBUILD ops/sec at the time, that seems very impressive. The SPEC SFS reference solution, at #1 with a 0.96msec ORT, was only doing about 9K SWBUILD ops/sec, roughly 1/10th the IBM Spectrum Scale solution’s rate.

In Figure 2 we show the SWBUILD ops/sec.

Figure 2 SPEC SFS2014_swbuild minimum and total ops/sec results.

In Figure 2, the new IBM Spectrum Scale solution (again at the right) achieved a maximum of ~114.7K SWBUILD ops/sec, coming in second behind the Oracle ZFS ZS3-2 AFA, which achieved ~116.0K SWBUILD ops/sec.

In the SWBUILD workload, source file metadata is examined to see if a file needs to be compiled, and then the selected source code files are compiled. Although we don’t show it, the new IBM solution achieved a maximum of 240 builds. At 114.7K SWBUILD ops/sec, that works out to ~478 SWBUILD ops/sec per build.
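The per-build figure above is simple arithmetic; a quick back-of-the-envelope check (values taken from the submission discussed, variable names ours):

```python
# Back-of-the-envelope check of the SWBUILD ops-per-build figure
# from the IBM Spectrum Scale submission discussed above.
max_ops_per_sec = 114_700   # peak SWBUILD ops/sec achieved
concurrent_builds = 240     # maximum builds achieved

ops_per_build = max_ops_per_sec / concurrent_builds
print(f"~{ops_per_build:.0f} SWBUILD ops/sec per build")  # ~478
```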

We are still working on charts for the SPEC sfs2014 metrics; we would prefer to see the solutions’ RTs represented somewhere on this chart as well.

SPEC SFS2014_vda results

Next, we turn to the SPECsfs2014 VDA ORT and minimum RT in Figure 3. Recall that, whereas SWBUILD is essentially a mixed read-write, metadata-intensive workload, SPEC sfs2014 VDA simulates video data acquisition and is essentially a sequential, write-intensive workload.

Figure 3 SPEC sfs2014_VDA ORT and minimum RT results

The ORT for the new IBM Spectrum Scale solution is our new #1 at 2.92msec, and it had a minimum RT of 0.8msec. The ORT is calculated as the median response time across all 10 workload iterations, which here would have been at about 6.7GB/sec of VDA streams.
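A minimal sketch of that ORT calculation as described above, assuming the per-load-point average response times are available (the values below are made up for illustration, not from the submission):

```python
from statistics import median

# Hypothetical per-load-point average response times (msec) for the
# 10 requested load levels of a SPEC SFS2014 run; illustrative only.
rt_per_load_point = [0.8, 1.1, 1.4, 1.8, 2.2, 2.6, 3.1, 3.6, 4.2, 5.0]

# ORT as the median RT across all workload iterations, per the text.
ort = median(rt_per_load_point)
print(f"ORT = {ort:.2f}msec")
```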

In Figure 4 we show the VDA stream MB/sec.

Figure 4 SPEC sfs2014_VDA stream MB/sec results

In Figure 4, the new IBM Spectrum Scale solution achieved a maximum of ~7.9GB/sec of stream throughput. For an AFA executing a write-intensive workload, performing at ~7.9GB/sec tells us they are doing something different from the standard flash storage system. The fact that the IBM Spectrum Scale Deep Flash 150 and Elastic Storage solutions almost matched this level of performance says the magic is probably at the Spectrum Scale level.


It’s good to see some new SPEC sfs2014 submissions. It would be great if there were even more for the two workloads discussed above, and a few for the VDI and DATABASE workloads (where we have only one submission each, the SPEC SFS reference solution).

We are still trying to figure out the best way to report SPEC sfs2014 results. At some point we will settle on a more top-ten-like format, as we use for our other performance reports, but in the meantime we may experiment with a few variants of the above charts. If you have ideas on other metrics of interest to report, please do let us know.

Any suggestions on how to improve any of our performance analyses are always welcome. Additionally, if you are interested in more file performance details (Top 20 SPECsfs2008 results) and our NFSv3 and CIFS/SMB (SPECsfs) ChampionsCharts™, please consider purchasing our recently updated (March 2016) NAS Buying Guide, available on SCI’s website (see QR code below left).


[This performance dispatch was originally sent out to our newsletter subscribers in March of 2017. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter after they are sent to our subscribers, so if you are interested in current results, please consider signing up for our newsletter.]

[1] All SPEC SFS2014 information is available at  as of 23Mar2017