SCI’s latest SPEC sfs2014 performance report as of September’17



This Storage Intelligence (StorInt™) dispatch only covers SPEC sfs2014 benchmark[1] results. There have been two new SPEC sfs2014 submissions since our last report, one in SWBUILD with the WekaIO solution and the other in VDA with Cisco UCS S3260 & Spectrum Scale 4.2.2. As we still don’t have more than 10 solutions for each of these workloads, we continue to plot all submissions in our charts below.

SPEC SFS2014_swbuild (software build) results

The SWBUILD workload simulates software release builds; each build is essentially a Unix "make" process run against tens of thousands of source files, which makes this a metadata-intensive workload. Figure 1 shows the SWBUILD concurrent build counts.

Figure 1 SPEC SFS2014_swbuild number of concurrent software builds

Figure 1 shows the number of concurrent software builds completed at maximum load on a system. The new number one is the WekaIO solution. Both the IBM Spectrum Scale with FlashSystem 900 and the WekaIO submissions were AFAs (all-flash arrays); all the rest of the above were either hybrid or disk-only systems.

The WekaIO submission was run entirely in the AWS public cloud and used 60 r3.8xlarge EC2 instances with enhanced (10GbE) networking. Each EC2 instance had 32 vCPUs, 244GiB of DRAM, one 10GbE port and two 320GiB SSDs.

The deployment model was an HCI (hyperconverged infrastructure) solution, where compute (file system clients/benchmark drivers) and (software-defined) storage ran on the same servers. WekaIO provided 35.2 TiB of usable capacity in a 15+2 RAID configuration across the 60 servers.
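Some back-of-the-envelope arithmetic on that configuration, as a sketch. We assume "15+2" means stripes of 15 data plus 2 parity members (our reading; the submission does not spell this out):

```python
# Rough numbers for the WekaIO AWS cluster described above.
# Assumption: "15+2" = 15 data + 2 parity stripe members.

INSTANCES = 60
SSDS_PER_INSTANCE = 2
SSD_GIB = 320

raw_gib = INSTANCES * SSDS_PER_INSTANCE * SSD_GIB  # total raw SSD capacity
raw_tib = raw_gib / 1024

data, parity = 15, 2
efficiency = data / (data + parity)  # fraction of stripe holding data

print(f"raw capacity: {raw_tib:.1f} TiB")            # 37.5 TiB
print(f"15+2 stripe efficiency: {efficiency:.1%}")   # 88.2%
print(f"implied protected capacity: {raw_tib * efficiency:.1f} TiB")
```

Note that raw capacity times the 15/17 stripe efficiency comes to about 33 TiB, a bit below the 35.2 TiB reported, so the submission's capacity accounting presumably differs somewhat from this naive model.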

This is the first time I have seen a complete benchmark submission run entirely in the public cloud. WekaIO storage is based on MatrixFS, a software-only file system that presents a POSIX file system across a scale-out cluster of servers. By deploying on AWS, WekaIO has shown that MatrixFS can scale out without problems. Although 60 servers is not the largest cluster node count we have seen in benchmarks, it is certainly among the top 5 we can recall.

Next, we show the Minimum and Overall Response Time for SWBUILD submissions in Figure 2.

Figure 2 SPEC sfs2014_swbuild Minimum and Overall Response Times

Figure 2 clearly shows the disadvantages of deploying in AWS. Response times of 0.9 msec minimum and 3.06 msec overall are more indicative of a disk-only file system than an AFA. Even a large EC2 instance is still a shared server environment, and the networking, although 10GbE, is also a shared resource. Shared resources may not impact overall throughput, but they will adversely impact response times.

SPEC SFS2014_vda (video data acquisition) results

In Figure 3 we show the maximum number of concurrent VDA streams.

Figure 3 SPEC SFS2014_vda Streams

In Figure 3, the new Cisco UCS S3260 submission used 10 servers, each with 256GB of DRAM and 14 8TB 7.2K rpm disk drives (140 drives total). Both IBM Spectrum Scale 4.2.1 solutions (with Deep Flash 140 and FlashSystem 900) were AFA, while the SPEC SFS, Oracle ZFS and IBM Spectrum Scale 4.2 Elastic Storage solutions were disk-only. The Cisco UCS S3260s also used 40GbE networking, one port per server. There were some SSDs in the UCS S3260 solution, but they were only used as boot devices.

As you may recall, the VDA workload appears to be a 10:90 R:W workload, and its write streams approximate high-definition video streams (running at ~36Mb/s). The heavy write workload may give disk-only solutions a slight advantage. And the fact that Cisco used 8TB 7,200 rpm disk drives says that, for sequential write workloads, large disks work as well as smaller disks and some SSDs, at least for overall throughput.
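The stream rate above makes it easy to translate a VDA stream count into aggregate write bandwidth. A minimal sketch, assuming each write stream runs at a steady 36Mb/s (the stream count used below is illustrative, not a reported result):

```python
# Aggregate write bandwidth implied by a VDA concurrent stream count,
# assuming ~36 Mb/s per HD write stream as described above.

STREAM_MBPS = 36  # megabits per second per write stream

def aggregate_gbps(streams: int) -> float:
    """Total write bandwidth in Gb/s for a given concurrent stream count."""
    return streams * STREAM_MBPS / 1000

# e.g., a hypothetical 1000 concurrent streams:
print(aggregate_gbps(1000))  # 36.0 Gb/s
```

By this arithmetic, a single 40GbE port per server goes a long way for VDA; sustaining the drives behind it, not the network, becomes the limiting factor.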

In Figure 4, we show the minimum and overall response time metrics for VDA submissions.

Figure 4 SPEC sfs2014_vda Minimum and Overall Response Time Results

In Figure 4, we can clearly see the downside of using disk-only solutions. The Cisco UCS solution running Spectrum Scale has a better minimum response time and ORT than the Elastic Storage Server (another disk-only submission) but much worse than any AFA solution.

However, in a bandwidth intensive workload like VDA, response times don’t matter nearly as much as throughput. So, the fact that the disk only UCS managed to beat all the other AFA or disk only submissions in throughput is probably more critical than its lack of responsiveness.


We are grateful to see a new WekaIO submission with a new file system. Also, using AWS for deployment seemed a smart, but potentially costly, approach.

As such, now that AWS has been used once, it seems like every software-only storage solution should run on a public cloud for its benchmark submission. Whether that cloud is AWS, Azure, or Google shouldn't matter, but probably does. It would be very revealing to see the same WekaIO (or Spectrum Scale, for that matter) submission on Azure and Google. There would no doubt be differences in the compute instances that skew results. And a WekaIO submission using on-premises equipment would also help us all understand the performance of the public cloud infrastructure.

We are still trying to determine the best way to report SPEC sfs2014 results. At some point, when there are enough submissions, we plan to show top ten charts like we use for other performance reports. In the meantime, we may experiment with a few variants of the above charts. If you have ideas on other metrics of interest to report, please do let us know.

Any suggestions on how to improve any of our performance analyses are always welcomed.  Additionally, if you are interested in more file performance details (Top 20 SPEC sfs2008 results) and our NFSv3 and CIFS/SMB (SPEC sfs) ChampionsCharts™, please consider purchasing our recently updated (September 2017) NAS Buying Guide, available on SCI’s website (see QR code below left).

[Also we offer more file storage performance information plus our NFS and CIFS/SMB  ChampionsCharts™ charts in our recently updated (April 2019) NAS Storage Buying Guide available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in September of 2017.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]  

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.

[1] All SPEC SFS2014 information is available at  as of 25Sep2017
