SCI’s latest SPEC SFS2014 performance report as of June ’16

This Storage Intelligence (StorInt™) dispatch only covers SPECsfs2014 benchmark[1] results. SPECsfs2008 has been retired. There have been two non-SPEC reference submissions, both for IBM Spectrum Scale 4.2 with Elastic Storage Server (ESS) GL6, one for Video Data Acquisition (VDA) and the other for Software Build (SWBUILD) workloads.

SPEC SFS2014_swbuild (SWBUILD) results

Recall that SWBUILD simulates software release builds; essentially, one build is a Unix “make” process run against tens of thousands of source files, making it a metadata-intensive workload. Figure 1 shows the number of concurrent builds achieved under the SWBUILD workload.
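To make the metadata intensity concrete, the sketch below (ours, not SPEC benchmark code) builds a throwaway source tree and counts the stat calls a make-style up-to-date check would issue; the 10,000-file count is a hypothetical figure chosen for illustration.

```python
# Minimal sketch (not the SPEC benchmark code): why a "make"-style build is
# metadata intensive. A build tool must stat every source and target file to
# decide what is out of date, so most filesystem operations are small
# metadata requests (lookup/stat) rather than large reads or writes.
import os
import tempfile

NUM_SOURCES = 10_000  # hypothetical source-file count for illustration

with tempfile.TemporaryDirectory() as tree:
    # Lay out a fake source tree of tiny files.
    for i in range(NUM_SOURCES):
        with open(os.path.join(tree, f"src_{i}.c"), "w") as f:
            f.write("int main(void) { return 0; }\n")

    # An up-to-date check stats each file in the tree.
    stat_calls = 0
    for name in os.listdir(tree):
        os.stat(os.path.join(tree, name))
        stat_calls += 1

    print(f"{stat_calls} metadata (stat) operations for one no-op build pass")
```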

Figure 1 SPEC SFS2014_swbuild results, number of builds

In Figure 1 we can see the number of concurrent builds metric for the SPEC SFS2014_swbuild workload. The IBM Spectrum Scale submission used 8 client nodes with 1 ESS-GL6 storage node and 6 DCS3700E storage expansion shelves; the ESS-GL6 data storage consisted of 384 2TB NL SAS 7.2Krpm drives, presented as a Spectrum Scale file system. The SPEC SFS reference system was a 4-node storage cluster running Cluster FS providing NFS, using 96 600GB SAS 10Krpm drives. As one can see from Figure 1, Spectrum Scale managed ~6X more concurrent builds than the SPEC SFS Subcommittee’s reference solution.

In Figure 2 we show the Overall Response Time (ORT) results for the SWBUILD workload.

Figure 2 SPEC SFS2014_swbuild ORT results

For ORT, lower is better. Figure 2 shows that the ORT for the Spectrum Scale submission was ~26% higher than that of the SPEC reference system. It is unclear why the smaller system had a better ORT, but ORT is a measure of average response time across the entire benchmark run, not just at the peak or lowest workload. As such, we also show our Min ORT, which represents the average response time at 10% of each system’s peak workload (16 builds for Spectrum Scale and ~3 builds for the SPEC reference system). Here we can clearly see that Spectrum Scale provides ~6X better response time at 10% load. In fact, at its ORT of 1.21 ms, Spectrum Scale was doing ~96 concurrent builds, whereas at the SPEC reference submission’s ORT of 0.96 ms, it was doing only 10-18 concurrent builds.
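For readers who want the mechanics, here is a minimal sketch of how ORT and our Min ORT metric relate; the (builds, response time) load points are invented for illustration and are not the published submission data, and the SPEC-defined ORT calculation may weight load points differently than this simple average.

```python
# Hedged illustration of ORT vs. our "Min ORT" metric. The (builds, latency)
# points below are hypothetical, not the published submission data. ORT here
# averages response time across every load point in the run; Min ORT takes
# the response time at the load point nearest 10% of the peak load.
load_points = [  # (concurrent builds, avg response time in ms) - invented
    (16, 0.20), (32, 0.35), (48, 0.55), (64, 0.80),
    (96, 1.20), (128, 1.90), (160, 3.10),
]

ort = sum(ms for _, ms in load_points) / len(load_points)

peak_builds = max(builds for builds, _ in load_points)
target = 0.10 * peak_builds
min_ort = min(load_points, key=lambda p: abs(p[0] - target))[1]

print(f"ORT ~ {ort:.2f} ms, Min ORT (at ~10% of peak load) = {min_ort:.2f} ms")
```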

In Figure 3 we end our SWBUILD discussion by showing Build Ops/Sec per drive.

Figure 3 SPEC SFS2014_swbuild Build Ops/Sec per drive

Figure 3 shows that there’s not as much difference in performance (~1.5X) when we normalize Build Ops/Sec on a per-drive basis. Yes, the SPEC reference submission used faster drives (10Krpm vs. 7.2Krpm), but this was counterbalanced by Spectrum Scale having more (client) cluster nodes and apparently faster (metadata) IO paths.
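The per-drive normalization in Figure 3 is just total Build Ops/Sec divided by drive count. A minimal sketch follows; the Build Ops/Sec totals are placeholders, with only the drive counts (384 vs. 96) taken from the configurations above.

```python
# Sketch of the per-drive normalization used for Figure 3. The total
# build ops/sec figures are placeholders, not the published results;
# only the drive counts (384 vs. 96) come from the configurations above.
systems = {
    "Spectrum Scale ESS GL6": {"build_ops_sec": 500_000.0, "drives": 384},  # ops/sec is hypothetical
    "SPEC reference system":  {"build_ops_sec":  80_000.0, "drives":  96},  # ops/sec is hypothetical
}

for name, s in systems.items():
    per_drive = s["build_ops_sec"] / s["drives"]
    print(f"{name}: {per_drive:,.1f} build ops/sec per drive")
```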

SPEC SFS2014_vda (VDA) results

SPEC SFS2014 VDA simulates video surveillance storage activity; one stream represents the data coming from one video camera. Figure 4 shows peak VDA concurrent stream counts.

Figure 4 SPEC SFS2014_vda Streams

Figure 4 shows the peak number of concurrent video camera streams supported by each system. The Spectrum Scale system sustained ~107X more streams than the SPEC reference submission.

In Figure 5 we show the VDA ORT results.

Figure 5 SPEC SFS2014_vda ORT

Similar to the SWBUILD discussion above, in Figure 5 we show both the ORT and the Min ORT. This time Spectrum Scale shows an improvement on both metrics under the VDA workload, providing a ~32% ORT improvement and a ~3X Min ORT improvement over the SPEC reference submission.

In Figure 6 we show our VDA MB/Sec per drive metric.

Figure 6 SPEC SFS2014_vda MB/Sec results

Figure 6 shows that Spectrum Scale delivered ~27X more MB/Sec per drive than the SPEC reference submission. In contrast to the SWBUILD discussion above, here we can clearly see that faster drives don’t matter as much for data-throughput-intensive workloads. And since the Spectrum Scale system only had 4X more drives than the SPEC reference system, drive count alone doesn’t seem to be the critical factor either (see the quick check below).
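As a back-of-envelope consistency check using only the ratios quoted above, ~27X more MB/Sec per drive times ~4X more drives implies roughly a ~108X total throughput advantage, which lines up well with the ~107X stream-count result.

```python
# Back-of-envelope consistency check using only the ratios quoted above:
# ~27X more MB/sec per drive times ~4X more drives implies roughly ~108X
# more total throughput, in line with the ~107X stream-count advantage.
per_drive_ratio = 27          # Spectrum Scale vs. SPEC reference, MB/sec per drive
drive_count_ratio = 384 / 96  # ~4X more drives

total_throughput_ratio = per_drive_ratio * drive_count_ratio
print(f"Implied total throughput advantage: ~{total_throughput_ratio:.0f}X")
```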

Significance

Thankfully, we are starting to see some non-SPEC reference submissions for SPEC SFS2014. Sorry for all the charts, but it’s not yet clear which supply the most insight into the various workloads. Over time we should be able to reduce the number of charts as we learn more.

Both the Spectrum Scale and the SPEC reference submissions are cluster file systems. Drive speed seems to matter more for the SWBUILD than the VDA workload. There’s still a lot more to learn about VDA workloads, and there may yet be some more surprises on SWBUILD in future reports.

Any suggestions on how to improve any of our performance analyses are welcomed. Additionally, if you are interested in more file performance details (Top 20 SPECsfs2008 results) and our NFSv3 and CIFS/SMB (SPECsfs2008) ChampionsCharts™, please consider purchasing our recently updated (June 2016) NAS Buying Guide, available on SCI’s website.

[Also we offer more file storage performance information plus our NFS and CIFS/SMB ChampionsCharts™ charts in our recently updated (March 2017) NAS Storage Buying Guide, available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in June of 2016. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.

[1] All SPEC SFS2014 information is available at https://www.spec.org/sfs2014/ as of 28Jun2016