SCI’s latest SPEC SFS2014 performance report as of September ’16


This Storage Intelligence (StorInt™) dispatch covers only SPECsfs2014 benchmark[1] results; SPECsfs2008 has been retired. There have been three new non-SPEC-reference submissions: two for Oracle ZFS ZS3-2 clustered storage systems, one for the Software Build (SWBUILD) workload and one for the Video Data Acquisition (VDA) workload, and one for IBM Spectrum Scale 4.2.1 with DeepFlash 150, for the VDA workload only.

SPEC SFS2014_swbuild (SWBUILD) results

The SWBUILD workload simulates software release builds; essentially, one build is a Unix “make” process run against tens of thousands of source files, which makes this a metadata-intensive workload. Figure 1 shows the number of concurrent builds for the SWBUILD workload.

Figure 1 SPEC SFS2014_swbuild results, number of builds

In Figure 1 we can see the number of concurrent builds metric for the SPEC SFS2014_swbuild workload. Both the IBM Spectrum Scale 4.2 with Elastic Storage Server (ESS) GL6 and the Oracle ZFS ZS3-2 used 4 controllers/2 cluster nodes with disk shelves. However, the ZS3-2 also used SSDs for the ZFS Intent Log (ZIL) in its configuration; no flash was used in the IBM ESS or the SPEC SFS reference solution. On the other hand, the SPEC SFS solution used 2GiB of NVRAM and the IBM ESS used 4GiB, while the Oracle ZS3-2 had no NVRAM.

Also, the SPEC SFS reference solution used 96 600GB (10Krpm) SAS drives, the IBM ESS used 348 2TB NL (7.2Krpm) SAS drives and the Oracle ZS3-2 used 136 300GB (10Krpm) SAS drives for their file system stable disk storage. Consequently, the total usable stable storage capacities were significantly different as well: 26TiB for the SPEC SFS, 458.5TiB for the IBM ESS and 19.0TiB for the Oracle ZS3-2. For some unknown reason the Oracle ZS3-2 used 240 file systems with a total file system used capacity of 17.4TiB, whereas the SPEC SFS and the IBM ESS each used only 1 file system, with total file system used capacities of 26TiB and 32TiB respectively. Other benchmarks (see Microsoft ESRP results) report a % of used capacity to total capacity; here that would be 100%, 7% and 93.8% for the SPEC SFS, IBM ESS and Oracle ZS3-2, respectively. Why the Oracle system used 240 file systems is an open question, but the fact that it equals the build count is saying something.
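For clarity, the sketch below (in Python, using the IBM ESS figures from above) shows how we compute that used-capacity percentage; it is simply file system used capacity divided by total usable stable storage.

```python
# Used-capacity percentage = file system used capacity / total usable stable storage.
# Figures below are the IBM ESS numbers reported above (in TiB).
used_tib = 32.0       # file system used capacity
usable_tib = 458.5    # total usable stable storage

pct_used = 100.0 * used_tib / usable_tib
print(f"IBM ESS used capacity: {pct_used:.0f}% of usable stable storage")  # ~7%
```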

In Figure 2 we show the Response Time results for the SWBUILD workload.

Figure 2 SPEC SFS2014_swbuild ORT results

Lower is better in Overall Response Time (ORT). Figure 2 shows that the ORT for the IBM ESS submission was ~26% higher than the SPEC reference system's, while the ORT for the Oracle ZS3-2 was ~78% higher. It's unclear why the smallest system had the best ORT, but ORT is a measure of average response time across the entire benchmark, not just at the peak or lowest workload. At ORT response time levels, the SPEC SFS solution managed ~10 concurrent builds, the IBM ESS ~96 builds and the Oracle ZS3-2 ~182 builds.

We also show our minimum average response time (Min RT) in Figure 2, which represents the response time at 10% of each system's peak workload (~3 builds for the SPEC reference system, 16 builds for the IBM ESS and 24 for the Oracle ZS3-2). Here we see that the IBM Spectrum Scale provides ~6X better response time at 10% load than either of the other systems.
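For readers unfamiliar with these two metrics, the sketch below illustrates how they relate, using a made-up load/response-time curve (not actual submission data); note that our simplified ORT here is an unweighted average, which may differ from SPEC's published calculation.

```python
# Hypothetical load/response-time curve: (concurrent builds, msec response time).
curve = [(16, 0.8), (32, 0.9), (48, 1.1), (80, 1.4), (120, 2.1), (160, 3.5)]

# ORT: average response time across all requested load points (a simplification;
# SPEC's published ORT calculation may weight load points differently).
ort = sum(rt for _, rt in curve) / len(curve)

# Our Min RT: response time at the load point closest to 10% of the peak load.
peak_load = max(load for load, _ in curve)
min_rt = min(curve, key=lambda point: abs(point[0] - 0.10 * peak_load))[1]

print(f"ORT ~{ort:.2f} msec, Min RT (at ~10% of peak) ~{min_rt:.2f} msec")
```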

In Figure 3 we end our SWBUILD discussion by showing Build MB/Sec.

Figure 3 SPEC SFS2014_swbuild Build MB/Sec

Figure 3 shows that there's not as much difference in SWBUILD throughput (~8%) when we compare the MB/Sec at peak builds generated by the newer systems. It's unclear why the number of concurrent builds would be ~50% better for the Oracle storage than for the IBM ESS (see Figure 1 above) while the build MB/Sec would be so close to one another. According to the SPEC SFS2014 documentation there is 5GiB of data per build, so throughput should scale with the number of builds, but we are not seeing that above.
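To make that mismatch concrete, the sketch below compares the build-count ratio with the throughput ratio, using the peak build counts implied by the 10%-of-peak figures above (160 for the IBM ESS, 240 for the Oracle ZS3-2) and the ~8% throughput difference noted in Figure 3; these are our back-of-the-envelope numbers, not figures from the submissions themselves.

```python
# If each build operates on a fixed 5GiB of data, throughput should roughly
# track the concurrent build count; the reported results suggest otherwise.
ess_builds, zs3_builds = 160, 240        # peak builds implied by the 10%-of-peak figures above
build_ratio = zs3_builds / ess_builds    # 1.5x, i.e. ~50% more builds

throughput_ratio = 1.08                  # the ~8% MB/Sec difference noted above

print(f"Build ratio ~{build_ratio:.2f}x vs. throughput ratio ~{throughput_ratio:.2f}x")
# If throughput scaled with build count, these two ratios should be about equal.
```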

SPEC SFS2014_vda (VDA) results

SPEC SFS2014 VDA simulates video surveillance storage activity; one stream represents the data coming from one video camera. This workload is predominantly a write workload with modest metadata activity. Figure 4 shows peak VDA concurrent stream counts.

Figure 4 SPEC SFS2014_vda Streams

Figure 4 shows the peak number of concurrent video camera streams supported by each system. The IBM Spectrum Scale wins in this category whether using disk or DeepFlash 150 storage, with 1600 and 1700 streams, respectively.

While the SPEC SFS and IBM ESS physical configurations were much the same as those used for the SWBUILD run above, the Oracle ZS3-2 had 288 300GB (10Krpm) drives and no SSD ZIL storage in this run. The IBM DeepFlash storage used 2 DeepFlash 150 systems with a total of 64 8TB flash modules.

For VDA, the IBM ESS file system used capacity increased to 128TiB, the Oracle ZS3-2 file system used capacity increased to 33.4TiB and the IBM DeepFlash system had a 245TiB file system used capacity. The SPEC SFS, IBM ESS and IBM DeepFlash systems each used 1 file system whereas, once again, the Oracle ZS3-2 used 240 file systems for this benchmark.

In Figure 5 we show the VDA ORT results.

Figure 5 SPEC SFS2014_vda ORT

Similar to the SWBUILD discussion above, in Figure 5 we show both the ORT and the minimum RT for the VDA workload. This time both the minimum and overall response times improve gradually across the systems under the VDA workload, with the IBM DeepFlash 150 storage coming in with a 4.1msec ORT and a 1.6msec Min RT. But being all flash, we would expect it to provide the best response times.

In Figure 6 we show our VDA MB/Sec per drive metric.

Figure 6 SPEC SFS2014_vda MB/Sec per drive results

Figure 6 shows that the IBM Spectrum Scale with DeepFlash 150 did 122.2 MB/Sec per flash module while the Oracle ZS3-2, IBM Spectrum Scale with ESS and SPEC SFS did 12.8, 19.2 and 0.7 MB/Sec per drive, respectively. Even though the stream counts weren't that different for the two IBM Spectrum Scale solutions, the flash-based DeepFlash storage managed over 6X more MB/Sec per device than the disk-based ESS did.
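For reference, the per-drive (or per-module) figure is simply aggregate throughput divided by device count; the sketch below works backwards from the per-device numbers and device counts reported above to show the implied aggregates and the flash-versus-disk gap.

```python
# MB/Sec per device = aggregate throughput / number of drives (or flash modules).
# Working backwards from the per-device figures and device counts reported above:
deepflash_aggregate = 122.2 * 64   # ~7,800 MB/Sec implied across 64 flash modules
ess_aggregate = 19.2 * 348         # ~6,700 MB/Sec implied across 348 disks

print(f"Implied aggregates: DeepFlash ~{deepflash_aggregate:,.0f} MB/Sec, "
      f"ESS ~{ess_aggregate:,.0f} MB/Sec")
print(f"Flash vs. disk per-device advantage: ~{122.2 / 19.2:.1f}x")  # ~6.4x
```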

Significance

More submissions for SPEC SFS2014 are good news. We are still experimenting to determine which metrics/charts tell the best story for these new benchmarks, so forgive us for all the charts.

Both the IBM Spectrum Scale with ESS disk and with DeepFlash 150 storage seem to be the best systems for VDA, so far. The Oracle ZS3-2 wins at SWBUILD for the moment. It's unclear why the SWBUILD MB/Sec don't show a difference corresponding to the difference in concurrent build counts, but we will continue to investigate. For VDA, streams and MB/Sec (not shown) track much more closely together.

On the other hand, for VDA, flash seems to be able to sustain more bandwidth than disk and definitely provides better response times than disk. This didn't seem to be as much of an advantage for SWBUILD, but Oracle's ZS3-2 only used flash (SSDs) for its ZIL.

Any suggestions on how to improve any of our performance analyses are welcome. Additionally, if you are interested in more file performance details (Top 20 SPECsfs2008 results) and our NFSv3 and CIFS/SMB (SPECsfs2008) ChampionsCharts™, please consider purchasing our recently updated (September 2016) NAS Buying Guide, available on SCI's website (see QR code below).


[This performance dispatch was originally sent out to our newsletter subscribers in September of 2016.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]  

 

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.

 

[1] All SPEC SFS2014 information is available at https://www.spec.org/sfs2014/  as of 29Sep2016