SCI’s latest SPEC sfs2014 performance report as of June’17


This Storage Intelligence (StorInt™) dispatch covers only SPEC sfs2014 benchmark[1] results. There have been no new SPEC sfs2014 submissions since our last report, so below we show some charts we didn't explore last time.

SPEC SFS2014_swbuild (software build) results

As you may recall, the SWBUILD workload simulates software release builds; essentially, each build is a Unix "make" process run against tens of thousands of source files, making this a metadata-intensive workload. Figure 1 shows the SWBUILD concurrent build counts.

Figure 1 SPEC SFS2014_swbuild number of concurrent software builds

Figure 1 shows the number of concurrent software builds completed at maximum system load. The only solution above that used all-flash array (AFA) storage was the IBM Spectrum Scale with FlashSystem 900. The Oracle ZFS solution used disk-SSD hybrid storage, with 136 10Krpm 300GB disk drives and 8 73GB SSDs, the flash being used as a write cache (or write log).

The SWBUILD workload is ~87% metadata operations (70% for STAT operations alone), with the rest being sequential file reads (6%) and writes (7%). We are not exactly sure what the STAT operation does, but our guess is that it follows directory trees and reads file inodes to determine each file's status and whether it has changed (using modification/creation date-timestamps).
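To make the metadata intensity concrete, below is a minimal Python sketch (ours, not part of the benchmark) of the kind of stat-driven scan a make-style build performs; the src_root and target_path names are hypothetical. Note that every source file is touched only via a stat call; no file data is read.

```python
# Hypothetical sketch of a make-style up-to-date check: nearly every
# operation is a metadata (stat) call rather than a file read or write.
import os

def out_of_date_sources(src_root, target_path):
    """Return source files newer than the build target, using only stat calls."""
    target_mtime = os.stat(target_path).st_mtime  # one stat for the target
    stale = []
    for dirpath, _dirnames, filenames in os.walk(src_root):  # follow directory trees
        for name in filenames:
            path = os.path.join(dirpath, name)
            # stat each source file's inode; compare modification timestamps
            if os.stat(path).st_mtime > target_mtime:
                stale.append(path)
    return stale
```

A build tool repeating a scan like this over tens of thousands of source files would generate exactly the STAT-heavy operation mix the SWBUILD workload reports.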

The IBM Spectrum Scale 4.2.1, Cisco UCS with FlashSystem 900 solution had 2 FlashSystem 900s, each with 12 2.9TB flash modules, one of which per FlashSystem was used as a spare.

For the Oracle ZFS storage, it's somewhat surprising that a relatively small amount of flash (8 73GB SSDs, or ~584GB, which is ~2.8% of the system's 20.8TB total capacity) could provide almost equivalent concurrent-build performance to the AFA storage. Yes, there are probably several other optimizations in Oracle ZFS ZS3-2 storage, but with only 20.8TB of total storage capacity it seemed to do very well.

The IBM Spectrum Scale 4.2 with Elastic Storage (disk only) solution had 382 2TB disk devices and still managed only ~34% fewer concurrent build operations than the AFA and hybrid systems.

SPEC SFS2014_vda (video data acquisition) results

In Figure 2 we show the maximum number of concurrent VDA streams.

Figure 2 SPEC SFS2014_vda Streams results

In Figure 2, both the IBM Spectrum Scale 4.2.1 solutions (with Deep Flash 140 and FlashSystem 900) were AFA. Both the Oracle ZFS and IBM Spectrum Scale 4.2 Elastic Storage solutions were disk only.

It's unclear why Oracle decided to deploy a disk-only ZFS ZS3-2 solution for VDA versus the hybrid solution used for SWBUILD, but they didn't do nearly as well here with the disk-only version.

The all-disk IBM Spectrum Scale with Elastic Storage did surprisingly well, achieving ~93% of the maximum concurrent VDA streams of the FlashSystem 900-based Spectrum Scale version.

The VDA workload appears to be a 10:90 R:W workload, with the write streams approximating high-definition video streams (running at ~36Mb/s each). Given the above results, it seems that a properly configured file system on disk-only storage can perform almost equivalently to an all-flash storage solution. This could be entirely due to the heavy write workload.
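For a rough sense of the bandwidth involved, here is a back-of-the-envelope sketch in Python (ours, assuming only the ~36Mb/s-per-stream figure above) converting a concurrent stream count into aggregate sequential-write load:

```python
# Back-of-the-envelope: aggregate write bandwidth for N concurrent VDA
# streams, assuming each stream writes at ~36 Mb/s (i.e., 4.5 MB/s).
MB_PER_S_PER_STREAM = 36 / 8  # 36 Mb/s -> 4.5 MB/s per stream

def aggregate_write_mb_per_s(n_streams: int) -> float:
    """Aggregate sequential-write load, in MB/s, for n concurrent streams."""
    return n_streams * MB_PER_S_PER_STREAM

# For example, 1,000 concurrent streams imply ~4.5 GB/s of sustained,
# large-block sequential writes -- an access pattern spinning disks handle
# comparatively well, which may help explain the strong disk-only results.
print(aggregate_write_mb_per_s(1000))  # 4500.0
```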

Significance

We still haven't seen widespread adoption of the new SPEC sfs2014 benchmark. It's unclear what the reluctance is based on, but our guess is that the reported benchmark numbers aren't nearly as high as the old SPEC sfs2008 metrics.

We are still trying to figure out the best way to report SPEC sfs2014 results. At some point, when there are enough submissions (if ever), we plan to show top-ten charts like those we use for other performance reports. In the meantime, we may experiment with a few variants of the above charts. If you have ideas on other metrics of interest to report, please do let us know.

Any suggestions on how to improve any of our performance analyses are always welcomed.

[Also, we offer more file storage performance information, plus our NFS and CIFS/SMB ChampionsCharts™, in our recently updated (September 2017) NAS Storage Buying Guide available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in June of 2017. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter after they are sent to our subscribers, so if you are interested in current results, please consider signing up for our newsletter.]

[1] All SPEC SFS2014 information is available at https://www.spec.org/sfs2014/ as of 27Jun2017.