Latest SPEC sfs2014 performance results as of September ’18

This Storage Intelligence (StorInt™) dispatch covers SPEC sfs2014 benchmark[1] results. There have been only two new SPEC sfs2014 submissions for the SWBUILD workload this past quarter: a new all-NVMe-flash E8 Storage X24 behind IBM Spectrum Scale 5.0.1.1 and an all-disk Huawei OceanStor 5500 v5. There were no new DATABASE, EDA, VDA or VDI submissions. As we now have more than 10 submissions for SWBUILD, we will show only our top 10 rankings in our SWBUILD charts.

SPEC SFS2014_swbuild (software build) results

The SWBUILD workload simulates software release builds. Each build is a Unix “make” process run against tens of thousands of source files, which makes SWBUILD a metadata-intensive workload. Figure 1 shows the SWBUILD concurrent build counts.
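
To see why a make-style build skews so heavily toward metadata, consider the timestamp check at the heart of make: every prerequisite gets stat()ed before anything is rebuilt. Below is a minimal Python sketch of that check, purely for illustration; it is not SPEC’s actual workload generator.

    import os

    def needs_rebuild(target, sources):
        """Rebuild if the target is missing or older than any source."""
        if not os.path.exists(target):           # metadata op
            return True
        target_mtime = os.stat(target).st_mtime  # metadata op
        # One stat() per source file: with tens of thousands of sources,
        # that is tens of thousands of metadata ops before a single byte
        # of file data moves.
        return any(os.stat(s).st_mtime > target_mtime for s in sources)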

Figure 1 SPEC SFS2014_swbuild top 10 number of concurrent software builds

In Figure 1, the new E8 Storage X24-IBM Spectrum Scale submission tied for 1st place with the WekaIO Supermicro BigTwin servers solution, at 1200 concurrent SWBUILDs. The E8-Spectrum Scale solution had 15 (Scale) client servers talking (E8) NVMeoF to a single E8 Storage X24 system with 24 900GB NVMe SSDs. They also used a 32-port 100GbE Mellanox networking switch to connect storage to client nodes.

The new Huawei OceanStor 5500 v5 came in at #9 with 200 concurrent software builds. In Figure 1, from #8 on, all systems are disk-only storage. As a result, the Huawei came in second place among disk-only storage on the SWBUILD workload.

You may recall that both the E8 Storage with IBM Spectrum Scale and the WekaIO with Supermicro BigTwin servers used all-NVMe SSD storage, which may be why they did so well. The IBM Spectrum Scale-E8 Storage solution also used NVMeoF to support client-storage server IO access.

Next, we show the minimum response time (Min RT) and overall response time (ORT) for SWBUILD submissions in Figure 2. We sort this ranking by ORT, smallest to largest.

Figure 2 SPEC sfs2014_swbuild top 10 minimum (Min RT) and overall response times (ORT)

We changed the way we sort our SWBUILD RT chart to use ORT rather than Min RT. Coming from block storage, we always felt minimum RT was a good indicator of caching performance, but in file systems the workload appears more complex than raw block reads and writes.

We now believe ORT is a better metric for measuring file storage responsiveness. Recall that ORT is an average RT across all levels of concurrent builds, ranging from the minimum to the maximum number available for a solution. One caveat: ORT can be misleading, as it depends on how hard the storage is driven (see discussion below).
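
For the record, here is how we think of the ORT computation: a simple average of the per-load-point response times, from a solution’s minimum concurrent-build count up to its maximum. A minimal sketch follows, with made-up response times that are not taken from any actual SPEC sfs2014 submission.

    def overall_response_time(rt_by_load_point):
        """ORT as the mean response time (msec) across all load points."""
        return sum(rt_by_load_point) / len(rt_by_load_point)

    # Hypothetical per-load-point response times (msec), from a solution's
    # minimum concurrent-build level up to its maximum.
    rts = [0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.65, 0.75, 0.85, 1.00]
    print(f"ORT = {overall_response_time(rts):.2f} msec")  # ORT = 0.58 msec

Note how the caveat above falls out of the math: a solution that stops at a low maximum build count only averages in its fastest, lightly loaded points, while one driven to high concurrency must absorb its slowest points as well.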

In Figure 2, the E8 Storage X24-IBM Spectrum Scale came in as the new #1 with an ORT of 0.57 msec. At #2 we have the new Huawei OceanStor 5500 v5 with an ORT of 0.58 msec. The new OceanStor only performed a maximum of 200 concurrent builds, which helped it achieve its 0.58 msec. ORT. In contrast, the new E8-Spectrum Scale managed 1200 concurrent builds and still came in with a slightly better 0.57 msec.

From a Min RT perspective, the new E8-Spectrum Scale would have come in at #2 behind the IBM Spectrum Scale-Cisco UCS-IBM FlashSystem 900, which we list at 0.049 msec. We must admit this Min RT is an estimate, as SPEC sfs2014 lists their minimum RT as 0.0 msec. It’s too bad we can’t somehow list both rankings in the same chart; we may have to break these out into two plots.

As discussed in previous SPEC sfs2014 reports, SWBUILD is metadata intensive, and as a result, metadata access RT is an important metric to track. So, we will continue to track response time metrics as we see fit.

Finally, we show the SWBUILD Build Ops/Sec per flash module (SSD), as computed by SCI, in Figure 3.

Figure 3 SPEC sfs2014 SWBUILD Build Ops/Sec/SSD

First, there are only 7 majority-flash submissions in SWBUILD. In Figure 3, we can see how efficiently an AFA storage system performs build ops on an individual SAS or NVMe SSD basis. The top SWBUILD submission in flash efficiency is the E8 Storage X24 with IBM Spectrum Scale 5.0.1.1, with ~25K build ops/sec/flash drive. Recall that build ops are different from concurrent builds, but there’s a direct correlation between the two (~500 build ops to 1 concurrent build). The top 3 in Build Ops/Sec/Flash drive all used NVMe SSD storage.
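
That ~25K figure is consistent with a back-of-the-envelope check using the ~500 build-ops-per-concurrent-build ratio above. A minimal sketch of the arithmetic follows; SCI’s actual chart is computed from each submission’s reported build ops, so treat this as an approximation.

    def build_ops_per_drive(concurrent_builds, ops_per_build, drives):
        """Approximate Build Ops/Sec/Drive from a concurrent build count."""
        return concurrent_builds * ops_per_build / drives

    # E8 Storage X24 + IBM Spectrum Scale: 1200 concurrent builds on
    # 24 NVMe SSDs, at ~500 build ops per concurrent build.
    print(build_ops_per_drive(1200, 500, 24))  # 25000.0, i.e. ~25K ops/sec/drive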

Significance

We are pleased to see new SPEC sfs2014 submissions, especially newer versions of previous solutions. And we continue to be impressed by some of the performance metrics coming off all-disk systems, especially in this all-flash age. The all-disk Huawei OceanStor 5500 v5’s ORT of 0.58 msec. was pretty impressive, even though it only managed 200 concurrent builds.

We are still trying to determine the best way to report SPEC sfs2014 results and may experiment with a few variants of the above charts. If you have ideas on other metrics of interest to report, please do let us know.

Furthermore, suggestions on how to improve any of our performance analyses are always welcome.

[Also, we offer more file storage performance information, plus our NFS and CIFS/SMB ChampionsCharts™, in our recently updated (April 2019) NAS Storage Buying Guide, available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in September of 2018. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter after they are sent to our subscribers, so if you are interested in current results, please consider signing up for our newsletter.]

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.


[1] All SPEC SFS2014 information is available at https://www.spec.org/sfs2014/ as of 24 September 2018.
