SCI’s latest SPEC sfs2014 performance report as of December 2016


This Storage Intelligence (StorInt™) dispatch only covers SPECsfs2014 benchmark[1] results; SPECsfs2008 has been retired. There have been no new SPECsfs2014 submissions since our last report, so we will show and discuss two charts we didn’t cover last time.

SPEC SFS2014_swbuild (SWBUILD) results

The SWBUILD workload simulates software release builds: essentially, each build is a Unix “make” run against tens of thousands of source files, which makes this a metadata-intensive workload. Figure 1 shows the SWBUILD build operations/second.

Figure 1 SPEC SFS2014_swbuild results, min & max build ops/sec

Figure 1 shows the maximum build ops/sec achieved by the three solutions. A SWBUILD operation consists of a mix of file requests: 70% file stat, 7% write file, 6% read file, 5% chmod file, 2% readdir, 2% unlink, 1% mkdir and 1% create (the listed requests account for ~94% of the mix, with the remainder spread across other request types).

In comparing SPECsfs2014 against SPECsfs2008, the max number of SWBUILD ops/sec is probably the closest analog to what was previously called NFS or CIFS/SMB “throughput operations per second”. The challenge in comparing the two metrics is that each of the SPECsfs2014 workloads differs significantly in the operational profile of file requests issued, and none matches SPECsfs2008 exactly. For example, in SPECsfs2008 the top 5 file requests (by percentage) were 26% getattr, 24% lookup, 18% read, 11% access and 10% write. FSSTAT was way down at 1%.

On the other hand, SPECsfs2008 was over 80% metadata operations and SPECsfs2014’s SWBUILD is over 85% metadata operations, different operations but still metadata. So, in that sense, the two are somewhat similar.

In SPECsfs2008, the Oracle ZFS ZS3-2 achieved just over 210.5K NFS throughput operations per second; here, with the SWBUILD workload, it achieved just over 116.0K SWBUILD ops/sec. The SPECsfs2008 submission used 8 600GB SSDs and 136 SAS disks, while the SPECsfs2014 submission used 8 73GB SSDs and 144 SAS disks. Overall, other than the capacity of the SSDs, the two Oracle ZFS ZS3-2 configurations look very similar.

So, we can infer, at least when comparing SPECsfs2014 SWBUILD ops/sec to SPECsfs2008 NFS ops/sec, that:

SPECsfs2014 SWBUILD max ops/sec = SPECsfs2008 NFS throughput ops/sec * 0.55
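That 0.55 factor is just the ratio of the two Oracle ZFS ZS3-2 results quoted above. A minimal sketch of the arithmetic, using the 116.0K SWBUILD and 210.5K SPECsfs2008 figures (Python, for illustration only):

```python
# Derive the SWBUILD-to-SPECsfs2008 conversion factor from the single
# Oracle ZFS ZS3-2 data point discussed above.
sfs2008_nfs_ops = 210_500      # SPECsfs2008 NFS throughput ops/sec
sfs2014_swbuild_ops = 116_000  # SPECsfs2014 SWBUILD ops/sec

factor = sfs2014_swbuild_ops / sfs2008_nfs_ops
print(f"SWBUILD/SPECsfs2008 factor: {factor:.2f}")  # ~0.55
```

Remember this is a one-sample ratio, not a fitted model; the provisos below apply.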

 A couple of provisos are in order:

  • Although the two systems use similar hardware they are not the same.
  • The time difference between the two submissions indicates the microcode/functional code is probably not the same.
  • SPECsfs2014 uses the full OS file stack and SPECsfs2008 uses a special purpose file “NFS” or “CIFS/SMB” stack.
  • There is only one sample. Any proper correlation/linear regression comparing SWBUILD to SPECsfs2008 would require many more submissions of similar hardware for both SPECsfs2008 and SPECsfs2014 SWBUILD.

In Figure 2 we show similar data for the VDA workload.

Figure 2 SPEC SFS2014_VDA total ops/sec results

The SPECsfs2014 VDA workload is significantly different from both the SWBUILD workload above and the SPECsfs2008 workload discussed previously. VDA is divided into VDA1 and VDA2: VDA1 is 100% write, while VDA2 is 84% rand(om) read, 5% (sequential?) read, 3% readdir, 2% access, 2% rmw (read-modify-write?), 2% stat, 1% create and 1% unlink. As near as I can figure, the VDA1 (100% write) workload operates 90% of the time, with the remaining 10% using the VDA2 operational profile (see the sketch below).
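If that 90%/10% reading is right (it is my inference, not something spelled out in the submissions), the blended VDA op mix works out roughly as in the sketch below; the split and the resulting percentages are assumptions derived from that reading:

```python
# Blend the VDA1 and VDA2 profiles under the assumed 90%/10% time split.
# VDA1 is 100% write; VDA2 percentages are those listed above.
vda2 = {"rand_read": 84, "seq_read": 5, "readdir": 3, "access": 2,
        "rmw": 2, "stat": 2, "create": 1, "unlink": 1}

blended = {"write": 0.90 * 100}  # VDA1 contributes writes only
for op, pct in vda2.items():
    blended[op] = 0.10 * pct

# Data IO = writes plus (random/sequential) reads plus read-modify-writes
data_io = (blended["write"] + blended["rand_read"]
           + blended["seq_read"] + blended["rmw"])
print(blended)                            # write: 90.0, rand_read: 8.4, ...
print(f"data IO share: ~{data_io:.1f}%")  # ~99%, i.e., well over 90%
```

Which, if the inference holds, is consistent with the data-IO-heavy characterization in the next paragraph.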

In contrast to the SWBUILD workload, which was over 85% metadata operations, the VDA workload is over 90% (read or write) data IO. Looking again at the Oracle ZFS ZS3-2 hardware as a comparison, the configuration for the VDA submission differed from the SWBUILD configuration (and thus from the SPECsfs2008 configuration as well), using 288 SAS disks only, with no SSDs.

So, it’s probably even more questionable, from a statistical perspective, to compare VDA to NFS (in SPECsfs2008), but if we did:

SPECsfs2014 VDA max ops/sec = SPECsfs2008 NFS throughput ops/sec * 0.04
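This dispatch doesn’t quote Oracle’s raw VDA ops/sec, but working backwards from the 0.04 factor and the 210.5K SPECsfs2008 result gives an order-of-magnitude sanity check (the implied number below is derived from the factor, not a published result):

```python
# Work backwards from the single-sample 0.04 factor to the implied
# Oracle ZFS ZS3-2 VDA result; treat as an order-of-magnitude check only.
sfs2008_nfs_ops = 210_500
implied_vda_ops = sfs2008_nfs_ops * 0.04
print(f"implied VDA ops/sec: {implied_vda_ops:,.0f}")  # ~8,400
```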

All the provisos stated in the SWBUILD discussion above apply here, but even more so. And I would add:

  • Although the two systems used in the comparison above were the same model, their hardware wasn’t that similar.
  • The VDA workload is MUCH MORE DATA IO intensive than the NFS workload ever was so comparing the two is much more like comparing Apples to Coconuts.

Being more DATA IO intensive also tells us any comparison between SWBUILD and VDA ops/sec is probably fruitless as well.

Significance

I think I have offended the whole benchmark community with that last comparison, so we must leave it there. It would be great if a couple of other vendors submitted similar hardware for the SPECsfs2014 workloads as they used for SPECsfs2008, so we could zero in on a better statistical comparison between the two. But for now, one sample is all we have.

Also, we are still looking for a few brave vendors to be the first to submit results for the SPECsfs2014 VDI and DATABASE workloads. However, I would be just as happy with more VDA submissions, as they look different enough from the SPECsfs2008 NFS and CIFS/SMB workloads to warrant closer scrutiny.

Any suggestions on how to improve any of our performance analyses are welcome. Additionally, if you are interested in more file performance details (Top 20 SPECsfs2008 results) and our NFSv3 and CIFS/SMB (SPECsfs2008) ChampionsCharts™, please consider purchasing our recently updated (December 2016) NAS Buying Guide, available on SCI’s website (see QR code below).

This report was sent out to subscribers as part of our free, monthly Storage Intelligence e-newsletter. If you are interested in receiving future storage performance analyses along with recent product announcement summaries, please use the QR code below right to sign up for your own copy.


[This performance dispatch was originally sent out to our newsletter subscribers in December of 2016. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.

[1] All SPEC SFS2014 information is available at https://www.spec.org/sfs2014/  as of 26Dec2016