SCI 22Dec2014 SPECsfs2014 v SPECsfs2008 performance report


This Storage Intelligence (StorInt™) dispatch covers the new SPECsfs2014 benchmark[1] that was just introduced. While some of my readers may already be aware of this latest SPECsfs version, many others may not know about the changes in this recent revision. And since there is currently only one official submission for each of the four SPECsfs2014 workloads, all from the same storage system (the SPEC SFS® Reference Solution), we thought that discussing how SPECsfs2014 differs from SPECsfs2008 would be an appropriate way to transition to the new benchmark. So, here goes.

SPECsfs2014 measures file performance from the O/S

One ongoing challenge with SPECsfs2008 was that it used its own drivers for NFS and CIFS/SMB. As such, it was hard to change from, say, SMB1 to SMB3, or for that matter to introduce other file system protocols.

SPECsfs2014 has taken a different tack altogether: it uses a POSIX file-level interface and the operating system's file system protocol implementation rather than supplying its own protocol driver. This makes it much easier to support newer file protocol versions, such as SMB3 or NFSv4.

But it also moves the file IO performance measurement point from the host-to-storage interface up above the O/S file protocol stack, closer to the application level. This should affect response time measurements by adding host O/S protocol overhead to SPECsfs's overall response time (ORT) measurement.
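To make the new measurement point concrete, here is a minimal sketch (ours, not SPECsfs2014 code) of timing reads issued through the POSIX interface against a file on a mounted NFS or SMB share; the file path and transfer size are hypothetical. Every latency sample taken this way includes the client O/S protocol stack as well as the network and the storage server.

```python
import os
import time

def time_posix_reads(path, io_size=8192, count=1000):
    """Time read() calls issued through the OS file interface.

    If the file lives on a mounted NFS/SMB share, each latency sample
    includes the client O/S protocol stack, the network and the storage
    server -- the measurement point SPECsfs2014 now uses, in contrast to
    SPECsfs2008's own protocol drivers.
    """
    latencies = []
    fd = os.open(path, os.O_RDONLY)
    try:
        for _ in range(count):
            start = time.perf_counter()
            data = os.read(fd, io_size)
            latencies.append(time.perf_counter() - start)
            if not data:                      # wrap around at end of file
                os.lseek(fd, 0, os.SEEK_SET)
    finally:
        os.close(fd)
    return sum(latencies) / len(latencies)    # mean response time in seconds

# Example (hypothetical path on an NFS mount):
# print(time_posix_reads("/mnt/nfs_share/testfile.dat"))
```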

SPECsfs2014 uses multiple application workloads

Another major SPECsfs2014 change was the addition of file usage workloads beyond standard user home directory activities. Previously, SPECsfs2008 simulated only one file user workload. Although there were slight differences between the CIFS/SMB1 and NFSv3 command operational profiles, both were intended to mimic standard file system usage, with the same file size distributions.

In contrast, SPECsfs2014 incorporates four distinct application workloads in its benchmark measurements, and storage vendors can select which of these to benchmark their systems against. The four workloads, each with its own IO profile (roughly sketched in code after this list), include:

  • Database workload: this is intended to simulate the use of a file system in an OLTP database application environment. It uses two distinct IO distributions, one for database tables and the other for database log files. The two profiles represent the types of file IO activity that would be present if a file system were used to host a database application workload.
  • Software build (SWBuild) workload: this is intended to simulate the use of a file system to support software or code development activities. There are distinct file transfer size profiles for the software build read and write cycles.
  • Video data acquisition (VDA) workload: this is intended to mimic the use of a file system for video surveillance in a multi-camera environment. There are two different activities present here, data acquisition and video readout. Maintaining bit stream performance is a critical requirement for this workload.
  • Virtual desktop infrastructure (VDI) workload: this simulates the use of a file system to support multiple virtual desktop users in a full-clone environment. There is one knowledge worker operational profile, used with different file distributions for read and write activity.
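The four workloads differ mainly in their operation mixes and transfer-size distributions. Below is a rough sketch of how a load generator might draw IO sizes from per-workload profiles; the sizes and weights shown are invented for illustration and are not the published SPECsfs2014 profiles.

```python
import random

# Illustrative only: these transfer-size mixes are hypothetical, NOT the
# published SPECsfs2014 profiles; they show how a workload generator can
# draw IO sizes from a per-workload weighted distribution.
WORKLOAD_PROFILES = {
    "DATABASE_TABLE": [(8192, 0.9), (65536, 0.1)],    # mostly small random IO
    "DATABASE_LOG":   [(4096, 0.5), (16384, 0.5)],    # sequential log writes
    "SWBUILD":        [(1024, 0.6), (16384, 0.4)],    # many small source files
    "VDA":            [(262144, 1.0)],                # large streaming video IO
    "VDI":            [(4096, 0.7), (32768, 0.3)],    # knowledge-worker desktops
}

def next_io_size(profile_name):
    """Pick one transfer size according to the workload's weighted mix."""
    sizes, weights = zip(*WORKLOAD_PROFILES[profile_name])
    return random.choices(sizes, weights=weights, k=1)[0]

# Example: generate ten IO sizes for the (hypothetical) VDI profile.
print([next_io_size("VDI") for _ in range(10)])
```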

SPECsfs2014's multiple workloads reflect a recognition that file systems are being used for many more application environments these days. Much of this is due to software vendors like Oracle, VMware, Microsoft and others taking advantage of file system storage where previously they only made use of block storage.

SPECsfs2014 measures ORT, IOPS and throughput

SPECsfs2008 only reported measurements for file operations per second (IOPS) and overall response time (ORT). In contrast, SPECsfs2014 measures application (DATABASE, SWBuild, VDA, or VDI) IOPS, ORT and data throughput in MB/sec.

In many ways, the use of more workloads requires more application-specific performance measures. For example, for VDA the number of concurrent cameras, or for VDI the number of active knowledge workers, are critical performance measures. But the problem is that these numbers don't really map to common storage performance measures. So SPECsfs2014 wisely chose to also report storage throughput in MB/sec, as this metric can be useful for any application workload and is a more universally accepted storage performance metric.
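To illustrate how the three reported metrics relate, here is a simplified sketch that derives operations per second, mean response time and MB/sec from per-operation samples. Note that the official ORT is computed across all requested load points of a benchmark run series; this sketch collapses that to a single run for clarity.

```python
def summarize(samples, elapsed_seconds):
    """Summarize a run from (latency_seconds, bytes_transferred) samples.

    Simplified illustration of the three SPECsfs2014 reporting metrics:
    achieved operation rate (ops/sec), mean response time per operation
    (a single-run stand-in for ORT) and data throughput in MB/sec.
    """
    ops = len(samples)
    total_latency = sum(lat for lat, _ in samples)
    total_bytes = sum(nbytes for _, nbytes in samples)

    return {
        "ops/sec": ops / elapsed_seconds,
        "ORT_ms": (total_latency / ops) * 1000.0,
        "MB/sec": total_bytes / elapsed_seconds / 1_000_000,
    }

# Hypothetical 10-second run: 50,000 operations averaging 2ms and 8KiB each.
print(summarize([(0.002, 8192)] * 50_000, elapsed_seconds=10.0))
```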

Significance

It was about time for a new SPECsfs benchmark. The use of CIFS/SMB1 was getting old and no longer representative of industry practice (at least for Windows Server). It would have been better if SPECsfs had continued to implement its own protocol drivers, but that would have taken longer to migrate to newer versions of file system protocols, so this was a reasonable compromise. Now storage system vendors can take advantage of the latest file system protocols as soon as at least one OS vendor has implemented them. The only downside is that the O/S protocol stack is now added to SPECsfs's ORT measurement, but that can't be helped.

As for multiple application workloads, we understand the need to move with the marketplace as the use of file system storage evolves. But the problem is: where do you stop? Why not add a VMDK/VHD workload for virtual applications, a media encoding and distribution workload, a bio-informatics workload, etc.? The advantage of SPECsfs2008's single file usage workload was that there was one set of performance measures for each and every storage system benchmarked. With four workloads today, SPECsfs2014 comparisons can almost be tailored by the storage vendor to whatever they do best.

The end result is a proliferation of non-comparable results if vendors submit storage for only one workload benchmark. One way around this problem would be to require all vendors to submit storage systems for all workload benchmarks, but this seems unlikely.

On the other hand, customers that make use of any of the SPECsfs2014 workloads should be happier, as they should be able to see storage system rankings specific to their environment. In the end, customers should benefit from more workloads, even if it makes the comparison of storage systems across workloads more difficult.

As always, suggestions on how to improve any of our performance analyses are welcome. Additionally, if you are interested in more file performance details, we now provide a fuller version (Top 20 results) of some of these charts and a set of new NFSv3 and CIFS/SMB ChampionsCharts™ in our recently updated (April 2019) NAS Buying Guide, available from our website. Top 20 versions of some of these charts are also displayed in our recently updated (May 2019) SAN-NAS Buying Guide, also purchasable from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in December of 2014.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers. ]  

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community

[1] All SPECsfs2014 information is available at https://www.spec.org/sfs2014/ as of 22Dec2014
