SCI 27Mar2015 Latest SPECsfs2008 & SPECsfs2014 performance results analysis


This Storage Intelligence (StorInt™) dispatch covers the new SPECsfs2014 benchmark and the latest SPECsfs2008 results[1]. SPECsfs2008 is in “retirement status” and SPEC is only accepting new submissions through 27 April.

The new SPECsfs2014 was discussed at length in my last SPECsfs report but as of 27 March, there have been no vendor submissions other than the SPEC reference solutions (for database, swbuild, VDA & VDI). As a result, we will discuss a pair of SPECsfs2008 metrics that haven’t received a lot of attention in past reports.

SPECsfs2008 performance metrics

Figure 1 CIFS/SMB1.1 ORT vs. NFSv3 ORT

Recall that ORT is overall response time (or average response time) across the whole SPECsfs2008 benchmark run. In Figure 1 we chart only those solutions that have submitted both a CIFS/SMB1.1 and an NFSv3 benchmark for the exact same hardware configuration. Figure 1 thus compares ORT for CIFS/SMB1.1 vs. NFSv3 on comparable hardware solutions.

As can be seen in Figure 1, CIFS/SMB1.1 is faster (or rather has a lower ORT), on average, than NFSv3 for these hardware solutions. The X coefficient in the linear regression formula is ~0.43, which implies that CIFS/SMB1.1 ORT is roughly 0.43 times that of NFSv3 (less than half) for these submissions.
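To make that calculation concrete, here is a minimal sketch in Python of how such a regression slope relating CIFS/SMB1.1 ORT to NFSv3 ORT would be derived. The ORT pairs below are made up for illustration; they are not the actual 14 matched submissions.

```python
import numpy as np

# Hypothetical (NFSv3 ORT, CIFS/SMB1.1 ORT) pairs in milliseconds for the same
# hardware configuration -- illustrative values only, not the real data.
nfs_ort  = np.array([1.2, 1.8, 2.5, 3.1, 4.0, 5.2])
cifs_ort = np.array([0.6, 0.8, 1.1, 1.3, 1.7, 2.2])

# Least-squares fit: cifs_ort ~ slope * nfs_ort + intercept
slope, intercept = np.polyfit(nfs_ort, cifs_ort, 1)

# Correlation and R**2 of the fit
r = np.corrcoef(nfs_ort, cifs_ort)[0, 1]

print(f"slope={slope:.2f} intercept={intercept:.2f} R**2={r**2:.2f}")
# A slope near 0.43 would mean CIFS/SMB1.1 ORT runs at roughly 43% of NFSv3 ORT.
```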

However, a few caveats are in order:

  1. There are not many matching hardware CIFS/SMB1.1 and NFSv3 submissions. We show 14 in our SPECsfs2008 database. So linear regressions and correlations may be a bit premature.
  2. CIFS/SMB1.1 is a “stateful” protocol and NFSv3 is a “stateless” protocol. So the differences in Figure 1 may be more an artifact of the protocols than of superior implementations. (Although my friend Dilip Naik, fellow file system MVP, keeps telling me that the later versions of each are starting to converge).
  3. SPEC has told me repeatedly that you cannot compare CIFS/SMB1.1 to NFSv3 for many reasons, e.g. #2 above and the fact that they have vastly different commands.
  4. We may be seeing an artifact of the differing implementations of the two protocols rather than an efficiency advantage.

I would counter #3 above by saying that both protocols are doing server file IO and data transfer. In fact, CIFS/SMB1.1 is doing relatively more data transfers according to its operational profile than NFSv3[2]. So, in the big scheme of things, they are both doing similar data and metadata IO activities, just using different protocols to do them.

In Figure 2 we turn from response times to operational efficiency and show operations per second per GB of flash for SSD-only and hybrid (disk-SSD) NFSv3 submissions.

Figure 2 Ops/sec per GB flash for SSD-only and Hybrid (disk-SSD) storage

First, the correlations are very poor for hybrid solutions (R**2 of ~0.33) and somewhat better for SSD-only (~0.45). Moreover, we can see a definite bifurcation (maybe even trifurcation) in the hybrid data series as groups of submissions go off into the stratosphere (2M, 3M & 5M+ operations per second) with relatively small amounts of flash.

But what I find most interesting about Figure 2 is the relatively poor flash efficiency of the all-SSD systems, at ~2.4 NFSv3 throughput ops/sec per GB of flash, vs. the ~15.8 ops/sec per GB of flash for the hybrid systems, a ~6.3X advantage for the hybrid systems.
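For readers who want to reproduce the arithmetic, the sketch below uses the rounded averages quoted above plus one made-up submission (not the actual per-system data) to show how the ops/sec-per-GB-flash metric and the hybrid advantage are computed:

```python
# Ops/sec per GB of flash: NFSv3 throughput divided by total flash capacity.
def flash_efficiency(ops_per_sec: float, flash_gb: float) -> float:
    return ops_per_sec / flash_gb

# Rounded averages quoted in the text (approximate, for illustration only).
ssd_only_eff = 2.4    # NFSv3 throughput ops/sec per GB flash, all-SSD systems
hybrid_eff   = 15.8   # NFSv3 throughput ops/sec per GB flash, hybrid systems

# On these rounded inputs the ratio comes out near 6.6X; the ~6.3X in the text
# presumably reflects the unrounded per-submission averages.
print(f"Hybrid advantage: ~{hybrid_eff / ssd_only_eff:.1f}X")

# Example for a single hypothetical submission (made-up numbers):
print(f"{flash_efficiency(ops_per_sec=250_000, flash_gb=4_000):.1f} ops/sec per GB flash")
```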

I can read this a number of ways:

  1. Hybrid systems are much more efficient because they are able to use flash for data that needs high performance and use disk for data that doesn’t.
  2. SSD-only solutions have some natural disadvantages for file IO:
    1. Lacking rotating media, all-flash systems have to service all IO activity, metadata and data access alike, from flash. For example, they must sustain all write IO into flash rather than using DRAM caching and then fast-writing to disk; or
    2. Sequential file IO may be better served from disk than from flash. (We assume the NFSv3 IO activity has a significant sequential component, although this is not stated in its user’s guide).

But given the poor correlations, it’s probably unwise to draw firm inferences from the data. Nonetheless, it does suggest that all-flash storage is not a great solution for some file storage, at least for the IO activity simulated by SPECsfs2008 NFSv3.

Significance

We rarely show the CIFS/SMB1.1 vs. NFSv3 ORT chart because it hasn’t changed much and probably says more about the statefulness of the protocols than the speed of their implementations. However, regardless of the statefulness of each protocol, one can conclude that, for this limited selection of systems, CIFS/SMB1.1 is faster than NFSv3. Given the 4 new workloads in SPECsfs2014 and its new dependence on OS file software stacks, we are hopeful that we will see even more SMB3 vs. NFSv3 submissions using the exact same hardware.

Historically, for disk-only solutions we have reported IOPS/disk drive effectiveness. We have been searching for a similar metric for all-flash and hybrid storage systems but have yet to find anything comparable. We show IOPS/GB flash periodically as one alternative. We have also used IOPS per SSD or flash-cache card as another, although that seems to be even less useful. We will continue our search and hopefully, as the new SPECsfs2014 rolls out, we will be able to come up with an even better metric.
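To illustrate the candidate effectiveness metrics side by side, here is a small sketch with a hypothetical submission; all configuration numbers are invented for illustration and do not come from any real result:

```python
# Hypothetical submission -- all values are illustrative, not from any real result.
throughput_ops = 200_000   # SPECsfs2008 NFSv3 throughput ops/sec
disk_drives    = 280       # number of spinning disk drives
flash_gb       = 1_600     # total flash capacity (GB)
ssd_devices    = 8         # number of SSDs or flash-cache cards

print(f"ops/sec per disk drive : {throughput_ops / disk_drives:,.1f}")
print(f"ops/sec per GB flash   : {throughput_ops / flash_gb:,.1f}")
print(f"ops/sec per SSD/card   : {throughput_ops / ssd_devices:,.1f}")
```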

There are probably a couple of other charts I could roll out on SPECsfs2008 data, but I am beginning to run low on other “interesting” metrics. Hopefully, by the next time we report on SPECsfs activity there will be some vendor submissions for SPECsfs2014 to review.

Until then, as always, suggestions on how to improve any of our performance analyses are welcomed. Additionally, if you are interested in more file performance details, we now provide a fuller version (Top 20 results) of some of these charts and a set of new NFSv3 and CIFS/SMB ChampionsCharts™ in our recently updated (April 2019) NAS Buying Guide, available from our website. Top 20 versions of some of these charts are also displayed in our recently updated (May 2019) SAN-NAS Buying Guide, also purchasable from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in March of 2015. If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website a quarter or more after they are sent to our subscribers.]

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community

[1] All SPECsfs2014 information is available at https://www.spec.org/sfs2014/ and SPECsfs2008 information is available at https://www.spec.org/sfs2008/ as of 27Mar2015.

[2] We’ve discussed this in previous reports and blog posts, but the CIFS/SMB1.1 operational profile has it doing 29.1% read_andx & write_andx commands vs. NFSv3’s 28% read & write commands. See the SPECsfs2008 User’s Guide at https://www.spec.org/sfs2008/docs/usersguide.html for more information.
