SCI 26Mar2014 Latest SPECsfs2008 NFS & CIFS/SMB performance results


This Storage Intelligence (StorInt™) dispatch covers SPECsfs2008 benchmark results[1]. There have been two new SPECsfs2008 submissions since our last discussion at the end of 2013, both for the new NetApp FAS8020 storage system: one run under NFSv3 and the other under CIFS/SMB. The two FAS8020 submissions used exactly the same hardware configuration for both benchmarks, enabling us to update our perennial CIFS/SMB vs. NFSv3 charts as well as the other CIFS/SMB charts discussed below.

CIFS/SMB vs. NFSv3 results

We begin our discussion with a scatter plot of CIFS/SMB vs. NFSv3 performance results in Figure 1.

Figure 1 SPECsfs2008 CIFS/SMB vs. NFSv3 scatter plot (relative throughput for only those systems that ran both benchmarks with the same hardware configuration, with a linear regression line)

Figure 1 plots 13 pairs of benchmark results, where each pair is a storage system with the same hardware configuration run under both the NFSv3 and CIFS/SMB protocols. NFSv3 throughput results are plotted against the horizontal axis and CIFS/SMB results against the vertical axis. A linear trend line (fit in Excel) is also plotted, with a regression coefficient of 0.98 and a formula that says CIFS/SMB throughput operations/second equal 1.369 times the NFSv3 throughput operations minus a constant of 4388.

We show this chart from time to time. Prior to NetApp’s FAS8020 CIFS/SMB and NFSv3 submissions, the linear trend line formula was 1.366 times the NFSv3 results plus 1306.1.
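For readers who want to plug in their own numbers, here is a minimal Python sketch of the two trend lines, using the coefficients reported above; the sample NFSv3 throughput value is hypothetical:

```python
# Evaluate the two Excel-derived trend lines reported above.
# Coefficients come straight from the charts; the sample input is hypothetical.

def cifs_predicted_new(nfsv3_ops: float) -> float:
    """Trend line including the new FAS8020 submissions."""
    return 1.369 * nfsv3_ops - 4388.0

def cifs_predicted_old(nfsv3_ops: float) -> float:
    """Trend line prior to the FAS8020 submissions."""
    return 1.366 * nfsv3_ops + 1306.1

nfsv3 = 100_000.0  # hypothetical NFSv3 throughput ops/sec
print(f"new fit: {cifs_predicted_new(nfsv3):,.0f} CIFS/SMB ops/sec")
print(f"old fit: {cifs_predicted_old(nfsv3):,.0f} CIFS/SMB ops/sec")
```

At that hypothetical 100K NFSv3 ops/sec level, both fits predict CIFS/SMB throughput roughly a third higher than NFSv3.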

Some caveats warrant discussion at this point:

  • There are only 13 submissions with the same hardware for both CIFS/SMB and NFSv3. With so few results, a single vendor’s submissions could skew any comparison between the two protocols. More results would make these comparisons more accurate.
  • NFSv3 is a stateless protocol and CIFS/SMB is a stateful protocol. This should provide an advantage to CIFS/SMB.
  • As NFSv3 and CIFS/SMB are two different protocols, it’s not quite proper to compare the two in the fashion we propose here. In our defense, both are file service protocols that perform directory plus data transfer operations. From an end user perspective, it shouldn’t really matter whether the O/S and the storage are talking CIFS/SMB or NFSv3: when a user requests a file, roughly the same amount of data transfer and directory operations need to be completed.
  • When examining the workload characteristics for CIFS/SMB and NFSv3 in the SPECsfs2008 user’s guide[2], the relative proportions of read and write (NFSv3) commands and read_andx and write_andx (CIFS/SMB) commands are not exactly the same. In fact, the percentages are 18% NFSv3 read and 9% NFSv3 write vs. 20.5% CIFS/SMB read_andx and 8.6% CIFS/SMB write_andx; the rest of each workload consists of directory and other metadata operations. It would seem, then, that the average “CIFS/SMB throughput operation” involves slightly more data transfer than the average “NFSv3 throughput operation”, which should put CIFS/SMB at a slight disadvantage in any comparison between the protocols (see the sketch after this list).
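To make that last point concrete, here is a quick back-of-the-envelope Python sketch comparing the data-transfer fraction of each workload, using only the user’s guide percentages quoted above:

```python
# Fraction of each SPECsfs2008 workload that is data transfer (reads + writes)
# vs. metadata/other operations, per the user's guide percentages quoted above.
workloads = {
    "NFSv3":    {"read": 0.18,  "write": 0.09},   # read / write ops
    "CIFS/SMB": {"read": 0.205, "write": 0.086},  # read_andx / write_andx ops
}

for name, mix in workloads.items():
    data = mix["read"] + mix["write"]
    print(f"{name}: {data:.1%} data transfer, {1 - data:.1%} metadata/other")
# NFSv3: 27.0% data transfer, 73.0% metadata/other
# CIFS/SMB: 29.1% data transfer, 70.9% metadata/other
```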

In Figure 2 we show a slightly different view of CIFS/SMB vs. NFSv3 performance, this time comparing throughput performance vs. drive counts.

Figure 2 SPECsfs2008 CIFS/SMB vs. NFSv3 throughput performance vs. drive counts (scatter plot of throughput ops/second against disk drive counts for all NFSv3 and CIFS/SMB submissions, with a linear regression line for each protocol)

The difference between Figures 1 and 2 is that in Figure 2 we show all NFSv3 and CIFS/SMB submissions, not just the ones that used the same hardware, and we compare throughput performance against the number of disk drives in each configuration. Nonetheless, once again CIFS/SMB shows better overall performance than NFSv3, in this case ~45% better. All the same caveats from Figure 1 apply to this chart, and we would add that there are many more NFSv3 submissions than CIFS/SMB submissions, which makes the comparison riskier. Also, as can be seen above, there are very few CIFS/SMB submissions with 400 to 1600 disk drives, which skews any comparison toward being valid for low-end systems only.
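The normalization behind Figure 2 is straightforward; here is a minimal Python sketch of it, where the example submissions are hypothetical rather than actual SPECsfs2008 entries:

```python
# Per-drive normalization as used in Figure 2: throughput divided by
# the number of disk drives in the configuration.
# These example submissions are hypothetical, not actual SPECsfs2008 entries.
submissions = [
    # (protocol, throughput ops/sec, disk drive count)
    ("NFSv3",     60_000, 280),
    ("CIFS/SMB",  90_000, 280),
    ("NFSv3",    110_000, 440),
]

for protocol, ops, drives in submissions:
    print(f"{protocol:9}: {ops / drives:6.1f} ops/sec per drive "
          f"({ops:,} ops over {drives} drives)")
```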

CIFS/SMB SPECsfs2008 results

The latest NetApp FAS8020 result also broke into the top 10 in CIFS/SMB throughput operations per second, which we show in Figure 3.

Figure 3 Top ten CIFS/SMB throughput operations per second results (bar chart)

As seen in Figure 3, the single-node FAS8020 came in tenth place, ~5% below the 7-node EMC Isilon submission in overall CIFS/SMB throughput. The FAS8020 had 1TB of Flash Cache and 144 disk drives. Most of the other systems in Figure 3 had multi-node configurations or multiple X-blades, so the FAS8020 did well given that.


Next we turn to CIFS/SMB ORT (overall response time) results in Figure 4.

Figure 4 Top ten CIFS/SMB ORT results (bar chart of SPECsfs2008 overall response times)

In Figure 4, the new NetApp FAS8020 came in fourth place with a 1.42 millisecond ORT, just behind an older, FC-disk-only FAS3140. The newer system uses SAS disks rather than FC, which could explain the slight degradation in ORT, but one should also consider that the FAS8020 provided a lot more (~2.6X) throughput than the older system. Recall that SPECsfs2008 ORT is measured as an average response time across the whole benchmark run, so more throughput operations per second could definitely have been a factor here.
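To see why more throughput can weigh on ORT, consider a simplified Python sketch that treats ORT as a throughput-weighted average of per-load-point response times; see the SPECsfs2008 documentation for the precise definition, and note that the load points below are hypothetical:

```python
# Simplified model: treat overall response time (ORT) as a throughput-weighted
# average of per-load-point response times. See the SPECsfs2008 documentation
# for the precise definition; the load points below are hypothetical.
def weighted_ort(load_points):
    """load_points: list of (throughput ops/sec, response time ms) tuples."""
    total_ops = sum(ops for ops, _ in load_points)
    return sum(ops * rt for ops, rt in load_points) / total_ops

# A system pushed to a higher peak spends part of its run at slower,
# high-load points, which pulls the average response time up.
low_peak  = [(10_000, 0.9), (20_000, 1.1), (30_000, 1.4)]
high_peak = low_peak + [(60_000, 2.0), (100_000, 2.8)]

print(f"ORT with lower peak load:  {weighted_ort(low_peak):.2f} ms")   # 1.22 ms
print(f"ORT with higher peak load: {weighted_ort(high_peak):.2f} ms")  # 2.15 ms
```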

Significance

It’s nice to see some new CIFS/SMB results and, from our perspective, it’s even better to see the same hardware on both NFSv3 and CIFS/SMB, so we can compare the two protocols as well as individual storage system performance. To be fair, the FAS8020 achieved 110K NFSv3 throughput operations per second and only 105K CIFS/SMB throughput operations per second, so for the FAS8020 its CIFS/SMB performance was ~5% worse ((110K − 105K)/110K ≈ 4.5%) than its NFSv3 performance.

We were beginning to think there would be no more CIFS/SMB results until SPECsfs moved up to SMB3. We believe that SMB3 would show substantially better performance than the SMB1 protocol that SPECsfs2008 currently uses. Not all NAS systems support SMB3, which may be the holdup.

As always, suggestions on how to improve any of our performance analyses are welcomed. Additionally, if you are interested in more file performance details, we now provide fuller versions (Top 20 results) of some of these charts and a set of new NFSv3 and CIFS/SMB ChampionsCharts™ in our recently updated (April 2019) NAS Buying Guide, available from our website. Top 20 versions of some of these charts are also displayed in our recently updated (May 2019) SAN-NAS Buying Guide, likewise purchasable from our website.

http://silvertonconsulting.com/cms1/product/san-nas-storage-buying-guide/

[This performance dispatch was originally sent out to our newsletter subscribers in March of 2014. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results, please consider signing up for our newsletter.]

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.

[1] All SPECsfs2008 results available from http://www.spec.org/sfs2008/results/ as of 21Mar2014.

[2] Please see http://www.spec.org/sfs2008/docs/usersguide.html as of 26Mar2014.
