We return now to our end-of-year review of file system performance and our analysis of the latest SPECsfs® 2008* benchmark results. Unfortunately, there have been only two new NFS and no CIFS/SMB submissions since our September report. One of the new submissions was a Huawei Oceanstor N8500 24-node cluster and the other was from NetApp for their new FAS3250 mid-range system. As previously indicated, prior reports are available from our website.
Latest NFS results
As shown in Figure 1, Huawei’s 24-node Oceanstor N8500 has achieved the new number-one result in NFS throughput operations per second. In addition to its 24 nodes, the cluster also used a little over 43TB of SSDs. You may recall that NetApp also submitted a 24-node NFS solution and that EMC Isilon beat everyone out with a 140-node cluster, but the only submission with anything like this much SSD was the EMC VNX VG8/VNX5700 submission (#10 in Figure 1), which had over 87TB of SSD.
Unfortunately, the new NetApp FAS3250 submission did not make it into the top 10 NFS throughput results, as it only reached a more modest ~101K NFS throughput operations per second. It’s interesting to note that only a few years ago the FAS3250’s ~100K result would have achieved top 10 status, but today all the top performers are clustered NAS systems.
None of our other Top 10 performance charts changed for this analysis. But below we show one of our ongoing charts comparing CIFS/SMB and NFS performance, the two most popular interface protocols available for NAS storage today.
Figure 2 NFS vs. CIFS/SMB throughput operations per second per spindle
In Figure 2, we show a scatter plot of CIFS/SMB and NFS throughput operations per second on a per-spindle basis. The horizontal axis is the number of disk spindles used in a submission, and the vertical axis is the throughput operations per second that submission achieved.
We also plot the linear regression line for each protocol. As one can see above, both regression coefficients are high (R²>0.8), indicating a good fit for linear regression. Moreover, the CIFS/SMB equation shows a higher multiplier for disk spindles than the NFS equation (~405 vs. ~263), indicating that for each disk spindle in a submission, CIFS/SMB systems generated ~405 simulated CIFS/SMB throughput operations per second while NFS submissions generated only ~263 simulated NFS throughput operations per second.
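The per-spindle multipliers above come from an ordinary least-squares fit of throughput against spindle count. As a rough sketch of that calculation, the Python below fits such a line and reports the slope (ops/spindle) and R². The data points are hypothetical, invented for illustration only, and are not actual SPECsfs2008 submissions; the function name is likewise our own.

```python
import numpy as np

def ops_per_spindle_slope(spindles, ops):
    """Fit ops = slope * spindles + intercept; return (slope, R^2)."""
    x = np.asarray(spindles, dtype=float)
    y = np.asarray(ops, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)       # ordinary least squares, degree 1
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)         # total sum of squares
    return slope, 1.0 - ss_res / ss_tot

# Hypothetical disk-only NFS submissions: (spindle count, throughput ops/sec)
nfs_spindles = [50, 100, 200, 400, 800]
nfs_ops = [14000, 27000, 52000, 106000, 210000]

slope, r2 = ops_per_spindle_slope(nfs_spindles, nfs_ops)
print(f"slope ~{slope:.0f} ops/spindle, R^2 = {r2:.3f}")
```

With real slopes of ~405 (CIFS/SMB) and ~263 (NFS), the same arithmetic gives the ~1.5X ratio discussed next (405 / 263 ≈ 1.54).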
From our perspective this means that, on average, for the same hardware CIFS/SMB generates ~1.5X the performance of NFS. A couple of caveats are in order:
- An NFS throughput operation is not the same as a CIFS/SMB throughput operation. Examining the SPECsfs2008 user’s guide, one sees that CIFS/SMB operations actually have a higher percentage of data-transfer operations in each throughput operation than NFS. Indeed, the SPECsfs2008 NFS description states that Read and Write operations are 28% of the simulated workload, while the SPECsfs2008 CIFS/SMB description calls for 29.1% Read-Andx and Write-Andx operations in its workload simulation.
- There are currently almost three times as many disk-only NFS submissions (42) as there are disk-only CIFS/SMB submissions (15), which may render any linear regression comparison statistically insignificant.
- We only include submissions without SSDs, FlashCache or other NAND storage, which is why you don’t see any of the Top 10 in Figure 1 on this chart.
- Finally, we need to mention that NFS is “stateless” and CIFS/SMB is “stateful”.
All of the above make any CIFS/SMB vs. NFS comparison subject to much debate. Nonetheless, as SPECsfs2008 is one of the only benchmarks that offers both protocols, we feel it is a useful and valid subject for discussion. Needless to say, this is not a popular opinion, especially with the SPECsfs community of users/developers.
The only change to this chart is the addition of the NetApp FAS3250 NFS submission, at around 101K NFS ops/second with ~340 disks.
Huawei’s impressive clustered N8500 NFS results are hard to ignore. Nonetheless, the fact that they used over 43TB of SSDs to attain this performance makes it a less stellar result. On the other hand, all the other top 10 NFS throughput results also used SSDs or FlashCache, albeit in much more modest amounts (except, as discussed above, the EMC VG8 submission).
Nonetheless, NAND is here to stay and the only real question is how effectively each subsystem uses flash. That topic was the subject of a prior chart showing NFS throughput operations per unit of flash. We have taken a couple of attempts at such an analysis, using operations/GB of flash as well as operations per SSD or FlashCache card, but none of these have been satisfactory (see prior dispatches for these charts/discussions).
We provide more detailed file system performance and NAS system feature analysis, plus our SPECsfs2008 ChampionsCharts for NFS and CIFS, in our recently updated (April 2019) NAS Buying Guide available from our website. Our last SPECsfs2008 dispatch reported on the then-current ChampionsCharts. The guide, ChampionsCharts and system descriptions will all be updated during the month of January with all the latest results and system announcements.
We are always interested in improving our performance analyses for SPECsfs2008 as well as other performance benchmark results. If you have any ideas on how to improve this analysis or how to provide better insight into file or storage system performance, please don’t hesitate to drop us a line (contact information is at bottom left, in the footer).
[This performance dispatch was originally sent out to our newsletter subscribers in December of 2012. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports.]
Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company based in the USA, offering products and services to the data storage community.