SCI 26Sep2014 Latest SPECsfs2008 CIFS/SMB & NFS performance report


This Storage Intelligence (StorInt™) dispatch covers SPECsfs2008 benchmark results[1]. There have been four new SPECsfs2008 results since our last report in June of 2014. Three of the new submissions are for NFSv3 and the fourth is for CIFS/SMB. Two (NFSv3 and CIFS/SMB) are from EMC using their latest S210 Isilon storage cluster, the other two (NFSv3 only) are GlueSys VTS 7200 and Arkologic SMA 8100 storage systems. We start our analysis with CIFS/SMB results.

CIFS/SMB results

We begin our CIFS/SMB discussion with our Top 10 Overall Response Time (ORT) in Figure 1.

Figure 1 Top 10 CIFS/SMB Overall Response Time Results

Recall that SPECsfs2008 ORT is an average response time calculated across the entire SPECsfs2008 benchmark run. In Figure 1 one can see that the new 14-node Isilon S210 with QDR InfiniBand cluster interconnect has the #2 ORT at ~1.1msec. The new Isilon system was running OneFS 7.1.1 with SSDs used for metadata write acceleration and had about 10TB of SSDs. The Isilon S210 storage cluster also had ~3.6TB of DRAM cache and 173TB of 600GB 10Krpm SAS disks for backend storage.

Perhaps a more interesting view on the new S210 performance is shown in Figure 2, the top 10 CIFS/SMB throughput operations/second (ops/sec) results.

Figure 2 Top 10 SPECsfs2008 CIFS/SMB throughput operations per second

Now in Figure 2 we see the 14-node Isilon S210 cluster coming in at #5, outperforming the 28-node Isilon S200 storage cluster (#6 above) at ~364K vs. ~302K CIFS/SMB throughput ops/sec. The new system was able to beat the older one with about half the disk drives: 332 vs. 672 for the 14-node S210 cluster and 28-node S200 cluster, respectively. Perhaps a better comparison is the 14-node S200 cluster (#7), which only reached ~201K CIFS/SMB throughput ops/sec using a more similar 316-drive configuration. This says the new hardware/software is ~1.8X faster than the older system.
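The comparisons above boil down to some quick arithmetic. Here is a minimal sketch using the rounded ops/sec and drive-count figures quoted in the text (the dictionary names are my own labels, not anything from the benchmark reports):

```python
# Rough speedup arithmetic for the Isilon CIFS/SMB comparisons above,
# using the rounded SPECsfs2008 figures quoted in the text.
s210_14node = {"ops": 364_000, "drives": 332}   # 14-node S210
s200_28node = {"ops": 302_000, "drives": 672}   # 28-node S200
s200_14node = {"ops": 201_000, "drives": 316}   # 14-node S200

# Throughput per disk drive -- the S210 does far more work per spindle.
for name, sys in [("S210/14", s210_14node),
                  ("S200/28", s200_28node),
                  ("S200/14", s200_14node)]:
    print(f"{name}: {sys['ops'] / sys['drives']:.0f} ops/sec per drive")

# Generation-over-generation speedup at a comparable drive count.
speedup = s210_14node["ops"] / s200_14node["ops"]
print(f"S210 vs. S200 (14 nodes): ~{speedup:.1f}X")
```

The per-drive view makes the same point as the raw totals: the S210 delivers roughly twice the throughput per spindle of the 28-node S200 configuration.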

In Figure 3 we return to a perennial favorite, the NFS vs. CIFS/SMB performance chart.

Figure 3 SPECsfs2008 NFS vs. CIFS/SMB throughput operations per second scatter plot with linear regression

Recall that our NFS vs. CIFS/SMB performance chart only plots vendor submissions using the same hardware and software for both NFSv3 and CIFS/SMB benchmarks. The latest addition is the EMC Isilon S210. As discussed above, the S210 attained ~364K CIFS/SMB ops/sec, but it only reached ~250K NFSv3 ops/sec. According to the linear regression, a typical system can generate ~1.4X more CIFS/SMB ops/sec than the same system using the NFSv3 file access protocol. Note: this is CIFS/SMB version 1.1 and does not include any of the performance enhancements that came in SMB2.2 and SMB3, which dramatically improved SMB’s ops/sec performance.
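The ~1.4X figure comes from an ordinary least-squares fit of CIFS/SMB ops/sec against NFSv3 ops/sec. A minimal sketch of that calculation follows; the data points are illustrative stand-ins (only the S210 pair of ~250K NFSv3 / ~364K CIFS/SMB comes from the text, and there were 14 actual dual submissions, not four):

```python
# Illustrative (NFSv3 ops/sec, CIFS/SMB ops/sec) pairs for systems that
# submitted both benchmarks on the same hardware. Only the last pair
# (the Isilon S210) is taken from the dispatch; the rest are made up.
pairs = [(40_000, 60_000), (110_000, 150_000),
         (190_000, 270_000), (250_000, 364_000)]

n = len(pairs)
mean_x = sum(x for x, _ in pairs) / n
mean_y = sum(y for _, y in pairs) / n

# Ordinary least-squares slope: cov(x, y) / var(x). The slope is the
# "CIFS/SMB ops per NFSv3 op" multiplier the regression line reports.
cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
var = sum((x - mean_x) ** 2 for x, _ in pairs)
slope = cov / var
print(f"CIFS/SMB ops per NFSv3 op ≈ {slope:.2f}")
```

With real submission data in `pairs`, the same slope calculation yields the ~1.4X multiplier quoted above.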

A couple of caveats are in order:

  • We are not comparing the performance of the exact same protocols. CIFS/SMB is a stateful protocol and NFSv3 is a stateless protocol, which should give an advantage to CIFS/SMB.
  • There are proportionately more data transfer requests in the CIFS/SMB workload than in the NFSv3 workload, i.e., 29.1% of the CIFS/SMB operations are READ_ANDX and WRITE_ANDX while only 28% of the NFSv3 operations are READ and WRITE operations. I believe having fewer data transfers should provide an advantage to NFSv3[2].
  • There aren’t many dual NFSv3-CIFS/SMB vendor submissions, only 14, six of which are EMC Isilon systems. So we could just be seeing an artifact of EMC Isilon CIFS/SMB performance rather than a trend that applies to all vendors.

NFSv3 results

As for NFSv3 performance, the only real change to our Top 10 charts is shown in Figure 4, the Top 10 ORT results.

Figure 4 Top 10 SPECsfs2008 NFS Overall Response Time results

Here we see two new systems coming in at #1 and #2. I had never heard of Arkologic storage before, but they submitted an all-flash storage system, and it shows in their #2 ORT of ~0.33msec with ~141K NFSv3 ops/sec. The Gluesys VTS 7200 at 0.29msec ORT (#1) had just a single 480GB SSD used as a read cache, but it only reached ~12.3K NFSv3 ops/sec. Both the Arkologic and Gluesys VTS 7200 SPECsfs2008 performance graphs (see the HTML version of the reports) are almost flat in comparison to other submissions we have seen. It would seem that both systems could have provided more throughput but were held back, possibly to show better ORT results, though I can’t be sure.


I am always happy to see new CIFS/SMB submissions. I am still hopeful that SPECsfs will someday move off of CIFS/SMB1.1 onto something more recent, like SMB3, but this continues to be only rumor and a remote possibility at best. And it’s especially enjoyable when vendors submit the same hardware and software under both CIFS/SMB and NFSv3 benchmarks.

As for NFS ORT, today all top 10 systems have some flash storage and at least three had only SSD storage (HUS 4100, EMC VNX8000 and Arkologic). SSD storage systems excel at providing very low access times. This is very similar to what we see in SPC-1 block storage results, where all of the top 10 LRT systems have lots of SSDs in them. The last of the all-disk SPECsfs2008 NFSv3 submissions is relegated to 17th place in this quarter’s ORT results.

As always, suggestions on how to improve any of our performance analyses are welcomed. Additionally, if you are interested in more file performance details, we now provide a fuller version (Top 20 results) of some of these charts and a set of new NFSv3 and CIFS/SMB ChampionsCharts™ in our recently updated (April 2019), NAS Buying Guide available from our website. Also Top 20 versions of some of these charts are displayed in our recently updated (May 2019) SAN-NAS Buying guide also purchasable from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in September of 2014. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers.]

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.

[1] All SPECsfs2008 results available as of 24Sep2014

[2] Please see the SPECsfs2008 User’s Guide.

