SCI 2011Jun28 Latest SPECsfs2008 results analysis


We return now to file system performance and analyze the latest SPECsfs® 2008* benchmark results. There were nine new NFS benchmarks: five from Isilon on their new S200 hardware at various node configurations (140-, 56-, 28-, 14- and 7-nodes), two from LSI's Cougar 6720 at different memory configurations (34 and 50GB), one from Huawei Symantec with an 8-node Oceanspace N8500 Clustered NAS system, and one from Alacritech's ANX 1500-20 with two NetApp 6070Cs as backend storage.  Both Huawei Symantec and Isilon submitted duplicate configurations under the CIFS protocol, adding six new results there as well.

Latest SPECsfs2008 Results

SCISFS110628-001 (c) 2011 Silverton Consulting, Inc., All Rights Reserved


Higher is better on this chart. Isilon took three out of the top 10 slots and Huawei Symantec came in as the new number two.  Results in this category seem to be increasingly going out of sight.

Figure 1 SPECsfs2008* Top 10 NFS throughput

Isilon’s 140-node system provides exceptional performance, but the real message may be that Isilon provides near-linear performance as a function of node count.  For example, their fourth place result with 56 nodes achieved ~41% of their top performer’s throughput with only ~40% of the nodes.
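The scaling arithmetic above can be checked directly. A minimal sketch, using the node counts from the submissions and the ~41% throughput ratio cited in the text (not the raw ops/sec figures):

```python
# Near-linear scaling check for the Isilon S200 results discussed above.
# Node counts come from the submissions; 0.41 is the throughput ratio of
# the 56-node result vs. the 140-node result, as reported in the text.
nodes_small, nodes_large = 56, 140
throughput_ratio = 0.41  # 56-node throughput as a fraction of 140-node

node_ratio = nodes_small / nodes_large      # 0.40
efficiency = throughput_ratio / node_ratio  # 1.0 would be perfectly linear

print(f"node ratio: {node_ratio:.0%}, throughput ratio: {throughput_ratio:.0%}")
print(f"scaling efficiency: {efficiency:.3f}")
```

A scaling efficiency at or slightly above 1.0, as here, is what "near linear" means in practice.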

SCISFS110628-002 (c) 2011 Silverton Consulting, Inc., All Rights Reserved


Figure 2 Top 10 NFSv3 ORT

Next we turn to response time.  Lower is better here.

The new Alacritech system was the top performer with an overall response time of 0.92 msec.  Also, the Huawei Symantec Oceanspace system came in at number three with a 0.99 msec ORT.  In addition, one can see that a new LSI Cougar 6720 system placed at number ten with 1.64 msec, using 34GB of system memory. In our last report we discussed the first NAS system to break the 1 msec ORT barrier (EMC VNX VG8 with an all-SSD backend) and now there are two more.  While SSDs probably helped the ANX, it’s unclear what secret sauce Huawei Symantec is using to reach this ORT threshold.

SCISFS110628-003 (c) 2011 Silverton Consulting, Inc., All Rights Reserved


Figure 3 Top CIFS throughput results

Turning to CIFS performance, we examine top throughput results.  The problem here is that we almost need a log scale, with the top ten performers spanning from ~1.6M down to 64K CIFS throughput operations/second.  Most likely this is due to the paucity of mid-range and enterprise class CIFS submissions, but it is striking nonetheless.

As can be seen, Isilon took five out of the top ten with their scale out NAS system. Now that EMC has acquired Isilon, EMC owns eight out of the top ten CIFS performing systems.  In addition, the new Huawei Symantec NAS submission came in at number two with 712K CIFS ops/sec.

CIFS vs. NFS Performance

We return to our recurring debate regarding CIFS vs. NFS performance.  We have been told repeatedly that one cannot and should not compare CIFS and NFS performance.  Nevertheless, we find it intriguing that, when looking at the exact same submission under both the NFS and CIFS protocols, systems seem on average to provide more CIFS throughput.  As such, whenever new data surfaces we feel obligated to report how our analysis should change.  The ongoing challenge is to produce a rational argument that convinces the skeptics in our audience.

Recall that the last time we discussed this topic, a recent enterprise class system had changed our viewpoint to CIFS having only a slight advantage over NFS.  Recent results dramatically alter that conclusion.

SCISFS110628-004 (c) 2011 Silverton Consulting, Inc., All Rights Reserved

Figure 4 Scatter plot: CIFS vs. NFS throughput operations per second

With the addition of the Huawei Symantec benchmark and Isilon’s five benchmarks, we have more paired NFS and CIFS results to examine.  As such, the latest plotted regression line now shows that systems using the CIFS protocol can do ~1.4X the operations/second of the same systems using NFS.

By examining the SPECsfs2008 user’s guide[1], one can see that the percentages of file reads and writes, while similar between the NFS and CIFS benchmark operational profiles, are not exactly the same. According to the documentation,

  • CIFS read and write data requests represent 29.1% of overall operations (with 20.5%:8.6% R:W) and
  • NFS read and write data requests represent 27.0% of overall operations (with 18%:9% R:W)

Also, there is no information on CIFS file size and block size distributions, and thus we must assume that these distributions and averages are similar between the two workloads. Given all the above, it seems reasonable to conclude that, on average, a NAS system using the CIFS protocol provides more read/write data throughput than a similarly configured system using the NFS protocol.

As further proof, please review the following charts on NFS and CIFS throughput vs. # disk drives.

SCISFS110628-005 (c) 2011 Silverton Consulting, Inc., All Rights Reserved

Figure 5 SPECsfs2008 Scatter plot: NFS and CIFS throughput vs. disk drive count

As one can see from the respective linear regression equations, CIFS generates ~470 throughput operations per second per disk drive while NFS delivers only ~300 throughput operations per second per spindle.  Note that both charts contain data from all disk-drive-only submissions for their respective protocols.  Again, given the similarity in the user workloads, we can only conclude that CIFS is a better performing protocol than NFS.
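The per-spindle figures above come from zero-intercept regression lines on the scatter plots. A minimal sketch of that calculation, where the (drives, ops/sec) pairs are hypothetical values chosen only to mimic the ~470 (CIFS) and ~300 (NFS) slopes reported here, not the actual SPECsfs2008 submissions:

```python
# Zero-intercept least-squares fit of throughput vs. drive count,
# i.e. throughput operations per second per spindle.
# The data points below are illustrative placeholders, not real submissions.
def ops_per_spindle(points):
    """Slope of the best-fit line through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in points) / sum(x * x for x, _ in points)

cifs = [(50, 24_000), (100, 47_500), (200, 93_000)]   # (drives, CIFS ops/sec)
nfs = [(50, 15_200), (100, 29_800), (200, 60_500)]    # (drives, NFS ops/sec)

print(f"CIFS: ~{ops_per_spindle(cifs):.0f} ops/sec per drive")
print(f"NFS:  ~{ops_per_spindle(nfs):.0f} ops/sec per drive")
```

Forcing the fit through the origin reflects the assumption that a system with zero drives delivers zero throughput, which is why the slope can be read directly as ops/sec per spindle.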


First, as EMC Isilon has shown, scale out NAS systems can indeed deliver near-linear performance that multiplies as a function of node count.  Isilon has demonstrably put this question to rest, but we would wager that most other scale out NAS systems could show similar linearity.

Next, with respect to CIFS vs. NFS, more data alters the measured advantage, but the story remains the same – CIFS performs better than NFS.  Obviously, this analysis depends on the number of vendors that submit both NFS and CIFS benchmarks on the same hardware, but we now have 12 such submissions and the data seems pretty conclusive. Admittedly, three of these are from Apple and five from Isilon, but at least now we have multiple systems at both the entry level and the enterprise level.  We could always use more and welcome future submissions to refine this analysis.

As always we welcome any recommendations for improvement of our SPECsfs2008 analysis.  We also now include a top 30 version of these and other charts plus further analysis in our NAS briefing which is available for purchase from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in June of 2011.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of SAN storage system features and performance please take the time to examine our recently updated (April 2019) NAS Buying Guide available for purchase from our website.]

[After this went out to our newsletter readers, and after we blogged about the results in the last chart, we determined that EMC Isilon had been using SSDs in their testing.  We have tried to eliminate SSDs and NAND caching from our per-disk-drive spindle comparisons.  A later dispatch (released in September 2011) corrected this mistake on the last chart.  But the net result is that 1) the correlations are still pretty good, and 2) CIFS continues to exceed NFS in throughput operations per second per spindle.  Sorry for any confusion.]


Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company based in the USA, offering products and services to the data storage community.

* SPECsfs2008 results from

[1] See as of 28 June 2011