SCI 2010 Sep 23 latest SPECsfs2008 benchmark results analysis including NFS and CIFS


We once again return to analyze the latest SPECsfs® 2008* benchmark results. There were five new NFS submissions: one from EMC (Celerra VG8/VMAX), one from NEC (NV7500, 2 node), and three from Hitachi (3090 single server, 3080 single server, and 3080 cluster). There were no new CIFS submissions this period.

Latest SPECsfs2008 NFS results

(SCISFS100923-001) (c) 2010 Silverton Consulting, All Rights Reserved


The EMC Celerra VG8 showed up in the top 10 as the new #6, with 135K NFS throughput operations per second. Surprisingly, this new EMC result used no Flash/SSDs, unlike their #10 result. None of the other new submissions reached the top 10, with Hitachi's 3080 cluster topping out at ~79K NFS throughput ops.

One should probably remember from our last analysis@ that the #1 and #3 HP submissions were blade systems with the NFS driving servers and NFS supporting servers in the same blade cabinet. Any configuration like this may enjoy an unfair advantage, utilizing faster "within the enclosure" switching rather than external fabric switching.

None of the new NFS submissions broke into the top ten on NFS Overall Response Time, so refer to our previous analysis if you want to see those results.

(SCISFS100923-002) (c) 2010 Silverton Consulting, All Rights Reserved


Above is a new chart for our SPECsfs2008 analysis. It shows a scatter plot of NFS throughput operations per second against each solution's total number of disk drives. Systems that use their disk drives relatively more effectively show up above the linear regression line, and poorer ones below. To be fair, we excluded any system that contained Flash Cache or SSDs, namely NetApp with PAM and EMC Celerra with SSDs.
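The above/below-the-trend-line reading can be sketched with NumPy's least-squares fit. The drive counts and throughput pairings below are invented for illustration only; they are not the actual SPECsfs2008 submissions.

```python
import numpy as np

# Hypothetical (drive count, NFS ops/sec) pairs -- illustrative only,
# not actual SPECsfs2008 submission data.
drives = np.array([82, 162, 164, 280, 287, 448, 584])
ops = np.array([40_000, 78_000, 79_000, 135_000, 100_000, 160_000, 330_000])

# Least-squares linear fit: predicted ops = slope * drives + intercept
slope, intercept = np.polyfit(drives, ops, 1)
predicted = slope * drives + intercept

# Systems above the regression line deliver more throughput per drive
# than the overall trend predicts; systems below deliver less.
for d, o, p in zip(drives, ops, predicted):
    tag = "above" if o > p else "below"
    print(f"{d:4d} drives: {o:7d} ops/sec ({tag} trend line)")
```

With real submission data, the same loop would flag which vendors wring more throughput out of each spindle than the field average.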

It is hard to identify specific subsystems here, but the systems with the top non-Flash/SSD NFS throughput are trackable. The top three delivered over 330K, over 170K, and over 160K NFS throughput operations per second respectively. The latest EMC Celerra submission had 280 disk drives, the latest NEC had 287 drives, and the three Hitachi submissions had 164, 162, and 82 drives for the 3090, 3080 cluster, and 3080 single server respectively.

CIFS analysis

Figure 3 below is a similar chart for the CIFS results. Note that we have fewer than half as many data points for CIFS as for NFS. As a result, the regression fit is fairly loose, at R² ≈ 0.5 vs. ~0.8 for NFS. Also, it's hard not to notice how few submissions do better than average in CIFS throughput vs. disk drives. Once again, SSD and Flash Cache submissions were removed from this analysis for fairness.

(SCISFS100923-003) (c) 2010 Silverton Consulting, All Rights Reserved


I attribute Figure 3’s lack of correlation to the relatively low-end capability of the systems represented here.  A majority of these systems sustained below 10K CIFS throughput operations per second whereas more than half of the NFS submissions were over 50K NFS throughput operations per second.  These relatively low-end CIFS systems probably do not perform as efficiently as higher end systems, especially with respect to disk drives.
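To make the R² comparison concrete, here is a small sketch computing the coefficient of determination for a tight trend versus a loose one. The data are synthetic, chosen only to mimic a high-correlation (NFS-like) and a low-correlation (CIFS-like) scatter.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a 1-D least-squares linear fit."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)          # unexplained variation
    ss_tot = np.sum((y - np.mean(y)) ** 2)   # total variation
    return 1 - ss_res / ss_tot

# Synthetic data: the same underlying trend, with small vs. large noise.
x = np.array([50.0, 100, 200, 300, 400, 500])
tight = 300 * x + np.array([5_000, -4_000, 6_000, -5_000, 4_000, -6_000])
loose = 300 * x + np.array([40_000, -35_000, 45_000, -50_000, 38_000, -42_000])

print(f"tight fit R^2 = {r_squared(x, tight):.2f}")  # close to 1
print(f"loose fit R^2 = {r_squared(x, loose):.2f}")  # noticeably lower
```

The looser the scatter around the trend line, the less of the throughput variation the drive count "explains," which is what the drop from ~0.8 (NFS) to ~0.5 (CIFS) reflects.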


Our scorecard for SPECsfs2008 submissions now stands at 36 NFS vs. 15 CIFS results. Such a skew seems unwarranted given the preponderance of CIFS in the marketplace, but keep those NFS submissions coming as well.

Our new throughput ops vs. number of disk drives analysis was added at a reader's request and is now available for SPECsfs2008. It shows that CIFS systems have room to improve their use of disk drives, while NFS systems show a reasonable spread of disk-use efficiency, with superior results readily obtainable.

This performance dispatch was originally sent out to our newsletter subscribers in September of 2010.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of NAS system features and performance please take the time to examine our NAS Briefing available for purchase from our website.

A PDF version of this can be found at

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA, offering products and services to the data storage community.

* SPECsfs2008 results from


@ Available at