Sep 30, 2014
 

This Storage Intelligence (StorInt™) dispatch covers SPECsfs2008 benchmark results[1]. There have been four new SPECsfs2008 results since our last report in June of 2014. Three of the new submissions are for NFSv3 and the fourth is for CIFS/SMB. Two (NFSv3 and CIFS/SMB) are from EMC using their latest S210 Isilon storage cluster; the other two (NFSv3 only) are for the Gluesys VTS 7200 and Arkologic SMA 8100 storage systems. We start our analysis with CIFS/SMB results.

CIFS/SMB results

We begin our CIFS/SMB discussion with our Top 10 Overall Response Time (ORT) in Figure 1.

Figure 1 Top 10 SPECsfs2008 CIFS/SMB Overall Response Time results (bar chart)

Recall that SPECsfs2008 ORT is an average response time calculated across the entire SPECsfs2008 benchmark run. In Figure 1 one can see that the new 14-node Isilon S210 with QDR InfiniBand cluster interconnect has the #2 ORT at ~1.1msec. The new Isilon system was running OneFS 7.1.1 with SSDs used for metadata write acceleration and had about 10TB of SSDs. The Isilon S210 storage cluster also had ~3.6TB of DRAM cache and 173TB of 600GB 10Krpm SAS disks for backend storage.
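For readers who want a feel for how a single ORT number falls out of a whole benchmark run, here is a minimal Python sketch. The load points and response times are made up for illustration, and the area-under-the-curve convention shown is only my reading of how such an average can be computed, not the official SPEC calculation (see the SPEC run rules for that).

```python
import numpy as np

# Requested load points (ops/sec) and measured response times (msec) -- illustrative only.
ops = np.array([25_000, 50_000, 75_000, 100_000, 125_000, 150_000], dtype=float)
rt_msec = np.array([0.6, 0.7, 0.9, 1.1, 1.4, 1.9])

# One convention for an "overall" response time: area under the response-time vs.
# throughput curve, normalized by the maximum throughput achieved (trapezoidal rule).
ort = np.trapz(rt_msec, ops) / ops.max()
print(f"Approximate ORT: {ort:.2f} msec")
```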

Perhaps a more interesting view on the new S210 performance is shown in Figure 2, the top 10 CIFS/SMB throughput operations/second (ops/sec) results.

Figure 2 Top 10 SPECsfs2008 CIFS/SMB throughput operations per second results (bar chart)

Now in Figure 2 we see the 14-node Isilon S210 cluster coming in at #5, outperforming a 28-node Isilon S200 storage cluster (#6 above) at ~364K vs. ~302K CIFS/SMB throughput ops/sec. Isilon's new system was able to beat the older system with about half the disk drives: 332 vs. 672 for the 14-node S210 cluster and 28-node S200 cluster, respectively. Perhaps a better comparison would be the 14-node S200 cluster (#7), which reached only ~201K CIFS/SMB throughput ops/sec using a more similar 316-drive configuration. This says that the new hardware/software is ~1.8X faster than the older system.
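For anyone who wants to reproduce that comparison arithmetic, here is a quick Python sketch using the rounded ops/sec and drive counts quoted above; these are the approximations from this discussion, not the exact published numbers.

```python
# Back-of-the-envelope comparison using the rounded figures quoted above.
systems = {
    "S210, 14-node": {"ops": 364_000, "drives": 332},
    "S200, 28-node": {"ops": 302_000, "drives": 672},
    "S200, 14-node": {"ops": 201_000, "drives": 316},
}

new = systems["S210, 14-node"]
for name in ("S200, 28-node", "S200, 14-node"):
    old = systems[name]
    print(f"S210 (14-node) vs. {name}: "
          f"{new['ops'] / old['ops']:.2f}X the ops/sec with "
          f"{new['drives'] / old['drives']:.2f}X the drives")
```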

In Figure 3 we return to a perennial favorite, the NFS vs. CIFS/SMB performance chart.

Figure 3 SPECsfs2008 NFS vs. CIFS/SMB throughput operations per second scatter plot with linear regression

Recall that our NFS vs. CIFS/SMB performance chart only plots vendor submissions using the same hardware and software for both NFSv3 and CIFS/SMB benchmarks. The latest addition is the EMC Isilon S210. As discussed above, the S210 attained ~364K CIFS/SMB ops/sec but only reached ~250K NFSv3 ops/sec. According to the linear regression, a typical system can generate ~1.4X more CIFS/SMB ops/sec than the same system using the NFSv3 file access protocol. Note: this is CIFS/SMB version 1.1 and does not include any of the performance enhancements that came with SMB2.2 and SMB3, which dramatically improved SMB ops/sec performance.
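As a rough illustration of the regression behind that ~1.4X figure, here is a minimal Python sketch. Only the Isilon S210 point (~250K NFSv3, ~364K CIFS/SMB) comes from the discussion above; the other (NFS, CIFS) pairs are placeholders standing in for the rest of the dual submissions, so the fitted slope will not exactly match the published chart.

```python
import numpy as np

# Dual-submission throughput pairs (NFSv3, CIFS/SMB ops/sec). Only the Isilon S210
# point (~250K NFS, ~364K CIFS) comes from the text above; the rest are placeholders.
nfs  = np.array([40_000, 75_000, 110_000, 160_000, 250_000], dtype=float)
cifs = np.array([55_000, 105_000, 150_000, 230_000, 364_000], dtype=float)

slope, intercept = np.polyfit(nfs, cifs, 1)   # least-squares fit: cifs ~ slope*nfs + intercept
r = np.corrcoef(nfs, cifs)[0, 1]

print(f"CIFS ops/sec ~= {slope:.2f} * NFS ops/sec + {intercept:,.0f}  (r = {r:.3f})")
```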

A couple of caveats are in order:

  • We are not comparing the performance of the exact same protocols. CIFS/SMB is a stateful protocol and NFSv3 is a stateless protocol, so this should give an advantage to CIFS/SMB.
  • There are proportionately more data transfer requests in the CIFS/SMB workload than in the NFSv3 workload, i.e., 29.1% of the CIFS operations are READ_ANDX and WRITE_ANDX while only 28% of the NFSv3 operations are READ and WRITE operations. I believe having fewer data transfers should provide an advantage to NFSv3[2].
  • There aren't many dual NFSv3-CIFS/SMB vendor submissions, only 14, six of which are EMC Isilon systems. So we could just be seeing an artifact of EMC Isilon's CIFS/SMB performance rather than a trend that applies to all vendors.

NFSv3 results

As for NFSv3 performance, the only real change to our Top 10 charts is shown in Figure 4, the Top 10 ORT results.

Figure 4 Top 10 SPECsfs2008 NFS Overall Response Time (ORT) results (bar chart)

Here we see two new systems coming in at #1 and #2. I had never heard of Arkologic storage before, but they submitted an all-flash storage system and it shows in their #2 ORT of ~0.33msec with ~141K NFSv3 ops/sec. The Gluesys VTS 7200 at a 0.29msec ORT (#1) had just a single 480GB SSD used as a read cache, but it only reached ~12.3K NFSv3 ops/sec. Both the Arkologic and Gluesys VTS 7200 SPECsfs2008 performance graphs (see the HTML version of the reports) are almost flat in comparison to other submissions we have seen. It would seem that both systems could have provided more throughput performance but were held back, possibly to show better ORT results, but I can't be sure.

Significance

I am always happy to see new CIFS/SMB submissions. I am still hopeful that SPECsfs will someday move off of CIFS/SMB1.1 onto something more recent, like SMB3, but this continues to be only rumor and a remote possibility at best. And it's especially enjoyable when vendors submit the same hardware and software under both CIFS/SMB and NFSv3 benchmarks.

As for NFS ORT, today all top 10 systems have some flash storage and at least three had only SSD storage (HUS 4100, EMC VNX8000 and Arkologic). SSD storage systems excel in providing very low access times. This is very similar to what we see in SPC-1 block storage results, where all of the top 10 LRT systems have lots of SSDs in them. The last of the all-disk SPECsfs2008 NFSv3 submissions is relegated to 17th place in this quarter's ORT results.

As always, suggestions on how to improve any of our performance analyses are welcomed. Additionally, if you are interested in more file performance details, we now provide a fuller version (Top 20 results) of some of these charts and a set of new NFSv3 and CIFS/SMB ChampionsCharts™ in our recently updated (March, 2015) NAS Buying Guide, available from our website. Top 20 versions of some of these charts are also displayed in our recently updated (April, 2015) SAN-NAS Buying Guide, also purchasable from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in September of 2014.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers. ]  

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community

[1] All SPECsfs2008 results available from http://www.spec.org/sfs2008/results/ as of 24Sep2014

[2] Please see the SPECSFS2008 User’s Guide available at: http://www.spec.org/sfs2008/docs/usersguide.html

Sep 29, 2011
 

We return now to file system performance and analyze the latest SPECsfs® 2008* benchmark results. There has been only one new NFS benchmark since our last report, an HDS 3090-G2 (powered by BlueArc®) cluster run. However, this new submission did not break into our normal throughput or ORT top 10 charts. Also, there were no new CIFS benchmark submissions since our last report. Let's first correct the mistake in our last SPECsfs2008 report.

Latest SPECsfs2008 results

Scatter plot showing NFS throughput per disk in blue and CIFS throughput per disk in red, with linear regression lines drawn for both. The linear regression line for CIFS has a higher slope than the one for NFS

(SCISFS110929-001) (c) 2011 Silverton Consulting, All Rights Reserved

 

Figure 1 Scatter plot of “NFS throughput” per disk vs. “CIFS throughput” per disk

We made a mistake in the previous version of this chart and, as such, have fixed and updated it here. The main differences between this version and the original are the removal of multiple EMC Isilon NFS and CIFS runs and, of course, the addition of the latest HDS NAS 3090-G2 NFS submission somewhere in the lower left corner.

As it turns out, Isilon had been using SSDs all along in their submissions, which we didn't catch before. We don't know how we missed this, as Isilon's benchmark reports clearly indicated the use of SSDs. Be that as it may, sorry for any confusion we may have caused.

The correlations between the number of disks and protocol throughput are still pretty good, at 0.98 for CIFS and 0.82 for NFS. However, without the EMC Isilon CIFS submissions we really only have 15 CIFS vs. 37 NFS submissions, and the CIFS results are skewed mainly toward low-end systems. Nonetheless, the results are still pretty impressive and clearly show an advantage for CIFS, at least with respect to throughput per disk spindle deployed.
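For anyone curious how these per-spindle numbers are derived, here is a minimal Python sketch of the approach: regress each protocol's throughput against its drive count and report the slope (ops/sec per spindle) and the correlation. The (drives, ops/sec) pairs below are placeholders, not the actual submission data behind these charts, so the outputs are illustrative only.

```python
import numpy as np

def spindle_fit(drives, ops):
    """Fit ops/sec = slope * drives + intercept; return the slope and correlation."""
    slope, _intercept = np.polyfit(drives, ops, 1)
    r = np.corrcoef(drives, ops)[0, 1]
    return slope, r

# Placeholder (drive count, ops/sec) data -- not the actual submissions behind Figure 1.
nfs_drives  = np.array([48, 96, 164, 280, 448, 672], dtype=float)
nfs_ops     = np.array([18e3, 30e3, 55e3, 90e3, 135e3, 190e3])
cifs_drives = np.array([20, 36, 65, 120, 280], dtype=float)
cifs_ops    = np.array([10e3, 17e3, 30e3, 56e3, 130e3])

for name, d, o in (("NFS", nfs_drives, nfs_ops), ("CIFS", cifs_drives, cifs_ops)):
    slope, r = spindle_fit(d, o)
    print(f"{name}: ~{slope:.0f} ops/sec per spindle, r = {r:.2f}")
```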

Column plot of NFS throughput operations per disk spindle with Avere (2node, 6node and 1 node) taking the top 3 spots

(SCISFS110929-002) (c) 2011 Silverton Consulting, All Rights Reserved

Figure 2 Top 10 NFS throughput operations per second per disk drive

Higher is better on this chart. One can see the newest entry here, the HDS 3090-G2 BlueArc system, coming in at #10. BlueArc (and its former HDS OEM, now parent company) hold 7 of the top slots here and Avere has the rest. The results above seem to indicate that the HDS 3090 system is using BlueArc's Mercury or midrange controller, but looking at the reports it could just as easily have been the Titan or high-end controller.

As you may recall, the Avere system is a NAS virtualization engine which has other NAS boxes behind it. One thing about this chart is that we exclude any and all SSD or NAND-based caching. But in all honesty, the Avere systems have lots of cache (163GB and 424GB of RAM for #1 and #2, respectively) and the latest HDS entry is no slouch either, with 184GB of RAM caching spread throughout the VSP and BlueArc controllers. We may need to establish a cutoff limit for RAM as well here.

Even though there have been no new CIFS submissions, we provide a chart similar to Figure 2 below because we have not shown one recently.

Column plot of CIFS throughput per disk spindle with Fujitsu TX 300 taking #1 and #3 slot and Apple Xserve taking second

(SCISFS110929-003) (c) 2011 Silverton Consulting, All Rights Reserved

 

Figure 3 Top 10 CIFS throughput operations per second per disk drive

In Figure 3 the Fujitsu TX300 S5 RAID 50, Apple's Xserve, and Fujitsu TX300 S5 RAID 0 (ouch!) take top honors with 20, 65 and 20 disks, respectively. In contrast, EMC's Celerra VG8 had 280 disks and the Huawei Symantec system 1344, considerably more drives than the entry-level systems elsewhere on this chart.

 

 

Significance

We believe CIFS is still winning in its horse race against NFS, but future submissions could tip the scales either way. More CIFS submissions, especially at the enterprise level (and without the use of SSDs), would help.

As an aside, we went looking to determine what version of SMB (CIFS) SPECsfs2008 uses but could not find it stated in the reports. This probably means it uses SMB 1 and not the latest SMB 2.1 (from Microsoft), which might speed it up even more. On the other hand, SPECsfs2008 clearly states that it uses NFSv3. It's unclear to us whether NFSv4 would speed up NFS throughput per disk or not.

As always we welcome any recommendations for improvement of our SPECsfs2008 analysis.  For the discriminating storage analyst or for anyone who wants to learn more we now include a top 30 version of these and all our other charts plus further refined performance analysis in our NAS briefing which is available for purchase from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in September of 2011.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of  NAS storage system features and performance please take the time to examine our recently updated (March, 2015) NAS Buying Guide available for purchase from our website.]

~~~~

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community

* SPECsfs2008 results from http://www.spec.org/sfs2008/results/


Jun 28, 2011
 

We return now to file system performance and analyze the latest SPECsfs® 2008* benchmark results. There were nine new NFS benchmarks: five from Isilon on their new S200 hardware at various node configurations (140-, 56-, 28-, 14- and 7-node), two from LSI's Cougar 6720 at different memory configurations (34 and 50GB), one from Huawei Symantec for an 8-node Oceanspace N8500 Clustered NAS system and one from Alacritech for an ANX 1500-20 with two NetApp 6070Cs as backend storage. Both Huawei Symantec and Isilon submitted duplicate configurations under the CIFS protocol, adding six new results there as well.

Latest SPECsfs2008 Results

SCISFS110628-001 (c) 2011 Silverton Consulting, Inc., All Rights Reserved


Higher is better on this chart. Isilon took three out of the top 10 slots and Huawei Symantec came in as the new number two.  Results in this category seem to be increasingly going out of sight.

Figure 1 SPECsfs2008* Top 10 NFS throughput

Isilon's 140-node system provides exceptional performance, but the real message may be that Isilon provides near-linear performance scaling with node count. For example, their fourth-place result with 56 nodes achieved ~41% of their top performer's throughput with only ~40% of the nodes.
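As a quick back-of-the-envelope check on that linearity claim, using only the ratios quoted above:

```python
# Scaling check using the ratios quoted above: the 56-node result delivered ~41%
# of the 140-node result's throughput with 56/140 = 40% of the nodes.
node_ratio = 56 / 140   # 0.40
perf_ratio = 0.41       # ~41% of the top performer, per the text
print(f"Scaling efficiency: {perf_ratio / node_ratio:.2f}")  # ~1.0, i.e. essentially linear
```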

SCISFS110628-002 (c) 2011 Silverton Consulting, Inc., All Rights Reserved


Figure 2 Top 10 NFSv3 ORT

Next we turn to response time.  Lower is better here.

The new Alacritech system was the top performer with an overall response time of 0.92msec. Also, the Huawei Symantec Oceanspace system came in at number three with a 0.99msec ORT. In addition, one can see a new LSI Cougar 6720 system placed at number ten with 1.64msec, using 34GB of system memory. In our last report we discussed the first NAS system to break the 1msec ORT barrier (EMC VNX VG8 with an all-SSD backend) and now there are two more. While SSDs probably helped the ANX, it's unclear what secret sauce Huawei is using to reach this ORT threshold.

SCISFS110628-003 (c) 2011 Silverton Consulting, Inc., All Rights Reserved


Figure 3 Top CIFS throughput results

Turning to CIFS performance, we examine top throughput results. The problem here is that we almost need a log scale, with the top ten performers spanning from ~1.6M down to 64K CIFS throughput operations/second. Most likely this is due to the paucity of mid-range and enterprise-class CIFS submissions, but it is striking nonetheless.

As can be seen, Isilon took five out of the top ten with their scale-out NAS system. Now that EMC has acquired Isilon, EMC owns eight out of the top ten CIFS-performing systems. In addition, the new Huawei Symantec NAS submission came in at number two with 712K CIFS ops/sec.

CIFS vs. NFS Performance

We return to our recurring debate regarding CIFS vs. NFS performance. We have been told repeatedly that we cannot and should not compare CIFS and NFS performance. Nevertheless, we find it intriguing that, when looking at the exact same hardware submitted under both the NFS and CIFS protocols, systems seem on average to provide more CIFS throughput. As such, whenever new data surfaces we feel obligated to report how our analysis should change. The ongoing challenge is to produce a rational argument to convince the skeptics in our audience.

Recall that the last time we discussed this topic, a recent enterprise-class system had changed our viewpoint to CIFS having only a slight advantage over NFS. Recent results dramatically alter that conclusion.

SCISFS110628-004 (c) 2011 Silverton Consulting, Inc., All Rights Reserved

Figure 4 Scatter plot: CIFS vs. NFS throughput operations per second

With the addition of the Huawei benchmark and Isilon's five benchmarks we have more NFS and CIFS results to examine. As such, the latest plotted regression line now shows that a system using the CIFS protocol can do ~1.4X more operations/second than the same system using NFS.

By examining the SPECsfs2008 user's guide[1] one can see that the percentages of file reads and writes, while similar between the NFS and CIFS benchmark operational profiles, are not exactly the same. According to the documentation:

  • CIFS read and write data requests represent 29.1% of overall operations (with 20.5%:8.6% R:W) and
  • NFS read and write data requests represent 27.0% of overall operations (with 18%:9% R:W)

Also, there is no information on CIFS file size and block size distributions, and thus we must assume that the distributions and averages are similar between the two workloads. Given all the above, it seems evident that, on average, the CIFS protocol provides more read/write data throughput than the NFS protocol on a similarly configured NAS system.
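To make the workload-mix difference concrete, here is a small Python sketch that converts an arbitrary overall ops/sec figure into data-transfer operations using the percentages quoted above; the 500K ops/sec level is just an illustrative number, not a benchmark result.

```python
# Convert an arbitrary overall ops/sec figure into data-transfer operations using
# the operation mixes quoted above. The 500K ops/sec level is illustrative only.
mixes = {
    "CIFS": {"read": 0.205, "write": 0.086},   # READ_ANDX / WRITE_ANDX share of all ops
    "NFS":  {"read": 0.180, "write": 0.090},   # READ / WRITE share of all ops
}

overall_ops = 500_000
for proto, mix in mixes.items():
    data_share = mix["read"] + mix["write"]
    print(f"{proto}: {overall_ops * data_share:,.0f} data-transfer ops/sec "
          f"({data_share:.1%} of the workload)")
```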

As further proof, please review the following charts on NFS and CIFS throughput vs. # disk drives.

SCISFS110628-005 (c) 2011 Silverton Consulting, Inc., All Rights Reserved

Figure 5 SPECsfs2008 Scatter plot: NFS and CIFS throughput vs. disk drive count

As one can see from the respective linear regression equations, CIFS generates about ~470 throughput operations per second per disk drive while NFS delivers only ~300 throughput operations per second per spindle. Note that both charts contain data only from disk-drive-only (no SSD or Flash Cache) submissions for their respective protocols. Again, given the similarity in the user workloads, we can only conclude that CIFS is a better protocol than NFS.

Significance

First, as EMC Isilon has shown, scale-out NAS systems can indeed deliver linear performance that multiplies as a function of node count. Isilon has demonstrably put this question to rest, but I would wager most other scale-out NAS systems could show similar linearity.

Next, with respect to CIFS vs. NFS, more data alters the measured advantage, but the story remains the same – CIFS performs better than NFS.  Obviously, this depends solely on the number of vendors that submit both NFS and CIFS benchmarks on the same hardware but we now have 12 submissions and the data seems pretty conclusive. Admittedly three of these are from Apple and five from Isilon but at least now we have multiple systems at both the entry level and enterprise level.  We could always use more and welcome future submissions to refine this analysis.

As always we welcome any recommendations for improvement of our SPECsfs2008 analysis.  We also now include a top 30 version of these and other charts plus further analysis in our NAS briefing which is available for purchase from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in June of 2011.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of NAS storage system features and performance please take the time to examine our recently updated (March, 2015) NAS Buying Guide available for purchase from our website.]

[After this went out to our newsletter readers and after we blogged about the results in the last chart, we determined that EMC Isilon had been using SSDs in their testing.  We have tried to eliminate SSDs and NAND caching from our per disk drive spindle comparisons.  A later dispatch (released in September 2011) corrected this mistake on the last chart.  But the net result is that 1) the correlations are still pretty good, and 2) CIFS continues to exceed NFS in throughput operations per second per spindle.  Sorry for any confusion.]

—–

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community

* SPECsfs2008 results from http://www.spec.org/sfs2008/results/

[1] See http://www.spec.org/sfs2008/docs/usersguide.html as of 28 June 2011


Sep 23, 2010
 

We once again return to analyze the latest SPECsfs® 2008* benchmark results. There were five new NFS benchmarks: one from EMC (Celerra VG8/VMAX), one from NEC (NV7500, 2-node) and three from Hitachi (3090 single server, 3080 single server and 3080 cluster). In addition, there were no new CIFS benchmarks.

Latest SPECsfs2008 NFS results

(SCISFS100923-001) (c) 2010 Silverton Consulting, All Rights Reserved


The EMC Celerra VG8 showed up in the top 10 as the new #6, with 135K NFS throughput operations per second. Surprisingly, unlike their #10 result, the new EMC result had no Flash/SSDs. None of the other new submissions reached into the top 10, with Hitachi's 3080 cluster topping out at ~79K NFS throughput ops.

One should probably remember from our last analysis@ that the #1 and #3 HP submissions were blade systems with the NFS-driving servers and NFS-supporting servers in the same blade cabinet. Any configuration like this may enjoy an unfair advantage, utilizing faster "within the enclosure" switching rather than external fabric switching.

None of the new NFS submissions broke into the top ten on NFS Operational Response Time so refer to our previous analysis if you want to see those results.

(SCISFS100923-002) (c) 2010 Silverton Consulting, All Rights Reserved


Above is a new chart for our SPECsfs2008 analysis. We show here a scatter plot of NFS throughput operations per second against the solution's total number of disk drives. Systems with relatively more effective utilization of disk drives show up above the linear regression line and poorer ones below. To be fair, we excluded any system that contained Flash Cache or SSDs, namely NetApp with PAM and EMC Celerra with SSDs.
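Here is a minimal Python sketch of that above/below-the-line reading: fit ops/sec against drive count for the disk-only submissions and flag each system by the sign of its residual. The submission list below is purely illustrative, not the actual benchmark data behind the chart.

```python
import numpy as np

# Illustrative submission list -- (name, drive count, NFS ops/sec, uses Flash/SSD?).
submissions = [
    ("System A", 280, 135_000, False),
    ("System B", 287, 101_000, False),
    ("System C", 164,  79_000, False),
    ("System D",  82,  40_000, False),
    ("System E", 120,  95_000, True),   # excluded: Flash Cache / SSD assisted
]

disk_only = [(n, d, o) for n, d, o, flash in submissions if not flash]
drives = np.array([d for _, d, _ in disk_only], dtype=float)
ops = np.array([o for _, _, o in disk_only], dtype=float)

slope, intercept = np.polyfit(drives, ops, 1)
for name, d, o in disk_only:
    residual = o - (slope * d + intercept)
    side = "above the regression line (more effective per drive)" if residual > 0 else "below the line"
    print(f"{name}: {side} by {abs(residual):,.0f} ops/sec")
```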

It's hard to identify specific subsystems here, but the systems with the top non-Flash/SSD NFS throughput are trackable. The top three had over 330K, over 170K and over 160K NFS throughput operations per second, respectively. The latest EMC Celerra submission had 280 disk drives, the latest NEC had 287 drives and the three Hitachi submissions had 164, 162 and 82 drives for the 3090, 3080 cluster and 3080 single server submissions, respectively.

CIFS analysis

Figure 3 below is a similar chart for CIFS results. Realize we have under half as many data points for CIFS as we have for NFS. As a result, the regression coefficient is pretty loose at R**2 of ~0.5 vs. ~0.8 for NFS. Also, it's hard not to notice that few submissions do better than average in CIFS throughput vs. disk drives. Once again, SSD and Flash Cache submissions were removed from this analysis for fairness.

(SCISFS100923-003) (c) 2010 Silverton Consulting, All Rights Reserved


I attribute Figure 3’s lack of correlation to the relatively low-end capability of the systems represented here.  A majority of these systems sustained below 10K CIFS throughput operations per second whereas more than half of the NFS submissions were over 50K NFS throughput operations per second.  These relatively low-end CIFS systems probably do not perform as efficiently as higher end systems, especially with respect to disk drives.

Significance

Our scorecard for SPECsfs 2008 submissions now stands at 36 NFS vs. 15 CIFS results. Such a skewed submission ratio seems unwarranted given CIFS's preponderance in the marketplace, but keep those NFS submissions coming in as well.

Our new throughput ops vs. number of disk drives analysis was added per a request for other performance analyses and is now available for SPECsfs2008. It shows that CIFS systems have room to improve their use of disk drives, while NFS systems show a reasonable spread of disk use efficiency with superior results readily obtainable.

This performance dispatch was originally sent out to our newsletter subscribers in September of 2010.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of NAS system features and performance please take the time to examine our NAS Briefing available for purchase from our website.

A PDF version of this can be found at

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community


* SPECsfs2008 results from http://www.spec.org/sfs2008/results/

 

@ Available at http://silvertonconsulting.com/cms1/dispatches/


Mar 31, 2010
 

We now turn to analysis of the latest SPECsfs® 2008* benchmark results. Fortunately, there were three new NFS and CIFS benchmarks over the last quarter, including two for the EMC Celerra NS-G8 with a V-Max backend and one for Panasas. But the most exciting item is that EMC benchmarked their Celerra system under both the NFS and CIFS protocols. Now that we have our 6th combined result we can revisit my contention that CIFS has better throughput than NFSv3.

Latest SPECsfs2008 CIFS results

(SCISFS100317-001) (c) 2010 Silverton Consulting, All Rights Reserved


You may recall that the last time we discussed this topic we claimed that CIFS had ~2X the throughput of NFSv3. That was based on the first five results in the bottom left quadrant of this chart (see Figure 1). With EMC's latest CIFS result we must change this claim.

As shown above, CIFS and NFSv3 throughput were roughly the same for EMC (110K for NFS vs. 118K for CIFS). Hence, our regression equation has changed significantly and now shows that CIFS throughput is roughly equal (0.99 multiplier) to NFSv3, with the addition of a 10.5K constant for CIFS. More results would obviously help, but the results clearly show we were wrong to say that CIFS had twice the throughput of NFS; we now say that CIFS has only a slight advantage over NFSv3.
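To see what that regression implies in practice, here is a tiny Python sketch applying the quoted equation (CIFS ops/sec roughly 0.99 x NFS ops/sec + 10.5K) to a few arbitrary NFS throughput levels; the 110K point reproduces EMC's ~118K CIFS result fairly closely.

```python
# Applying the regression quoted above: CIFS ops/sec ~= 0.99 * NFS ops/sec + 10.5K.
# The NFS throughput levels below are arbitrary examples.
def predicted_cifs(nfs_ops: float) -> float:
    return 0.99 * nfs_ops + 10_500

for nfs_ops in (25_000, 110_000, 250_000):
    print(f"NFS {nfs_ops:>7,} ops/sec -> predicted CIFS ~{predicted_cifs(nfs_ops):,.0f} ops/sec")
```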

Of course, everyone we talked with thought we were wrong to compare the two at all. They all said the two workloads represent completely different protocols, not the least of which is that CIFS is stateful and NFSv3 stateless. As such, they should not be compared. Nonetheless, I still maintain that these two can be usefully compared and will continue to do so.

(SCISFS100317-002) (c) 2010 Silverton Consulting, All Rights Reserved


We next review the latest CIFS throughput results. Viewing Figure 2 above, EMC's Celerra NS-G8/V-Max wins the top spot with no competition whatsoever. One must realize there have been only a limited number of CIFS results thus far, and EMC is probably the only tier-one vendor to have submitted one to date. Notwithstanding all that, I would say this is a pretty impressive result for EMC.

(SCISFS100317-003) (c) 2010 Silverton Consulting, All Rights Reserved


We have decided to add another chart in our ongoing quixotic comparison of CIFS vs. NFSv3. This time we focus on overall response time, or ORT (see Figure 3 above). ORT is the mean response time over the entire duration of the benchmark activity. As the chart shows, CIFS has a much better response time than NFSv3. Realize that ORT was measured for similar (see discussion above) throughput activity and that the R**2 was only ~0.7.

The argument that we are measuring two different protocols probably holds more weight when comparing ORT. Statefulness can only help CIFS in any ORT comparison. Also, the ratio of non-data-transfer operations to data-transfer operations may not be that different between the CIFS and NFSv3 workloads, but the style of, and responses to, the non-data-transfer operations are significantly different between the two.

Next we turn to absolute CIFS ORT results, and here one can see the continued dominance of some of the earlier results, as EMC Celerra comes in at #5 in the top 10 (see Figure 4 below).

(SCISFS100317-004) (c) 2010 Silverton Consulting, All Rights Reserved


As we have discussed before, ORT tends to be a transaction-oriented measurement and, as such, provides a useful complement to SPECsfs' other throughput results.

Latest SPECsfs2008 NFS results

There were two new NFS benchmarks, one from Panasas and another EMC Celerra NS-G8/V-Max run.  Both these results broke into the top 10 in throughput but neither altered the Top 10 ORT results for NFSv3.

(SCISFS100317-005) (c) 2010 Silverton Consulting, All Rights Reserved


In Figure 5, one can see the Celerra NS-G8 came in at #7 and the Panasas result came in at #9 in the top ten.  At this point, just about every top tier NAS system vendor has submitted at least one result for SPECsfs 2008 NFSv3.

Significance

Our scorecard for SPECsfs 2008 submissions now stands at 29 NFSv3 vs. 12 CIFS results. Given the preponderance of CIFS usage in the field, this seems more skewed than necessary. I would encourage more vendors to submit CIFS results to address this imbalance. Also, where it makes sense, be sure to include an NFSv3 result as well. It will definitely help clarify my CIFS vs. NFSv3 comparisons and, who knows, it may prove me wrong yet again.

Nevertheless, it’s good to see more top end systems submitting SPECsfs 2008 results of any kind.  One can only hope that such submissions encourage more vendors to act.

This performance dispatch was originally sent out to our newsletter subscribers in March of 2010.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of NAS system features and performance please take the time to examine our recently updated (March, 2015)  NAS Buying guide available for purchase from our website.

A PDF version of this dispatch can be found at

SCI 2010 March 31 Latest SPECsfs(R) 2008 performance results analysis

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.


* SPECsfs2008 results from http://www.spec.org/sfs2008/results/


Jun 25, 2009
 

We now turn to analysis of the new SPECsfs®2008* benchmark results. Unfortunately there were not a lot of high-end SPECsfs2008 results; the most notable new submissions were from ONStor and Apple for NFS, and Apple and Fujitsu Siemens for CIFS.

Latest SPECsfs2008 results

(SCISFS090625-001) 2009 (c) Silverton Consulting, All Rights Reserved


The ONStor Cougar now takes the second spot in the top 10 throughput results. The Cougar system had ~½ the disks of the ExaStore box and roughly 1/7th the memory (cache). Given all that, its results stand up pretty well. The two new Apple NFS benchmark results (Snow Leopard and Leopard server) round out the rest of the new members of the top 10 list at numbers 7 and 10, respectively.

Recall from our last report# that some NetApp results utilized their PAM card. Also, the SGI product result used InfiniBand, both ExaStore benchmarks used 10GbE and all the rest used GigE. In all fairness, the networking connection may not be a limiting factor in SPECsfs2008 results.

(SCISFS090625-002) 2009 (c) Silverton Consulting, All Rights Reserved


As discussed last time for NFS ORT results, one can clearly see the advantage of NetApp’s PAM with FC disks and yet, the new ONStor Cougar benchmark shows up at number 3, only ~60 microsec behind the NetApp/PAM result.  The only other new showing was Apple’s Snow Leopard server coming in at number 9.

Next we turn to CIFS results; the five new results have more than doubled the number of SPECsfs2008 CIFS benchmarks. Recall that the SGI system uses InfiniBand while all the others use GigE interfaces.

(SCISFS090625-003) (c) 2009 Silverton Consulting, All Rights Reserved


We suppose it's not surprising to see Apple's Snow Leopard leading the pack, coming in as the new #1 in CIFS throughput considering its place in the market, but one would think others could do better. More impressive is that the Snow Leopard result used only 65 disks whereas the SGI result sported 242 disks (~4X). It's unclear to us whether this is Apple's new OEM of Sun's ZFS file system at work here, but clearly Apple CIFS performance has improved significantly.

(SCISFS090625-004) 2009 (c) Silverton Consulting, All Rights Reserved


Once again, Apple shows up well in the CIFS ORT results. As best we can determine, this #1 result was an early Leopard version (Mac OS X 10.5.1), whereas the #3 result (using Mac OS X 10.5.7) had a 2.93GHz Nehalem processor. The other major difference was a dual-port GigE card for the #1 result vs. a 6-port GigE card in the slower version.

(SCISFS090625-005) 2009 (c) Silverton Consulting, All Rights Reserved


Figure 5 SPECsfs2008* CIFS vs. NFS throughput correlation

We have discussed this in earlier reports but once again the results would support our contention that the CIFS protocol results in better throughput than NFSv3.  As pointed out to me, a couple of provisos are warranted here, namely:

  • NFS workloads are not readily comparable to CIFS in a number of dimensions, not the least of which is that NFS is stateless and CIFS is stateful. Also, the relative proportions of the actual workloads don't exactly match up, e.g., the percentages for NFS read and write operations versus CIFS read_andx and write_andx operations are slightly different (NFS read@18% vs. CIFS read_andx@20.5% and NFS write@10% vs. CIFS write_andx@8.6%), file sizes are different, and all the remaining operations, which, to be fair, represent a significant majority of their respective workloads, are by definition nigh impossible to compare. SPECsfs benchmarks for the two are implemented to reflect all of these differences.
  • A majority of these results (3 of 5) come from the same vendor (Apple) and their great CIFS and/or poor NFS implementations may be skewing results.
  • Only five subsystems have recorded results for both interfaces but the correlation looks pretty good for now.
  • Normally, host operating system effects could skew these results, but the SPECsfs2008 benchmarks emulate their own client-side stacks for both protocols, thus negating any operating system effects.

Nonetheless, once again, considering that at the user level the protocol-specific details are meant to emulate comparable end-user workloads, the results do show a significant (~2.4X) throughput advantage for CIFS over NFS.

Significance

Our earlier discussion of CIFS vs. NFSv3 throughput differences generated quite a lot of debate. It was early then, and still is now, but we continue to stand by our claim: given the benchmark results, CIFS seems to perform better than NFSv3. More dual-protocol results should help clarify this relationship.

Slowly, more SPECsfs2008 results are being released. But where are the major NAS systems? It's been 10 months since the old SPECsfs benchmark was retired and we still lack benchmark results for all the major NAS systems. In the meantime, smaller players continue to release results, just happy to get any visibility, validity and traction they can muster.

This performance dispatch was originally sent out to our newsletter subscribers in June of 2009.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of NAS system features and performance please take the time to examine our NAS Briefing available for purchase from our website.

A PDF version of this can be found at

SCI 2009 June25 Update to SPECsfs® 2008 performance results

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community

* SPECsfs2008 results from http://www.spec.org/sfs2008/results/

# Available at http://www.silvertonconsulting.com/page2/page2d/storage_int_dispatch.html