We once again return to analyze the latest Standard Performance Evaluation Corporation SFS 2008* (SPECsfs® 2008) benchmark results. There were three new NFS benchmark submissions: two from NetApp (FAS6240 with 1TB Flash Cache and FAS3270) and one from LSI [ONStor] (Cougar 6720). In addition, there were two new CIFS submissions: one from EMC (Celerra VG8 with VMAX) and one from NetApp (FAS3210 with 512GB Flash Cache).
Latest SPECsfs2008 NFS results
Figure 1 SPECsfs2008* NFS throughput vs. memory size
We introduce a new chart for our SPECsfs2008 analysis: a throughput vs. memory size scatter plot. For some reason this chart intrigued me when I first created it. Originally all of the data sat in one series with a single linear trend line. I then decided to break the data out into two groups: systems using DRAM caching plus SSDs or NAND caching (6 systems), and systems using only DRAM caching (33 systems).
After splitting the systems into these two groups, I understood better what the chart shows. There appears to be a distinct difference in the throughput gained from DRAM-caching systems versus SSD or NAND-caching systems. Of course, what's missing from these charts is any pricing comparison (pricing is not supplied in SPECsfs reports). Also, the sample size for the SSD/NAND-caching systems is very small, which may skew results.
Figure 2 SPECsfs2008* NFS throughput operations per second
The other issue with this chart is that SSD capacity is not counted in the memory quantity (consistent with how SPECsfs2008 reports memory). Adding SSD usable capacity to system memory changed this chart significantly, as one SSD system used ~19TB of SSD. Nonetheless, from this chart it seems clear that DRAM caching offers better throughput performance for the same amount of memory. Given today's limited sample size, we cannot discern any statistical difference between NAND-cache systems and SSD systems. That analysis must wait until more SSD and NAND-caching systems report in.
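The two-series split described above can be sketched in a few lines. The data points below are hypothetical placeholders (SPECsfs submissions are not reproduced here); the point is the method: fit a separate linear trend line to each caching group and compare the throughput gained per GB of memory.

```python
import numpy as np

# Hypothetical (illustrative) data: memory size in GB vs. NFS throughput
# in ops/sec -- NOT actual SPECsfs2008 submissions.
dram_mem = np.array([32, 64, 128, 256, 512])        # DRAM-cache-only systems
dram_ops = np.array([40e3, 70e3, 120e3, 210e3, 380e3])

flash_mem = np.array([96, 192, 512])                # SSD/NAND-cache systems
flash_ops = np.array([50e3, 85e3, 160e3])

# Fit a separate linear trend line (degree-1 polynomial) to each group,
# mirroring the two-series split used in the scatter plot.
dram_slope, dram_icpt = np.polyfit(dram_mem, dram_ops, 1)
flash_slope, flash_icpt = np.polyfit(flash_mem, flash_ops, 1)

# Compare throughput gained per GB of cache memory for each group.
print(f"DRAM-only trend: {dram_slope:,.0f} ops/sec per GB")
print(f"SSD/NAND trend:  {flash_slope:,.0f} ops/sec per GB")
```

With the made-up numbers above, the DRAM-only slope comes out steeper, which is the shape of the difference the chart suggested; with real submission data the slopes (and their gap) would of course differ.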
NetApp’s FAS6240 showed up as the new #2 in our top 10 NFS throughput operations per second, the only new submission on this chart. The other notable thing about the FAS6240 was its use of SAS disks. There haven’t been many NFS results using SAS disk devices, so the results here are encouraging.
Figure 3 SPECsfs2008* NFS top ORT results
Here we show the top 10 NFS ORT (overall response time) results. Recall that ORT is a median-like response time over the whole benchmark run. All of the new NFS submissions appear in this chart. The NetApp FAS6240 came in at #1 with an ORT of 1.17 msec, the FAS3270 came in at #7 with an ORT of 1.66 msec, and the LSI (ONStor) Cougar 6720 came in at #9 with an ORT of 1.67 msec.
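For readers curious how a single ORT number summarizes a whole run: my understanding of the SPECsfs2008 run rules is that ORT is the area under the response-time vs. delivered-load curve divided by the peak delivered load. Treat both that reading and the data points below as illustrative assumptions, not an official SPEC calculation.

```python
# Sketch of an ORT-style summary: trapezoidal area under the
# response-time vs. load curve, divided by peak load.
# Load points (NFS ops/sec) and response times (msec) are hypothetical.
loads = [10_000, 20_000, 30_000, 40_000, 50_000]
resp_msec = [0.8, 0.9, 1.1, 1.5, 2.4]

# Trapezoidal area under the curve...
area = sum((resp_msec[i] + resp_msec[i + 1]) / 2 * (loads[i + 1] - loads[i])
           for i in range(len(loads) - 1))
# ...then normalize by the peak delivered load.
ort = area / loads[-1]
print(f"ORT ~ {ort:.2f} msec")
```

Because the metric weights the entire load ramp rather than the single worst point, a system that stays flat until near saturation can post a low ORT even if its peak-load latency is ordinary.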
I would have expected the NAND-cached FAS6240 to do well on ORT, but the Avere systems continue to amaze. However, I must say that the Avere systems at #2, #3 and #5 may enjoy some advantage due to their relatively large DRAM caches (98, 163, and 424 GB, respectively).
Below we report on top CIFS throughput results which include the latest submissions from EMC and NetApp.
Figure 4 SPECsfs2008* CIFS top throughput results
EMC’s Celerra VG8 came in as the new #1 with ~143K CIFS throughput operations per second and the NetApp FAS3210 came in at #3 with ~65K CIFS throughput operations per second. The strange thing about the top three results is that the #2 and #3 results used SSDs and NAND cache respectively, while the #1 result just had a lot of disks (312). As we have said in the past, SSDs or NAND cache can substitute for additional spindles and perform just as well, if you don’t need the capacity.
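The spindle-substitution point lends itself to back-of-the-envelope arithmetic. Every rate below (per-spindle ops contribution, cache hit rate) is an assumed round number for illustration only, not a SPECsfs2008 measurement:

```python
# Back-of-the-envelope spindle-substitution arithmetic.
# All rates are assumptions for illustration -- not SPECsfs2008 data.
target_ops = 65_000       # desired CIFS throughput ops/sec
ops_per_spindle = 250     # assumed per-disk ops/sec contribution
cache_hit_rate = 0.6      # assumed fraction of ops absorbed by SSD/NAND cache

# Spindles needed if every op hits disk:
spindles_no_cache = target_ops / ops_per_spindle
# Spindles needed when the cache absorbs a share of the ops:
spindles_with_cache = target_ops * (1 - cache_hit_rate) / ops_per_spindle

print(f"Without cache: {spindles_no_cache:.0f} spindles")
print(f"With cache:    {spindles_with_cache:.0f} spindles")
```

Under these assumed rates the cache displaces well over half the spindles, which is the trade-off at work when a flash-cached system matches a much larger disk farm on throughput but not on raw capacity.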
Figure 5 SPECsfs2008* CIFS top ORT results
Here we show the top 10 CIFS ORT (overall response time) results. Similar to the NFS ORT results, CIFS ORT is a median-like response time over the whole benchmark run. Both new submissions showed up in the top 10: NetApp FAS3210 at #4 and EMC Celerra VG8 at #9. Not surprisingly, 3 out of the top 4 ORT systems used Flash Cache. Apple still holds the coveted #1 spot with an ORT of 1.22 msec, but others are starting to encroach.
I struggled to understand the NFS memory size vs. throughput scatter plot. It should have been plain to see, but it was unclear until I split the data into two series. The fact that DRAM provides better throughput than NAND or SSDs is quite noticeable, but lacking cost information, it’s impossible to compare cost effectiveness.
As always we welcome any recommendations for improvement of our SPECsfs2008 analysis. We also now include a top 30 version of these charts and other charts plus further analysis in our NAS briefing which is available for purchase from our website.
This performance dispatch was originally sent out to our newsletter subscribers in December of 2010. If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports. Also, if you need an even more in-depth analysis of NAS system features and performance please take the time to examine our NAS Briefing available for purchase from our website.
A PDF version of this can be found at SCI 2010 Dec 14 latest SPECsfs2008 benchmark results analysis for NFS and CIFS
Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company based in the USA, offering products and services to the data storage community.