SCI 2011Dec21 Latest SPECsfs2008* NFS systems results analysis


We return now to file system performance and analyze the latest SPECsfs® 2008* benchmark results. There have been a number of NFS submissions from NetApp using their Data ONTAP 8.1 with various configurations of clustered FAS6240 (4-, 8-, 12-, 16-, 20- & 24-node) and Avere FXT 3500 in a 44-node configuration. There were no new CIFS results.

Latest SPECsfs2008 results

[Column chart showing the top 10 NFS ops/sec subsystem results]

(SCISFS111221-001) (c) 2011 Silverton Consulting, Inc., All Rights Reserved

Here one can see almost all of the new results, starting with the 44-node Avere FXT3500 at #1 and the NetApp 24-node FAS6240 at #2, all the way down to the NetApp 8-node FAS6240 at #8.  This is the first time we have seen NetApp's clustered-operations (C-mode) benchmarks since the Spinnaker days of SPECsfs97.  C-mode appears to perform much better than before and shows up well against the competition from EMC Isilon and Avere.

Figure 1 Top 10 NFS throughput operations per second

Clustered NAS systems dominate NFS throughput ops/sec; in fact, there are no non-clustered systems on this chart.  The node counts on this chart range from the 140-node EMC Isilon system (#4) down to the 4 X-Blade (with 1 standby) EMC VNX VG8/VNX5700 (#9).

The other factor not readily apparent in the above is the amount of SSD, Flash Cache, or DRAM cache used by these systems.  For example, the 44-node Avere FXT 3500 had 6.8TB of DRAM cache (plus 800GB of SSD boot volumes), the 24-node NetApp FAS6240 had 13.5TB of combined Flash Cache (12TB) and DRAM (1.5TB) cache, and the 140-node EMC Isilon system had 6.8TB of DRAM cache and 25TB of backend SSDs.  That's almost as much DRAM/Flash Cache/SSD as the total capacity supplied by some SMB storage systems.
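For readers who like to check the math, here's a minimal Python sketch that just tallies the DRAM, Flash Cache, and SSD figures cited above (the dictionary layout is our own; the capacities come from the vendor disclosures discussed in the text):

```python
# Rough tally of the cache/flash amounts cited above (all figures in TB).
# The per-system breakdown below is an illustrative sketch, not a full
# accounting of each submission's configuration.
cache_tb = {
    "Avere FXT 3500 (44-node)": {"dram": 6.8, "flash_cache": 0.0, "ssd": 0.8},  # SSDs are boot volumes
    "NetApp FAS6240 (24-node)": {"dram": 1.5, "flash_cache": 12.0, "ssd": 0.0},
    "EMC Isilon (140-node)":    {"dram": 6.8, "flash_cache": 0.0, "ssd": 25.0},
}

for system, parts in cache_tb.items():
    total = sum(parts.values())
    print(f"{system}: {total:.1f} TB total DRAM/Flash Cache/SSD")
```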

[Column chart showing the top 10 NFS ORT (response time) results]

(SCISFS111221-002) (c) 2011 Silverton Consulting, Inc., All Rights Reserved

Figure 2 Top 10 NFS ORT results

Lower is better on the ORT chart.  Recall that Overall Response Time (ORT) is an average of response times measured at a set series of throughput levels during the benchmark run.  Similar to the throughput results discussed above, clustered systems also seem to provide the top response time results.  Only two systems on this chart, the #1 Alacritech ANX 1500-20 and the #8 Apple system, are monolithic systems; all the rest are scale-out, clustered NAS devices.

Why the 16-node NetApp FAS6240 system placed better than any of its multi-node brethren is a mystery, but it could be noise.  The “slowest” NetApp C-mode run (the 20-node system) had a 1.56 msec ORT, whereas the #10 result here had a 1.48 msec ORT.  A difference of only 80 µsec is almost too small to measure accurately.
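For anyone unfamiliar with the metric, here's a rough Python sketch of the ORT idea and the delta discussed above. The per-load-point response times are made up, and SPEC's published ORT is derived from the full response-time curve, so treat the simple mean below as an approximation of the concept, not the official formula:

```python
# Approximate the ORT idea: average the measured response times across the
# benchmark's load points. Values below are hypothetical placeholders.
response_times_msec = [0.9, 1.1, 1.3, 1.5, 1.7, 2.0, 2.4, 2.9]

ort = sum(response_times_msec) / len(response_times_msec)
print(f"Approximate ORT: {ort:.2f} msec")

# The gap discussed above: 1.56 msec vs. 1.48 msec is only about 80 usec.
delta_usec = (1.56 - 1.48) * 1000
print(f"ORT difference: {delta_usec:.0f} usec")
```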

[Column chart showing the top 10 NFS throughput ops/sec per disk spindle results]

(SCISFS111221-003) (c) 2011 Silverton Consulting, Inc., All Rights Reserved

Figure 3 Top 10 NFS throughput operations per second per disk drive

Another way to view NFS throughput is to normalize it over the number of disk drives.  Note that for this chart we exclude any system that uses Flash Cache or SSDs for data/metadata storage (Avere systems use SSDs only as boot volumes).  The latest 44-node Avere FXT3500 came in as the #1 performer in NFS throughput ops/sec per disk, followed by prior Avere runs and BlueArc/HDS Mercury system runs.  As discussed previously, although the #1 Avere system had no data SSDs, it did have almost 7TB of DRAM.
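The normalization itself is simple division, but for clarity here's a sketch of the calculation behind Figure 3. The ops/sec and drive counts in the example are placeholders, not figures from any actual submission:

```python
# Normalize NFS throughput by disk drive (spindle) count, as in Figure 3.
def ops_per_spindle(nfs_ops_per_sec: float, disk_drives: int) -> float:
    """Return NFS throughput divided by the number of disk drives."""
    return nfs_ops_per_sec / disk_drives

# Example with made-up numbers:
print(f"{ops_per_spindle(1_500_000, 1000):.1f} ops/sec per drive")
```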

Significance

It’s great to see NetApp’s Data ONTAP 8.1 C-mode benchmarks.  Also, the 24-node FAS6240 system result provides a significant proof point for NetApp’s clustering.

The fact that clustered, scale-out NAS systems have come to dominate throughput results probably indicates a need to rethink our throughput charts.  We probably need to break out scale-out, clustered systems from monolithic systems.  Also, the immense DRAM, Flash Cache, and/or SSD storage present in these systems suggests that we should somehow incorporate memory/Flash Cache/SSD size as an additional normalizer for throughput activity.
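As a rough illustration of the kind of additional normalization we have in mind, something like the following could work. The “ops/sec per GB of cache” metric and the example numbers are our own, not anything SPEC publishes:

```python
# Hedged sketch of an additional normalization: divide NFS throughput by
# total DRAM/Flash Cache/SSD capacity. Purely illustrative, not a published
# SPECsfs2008 statistic.
def ops_per_cache_gb(nfs_ops_per_sec: float, cache_gb: float) -> float:
    """Return NFS throughput normalized by cache/flash/SSD capacity in GB."""
    return nfs_ops_per_sec / cache_gb

# Example with made-up numbers: 1.5M ops/sec against 13.5 TB (13,824 GB) of cache.
print(f"{ops_per_cache_gb(1_500_000, 13.5 * 1024):.1f} ops/sec per GB")
```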

Also, our throughput per disk drive count chart needs some changes.  Although we have always excluded Flash Cache and data SSD systems from this chart, the all-DRAM systems seem to hold an unfair advantage here.  We may need to exclude systems above some level of DRAM caching, in addition to excluding data SSD and Flash Cache use, from this chart.

As always, we welcome any recommendations for improvement of our SPECsfs2008 analysis.  For the discriminating storage admin or for anyone who wants to learn more, we now include additional results and all our other SPECsfs2008 charts in our recently updated NAS Buying Guide, available for purchase on our website.

[This performance dispatch was originally sent out to our newsletter subscribers in December of 2011.  If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of NAS storage system features and performance, please see our recently revised (April 2019) NAS Buying Guide available on our website.]

~~~~

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA, offering products and services to the data storage community.

* SPECsfs2008 results from http://www.spec.org/sfs2008/results/
