SCI 2012Mar16 Latest SPECsfs2008 performance results

We return now to file system performance and analyze the latest SPECsfs® 2008* benchmark results. There have been three new NFS submissions since our December report: two from Hitachi on their HNAS 3090-G2, one a 2-node cluster and the other a single node, both using the new performance accelerator feature; and one from Oracle on their Sun ZFS Storage 7320 appliance. Once again, there were no new CIFS/SMB results.

Latest SPECsfs2008 results

SCISFS120316-001 (c) 2012 Silverton Consulting, Inc., All Rights Reserved

Figure 1 Top 10 NFS throughput operations/disk

Higher is better.  New results on this chart were the two HNAS 3090-G2 systems with the performance accelerator, coming in at #4 and #7.  With the exception of the first four Avere systems, every system on this chart is based on BlueArc technology; you would have to go down to 14th place on this metric before finding a non-Avere, non-BlueArc system.  BlueArc probably does well on throughput/disk because of its hardware-accelerated NAS services.

On the other hand, why Avere does so well here is a bit of a mystery.  Our first thought was memory, but their 1-, 2- and 6-node systems didn't have that much DRAM, coming in at 98, 163 and 424GB, respectively.  In comparison, the two new Hitachi HNAS systems had 540GB and 568GB, respectively.  Perhaps the Avere FXT 2500 systems do so well on throughput/disk because of better caching; it's certainly not the hardware, because they use commodity servers.

Of course, any SPECsfs2008 submission that used SSDs or Flash Cache has been eliminated from Figure 1 to ensure a fair comparison.  But more SSD submissions are coming in all the time, including the new Oracle Sun ZFS appliance.
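To make the Figure 1 metric concrete, here is a minimal Python sketch of the calculation: drop any submission that used SSDs or Flash Cache, then rank what remains by NFS throughput operations per disk spindle. The records and field names are illustrative placeholders rather than SPEC's actual result format; the two disk-only entries reuse the approximate figures quoted later in this dispatch, and the flash-equipped entry is invented purely to show the filter.

```python
# Illustrative sketch of the Figure 1 metric: NFS ops/sec per disk spindle.
# Records and field names are placeholders, not SPEC's result-file format.
submissions = [
    {"system": "Avere FXT 2500 (6-node)", "nfs_ops": 130_000, "disks": 80, "flash_gb": 0},
    {"system": "HNAS 3090-G2 (2-node)", "nfs_ops": 200_000, "disks": 400, "flash_gb": 0},
    {"system": "Flash-equipped example", "nfs_ops": 150_000, "disks": 100, "flash_gb": 512},
]

# As in Figure 1, exclude any submission that used SSDs or Flash Cache.
disk_only = [s for s in submissions if s["flash_gb"] == 0]

# Rank the remainder by throughput per spindle (higher is better).
for s in sorted(disk_only, key=lambda s: s["nfs_ops"] / s["disks"], reverse=True):
    print(f"{s['system']}: {s['nfs_ops'] / s['disks']:.0f} NFS ops/sec per disk")
```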

SCISFS120316-002 (c) 2012 Silverton Consulting, Inc., All Rights Reserved

Figure 2 Top 10 NFS ops/sec per GB of Flash

Accordingly, this is our first attempt to do something similar to Figure 1 for those subsystems that use flash.  Higher is better on this chart.

Clearly, something is going on with the Avere FXT 3500: it has a rather modest amount of flash (800GB) but generates almost 1.6M NFS throughput operations per second.  As far as I can tell, the Avere FXT 3500 used the flash as a ZFS Intent Log (ZIL) device, but this is still phenomenal performance compared to the rest of the field.  They also do well on throughput/disk, so there is clearly some advantage at work here.  I would add that the 44-node Avere system had ~9TB of DRAM cache, which might help explain its superior result.
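The arithmetic behind the Figure 2 metric is simply throughput divided by flash capacity. A back-of-the-envelope check of the FXT 3500 claim, using the approximate figures just quoted:

```python
# Back-of-the-envelope Figure 2 metric for the 44-node Avere FXT 3500,
# using the approximate figures quoted above.
nfs_ops_per_sec = 1_600_000  # ~1.6M NFS throughput operations per second
flash_gb = 800               # flash used as a ZFS Intent Log (ZIL) device

print(f"{nfs_ops_per_sec / flash_gb:,.0f} NFS ops/sec per GB of flash")  # ~2,000
```

At roughly 2,000 ops/sec per GB of flash, it is easy to see why this submission tops the chart.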

The remainder of this chart belongs to NetApp, using various versions of Flash Cache, but typically not exceeding a TB of flash per node.  The middle group (#3-8) were all running Data ONTAP 8.1 in Cluster-Mode, whereas the others were running prior versions in 7-mode.

I'm not sure I am happy with this metric just yet, but I thought I would throw it out to see what others think.  For Figure 2, any SPECsfs2008 submission that did not contain Flash Cache or SSDs was excluded.

SCISFS120316-003 (c) 2012 Silverton Consulting, Inc., All Rights Reserved

Figure 3 Scatter plot: NFS throughput vs. disk spindle count

Here we see another cut at NFS throughput per disk, this time plotting all NFS results.  A coefficient of determination (R**2) of ~0.82 is pretty good and clearly shows that, on average, the more disks in a SPECsfs2008 submission, the better the throughput performance.
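For readers who want to reproduce the fit, here is a minimal sketch of the underlying calculation: an ordinary least-squares line of NFS throughput against spindle count, with R**2 computed from the residuals. The data points below are made-up placeholders, not the actual SPECsfs2008 submissions.

```python
# Minimal sketch of the Figure 3 fit: ordinary least squares of NFS
# throughput against disk spindle count, with R**2 from the residuals.
def ols_r_squared(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

disks = [50, 100, 200, 400, 800]                   # placeholder spindle counts
ops = [30_000, 70_000, 120_000, 260_000, 480_000]  # placeholder NFS ops/sec

slope, intercept, r2 = ols_r_squared(disks, ops)
print(f"NFS ops/sec ~= {slope:.0f} * disks + {intercept:.0f}, R**2 = {r2:.3f}")
```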

There are a couple of outliers here, most notably the point at ~630K NFS ops/sec, which was an 8-node Huawei Symantec Oceanspace N8500 cluster NAS.  The other two outliers were the new Hitachi HNAS 3090-G2 2-node cluster at ~200K NFS ops/sec with around 400 drives, and the 6-node Avere FXT 2500 at ~130K NFS ops/sec with ~80 drives.

Significance

None of the other NFS or CIFS Top 10 charts had any changes, so look to our prior reports for those results.

I have been searching for a way to measure flash effectiveness, and I think this may be it.  NFS ops/sec per GB of flash seems a good way to normalize flash performance.  SPECsfs2008 flash use is pretty diverse: we have Avere's ZIL usage, NetApp and others using NAND as a cache extension, other systems intermixing SSDs and disk with auto-tiering, and a few SSD-only systems.  But given our new metric, ZIL and NAND caching look hard to beat today.

On the other hand, there may be a better way to measure performance that combines disk and flash use.  Nothing obvious comes to mind, but if you have any ideas, please drop me a line.

As always, we welcome any recommendations for improving our SPECsfs2008 analysis.  For the discriminating storage admin, or anyone else who wants to learn more about NAS storage performance, we now include additional results (Top 20) and all SPECsfs2008 charts in our NAS Buying Guide, available for purchase from SCI's website.

~~~~

[This performance dispatch was originally sent out to our newsletter subscribers in March of 2012.  If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of NAS storage system features and performance, please see our NAS Buying Guide, available on our website.]

~~~~

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA, offering products and services to the data storage community.

* SPECsfs2008 results from http://www.spec.org/sfs2008/results/

