SPECsfs2008 NFS SSD/NAND performance, take two – chart-of-the-month

SCISFS120623-010(002) (c) 2012 Silverton Consulting, Inc. All Rights Reserved

For some time now I have been experimenting with different approaches to normalize IO activity (in the chart above it's NFS throughput operations per second) for systems that use SSDs or Flash Cache. My previous attempt (see prior SPECsfs2008 chart of the month post) normalized based on the GB of NAND capacity used in a submission.

I found the previous chart to be somewhat lacking, so this quarter I decided to use SSD device and/or Flash Cache card count instead.  This approach is shown in the above chart. Funny thing: although the rankings were exactly the same between the two charts, one can see significant changes in the magnitudes achieved, especially in the relative values between the top two rankings.

For example, the Avere FXT 3500 result still came in at number one, but whereas here it achieved ~390K NFS ops/sec/SSD, on the prior chart it obtained ~2000 NFS ops/sec/NAND-GB. More interesting was the number two result. Here the NetApp FAS6240 with a 1TB Flash Cache card achieved ~190K NFS ops/sec/FlashCache-card, but on the prior chart it only hit ~185 NFS ops/sec/NAND-GB.

That means with this version of the normalization the Avere is about 2X more effective than the NetApp FAS6240 with a 1TB FlashCache card, whereas in the prior chart it was about 10X more effective in ops/sec/NAND-GB. I feel this is getting closer to the truth but not quite there yet.
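
To make the two normalizations concrete, here is a minimal sketch of the arithmetic. The total ops/sec values are back-calculated from the approximate per-unit numbers and device/capacity figures cited above, so treat them as illustrative rather than official SPECsfs2008 submission values:

    # Per-device vs. per-GB normalization of NFS throughput (illustrative values).
    systems = {
        # name: (approx. NFS ops/sec, NAND GB, SSD or FlashCache card count)
        "Avere FXT 3500 44-node":          (1_560_000, 800, 4),
        "NetApp FAS6240 w/1TB FlashCache": (190_000, 1024, 1),
    }

    for name, (ops, nand_gb, devices) in systems.items():
        print(f"{name}: {ops/devices:,.0f} ops/sec/device, "
              f"{ops/nand_gb:,.0f} ops/sec/NAND-GB")

    avere = systems["Avere FXT 3500 44-node"]
    netapp = systems["NetApp FAS6240 w/1TB FlashCache"]
    per_device_ratio = (avere[0] / avere[2]) / (netapp[0] / netapp[2])  # ~2X
    per_gb_ratio = (avere[0] / avere[1]) / (netapp[0] / netapp[1])      # ~10X
    print(round(per_device_ratio, 1), round(per_gb_ratio, 1))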

We still have the problem that all the SPECsfs2008 submissions that use SSDs or FlashCache also have disk drives, as well as (sometimes significant) DRAM cache, in them.  So a pure SSD normalization may never suffice for these systems.

On the other hand, I have taken a shot at normalizing SPECsfs2008 performance for SSDs/NAND, disk devices and DRAM caching as one dimension in a ChampionsChart™ I use for a NAS Buying Guide, for sale on my website.  If you're interested in seeing it, drop me a line, or better yet purchase the guide.

~~~~

The complete SPECsfs2008 performance report went out in SCI’s June newsletter.  But a copy of the report will be posted on our dispatches page sometime next month (if all goes well).  However, you can get the SPECsfs2008 performance analysis now and subscribe to future free newsletters by just using the signup form above right.

For a more extensive discussion of current NAS or file system storage performance covering SPECsfs2008 (Top 20) results and our new ChampionsChart™ for NFS and CIFS storage systems, please see SCI’s NAS Buying Guide available from our website.

As always, we welcome any suggestions or comments on how to improve our analysis of SPECsfs2008 results or any of our other storage performance analyses.


Analyzing SPECsfs2008 flash use in NFS performance – chart-of-the-month

(SCISFS120316-002) (c) 2012 Silverton Consulting, All Rights Reserved

For some time now I have been using OPS/drive to measure storage system disk drive efficiency but have so far failed to come up with anything similar for flash or SSD use.  The problem with flash in storage is that it can be used as a cache or as a storage device.  Even when used as a storage device under automated storage tiering, SSD advantages can be difficult to pin down.

In my March newsletter, as a first attempt to measure storage system flash efficiency, I supplied the new chart shown above, which plots the top 10 SPECsfs2008 results by NFS throughput ops/second per GB of NAND used.

What’s with Avere?

Something different has occurred with the (#1) Avere FXT 3500 44-node system in the chart.   The 44-node Avere system only used ~800GB of flash, as a ZIL (ZFS intent log, from the SPECsfs report).   However, it also had ~7TB of DRAM across the 44 nodes, most of which was used for file IO caching.  If we incorporated storage system memory size along with flash GB in the above chart, it would have dropped the Avere numbers by a factor of ~9 while only dropping the others by a factor of ~2X, which would still give the Avere a significant advantage, just not quite so stunning a one.  Also, the Avere system frontends other NAS systems (this one running ZFS), so it's not quite the same as being a direct NAS storage system like the others on this chart.
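
The factor-of-9 figure is just the ratio of the combined DRAM-plus-flash capacity to the flash capacity alone. A quick sketch of that arithmetic, assuming ~800GB of ZIL flash and ~7TB of DRAM as cited above (the exact factor depends on the precise DRAM figure):

    # How much the Avere ops/sec/GB number would drop if DRAM were counted
    # alongside NAND in the denominator (capacities approximate, from the post).
    flash_gb = 800.0
    dram_gb = 7 * 1024.0
    drop_factor = (flash_gb + dram_gb) / flash_gb
    print(round(drop_factor, 1))  # ~10, in the ballpark of the ~9X drop cited above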

The remainder of the chart (#2-10) belongs to NetApp and their FlashCache (or PAM) cards.  Even Oracle's Sun ZFS Storage 7320 appliance did not come close to either the Avere FXT 3500 system or the NetApp storage on this chart.  And there were at least 10 other SPECsfs2008 NFS results that used some form of flash but were not fast enough to rank on this chart.

Other measures of flash effectiveness

This metric still doesn't quite capture flash efficiency.  I was discussing flash performance with another startup the other day and they suggested that SSD drive count might be a better alternative.  Such a measure would take into consideration that each SSD has only a certain performance level it can sustain, not unlike disk drives.

In that case, Avere's 44-node system had 4 drives, and each NetApp system had two FlashCache cards, representing 2 SSDs per NetApp node.  I'll try that next time to see if it's a better fit.

~~~~

The complete SPECsfs2008 performance report went out in SCI’s March newsletter.  But a copy of the report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the SPECsfs performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.

For a more extensive discussion of current NAS or file system storage performance covering SPECsfs2008 (Top 20) results and our new ChampionsChart™ for NFS and CIFS storage systems, please see SCI’s NAS Buying Guide available from our website.

As always, we welcome any suggestions or comments on how to improve our analysis of SPECsfs2008 results or any of our other storage performance analyses.


Latest SPECsfs2008 results, over 1 million NFS ops/sec – chart-of-the-month

Column chart showing the top 10 NFS throughput operations per second for SPECsfs2008
(SCISFS111221-001) (c) 2011 Silverton Consulting, All Rights Reserved

[We are still catching up on our charts for the past quarter but this one brings us up to date through last month]

There’s just something about a million SPECsfs2008® NFS throughput operations per second that kind of excites me (weird, I know).  Yes, it takes 44 nodes of Avere FXT 3500 with over 6TB of DRAM cache, 140 nodes of EMC Isilon S200 with almost 7TB of DRAM cache and 25TB of SSDs, or at least 16 nodes of NetApp FAS6240 in Data ONTAP 8.1 cluster mode with 8TB of FlashCache to get to that level.

Nevertheless, a million NFS throughput operations is something worth celebrating.  It’s not often one achieves a 2X improvement in performance over a previous record.  Something significant has changed here.

The age of scale-out

We have reached a point where scaling systems out can provide linear performance improvements, at least up to a point.  For example, the EMC Isilon and NetApp FAS6240 had a close to linear speed-up in performance as they added nodes, indicating (to me at least) there may be more to be had if they just throw more storage nodes at the problem.  Then again, maybe they saw some drop-off and didn't wish to show the world, or the costs became prohibitive and they had to stop someplace.   On the other hand, Avere only benchmarked a 44-node system with their current hardware (FXT 3500); they must have figured winning the crown was enough.

However, I would like to point out that throwing just any hardware at these systems doesn't necessarily increase performance.  Previously (see my CIFS vs NFS corrected post), we had shown the linear regression for NFS throughput against spindle count, and although the fit was good (R**2 of ~0.82), it wasn't perfect. And of course we eliminated any SSDs from that prior analysis. (We probably should consider eliminating any system with more than a TB of DRAM as well – but this was before the 44-node Avere result was out.)
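
For reference, here is a minimal sketch of that kind of spindle-count regression. The (spindle count, NFS ops/sec) pairs below are made-up placeholders, not actual SPECsfs2008 submissions; the point is only to show how a fit and the R**2 figure mentioned above are obtained:

    # Least-squares fit of NFS throughput vs. spindle count (illustrative data).
    import numpy as np

    spindles = np.array([100, 200, 400, 600, 900, 1200], dtype=float)
    nfs_ops = np.array([40e3, 85e3, 150e3, 250e3, 340e3, 480e3])

    slope, intercept = np.polyfit(spindles, nfs_ops, 1)   # fitted line
    predicted = slope * spindles + intercept
    ss_res = np.sum((nfs_ops - predicted) ** 2)
    ss_tot = np.sum((nfs_ops - nfs_ops.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot                      # coefficient of determination

    print(f"ops/sec per spindle ~ {slope:.0f}, R**2 = {r_squared:.2f}")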

Speaking of disk drives, each FAS6240 node had 72 450GB 15Krpm disks, each Isilon node had 24 300GB 10Krpm disks, and each Avere node had 15 600GB 7.2Krpm SAS disks.  However, the Avere system also had 4 Solaris ZFS file storage systems behind it, each of which had another 22 3TB (7.2Krpm, I think) disks.  Given all that, the 16-node NetApp, 140-node Isilon and 44-node Avere systems had a total of 1152, 3360 and 748 disk drives respectively.   Of course, this doesn't count the system disks for the Isilon and Avere systems nor any of the SSDs or FlashCache in the various configurations.
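
The drive totals are just nodes-per-system times disks-per-node, plus the backend ZFS systems in Avere's case; a quick check of the figures cited above:

    # Drive-count totals (system disks, SSDs and FlashCache excluded, as in the text).
    netapp = 16 * 72           # FAS6240: 16 nodes x 72 disks/node        -> 1152
    isilon = 140 * 24          # Isilon S200: 140 nodes x 24 disks/node   -> 3360
    avere = 44 * 15 + 4 * 22   # FXT 3500 nodes + 4 backend ZFS systems   -> 748
    print(netapp, isilon, avere)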

I would say that with this round of SPECsfs2008 benchmarks, scale-out NAS systems have come into their own.  It's too bad that neither NetApp nor Avere released comparable CIFS benchmark results, which would have helped in my perennial discussion of CIFS vs. NFS.

But there’s always next time.

~~~~

The full SPECsfs2008 performance report went out to our newsletter subscribers last December.  A copy of the full report will be up on the dispatches page of our site sometime later this month (if all goes well). However, you can see our full SPECsfs2008 performance analysis now and subscribe to our free monthly newsletter to receive future reports directly by just sending us an email or using the signup form above right.

For a more extensive discussion of file and NAS storage performance covering top 30 SPECsfs2008 results and NAS storage system features and functionality, please consider purchasing our NAS Buying Guide available from SCI’s website.

As always, we welcome any suggestions on how to improve our analysis of SPECsfs2008 results or any of our other storage system performance discussions.

Comments?