SPECsfs2008 results: NFS throughput vs. flash size – Chart of the Month

[Scatter plot: SPECsfs2008 NFS throughput results vs. flash size]

The above chart was sent out in our December newsletter and represents yet another attempt to understand how flash/SSD use is impacting storage system performance. This chart's interesting twist is to categorize the use of flash in hybrid (disk-SSD) systems vs. flash-only/all-flash storage systems.

First, we categorize SSD/Flash-only systems (blue diamonds on the chart) as any storage system with at least as much flash storage capacity as its SPECsfs2008 exported file system capacity. While not entirely accurate (there is one such system with only ~99% of its exported capacity in flash), it is a reasonable approximation. Any other system that identifies some flash in its configuration is considered a Hybrid SSD&Disks system (red boxes on the chart).
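
To make the rule concrete, here is a minimal sketch of the categorization in Python; the function name and the example capacities are hypothetical, and only the cutoff (flash capacity at least equal to exported capacity) comes from the description above.

```python
def categorize(flash_gb: float, exported_gb: float) -> str:
    """Chart category for one SPECsfs2008 submission (hypothetical helper)."""
    # Flash-only: flash capacity is at least the exported file system capacity.
    if flash_gb >= exported_gb:
        return "SSD/Flash-only"
    # Anything else with some flash in its configuration counts as hybrid.
    return "Hybrid SSD&Disks"

# Hypothetical examples, not actual SPECsfs2008 submissions:
print(categorize(12_000, 10_000))   # SSD/Flash-only
print(categorize(500, 10_000))      # Hybrid SSD&Disks
```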

Next, we plot each system's NFS throughput on the vertical axis and its flash capacity (in GB) on the horizontal axis. Then we fit a linear regression line to each set of data.
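
For readers who want to reproduce the fit, a short sketch of the regression step follows; the flash capacities and throughput numbers below are made-up placeholders, not published results, and any least-squares routine would do as well as numpy's.

```python
import numpy as np

# Placeholder data for one category: flash capacity (GB) vs. NFS throughput (ops/sec).
flash_gb = np.array([400.0, 800.0, 1600.0, 3200.0, 6400.0])
nfs_ops = np.array([60_000.0, 110_000.0, 150_000.0, 190_000.0, 260_000.0])

# Ordinary least-squares line: ops/sec = slope * flash_gb + intercept
slope, intercept = np.polyfit(flash_gb, nfs_ops, deg=1)
print(f"slope = {slope:.1f} ops/sec per flash GB, intercept = {intercept:.0f}")
```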

What troubles me about this chart is that hybrid systems are getting much more NFS throughput out of their flash capacity than flash-only systems. One would think that flash-only systems would generate more throughput per flash GB than hybrid systems, given disk's slow access times. But the data shows otherwise?!

We understand that NFS throughput operations are mostly metadata file calls rather than data transfers, so one would think that those relatively short random IOPS would favor flash-only systems. But that's not what the data shows.

What the data seems to tell me is that judicious use of flash and disk storage in combination can be better than either alone, or at least better than flash alone. So maybe those short random IOPS should be served out of SSD, and the relatively longer, more sequential data accesses (which represent only ~28% of the operations that constitute NFS throughput) should be served out of disk. And as file system metadata is relatively small in capacity, this can be supported with a small amount of SSD, leveraging that minimal flash capacity for the greater good (or more NFS throughput).
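
As a toy illustration of that placement idea (the ~28% figure is the one quoted above; the routing function is purely hypothetical):

```python
# ~28% of SPECsfs2008 NFS throughput operations are data transfers (per the text above);
# the remaining ~72% are short, random metadata operations.
DATA_TRANSFER_FRACTION = 0.28
METADATA_FRACTION = 1.0 - DATA_TRANSFER_FRACTION  # ~0.72

def tier_for(is_data_transfer: bool) -> str:
    """Hypothetical placement policy: metadata to SSD, bulk data transfers to disk."""
    return "disk" if is_data_transfer else "SSD"

print(f"~{METADATA_FRACTION:.0%} of operations would be served from {tier_for(False)}")
```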

I would be remiss if I didn't mention that there are relatively few (7) flash-only systems in the SPECsfs2008 benchmarks and the regression coefficient is very poor (R**2 = ~0.14), which means this could change substantially with more flash-only submissions. However, the line looks pretty flat from my perspective, and it would take an awful lot of flash-only systems showing much higher NFS throughput per flash GB to make a difference in the regression equation.
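
For reference, the R**2 quoted above is just the coefficient of determination for the fitted line; a sketch of the calculation (reusing the placeholder arrays from the earlier snippet) looks like this:

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, deg=1)
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# With only 7 flash-only submissions and R**2 near 0.14, the line explains
# little of the throughput variation, so a few new results could move it a lot.
```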

Nonetheless, I am beginning to see a pattern here: SSD/flash is good for some things and disk continues to be good for others. Smart storage system developers would do well to recognize this fact. Also, as a side note, I am beginning to see some rationale for why there aren't more flash-only SPECsfs2008 results.

Comments?

~~~~

The complete SPECsfs2008 performance report went out in SCI's December 2013 newsletter. A copy of the report will be posted on our dispatches page sometime this quarter (if all goes well). However, you can get the latest storage performance analysis now and subscribe to future free newsletters by using the signup form above right.

Even more performance information and ChampionCharts for NFS and CIFS/SMB storage systems are available in SCI's NAS Buying Guide, available for purchase from our website.

As always, we welcome any suggestions or comments on how to improve our SPECsfs2008  performance reports or any of our other storage performance analyses.

 

Analyzing SPECsfs2008 flash use in NFS performance – chart-of-the-month

(SCISFS120316-002) (c) 2012 Silverton Consulting, All Rights Reserved

For some time now I have been using OPS/drive to measure storage system disk drive efficiency but have so far failed to come up with anything similar for flash or SSD use.  The problem with flash in storage is that it can be used as a cache or as a storage device.  Even when used as a storage device under automated storage tiering, SSD advantages can be difficult to pin down.

In my March newsletter, as a first attempt to measure storage system flash efficiency, I supplied the new chart shown above, which plots the top 10 SPECsfs2008 NFS results by throughput ops/second per GB of NAND used.
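
A rough sketch of how the charted metric is computed and ranked follows; the system names and numbers are placeholders, not SPECsfs2008 submissions.

```python
# (system name, NFS throughput in ops/sec, flash capacity in GB) -- placeholder values
submissions = [
    ("System A", 190_000, 800),
    ("System B", 260_000, 4_000),
    ("System C", 150_000, 1_000),
]

# Metric from the chart: NFS throughput ops/sec per GB of NAND, ranked highest first.
ranked = sorted(
    ((name, ops / flash_gb) for name, ops, flash_gb in submissions),
    key=lambda item: item[1],
    reverse=True,
)
for name, ops_per_gb in ranked[:10]:
    print(f"{name}: {ops_per_gb:.1f} NFS ops/sec per flash GB")
```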

What’s with Avere?

Something different has occurred with the (#1) Avere FXT 3500 44-node system in the chart. The 44-node Avere system only used ~800GB of flash, as a ZIL (ZFS intent log, per the SPECsfs2008 report). However, it also had ~7TB of DRAM across its 44 nodes, most of which was used for file IO caching. If we incorporated storage system memory size along with flash GB in the above chart, it would have dropped the Avere numbers by a factor of ~9 while only dropping the others by a factor of ~2X, which would still give Avere a significant advantage, just not quite so stunning. Also, the Avere system front-ends other NAS systems (this one running ZFS), so it's not quite the same as being a direct NAS storage system like the others on this chart.
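
The rough arithmetic behind those factors, using the ~800GB flash and ~7TB DRAM figures above (the second system's capacities are hypothetical):

```python
# Avere figures quoted above: ~800GB of flash (ZIL) plus ~7TB of DRAM across 44 nodes.
avere_flash_gb = 800
avere_dram_gb = 7_000

# Switching the denominator from flash alone to flash + DRAM shrinks Avere's
# ops/GB metric by roughly this factor:
print((avere_flash_gb + avere_dram_gb) / avere_flash_gb)   # ~9.75, i.e. roughly 9X

# A system whose DRAM roughly matches its flash capacity only drops ~2X
# (hypothetical configuration, not a published result):
other_flash_gb, other_dram_gb = 512, 512
print((other_flash_gb + other_dram_gb) / other_flash_gb)   # 2.0
```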

The remainder of the chart (#2-10) belongs to NetApp and their FlashCache (or PAM) cards. Even Oracle's Sun ZFS Storage 7320 appliance did not come close to either the Avere FXT 3500 system or the NetApp storage on this chart. And there were at least 10 other SPECsfs2008 NFS results using some form of flash that were not fast enough to rank on this chart.

Other measures of flash effectiveness

This metric still doesn't quite capture flash efficiency. I was discussing flash performance with another startup the other day and they suggested that SSD drive count might be a better alternative. Such a measure would take into consideration that each SSD can only sustain a certain performance level, not unlike disk drives.

In that case, Avere's 44-node system had 4 drives, and each NetApp system had two FlashCache cards, representing 2 SSDs per NetApp node. I'll try that next time to see if it's a better fit.
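
A minimal sketch of that alternative metric, throughput per flash device rather than per GB of NAND; the device counts follow the paragraph above, while the throughput numbers are placeholders.

```python
def ops_per_device(nfs_ops: float, device_count: int) -> float:
    """NFS throughput divided by the number of SSDs/flash cards in the system."""
    return nfs_ops / device_count

# Device counts from the text above; throughput values are hypothetical.
print(ops_per_device(1_500_000, 4))      # 44-node Avere: 4 flash devices
print(ops_per_device(190_000, 2 * 6))    # hypothetical 6-node NetApp config: 2 cards per node
```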

~~~~

The complete SPECsfs2008 performance report went out in SCI's March newsletter. A copy of the report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the SPECsfs performance analysis now and subscribe to future free newsletters by sending us an email or using the signup form above right.

For a more extensive discussion of current NAS or file system storage performance covering SPECsfs2008 (Top 20) results and our new ChampionChart™ for NFS and CIFS storage systems, please see SCI’s NAS Buying Guide available from our website.

As always, we welcome any suggestions or comments on how to improve our analysis of SPECsfs2008 results or any of our other storage performance analyses.