For some time now I have been using OPS/drive to measure storage system disk drive efficiency, but have so far failed to come up with a similar metric for flash or SSD use. The problem with flash in storage is that it can be used either as a cache or as a storage device. Even when flash is used as a storage device under automated storage tiering, its advantages can be difficult to pin down.
In my March newsletter, as a first attempt to measure storage system flash efficiency, I supplied a new chart (shown above) that plots the top 10 SPECsfs2008 results by NFS throughput ops/second per GB of NAND used.
What’s with Avere?
Something different has occurred with the (#1) Avere FXT 3500 44-node system in the chart. According to the SPECsfs report, the Avere system only used ~800GB of flash, as a ZIL (ZFS intent log). However, it also had ~7TB of DRAM across its 44 nodes, most of which was used for file IO caching. If we incorporated storage system memory size along with flash GB in the above chart, it would have dropped the Avere numbers by a factor of 9 while only dropping the others by a factor of ~2X. That would still give Avere a significant advantage, just not quite so stunning a one. Also, the Avere system frontends other NAS systems (this one running ZFS), so it's not quite the same as being a direct NAS storage system like the others on this chart.
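As a rough sanity check on that factor-of-9 claim, here is a small sketch using the approximate capacities quoted above (the ~7TB of DRAM is taken as 7,000GB for simplicity, so the exact factor depends on how you count it):

```python
# Approximate Avere FXT 3500 44-node capacities from the SPECsfs2008
# report discussed above (rounded for illustration).
avere_flash_gb = 800    # ~800GB of flash used as a ZIL
avere_dram_gb = 7000    # ~7TB of DRAM across the 44 nodes

# Normalizing NFS ops/sec by flash + DRAM instead of flash alone grows
# the denominator by this factor, which is what drops Avere's numbers.
drop_factor = (avere_flash_gb + avere_dram_gb) / avere_flash_gb
print(f"Avere denominator grows by ~{drop_factor:.1f}X")  # ~9.8X
```

With these rounded inputs the drop works out to roughly 10X, in the same ballpark as the factor of 9 cited above; the NetApp systems, with far less DRAM relative to their FlashCache capacity, only drop by ~2X under the same adjustment.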
The remainder of the chart (#2-10) belongs to NetApp and their FlashCache (or PAM) cards. Even Oracle's Sun ZFS Storage 7320 appliance did not come close to either the Avere FXT 3500 system or the NetApp storage on this chart. There were at least 10 other SPECsfs2008 NFS results using some form of flash, but they were not fast enough to rank on this chart.
Other measures of flash effectiveness
This metric still doesn't quite capture flash efficiency. I was discussing flash performance with another startup the other day and they suggested that SSD drive count might be a better alternative. Such a measure would take into consideration that each SSD can sustain only a certain performance level, not unlike disk drives.
In that case, Avere's 44-node system had 4 drives, and each NetApp system had two FlashCache cards, representing 2 SSDs per NetApp node. I'll try that next time to see if it's a better fit.
The complete SPECsfs2008 performance report went out in SCI’s March newsletter. But a copy of the report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the SPECsfs performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.
For a more extensive discussion of current NAS or file system storage performance covering SPECsfs2008 (Top 20) results and our new ChampionChart™ for NFS and CIFS storage systems, please see SCI’s NAS Buying Guide available from our website.
As always, we welcome any suggestions or comments on how to improve our analysis of SPECsfs2008 results or any of our other storage performance analyses.