We return now to our analysis of the latest SPECsfs® 2008* benchmark results. Unfortunately, there has been only one new NFS and no new CIFS/SMB submissions since our December report. The new submission was an SGI NAS (32TB-4U-P) storage system; however, it did not reach top-ten status in any of our SPECsfs2008 performance rankings. As all those charts are available in prior reports, we have decided instead to review some rarely seen charts that have been updated since the last time they were on display.
Latest NFS results
It has been a while since we looked at this chart. The horizontal axis shows overall response time, or ORT (smaller is better), and the vertical axis shows NFS throughput in operations per second (larger is better).
Unlike some others in the industry, we are firm believers in response time as a significant determinant of storage system performance. For instance, one can see a number of systems with ORTs between 1.5 and 2.0 milliseconds that supply quite different maximum throughput rates, anywhere from 100,000 to over 300,000 NFS operations per second.
The fact that these systems use various exported storage capacities indicates that they can be configured to support diverse file-serving workloads. While it may not seem obvious to the storage market, this chart clearly shows that better throughput can be had alongside better response times. Thus, when choosing file system storage, one need not sacrifice throughput for response time or vice versa.
Next we turn to our view of top ten systems for NFS throughput per disk.
Although our last report included a scatter-plot version of this metric in our NFS vs. CIFS/SMB comparison, this is the first time in quite a while that we have identified the top ten systems. Here one can see the relative dominance of the Avere FXT systems, with their 2-, 6- and 1-node configurations coming in at #1, #2 and #3 respectively. These three are followed by BlueArc and Hitachi (HNAS) versions of BlueArc file servers. Recall that these rankings represent disk-only submissions and exclude any form of SSD or NAND caching, which has come into use in a number of systems.
Avere’s systems provide a caching front end to other NAS systems (in this case, servers running CentOS Linux accessing XFS data). The Avere systems also use a lot of caching, ~430GB in the 6-node system. Nonetheless, they show impressive per-drive performance.
The BlueArc and HNAS systems all use an FPGA to accelerate access to file data and metadata. Hence, their hardware assist supplies an unfair advantage over other, non-hardware-assisted storage systems. Even so, these systems do offer superior per-drive performance.
In any event, this chart shows what systems are capable of providing when vendors invest in appropriate caching and/or specialized hardware development to accelerate file servicing.
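For readers curious how a per-drive ranking like this is derived, the arithmetic is simply peak NFS throughput divided by spindle count. The sketch below illustrates the idea; the system names and figures are invented placeholders, not actual SPECsfs2008 results.

```python
# Hypothetical illustration of per-drive normalization: divide each
# submission's peak NFS throughput by its disk-drive count, then rank.
# All numbers below are made-up placeholders, not published results.

submissions = [
    {"system": "Vendor A 2-node", "nfs_ops": 130_000, "drives": 100},
    {"system": "Vendor B 6-node", "nfs_ops": 300_000, "drives": 280},
    {"system": "Vendor C 1-node", "nfs_ops": 70_000,  "drives": 60},
]

# Compute the normalized metric for each submission
for s in submissions:
    s["ops_per_drive"] = s["nfs_ops"] / s["drives"]

# Rank by per-drive efficiency, highest first
ranked = sorted(submissions, key=lambda s: s["ops_per_drive"], reverse=True)
for rank, s in enumerate(ranked, start=1):
    print(f"#{rank}: {s['system']} - {s['ops_per_drive']:.0f} NFS ops/s per drive")
```

Note that a small system with few, heavily cached drives can out-rank a much larger system on this metric even while delivering far lower absolute throughput, which is exactly the effect the caching front ends above exploit.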
Finally, we turn to another way to normalize performance, this time for SSD-only and SSD-plus-disk file storage.
Similarly, Figure 3 displays NFS operations normalized by SSD or flash-card count. In prior reports, we have looked at NAND normalization by capacity as well as by device count, and found that device count correlates slightly better with NFS performance.
Once again we see the Avere caching front end coming in as the top-ranked system. In this case the Avere FXT 3500 was front-ending 4 x86 servers running OpenSolaris and ZFS, with its SSDs used for the ZFS intent log. This 44-node configuration also had almost 6.9TB of DRAM cache between the front-end and back-end systems and included over 796 disk drives.
The next nine positions were all various versions of NetApp file storage, including non-Cluster-mode, Cluster-mode and mid-range storage systems, all using Flash Cache (or the prior-generation PAM) cards to accelerate file server performance.
Clearly, all the top ten results here indicate that a relatively modest amount of flash (800GB in Avere’s #1 system above and 512GB per NetApp node for all the rest) can supply a great speedup in NFS performance.
There has not been much activity lately in SPECsfs2008 NFS results. As discussed above, the one new NFS submission, from SGI, was good but not good enough to reach a top-ten ranking in any of our charts. Also, there hasn’t been a new CIFS/SMB submission in quite some time.
Now that Microsoft has released SMB3, having SPECsfs2008 run SMB1 is a bit dated. Of course, NFSv4 has been around a long time as well, and someday we hope to see an update here too. Given the acceleration available in SMB3, if we start to see wider vendor adoption and its use in SPECsfs2008, the relative CIFS/SMB advantage evident today might widen considerably. Then again, NFSv4 might include performance enhancements that narrow the gap.
We provide more detailed file system performance and NAS system feature analysis plus our SPECsfs2008 ChampionsCharts for NFS and CIFS in our March update to the NAS Buying Guide available from our website.
We are always interested in improving our performance analyses of SPECsfs2008 as well as other performance benchmark results. If you have any ideas on how to improve this analysis or how to supply better insight into file or storage system performance, please don’t hesitate to drop us a line (contact information is at bottom left, in the footer).
[This performance dispatch was originally sent out to our newsletter subscribers in March of 2013. If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results you will need to signup for our newsletter.]
Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA offering products and services to the data storage community.