SCI’s latest SPEC sfs2008 & sfs2014 performance report as of December’15


This Storage Intelligence (StorInt™) dispatch covers the new SPECsfs2014 and SPECsfs2008 benchmarks[1]. SPECsfs2008 is in “retirement status” and as a result, SPEC is no longer accepting any new submissions for SPECsfs2008.

As for SPECsfs2014, there are still no new submissions beyond the original four reference solutions (one for each workload) submitted by SPEC SFS®. Since the transition occurred, we have discussed the new benchmarks, NFS vs. CIFS/SMB ORT, the new report formats and, as of last quarter, NFS throughput per node. This quarter we review another of our seldom-seen SPECsfs2008 charts.

SPECsfs2008 results

Figure 1 is a scatter plot of hybrid and SSD only storage NFS ORT vs. flash capacity.

Figure 1 SPECsfs2008 hybrid & SSD storage NFS ORT vs. flash/SSD capacity

We have never shown this plot before. In Figure 1, we show NFS Overall Response Time (ORT) for hybrid and all-flash storage against flash capacity. For block storage, SSD-only or hybrid systems provide the best response-time performance, and we wanted to see how flash capacity impacted NFS ORT. Note, however, that SPECsfs2008 ORT is measured as an average across the whole benchmark run, whereas block-storage benchmark response times (e.g., SPC-1 LRT) are measured at only 10% of maximum IO activity. In Figure 1, both the hybrid and SSD-only regression lines are linear, and the horizontal axis is logarithmic. “SSD only” systems have a majority of their capacity in flash.
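The difference between an average-over-the-run metric like ORT and a light-load metric like SPC-1 LRT can be sketched with hypothetical load-point data (the numbers below are purely illustrative, not from any actual submission):

```python
# Hypothetical per-load-point results: (fraction of max load, response time in ms).
# Illustrative numbers only -- not taken from any real SPECsfs or SPC report.
points = [(0.1, 1.0), (0.2, 1.1), (0.3, 1.3), (0.4, 1.6), (0.5, 2.0),
          (0.6, 2.6), (0.7, 3.4), (0.8, 4.6), (0.9, 6.5), (1.0, 9.8)]

# ORT-style metric: average response time across all load points of the run.
ort = sum(rt for _, rt in points) / len(points)

# LRT-style metric: the response time at roughly 10% of maximum load.
lrt = min(points, key=lambda p: abs(p[0] - 0.1))[1]

print(f"average over run (ORT-like): {ort:.2f} ms")  # pulled up by high-load points
print(f"at 10% of max load (LRT-like): {lrt:.2f} ms")
```

Because response time typically climbs steeply near saturation, the run-wide average lands well above the light-load number, which is one reason ORT and LRT figures are not directly comparable.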

With an R**2 of less than 0.013, flash capacity doesn’t seem to have any impact on hybrid NFS ORT, at least not until you get over 100TB of flash. In contrast, flash capacity does impact SSD-only NFS ORT, with an R**2 of over 0.53. We are not sure why ORT gets worse, though. It may be that the higher throughput available from more flash results in slower response times at the top end of the run, which drives up the average response time (ORT). But that’s only a guess; we don’t know for sure.
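An R**2 value of this kind comes from an ordinary least-squares fit; since the chart’s horizontal axis is logarithmic, the fit is against log10 of flash capacity. A minimal sketch with made-up (capacity, ORT) pairs, purely to show the computation:

```python
import math

# Hypothetical (flash capacity in TB, NFS ORT in ms) pairs -- illustrative only,
# not actual SPECsfs2008 submission data.
data = [(1, 1.2), (5, 1.0), (10, 1.6), (20, 1.1), (50, 1.8),
        (100, 1.4), (200, 2.3), (400, 2.9)]

# Regress ORT against log10(capacity), matching the chart's logarithmic axis.
xs = [math.log10(cap) for cap, _ in data]
ys = [ort for _, ort in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
slope = sxy / sxx
intercept = my - slope * mx

# R**2: fraction of ORT variance explained by (log) flash capacity.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
print(f"slope = {slope:.3f} ms per decade of flash, R**2 = {r2:.3f}")
```

An R**2 near zero (as with the hybrid systems) means capacity explains essentially none of the ORT variance; a value above 0.5 (as with the SSD-only systems) means capacity explains most of it.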

Significance

It’s now been over four quarters since SPECsfs2014 was released. By this point after the SPECsfs2008 introduction, there were at least 11 non-reference NFS submissions. The lack of any vendor submissions for SPECsfs2014 continues to be an open concern.

Yes, the changeover from SPEC SFS97_R1 to SPECsfs2008 was minor compared to the changeover from SPECsfs2008 to SPECsfs2014. But the changes were made to make the benchmark more realistic and current. The absence of submissions may indicate bigger problems with the benchmark. Perhaps results aren’t comparable, making vendors reluctant to release new benchmarks that show worse performance than older systems. In any case, if we don’t start seeing vendor submissions by March 2016, we may have to report on other benchmarks.

Until then, as always, suggestions on how to improve any of our performance analyses are welcomed…

[Also, we offer more file storage performance information plus our NFS and CIFS/SMB ChampionsCharts™ in our recently updated (April 2019) NAS Storage Buying Guide available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in December of 2015.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]  

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community

[1] All SPECsfs2014 information is available at https://www.spec.org/sfs2014/ & SPECsfs2008 is available at https://www.spec.org/sfs2008/ as of 23Dec2015
