Above we reproduce a chart from our latest StorInt™ Dispatch newsletter on SPECsfs® 2008 benchmark results. The chart shows the top 10 CIFS throughput benchmark results as of the end of last year. As the chart shows, Apple's Xserve running Snow Leopard took the top spot with over 40K CIFS throughput operations per second. My problem with this chart is that no enterprise-class systems are represented in the top 10, or for that matter (not shown above) in any CIFS result.
Now some would say it's still early in the life of the 2008 benchmark, but it has been out for 18 months now and not a single enterprise-class system submission has been reported. Possibly CIFS is not considered an enterprise-class protocol, but I can't believe that given the proliferation of Windows. So what's the problem?
I have to believe it’s part tradition, part not wanting to look bad, and part just lack of awareness on the part of CIFS users.
- Traditionally, NFS benchmarks were supplied by SPECsfs and CIFS benchmarks were supplied elsewhere, i.e., by NetBench. However, there never was a central repository for NetBench results, so comparing system performance was cumbersome at best. I believe that's one reason SPEC created its own CIFS benchmark: it saw the lack of a central repository of results for a popular protocol and filled the gap.
- Performance on system benchmarks is always a mixed bag. No one wants to look bad, and any top-performing result is temporary until the next vendor comes along. So most vendors won't release a benchmark result unless it shows well for them. It's not clear whether Apple's 40K CIFS ops is a hard number to beat, but it has been up there for quite a while now, and that has to tell us something.
- CIFS users seem to be aware of and understand NetBench but don't yet have similar awareness of the SPECsfs CIFS benchmark. So, given today's economic climate, any vendor wanting to impress CIFS customers would probably choose to ignore SPECsfs and spend their $s on NetBench. The fact that comparing NetBench results was nigh impossible could even be considered an advantage by many vendors.
So the SPECsfs CIFS benchmark just keeps going on, largely ignored. One way to change this dynamic is to raise awareness: as more IT staff, consultants, and vendors discuss SPECsfs CIFS results, awareness will increase. I realize some of my analysis of CIFS and NFS performance results doesn't always agree with the SPECsfs party line, but we all agree that this benchmark needs wider adoption. Anything that can be done to facilitate that deserves my (and their) support.
So for all my storage admin, CIO, and other NAS-purchase-influencer friends out there, you need to start asking about SPECsfs CIFS benchmark results. All my peers out there in the consultant community, get on the bandwagon. As for my friends in the vendor community, SPECsfs CIFS benchmark results should be part of any new product introduction. Whether you release the results is, and always will be, a marketing question, but you should all be willing to spend the time and effort to see how well new systems perform on this and other benchmarks.
Now if I could just get somebody to define an iSCSI benchmark, …
Our full report on the latest SPECsfs 2008 results, including both NFS and CIFS performance, will be up on our website later this month. However, you can get this information now, and subscribe to future newsletters to receive full reports even earlier, by emailing us at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.
2 thoughts on “Latest SPECsfs2008 CIFS performance – chart of the month”
The reason for the lack of SPEC SFS submissions is quite a bit deeper than mere lack of awareness: the benchmark itself is busted beyond repair, and SPEC shows no interest in fixing the problems. I elaborated on this in great detail in my eulogy for SPEC SFS.
Thanks for the eulogy link. It seems your main concern is the lack of any pricing in SpecSFS benchmarks (past, present, and future), and your other point about SpecSFS workload parameters is certainly worthy of discussion as well. It's hard to argue against lack of pricing as a prime objection to SpecSFS.
However, benchmarks without system pricing can still supply a useful upper bound on a system's performance. Also, benchmarks (even SpecSFS) typically require a detailed disclosure of the hardware used, and as such can be used to derive hardware-oriented metrics, e.g., SpecSFS NFS throughput operations per hard drive/spindle. Although seldom done, such a metric can be used to compare system performance without pricing, at least from an ops/spindle perspective.
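To make the ops/spindle idea concrete, here is a minimal sketch of deriving that metric from a disclosure. The systems and figures below are entirely made up for illustration, not actual SPECsfs submissions:

```python
# Hypothetical illustration: computing an ops-per-spindle efficiency metric
# from a SPECsfs-style hardware disclosure. All numbers are invented.
results = [
    {"system": "Filer A", "throughput_ops": 40000, "spindles": 96},
    {"system": "Filer B", "throughput_ops": 25000, "spindles": 48},
]

for r in results:
    # Normalize raw throughput by drive count to factor out sheer hardware size.
    r["ops_per_spindle"] = r["throughput_ops"] / r["spindles"]

# Rank by per-spindle efficiency rather than raw throughput.
ranked = sorted(results, key=lambda r: r["ops_per_spindle"], reverse=True)
for r in ranked:
    print(f'{r["system"]}: {r["ops_per_spindle"]:.1f} ops/spindle')
```

Note how the ranking can flip: in this invented example the smaller system wins on ops/spindle even though it loses on raw throughput, which is exactly the kind of comparison pricing-free disclosures still allow.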
In addition, while I would agree that a working set size of 10% will cripple most caching algorithms, the question from a benchmark perspective is whether you want the cache active or inactive during the benchmark. As you indicate, a variable or settable working set size would at least cover the complete range from 100% cache miss to 100% cache hit. Lacking that, I would prefer a benchmark that disables the cache, making it inactive (100% miss), rather than one that accesses only data in cache (100% hit), masking I/O activity.
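The cache-masking effect can be sketched with a toy model. This is my own simplification, not the SPECsfs workload: assume uniform random access over the working set, so the hit ratio is roughly the fraction of the working set that fits in cache, and measured throughput blends cache-speed and disk-speed operations:

```python
# Toy model (an assumption for illustration, not the SPECsfs workload):
# throughput as a function of working set size vs. cache size.
def measured_ops(working_set_gb, cache_gb, cache_ops=500_000, disk_ops=20_000):
    # Fraction of accesses served from cache under uniform random access.
    hit = min(1.0, cache_gb / working_set_gb)
    # Average time per op = hit/cache_rate + miss/disk_rate; invert for ops/sec.
    return 1.0 / (hit / cache_ops + (1.0 - hit) / disk_ops)

# With a 100 GB cache, shrinking the working set inflates throughput toward
# the pure-cache rate, masking the disk I/O the benchmark is meant to measure.
for ws in (50, 500, 5000):  # working set sizes in GB
    print(f"{ws} GB working set: {measured_ops(ws, 100):,.0f} ops/sec")
```

Under these made-up rates, a working set that fits entirely in cache reports the cache's speed rather than the storage system's, which is why a settable working set size (or a disabled cache) matters.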
Nonetheless, what you say is important. I hope that SPEC listens and modifies the benchmark to be better next time.
The funny thing about SPECsfs 2008 today is that most of the submissions are mid-range systems. Only lately have a couple of vendors come in with higher-end systems. I attribute that to the newness of the benchmark and the unwillingness of any vendor to look bad in comparison to the older SPECsfs 97 benchmark.
Comments are closed.