
There really weren’t many new submissions for the Storage Performance Council SPC-1 or SPC-2 benchmarks this past quarter (just the new Fujitsu DX80S2 SPC-2 run), so we thought it was time to roll out a new chart.
The chart above shows a scatter plot of the number of disk drives in a submission vs. the MB/sec attained for the Large Database Query (LDQ) component of an SPC-2 benchmark.
As anyone who follows this blog or our Twitter feed knows, we have a long-running discussion about how I/O benchmarks such as this are mostly just a measure of how much hardware (disks and controllers) is thrown at them. We added a linear regression line to the above chart to evaluate the validity of that claim, and as clearly shown above, disk drive count is NOT highly correlated with SPC-2 performance.
We necessarily exclude from this analysis any system results that used NAND-based caching or SSD devices, so as to focus specifically on the relevance of disk drive count. There are not many such results in SPC-2, but there are enough that including them would have made the correlation look even worse.
We chose to display only the LDQ segment of the SPC-2 benchmark because it shows the best correlation between throughput and disk count, with an R**2 of 0.41. The aggregate MBPS and the other SPC-2 components, video on demand (VOD) and large file processing (LFP), all had R**2 values of less than 0.36.
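For readers who want to run this kind of check themselves against the published SPC-2 full disclosure data, a minimal sketch of the regression is below. The disk counts and LDQ MBPS values are made-up placeholders, not actual submission data; scipy's linregress does the fit and reports the correlation coefficient, which we square to get R**2.

```python
# Minimal sketch: fit a linear regression of LDQ MBPS vs. disk drive count
# and report R**2. The data points below are hypothetical placeholders,
# NOT actual SPC-2 submission results.
from scipy.stats import linregress

disk_counts = [128, 192, 280, 480, 775, 775, 960, 1280]          # hypothetical
ldq_mbps    = [1900, 2400, 3100, 5200, 6000, 11500, 7800, 9400]  # hypothetical

fit = linregress(disk_counts, ldq_mbps)
print(f"slope = {fit.slope:.1f} MBPS per drive")
print(f"R**2  = {fit.rvalue**2:.2f}")   # ~0.41 was the best we saw, and that was for LDQ
```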
For instance, just look at the vertical band of points centered around 775 disk drives. Two systems show up here, one doing ~6,000 MBPS and the other ~11,500 MBPS – quite a difference. The fact that these are two different storage architectures from the same vendor is even more telling.
Why is the overall correlation so poor?
One can only speculate, but there must be something about system sophistication at work in SPC-2 results. It’s probably tied to better caching, better data layout on disk, and lower IO latency, but that’s only an educated guess. For example:
- Most of the SPC-2 workload is sequential in nature. How a storage system detects sequentiality in a seemingly random IO mix is an art form, and what a system does armed with that knowledge is probably more of a science (a toy sketch follows this list).
- In the old days of big, expensive CKD DASD, sequential data was laid out consecutively (barring lacing) around a track and up a cylinder. In these days of zoned FBA disks, one can only hope that sequential data resides in laced sectors along consecutive tracks on the media, minimizing any head seek activity. Another approach, popular this last decade, has been to throw more disks at the problem, resulting in many more seeking heads to handle the workload regardless of where the data lies.
- IO latency is another factor. We have discussed this before (see Storage throughput vs IO response time and why it matters). One key to system throughput is how quickly data gets out of cache and into the hands of servers. Of course, the other part of this is how fast the storage system gets data from disk into cache in the first place.
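To make the first bullet a bit more concrete, here is a toy sketch of one common way to detect sequentiality: remember where the last read for a stream ended, treat a read that starts right there as part of a sequential run, and start prefetching once the run is long enough. This is a simplified illustration of the general idea, not any particular vendor’s controller logic; the stream table, threshold, and prefetch size are all invented for the example.

```python
# Toy sketch of sequential-stream detection: if a read starts where the
# previous read for that stream ended, count it toward a sequential run;
# after a few consecutive hits, start prefetching ahead of the host.
# All names and thresholds here are invented for illustration.

SEQ_THRESHOLD = 3      # contiguous reads before we call the stream sequential
PREFETCH_BLOCKS = 256  # how far ahead to read once it looks sequential

streams = {}           # stream_id -> (next_expected_lba, run_length)

def on_read(stream_id, lba, blocks):
    """Return an (lba, length) prefetch hint for this read, or None."""
    expected, run = streams.get(stream_id, (None, 0))
    run = run + 1 if lba == expected else 1       # contiguous? extend the run
    streams[stream_id] = (lba + blocks, run)      # where the next read should start
    if run >= SEQ_THRESHOLD:
        return (lba + blocks, PREFETCH_BLOCKS)    # read ahead of the host
    return None

# Example: three back-to-back 8-block reads on stream 1 trigger a prefetch.
for start in (0, 8, 16):
    hint = on_read(1, start, 8)
print(hint)   # (24, 256) -> read-ahead starting at LBA 24
```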
Systems that do these things better will perform better on SPC-2 and similar benchmarks that focus on raw sequential throughput.
Comments?
—–
The full SPC performance report went out to our newsletter subscribers last month. A copy of the full report will be up on the dispatches page of our website later next month. However, you can get this information now, and receive future reports even earlier, by subscribing to the newsletter: just send us an email or use the signup form at the upper right.
As always, we welcome any suggestions on how to improve our analysis of SPC results or any of our other storage system performance discussions.