As part of our ongoing commitment to maintaining SCI StorInt™ briefings and presentations, we periodically update our storage and NFS performance charts. We use this StorInt™ Dispatch to publish these results for general readers.
SPC-1 IOPS™ results
It’s clear from the IO operations per second (IOPS™) results above that SVC can sustain some heavy aggregate workloads. What’s surprising, though, is the IBM DS8300 Turbo’s performance: first, beating the Texas Memory RamSan, which is essentially a RAM disk, and second, placing above all the other single-subsystem configurations – very impressive.
SPC-1 LRT™ results
Once again the surprise here is the performance of the DS8300 Turbo. Although aggregate IOPS (see above) typically correlates with subsystem sophistication, a great Least Response Time (LRT™) is typically the result of less sophistication (less work to do) and raw horsepower. For these reasons, the best LRT™ performance typically falls to mid-range systems.
Again, it is reasonable for the TMS RamSan to place well here, as it is a RAM disk, although it still takes a good deal of smart code and horsepower to serve up a random 512-byte block in under 200 microseconds. LSI’s 3rd place and DataCore’s 4th place are more easily explained: these are mid-range subsystems with the right code and the right horsepower to do LRT™ well.
What’s surprising here is RamSan’s 1st-place finish, as RAM disks are generally very expensive. The remaining top 5 are all mid-range systems that would typically fare well in a cost-per-performance comparison.
SPC-2 MBPS results
Where no RAID type is listed, the product was configured as RAID1. What’s surprising here is the strong MB/sec (MBPS™) showing of the RAID5 subsystems. It’s not apparent from the chart, but this benchmark consists mostly of sequential reads, so there is little RAID5 parity write penalty. A key lesson is that for sequential reads, RAID4, 5 or 6 will perform just as well as RAID1 at considerably less cost.
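The parity write penalty mentioned above can be sketched as a simple backend-I/O accounting exercise. The figures below are the commonly cited I/O multipliers for small random writes versus reads, not numbers drawn from the SPC-2 results themselves:

```python
# Illustrative sketch (not from the SPC-2 data): backend disk I/Os
# generated per front-end I/O for common RAID levels.

def backend_ios(raid: str, op: str) -> int:
    """Return backend disk I/Os caused by one front-end I/O."""
    if op in ("sequential_read", "random_read"):
        return 1  # reads carry no parity penalty on any RAID level
    if op == "random_write":
        return {
            "RAID1": 2,  # write both mirror copies
            "RAID5": 4,  # read old data + old parity, write new data + new parity
            "RAID6": 6,  # two parity strips roughly double the parity I/Os
        }[raid]
    raise ValueError(f"unknown operation: {op}")

# For sequential reads, RAID5 costs the same per I/O as RAID1 ...
print(backend_ios("RAID5", "sequential_read") == backend_ios("RAID1", "sequential_read"))
# ... the penalty only appears on small random writes.
print(backend_ios("RAID5", "random_write"))
```

This is why a mostly-sequential-read benchmark shows little difference between parity RAID and mirroring, despite the large gap under random-write workloads.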
SPEC SFS NFS ops/sec un-normalized results
The NFS results have a clear winner. NetApp GX is their grid-oriented product, which incorporated and updated the Spinnaker technology. To attain over 1 million NFS ops/sec, NetApp configured a 24-node cluster with 4 cores each, or 96 cores – a lot of horsepower. EMC Celerra took a similar tack, using an 8-blade configuration. Panasas also followed the crowd, using 60 nodes for its benchmark. The outlier is BlueArc, whose proprietary hardware allows it to do well with only 2 nodes.
SPEC SFS NFS ops/sec normalized results
Here one can see the advantage that BlueArc gains from its proprietary hardware. In fact, the current and prior generation BlueArc systems hold the top 7 slots in this cut of NFS performance.
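The normalization behind this chart is simply aggregate throughput divided by node count. A minimal sketch, using only the figures stated above (NetApp GX at over 1 million ops/sec on 24 nodes); other vendors' aggregate numbers are not given in the text, so they are omitted:

```python
# Minimal sketch of per-node normalization of SPEC SFS results.

def per_node(total_ops_per_sec: float, nodes: int) -> float:
    """Normalize aggregate NFS ops/sec by the number of nodes."""
    return total_ops_per_sec / nodes

# From the text: NetApp GX posted over 1,000,000 ops/sec across 24 nodes.
print(round(per_node(1_000_000, 24)))  # roughly 41,667 ops/sec per node
```

Because BlueArc reached its aggregate result with only 2 nodes, the same division leaves its per-node figure far ahead of the large clustered configurations, which is exactly the effect visible in the normalized chart.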
This performance dispatch was sent out to our newsletter subscribers in August of 2007. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right), or subscribe by email and we will send our current issue along with download instructions for this and other reports. Also, if you need an even more in-depth analysis of NAS or SAN storage system features and performance, please take the time to examine our NAS and SAN Storage Briefings, available for purchase on our website.
A PDF version of this can be found at SCI 2007 August 15 SPC and SPEC® SFS97 Performance Results Update