We return now to our favorite block storage benchmark, Storage Performance Council (SPC) results*. There have been three new SPC-1 submissions since our last report: the TMS RamSan-630, IBM Storwize V7000 and Pillar Data Axiom 600 Series 3. There has also been one new SPC-2 submission, again from TMS for their RamSan-630 (the first all-SSD system on this throughput benchmark), and a new SPC-1C (component benchmark) submission for the new Seagate Pulsar-XT.2 SSD.
We start our discussion with the top 10 IO/sec, or as SPC calls it, IOPS™.
Figure 1 Top 10 SPC-1* IOPS
Higher is better here. The only new submission on this chart is the TMS RamSan-630, which came in as the new number 1 at 400.5K IOPS. None of the other new submissions made it into the top 10, so the other systems here have all been discussed in previous dispatches. The new RamSan-630 had about 8TB of user storage using 20 640GiB SSDs. It also used 8Gbps FC front-end and back-end interfaces.
Next let’s turn to storage price performance and LRT™.
Figure 2 Top 10 LRT results
Lower is better on the LRT (least response time) results. The new RamSan continued TMS's nearly unmatched run with another sub-0.5msec LRT result (they came in at 0.22msec). For a long time I never thought I would see sub-1msec response times, but I could never have imagined 220 usec, let alone the all-time winner, the TMS RamSan-400 at 90 usec. I would like a very deep dive into their code and hardware to understand this a lot better, because whatever they're doing deserves much wider dissemination.
Figure 3 Top 10 $/IOPS
I guess it's official now: owning the top four slots, TMS SSDs provide the best $/IOPS around. The RamSan-630 8TB configuration came in at ~$413K and, while not as cost effective ($/IOPS) as the RamSan-400, is capable of significantly more IOPS. Recall that the two Sun systems (#5 and #6) had no RAID protection, which provides a significant cost advantage over the other, mostly RAID1, subsystems but seriously increases the risk of data loss.
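As a quick sanity check on that cost-effectiveness claim, here is a back-of-envelope sketch using only the ~$413K price and 400.5K IOPS figures cited above:

```python
# Back-of-envelope $/IOPS for the RamSan-630, using the figures cited above
price_usd = 413_000   # ~$413K for the 8TB configuration
iops = 400_500        # its SPC-1 IOPS result

dollars_per_iops = price_usd / iops
print(f"${dollars_per_iops:.2f}/IOPS")  # roughly $1.03/IOPS
```

Roughly a dollar per SPC-1 IOPS, which is the kind of figure that puts TMS at the top of this chart.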
Figure 4 Scatter plot: IOPS versus Number of Drives
We have added a new chart attempting to determine whether higher drive counts lead to better IOPS. We have eliminated all SSD and NAND-caching systems from this analysis, and the regression fit is pretty good. Given the regression equation above, to match the 400.5K IOPS of the number one TMS system, one would need to field ~2240 disk drives. I have not eliminated older or slower drives from this analysis, but it clearly shows that we should be able to attain ~175 IOPS per disk drive.
The other interesting item here is that the fit seems to get worse above 500 drives or so, indicating that systems capable of using this many drives effectively can obtain more than ~175 IOPS per drive.
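To make the drive-count arithmetic concrete, here is a minimal sketch. The published regression coefficients aren't reproduced here, so it assumes only the ~175 IOPS/drive slope quoted above with a zero intercept; the actual fit presumably has a non-zero intercept, which would explain why the figure cited above is ~2240 rather than the simple quotient:

```python
# Invert the drives-vs-IOPS trend: how many disk drives would be needed to
# match a target IOPS level, assuming ~175 IOPS/drive (slope only; any
# regression intercept is ignored, so this is an approximation).
iops_per_drive = 175      # assumed slope from the scatter-plot regression
target_iops = 400_500     # the RamSan-630's SPC-1 result

drives_needed = target_iops / iops_per_drive
print(f"~{drives_needed:.0f} disk drives")  # ~2289, same ballpark as ~2240
```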
The new IBM Storwize V7000 and Pillar Data Axiom 600 Series 3 systems show up on this chart in the group between 50K and 100K IOPS.
There was only one new submission in the SPC-2 arena this past quarter, and that was for the TMS RamSan-630.
Figure 5 SPC-2 MBPS results
Higher is better on the MBPS chart. Recall that SPC-2 MBPS is a composite score that averages storage performance across the large file processing (LFP), large database query (LDQ), and video on demand (VOD) sequential workloads. The new TMS RamSan-630 came in at #4 on this chart at ~8300 MBPS and, for some reason, is the only SSD-intensive system to have a submission. It's unclear why this is, although throughput has not necessarily been a strong point for SSDs.
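To illustrate how the composite works, a minimal sketch; the three workload numbers below are purely hypothetical, chosen for illustration, and are not actual submission results:

```python
# SPC-2 MBPS composite: the average of the three workload-level throughputs.
# All three values below are hypothetical, for illustration only.
lfp_mbps = 8_000   # large file processing (hypothetical)
ldq_mbps = 9_000   # large database query (hypothetical)
vod_mbps = 7_900   # video on demand (hypothetical)

composite_mbps = (lfp_mbps + ldq_mbps + vod_mbps) / 3
print(f"composite: {composite_mbps:.0f} MBPS")  # 8300
```

Because it is a simple average, one weak workload (say, VOD) can pull a system's composite score well below its best single-workload result.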
Figure 6 SPC-2 MBPS spider chart
We have discussed this chart before because it shows how the three workloads in the MBPS composite compare by system. I find it interesting that the VOD workload is significantly different for four out of the top 5 systems versus the RamSan-630. Now, the majority of these workloads are heavily sequential, but there is obviously something else going on with LFP and VOD. Given the above, I would say there must be a heavier write component to the LFP workload (relative to the VOD and LDQ workloads) and a heavier random component to the VOD workload (relative to the LFP and LDQ workloads).
It's good to see the recent SPC-1 submissions by IBM, Pillar Data and TMS and the SPC-2 submission from TMS. Performance continues to be an important metric for block storage subsystems, and random and sequential performance are complementary views of storage performance.
Although this time SSDs took center stage, we know it is possible to do well with disks alone if necessary. Nonetheless, it seems that on a price-performance basis, SSDs almost can't be beat.
There seems to be a lack of SSD SPC-2 benchmarks. There doesn't seem to be any real reason why SSDs couldn't make a better showing here if vendors wanted to. I keep seeing one NAND storage vendor's multi-TV demo displaying literally thousands of videos simultaneously, which means their device ought to be able to kill the VOD portion of the benchmark. Of course, how SPC would handle a PCIe storage device is another question. Perhaps we need an SPC-2C benchmark.
As always, any suggestions on how to improve our performance analyses are welcome. Additionally, if you are interested in more details, we now provide a fuller version of all these charts in SCI's SAN Storage Briefing, which can be purchased separately from our website. [This performance dispatch was originally sent out to our newsletter subscribers in May of 2011. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports. Also, if you need an even more in-depth analysis of SAN storage system features and performance, please take the time to examine our recently revised (May 2019) SAN Storage Buying Guide, available for purchase from our website.]
Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company, based in the USA, offering products and services to the data storage community.