SPC-1/E IOPS per watt – chart of the month

SPC*-1/E IOPS per Watt as of 27Aug2010

There were not a lot of Storage Performance Council (SPC) benchmark submissions this past quarter: just a new SPC-1/E from HP StorageWorks on their 6400 EVA with SSDs and a new SPC-1 run for the Oracle Sun StorageTek 6780.  Recall that SPC-1/E executes all the same tests as SPC-1 but adds more testing, with power monitoring equipment attached, to measure power consumption at a number of performance levels.

With this chart we take another look at storage energy consumption (see my previous discussion on SSD vs. drive energy use). As shown above, we graph the IOPS/watt for three different usage environments: Nominal, Medium, and High, as defined by SPC.  These are contrived storage usage workloads designed to measure the variability in power consumed by a subsystem.  SPC defines the workloads as follows:

  • Nominal usage is 16 hours of idle time and 8 hours of moderate activity
  • Medium usage is 6 hours of idle time, 14 hours of moderate activity, and 4 hours of heavy activity
  • High usage is 0 hours of idle time, 6 hours of moderate activity, and 18 hours of heavy activity

As for activity levels, SPC defines moderate activity as 50% of the subsystem’s maximum reported SPC-1 performance and heavy activity as 80% of its maximum performance.
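To make the math concrete, here is a minimal Python sketch of how a time-weighted average power draw and IOPS/watt could be computed for each usage profile, using the hour splits and activity levels above. How SPC-1/E actually rolls its power measurements into the reported figures may differ; treat this as an illustration only.

```python
# Sketch of a time-weighted power/IOPS calculation for the three SPC-defined
# usage profiles. The hour splits and activity levels come from the SPC
# definitions above; how SPC-1/E officially combines its measurements is an
# assumption here, not the actual methodology.

# (idle_hours, moderate_hours, heavy_hours) per 24-hour day
PROFILES = {
    "nominal": (16, 8, 0),
    "medium":  (6, 14, 4),
    "high":    (0, 6, 18),
}

def avg_watts(idle_w, moderate_w, heavy_w, profile):
    """Time-weighted average power draw (watts) over a 24-hour day."""
    idle_h, mod_h, heavy_h = PROFILES[profile]
    return (idle_w * idle_h + moderate_w * mod_h + heavy_w * heavy_h) / 24.0

def iops_per_watt(max_iops, idle_w, moderate_w, heavy_w, profile):
    """Average delivered IOPS (50% of max when moderate, 80% when heavy, zero
    when idle) divided by the average power draw for the same profile."""
    idle_h, mod_h, heavy_h = PROFILES[profile]
    avg_iops = (0.5 * max_iops * mod_h + 0.8 * max_iops * heavy_h) / 24.0
    return avg_iops / avg_watts(idle_w, moderate_w, heavy_w, profile)
```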

With that behind us, now on to the chart.  The HP 6400 EVA used eight 73GB SSDs for storage while the two Xiotech submissions used 146GB/15Krpm and 600GB/15Krpm drives with no flash.  As expected, the HP SSD subsystem delivered considerably more IOPS/watt at the high usage workload – ~2X the Xiotech with 600GB drives and ~2.3X the Xiotech with 146GB drives.  The multipliers were slightly lower for medium usage but still substantial nonetheless.

SSD nominal usage power consumption

However, the nominal usage results bear some explanation.  Here both Xiotech subsystems beat out the HP EVA SSD subsystem, with the 600GB drive Xiotech box delivering ~1.3X the IOPS/watt of the HP SSD system. How can this be?  SSD idle power consumption is the culprit.

The HP EVA SSD subsystem consumed ~463.1W at idle, while the Xiotech 600GB subsystem consumed only ~23.5W and the Xiotech 146GB drive subsystem ~23.4W.  I would guess that the drives, and perhaps the Xiotech subsystem itself, have considerable power-saving algorithms that shed power when idle.  For whatever reason, the SSDs and HP EVA don’t seem to have anything like this.  So nominal usage, with its 16 hours of idle time, penalizes the HP EVA SSD system, resulting in the poor IOPS/watt for nominal usage shown above.
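Some rough arithmetic, using only the idle wattages cited above, shows how quickly 16 hours of idle time adds up:

```python
# Rough arithmetic showing why 16 hours of idle time hurts the SSD subsystem's
# nominal IOPS/watt. Idle wattages are the figures cited above; the point is
# simply how much energy idle time alone contributes per day.

IDLE_WATTS = {"HP EVA (SSD)": 463.1, "Xiotech 600GB": 23.5, "Xiotech 146GB": 23.4}
NOMINAL_IDLE_HOURS = 16

for name, watts in IDLE_WATTS.items():
    kwh = watts * NOMINAL_IDLE_HOURS / 1000.0
    print(f"{name}: ~{kwh:.1f} kWh/day spent idling")
# HP EVA (SSD): ~7.4 kWh/day vs. ~0.4 kWh/day for either Xiotech box --
# all of it landing in the denominator of the nominal IOPS/watt metric.
```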

Ray’s reading: SSDs are not meant to be idled a lot, and disk drives, especially the ones Xiotech is using, have very sophisticated power management that SSDs and/or HP should perhaps take a look at adopting.

The full SPC performance report will go up on SCI’s website next month in our dispatches directory.  However, if you are interested in receiving this sooner, just subscribe by email to our free newsletter and we will send you the current issue with download instructions for this and other reports.

As always, we welcome any suggestions on how to improve our analysis of SPC performance information so please comment here or drop us a line.

SPC-1 & -1/E results IOPS/drive – chart of the month

Top IOPS(tm) per drive for SPC-1 & -1/E results as of 27May2010

The chart shown here reflects information from an SCI StorInt(tm) dispatch on the latest Storage Performance Council benchmark results and depicts the top IO operations done per second per installed drive for SPC-1 and SPC-1/E submissions.  This particular storage performance metric is one of the harder ones to game.  For example, simply adding more drives to boost performance does nothing for this view.

The recent SPC-1 submissions were from Huawei Symantec’s Oceanspace S2600 and S5600, Fujitsu’s Eternus DX400 and DX8400, and the latest IBM DS8700 with EasyTier, SSDs, and SATA drives. Of these results, the only one to show up on this chart was the low-end Huawei Symantec S2600.  It used only 48 drives and attained ~17K IOPS as measured by SPC-1.
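For reference, the metric behind this chart is simply reported SPC-1 IOPS divided by the number of installed drives; a quick Python check using the S2600 figures above:

```python
# IOPS per installed drive, the metric plotted in this chart.
def iops_per_drive(spc1_iops, installed_drives):
    return spc1_iops / installed_drives

# Huawei Symantec S2600 figures cited above: ~17K IOPS on 48 drives.
print(round(iops_per_drive(17_000, 48)))   # ~354 IOPS per installed drive
```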

Other changes to this chart included the addition of Xiotech’s Emprise 5000 SPC-1/E runs with both 146GB and 600GB drives.  We added the SPC-1/E results because they execute the exact same set of tests and generate the same performance summaries.

It’s very surprising to see the first use of 600GB drives in an SPC-1/E benchmark show up this well here, and the very respectable #2 result from Xiotech’s 146GB drive version indicates excellent per-drive performance.  The only other non-146GB drive result was from the Fujitsu DX80, which used 300GB drives.

Also, as readers of our storage performance dispatches may recall, the Sun (now Oracle) J4400 array provided no RAID support for its benchmark run.  We view this as an unusable configuration, and its advantages vis-à-vis IOPS/drive are probably debatable.

A couple of other caveats to this comparison:

  • We do not include pure SSD configurations as they would easily dominate this metric.
  • We do not include benchmarks that use 73GB drives as they would offer a slight advantage and such small drives are difficult to purchase nowadays.

We are somewhat in a quandary about showing mixed drive (capacity) configurations.  In fact, an earlier version of this chart, without the two Xiotech SPC-1/E results, showed the IBM DS8700 EasyTier configuration with SSDs and rotating SATA disks.  In that version the DS8700 came in at a rough tie with the then-7th-place Fujitsu ETERNUS2000 subsystem.  For the time being, we have decided not to include mixed drive configurations in this comparison but would welcome any feedback on this decision.

As always, we appreciate any comments on our performance analysis. Also, if you are interested in receiving your own free copy of our newsletter with the full SPC performance report in it, please subscribe.  The full report will be made available on the dispatches section of our website in a couple of weeks.

SPC-1 results IOPS vs. capacity – chart of the month

SPC-1* IOPS vs. Capacity, (c) 2010 Silverton Consulting, All Rights Reserved

This chart is from SCI’s report last month on recent Storage Performance Council (SPC) benchmark results. There were a couple of new entries this quarter, but we decided to introduce this new chart as well.

This is a bubble scatter plot of SPC-1(TM) (online transaction workload) results. Only storage subsystems that cost less than $100/GB are included, to introduce some fairness into the comparison.

  • Bubble size is a function of the total cost of the subsystem
  • Horizontal axis is subsystem capacity in GB
  • Vertical axis is peak SPC-1 IOPS(TM)

Also, we decided to show a linear regression line and equation to better analyze the data. As shown in the chart, there is a pretty good correlation between capacity and IOPS (R**2 of ~0.8). The equation parameters can be read from the chart, and the fit looks pretty tight from a visual perspective.
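For readers who want to reproduce this kind of view on their own data, below is a minimal Python/matplotlib sketch of a bubble scatter plot with a least-squares regression line. The data points are made-up placeholders, not the actual SPC-1 submissions behind the chart above.

```python
# Bubble scatter plot with a fitted regression line, in the style of the chart
# discussed above. All data values below are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

capacity_gb = np.array([5_000, 20_000, 45_000, 80_000, 120_000])   # hypothetical
peak_iops   = np.array([8_000, 30_000, 70_000, 110_000, 160_000])  # hypothetical
cost_k      = np.array([150, 400, 900, 1_800, 3_000])              # hypothetical $K

slope, intercept = np.polyfit(capacity_gb, peak_iops, 1)   # least-squares fit
r = np.corrcoef(capacity_gb, peak_iops)[0, 1]

plt.scatter(capacity_gb, peak_iops, s=cost_k, alpha=0.5)    # bubble size ~ cost
plt.plot(capacity_gb, slope * capacity_gb + intercept, "r--",
         label=f"y = {slope:.2f}x + {intercept:.0f}, R^2 = {r**2:.2f}")
plt.xlabel("Capacity (GB)")
plt.ylabel("Peak SPC-1 IOPS")
plt.legend()
plt.show()
```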

The one significant outlier, at ~250K IOPS, is the TMS RAMSAN, which uses SSD technology. The two large bubbles at the top right are two IBM SVC 5.1 runs with similar backend capacity; the top SVC run had 6 nodes and the bottom SVC run only had 4.

As always, a number of caveats to this:

  • Not all subsystems on the market today are benchmarked with SPC-1
  • The pricing cap eliminated high priced storage from this analysis
  • The SPC-1 workload may or may not be similar to your workloads.

Nevertheless, most storage professionals come to realize that having more disks often results in better performance. This is often confounded by the RAID type used, disk drive performance, and cache size. However, the nice thing about SPC-1 runs is that most (nearly all) use RAID 1, have the largest cache size that makes sense, and use the best-performing disk drives (or SSDs). The conclusion could not be more certain – the more RAID 1 capacity one has, the higher the number of IOPS one can attain from a given subsystem.

The full SPC report went out to our newsletter subscribers last month, and a copy will be up on the dispatches page of our website later this month. However, you can get this information now and subscribe to future newsletters to receive full reports even earlier; just email us at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.

As always, we welcome any suggestions on how to improve our analysis of SPC or any of our other storage system performance results. This new chart was a result of one such suggestion.

Latest SPC-1 IOPS vs. LRT – chart of the month

SPC-1* IOPS vs. LRT for storage subsystems under $100/GB, with subsystem price ($K) as bubble size, (C) 2009 Silverton Consulting, Inc.

This chart was included in last month’s newsletter and shows the relative cost of subsystem storage as well as subsystem performance on two axes: SPC-1 IO operations per second and measured Least Response Time (LRT).

With the spreadsheet in hand, I can easily tell which bubble is which subsystem but have yet to figure out an easy way to get Excel to label the bubbles. For example, the two largest bubbles with the highest IOPS performance are the IBM SVC 4.3 and 3PAR InServ T800 subsystems.

The IBM SVC is a storage virtualization engine with 16 DS4700 storage subsystems behind 8 SVC nodes, using 1,536 146GB drives at a total cost of $3.2M. The 3PAR, by contrast, has 8 T-Series controller nodes with 1,280 146GB drives at a total cost of $2.1M.

I am constantly looking for new ways to depict storage performance data and found that, other than the lack of labels, this chart was almost perfect. It offers both IOPS and LRT performance metrics as well as subsystem price in one chart.
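For what it’s worth, labeling the bubbles is straightforward outside of Excel; a minimal matplotlib sketch is below. The subsystem names and prices come from the discussion above, while the IOPS and LRT coordinates are placeholder values for illustration only.

```python
# Labeled bubble chart: the piece Excel made awkward is handled directly by
# matplotlib's annotate(). Names/prices are from the text above; the IOPS and
# LRT coordinates are hypothetical placeholders.
import matplotlib.pyplot as plt

systems = [
    # (name, IOPS, LRT in ms, price in $K) -- IOPS/LRT values are hypothetical
    ("IBM SVC 4.3",      250_000, 7.0, 3_200),
    ("3PAR InServ T800", 220_000, 8.0, 2_100),
]

for name, iops, lrt_ms, price_k in systems:
    plt.scatter(lrt_ms, iops, s=price_k / 5, alpha=0.5)   # bubble size ~ price
    plt.annotate(name, (lrt_ms, iops), textcoords="offset points", xytext=(8, 8))

plt.xlabel("Least Response Time (ms)")
plt.ylabel("SPC-1 IOPS")
plt.show()
```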

This chart and others like it were sent out in last month’s SCI newsletter. If you are interested in receiving your own copy of next month’s newsletter, please drop me an email at
SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter

*Information for this chart is from the Storage Performance Council and can be found at StoragePerformance.org