SPC-1/E IOPS per watt – chart of the month

SPC*-1/E IOPS per Watt as of 27Aug2010

Not a lot of Storage Performance Council (SPC) benchmark submissions this past quarter: just a new SPC-1/E from HP StorageWorks on their 6400 EVA with SSDs and a new SPC-1 run for the Oracle Sun StorageTek 6780.  Recall that SPC-1/E executes all the same tests as SPC-1 but adds power monitoring equipment to measure power consumption at a number of performance levels.

With this chart we take another look at storage energy consumption (see my previous discussion on SSD vs. drive energy use). As shown above, we graph IOPS/watt for three different usage environments, Nominal, Medium, and High, as defined by SPC.  These are contrived storage usage workloads intended to measure the variability in power consumed by a subsystem.  SPC defines the workloads as follows:

  • Nominal usage is 16 hours of idle time and 8 hours of moderate activity
  • Medium usage is 6 hours of idle time, 14 hours of moderate activity, and 4 hours of heavy activity
  • High usage is 0 hours of idle time, 6 hours of moderate activity, and 18 hours of heavy activity

As for activity levels, SPC defines moderate activity as 50% of the subsystem’s maximum reported SPC-1 performance and heavy activity as 80% of its maximum performance.
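
To make the weighting concrete, here is a minimal sketch of how a composite IOPS/watt figure could be computed from these profiles, assuming the metric is simply time-weighted average IOPS divided by time-weighted average power over a 24-hour day. The peak IOPS and power figures below are hypothetical placeholders, not numbers from any SPC submission.

```python
# Hypothetical sketch: composite IOPS/watt for the three SPC usage profiles.
# Hours per profile come from the SPC definitions above; the power and IOPS
# numbers are made-up placeholders, not values from any actual submission.

PROFILES = {                 # (idle_hrs, moderate_hrs, heavy_hrs)
    "Nominal": (16, 8, 0),
    "Medium":  (6, 14, 4),
    "High":    (0, 6, 18),
}

MAX_IOPS = 20_000            # hypothetical peak SPC-1 IOPS for a subsystem
IOPS = {"idle": 0, "moderate": 0.5 * MAX_IOPS, "heavy": 0.8 * MAX_IOPS}
WATTS = {"idle": 400.0, "moderate": 550.0, "heavy": 600.0}   # hypothetical

def iops_per_watt(idle_hrs, mod_hrs, heavy_hrs):
    hours = {"idle": idle_hrs, "moderate": mod_hrs, "heavy": heavy_hrs}
    total = sum(hours.values())                       # 24-hour day
    avg_iops = sum(IOPS[k] * h for k, h in hours.items()) / total
    avg_watts = sum(WATTS[k] * h for k, h in hours.items()) / total
    return avg_iops / avg_watts

for name, hrs in PROFILES.items():
    print(f"{name:8s} {iops_per_watt(*hrs):6.1f} IOPS/watt")
```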

With that behind us, on to the chart.  The HP 6400 EVA had eight 73GB SSDs for storage while the two Xiotech submissions had 146GB/15Krpm and 600GB/15Krpm drives with no flash.  As expected, the HP SSD subsystem delivered considerably more IOPS/watt for the high usage workload: ~2X the Xiotech with 600GB drives and ~2.3X the Xiotech with 146GB drives.  The multipliers were slightly smaller for medium usage but still substantial.

SSD nominal usage power consumption

However, the nominal usage results bear some explanation.  Here both Xiotech subsystems beat the HP EVA SSD subsystem, with the 600GB drive Xiotech box supporting ~1.3X the IOPS/watt of the HP SSD system. How can this be?  SSD idle power consumption is the culprit.

The HP EVA SSD subsystem consumed ~463.1W at idle, while the Xiotech 600GB drive subsystem consumed only ~23.5W and the Xiotech 146GB drive subsystem only ~23.4W.  I would guess that the drives, and perhaps the Xiotech subsystem itself, have considerable power savings algorithms that shed power when idle.  For whatever reason, the SSDs and the HP EVA don’t seem to have anything like this.  So nominal usage, with its 16 hours of idle time, penalizes the HP EVA SSD system, resulting in the poor IOPS/watt for nominal usage shown above.
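
A quick back-of-the-envelope calculation shows why. Over a nominal day, 16 of the 24 hours are spent at idle power, so average power is dominated by the idle figure. The sketch below uses the reported idle watts together with hypothetical moderate-activity draws (the 600W and 300W figures are assumptions, not disclosed numbers) just to illustrate the effect.

```python
# Time-weighted average power over the Nominal profile (16 hrs idle, 8 hrs moderate).
# Idle watts come from the SPC-1/E reports; the moderate-activity watts used here
# are purely hypothetical assumptions for illustration.

def nominal_avg_watts(idle_w, moderate_w, idle_hrs=16, moderate_hrs=8):
    return (idle_w * idle_hrs + moderate_w * moderate_hrs) / (idle_hrs + moderate_hrs)

hp_eva_ssd  = nominal_avg_watts(463.1, 600.0)   # ~509 W average (600 W assumed)
xiotech_600 = nominal_avg_watts(23.5, 300.0)    # ~116 W average (300 W assumed)

print(f"HP EVA SSD:    ~{hp_eva_ssd:.0f} W average over a nominal day")
print(f"Xiotech 600GB: ~{xiotech_600:.0f} W average over a nominal day")
# Because the Xiotech sheds nearly all of its power at idle, the 16 idle hours
# pull its average way down; the HP system's high idle draw keeps its average
# near its active power, which is what drags down its nominal IOPS/watt.
```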

Ray’s reading: SSDs are not meant to be idled a lot, and disk drives, especially the ones Xiotech is using, have very sophisticated power management that SSD vendors and/or HP should take a look at adopting.

The full SPC performance report will go up on SCI’s website next month in our dispatches directory.  However, if you are interested in receiving this sooner, just subscribe by email to our free newsletter and we will send you the current issue with download instructions for this and other reports.

As always, we welcome any suggestions on how to improve our analysis of SPC performance information so please comment here or drop us a line.

SPC-1 Results IOPS vs. Capacity – chart of the month

SPC-1* IOPS vs. Capacity, (c) 2010 Silverton Consulting, All Rights Reserved

This chart is from SCI’s report last month on recent Storage Performance Council (SPC) benchmark results. There were a couple of new entries this quarter, but we decided to introduce this new chart as well.

This is a bubble scatter plot of SPC-1(TM) (online transaction workload) results. Only storage subsystems that cost less than $100/GB are included, in an attempt to introduce some fairness into the comparison.

  • Bubble size is a function of the total cost of the subsystem
  • Horizontal axis is subsystem capacity in GB
  • Vertical axis is peak SPC-1 IOPS(TM)

We also decided to show a linear regression line and equation to better analyze the data. As shown in the chart, there is a pretty good correlation between capacity and IOPS (R**2 of ~0.8). The equation parameters can be read from the chart, and the fit looks pretty tight from a visual perspective.
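
For anyone who wants to build a similar chart from the published results, here is a minimal sketch in Python using numpy and matplotlib; the capacity, IOPS, and price arrays below are dummy placeholders, not actual SPC-1 data.

```python
# Sketch of a bubble scatter plot with a linear fit, in the spirit of the chart
# above. The data arrays are dummy placeholders, not actual SPC-1 results.
import numpy as np
import matplotlib.pyplot as plt

capacity_gb = np.array([2_000, 8_000, 20_000, 45_000, 90_000])       # horizontal axis
peak_iops   = np.array([12_000, 35_000, 70_000, 150_000, 260_000])   # vertical axis
price_usd   = np.array([60_000, 200_000, 450_000, 900_000, 1_800_000])

# Bubble area scaled by total subsystem price
sizes = 2_000 * price_usd / price_usd.max()

slope, intercept = np.polyfit(capacity_gb, peak_iops, 1)   # linear regression
r = np.corrcoef(capacity_gb, peak_iops)[0, 1]

fig, ax = plt.subplots()
ax.scatter(capacity_gb, peak_iops, s=sizes, alpha=0.5)
xs = np.linspace(capacity_gb.min(), capacity_gb.max(), 100)
ax.plot(xs, slope * xs + intercept, "r--",
        label=f"IOPS = {slope:.2f}*GB + {intercept:,.0f}  (R^2 = {r**2:.2f})")
ax.set_xlabel("Capacity (GB)")
ax.set_ylabel("Peak SPC-1 IOPS")
ax.legend()
plt.show()
```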

The one significant outlier, at ~250K IOPS, is the TMS RAMSAN, which uses SSD technology. The two large bubbles at the top right are two IBM SVC 5.1 runs at similar backend capacity; the top SVC run had 6 nodes and the bottom run only 4.

As always, a number of caveats to this:

  • Not all subsystems on the market today are benchmarked with SPC-1
  • The pricing cap eliminated high priced storage from this analysis
  • The SPC-1 workload may or may not be similar to your workloads.

Nevertheless, most storage professionals come to realize that having more disks often results in better performance. This is confounded by RAID type, disk drive performance, and cache size. However, the nice thing about SPC-1 runs is that most (nearly all) use RAID 1, have the largest cache size that makes sense, and use the best performing disk drives (or SSDs). The conclusion could not be clearer: the more RAID 1 capacity one has, the more IOPS one can attain from a given subsystem.

The full SPC report went out to our newsletter subscribers last month and a copy will be up on the dispatches page of our website later this month. However, you can get this information now, and receive future full reports even earlier, by emailing us at SubscribeNews@SilvertonConsulting.com?Subject=Subscribe_to_Newsletter.

As always, we welcome any suggestions on how to improve our analysis of SPC or any of our other storage system performance results. This new chart was a result of one such suggestion.

Seagate launches their Pulsar SSD

Seagate's Pulsar SSD (seagate.com)

Today Seagate announced their new SSD offering, named the Pulsar SSD.  It uses SLC NAND technology and comes in a 2.5″ form factor at 50, 100, or 200GB capacity.  The fact that it uses a 3Gb/s SATA interface seems to indicate that Seagate is going after the server market rather than the high-end storage market, but different interfaces can be added over time.

Pulsar SSD performance

The main thing that makes the Pulsar interesting is its peak write rate of 25,000 4KB aligned writes per second versus a peak read rate of 30,000.  A 30:25 ratio of peak reads to peak writes represents a significant advance over prior SSDs, and presumably this comes through the magic of buffering.  But once we get beyond peak IO buffering, sustained 128KB write performance drops to 2,600, 5,300, or 10,500 ops/sec for the 50, 100, and 200GB drives respectively.  It's interesting that sustained write performance drops as capacity drops, which implies that adding capacity also adds parallelism. Sustained 4KB reads for the Pulsar are spec'd at 30,000.
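
One way to see the parallelism argument is to normalize the sustained write specs by capacity. The quick check below, using only the numbers quoted above, shows throughput per GB staying roughly constant across the three models:

```python
# Sustained write spec per GB of capacity, using the figures quoted above.
specs = {50: 2_600, 100: 5_300, 200: 10_500}   # GB -> sustained write ops/sec

for gb, wps in specs.items():
    print(f"{gb:3d}GB drive: {wps / gb:5.1f} sustained write ops/sec per GB")
# ~52-53 ops/sec per GB for all three models, i.e. sustained write performance
# scales almost linearly with capacity.
```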

In contrast, STEC’s Zeus drive is spec'd at 45,000 sustained random reads and 15,000 sustained random writes, with 80,000 peak reads and 40,000 peak writes.  So performance-wise, the Seagate Pulsar (200GB) SSD has about ~37% of the peak read, ~63% of the peak write, ~67% of the sustained read, and ~70% of the sustained write performance of the Zeus drive.
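
Those percentages fall straight out of the spec numbers quoted above; a quick sketch of the arithmetic:

```python
# Ratio of Pulsar (200GB) specs to STEC Zeus specs, from the figures above.
pulsar = {"peak_read": 30_000, "peak_write": 25_000,
          "sust_read": 30_000, "sust_write": 10_500}
zeus   = {"peak_read": 80_000, "peak_write": 40_000,
          "sust_read": 45_000, "sust_write": 15_000}

for metric in pulsar:
    print(f"{metric:10s}: {pulsar[metric] / zeus[metric]:.1%} of Zeus")
# peak_read 37.5%, peak_write 62.5%, sust_read 66.7%, sust_write 70.0%
```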

Pulsar reliability

The other item of interest is that Seagate states a 0.44% annual failure rate (AFR), so for a 100-drive Pulsar storage subsystem, one Pulsar drive should fail roughly every 2.27 years.  Also, the Pulsar bit error rate (BER) is specified at better than 1 in 10E16 when new and 1 in 10E15 at end of life.  As far as I can tell, both of these specifications are better than STEC’s specs for the Zeus drive.
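
The 2.27-year figure follows directly from the AFR. A minimal check of that arithmetic:

```python
# Expected time between failures for a population of drives, given the AFR.
afr = 0.0044        # 0.44% annual failure rate (Pulsar spec)
drives = 100

failures_per_year = afr * drives            # ~0.44 expected failures per year
years_per_failure = 1 / failures_per_year   # ~2.27 years between failures
print(f"~{years_per_failure:.2f} years between failures for {drives} drives")
```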

Both the Zeus and Pulsar drives carry a 5-year limited warranty.  But if the Pulsar is indeed the more reliable drive, as indicated by their respective specifications, vendors may prefer the Pulsar as it would require less service.

All this seems to say that reliability may become a more important factor in vendor SSD selection. I suppose once you get beyond 10K read or write IOPS per drive, performance differences just don’t matter that much. But a BER of 10E14 vs. 10E16 may make a significant difference to product service cost and, as such, may make changing SSD vendors much easier to justify. Seagate seems to be opening up a new front in the SSD wars: drive reliability.
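
To put the BER difference in perspective, here is a rough conversion from error rate to the amount of data you would expect to read between errors, assuming (my interpretation, not Seagate's wording) that the spec means one bit error per 10^N bits read:

```python
# Rough conversion from bit error rate to data read per expected error,
# assuming the spec means one bit error per 10^N bits read.
def tb_per_error(ber_exponent):
    bits_per_error = 10 ** ber_exponent
    return bits_per_error / 8 / 1e12        # terabytes read per expected error

for exp in (14, 15, 16):
    print(f"1 error per 10^{exp} bits ~= one error every {tb_per_error(exp):,.1f} TB read")
# 10^14 -> ~12.5 TB, 10^15 -> ~125 TB, 10^16 -> ~1,250 TB between expected errors
```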

Now if only they offered 6Gb/s SAS or 4GFC interfaces…