ESRP 1K to 5K mbox performance – chart of the month

ESRP 1001 to 5000 mailboxes, database transfers/second/spindle

One astute reader of our performance reports pointed out that some ESRP results could be skewed by the number of drives used during a run.  So, in our latest newsletter we included a database transfers per spindle chart in our Exchange Solution Reviewed Program (ESRP) report on 1001 to 5000 mailboxes.  The chart shown here is reproduced from that report and shows the overall database transfers attained (total of reads and writes) per spindle for the top 10 storage subsystems reporting in the latest ESRP results.
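
For readers who want to reproduce this cut of the data, the metric is simply the aggregate Jetstress database reads and writes per second divided by the number of disk drives in the tested configuration. A minimal sketch of that calculation in Python (the systems and figures below are illustrative placeholders, not actual ESRP submissions):

    # Compute database transfers/second/spindle from ESRP-style results.
    # The records below are illustrative placeholders, not actual submissions.
    results = [
        {"system": "example array A", "db_reads_per_sec": 900.0,
         "db_writes_per_sec": 550.0, "spindles": 12},
        {"system": "example array B", "db_reads_per_sec": 420.0,
         "db_writes_per_sec": 260.0, "spindles": 4},
    ]

    def per_spindle(r):
        # Total DB transfers (reads + writes) normalized by drive count.
        return (r["db_reads_per_sec"] + r["db_writes_per_sec"]) / r["spindles"]

    # Rank highest per-spindle rate first, as in the chart above.
    for r in sorted(results, key=per_spindle, reverse=True):
        print(f'{r["system"]}: {per_spindle(r):.1f} DB transfers/sec/spindle')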

This cut of the system performance shows a number of diverse systems:

  • Some storage systems had 20 disk drives and others had 4.
  • Some of these systems were FC storage (2), some were SAS-attached storage (3), but most were iSCSI storage.
  • Mailbox counts supported by these subsystems ranged from 1400 to 5000 mailboxes.

What’s not shown is the speed of the disk spindles. Also, none of these systems use SSDs or NAND cards to help sustain their respective workloads.

A couple of surprises here:

  • One would expect iSCSI systems to show up much worse than FC storage. True, the number 1 system (NetApp FAS2040) is FC while numbers 2 and 3 are iSCSI, but the differences are not that great.  It would seem that protocol overhead is not a large determinant of spindle performance for ESRP workloads.
  • The number of drives used also doesn’t seem to matter much.  The FAS2040 had 12 spindles while the AX4-5i had only 4.  Although this per-spindle cut of the data should minimize drive-count variability, one would think that more drives would result in higher overall performance across all drives.
  • Such per-spindle performance approaches the limit of what a 15Krpm drive can sustain.  No doubt some of this is helped by system caching, but no amount of cache can hold all the database read and write data for the duration of a Jetstress run.  It’s still pretty impressive, considering a typical 15Krpm drive (e.g., Seagate 15K.6) can probably do ~172 random 64Kbyte-block IOs/second (a rough back-of-the-envelope check follows this list). The NetApp FAS2040 hit almost 182 database transfers/second/spindle, perhaps not with 64Kbyte blocks and maybe not completely random, but impressive nonetheless.
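
For a rough sanity check on that ~172 IOs/second figure, one can estimate a 15Krpm drive’s random IO rate from its seek time, rotational latency, and media transfer rate. The parameter values in the sketch below are assumptions in the ballpark of a Seagate 15K.6-class drive, not datasheet quotes:

    # Back-of-the-envelope random IO rate for a single 15Krpm drive.
    # Parameter values are assumptions, not exact datasheet figures.
    avg_seek_ms = 3.4                              # assumed average seek time
    rpm = 15000
    rotational_latency_ms = 0.5 * 60000.0 / rpm    # half a rotation = 2.0 ms
    block_kb = 64
    media_rate_mb_per_sec = 120.0                  # assumed sustained transfer rate
    transfer_ms = block_kb / 1024.0 / media_rate_mb_per_sec * 1000.0  # ~0.5 ms

    service_time_ms = avg_seek_ms + rotational_latency_ms + transfer_ms
    ios_per_sec = 1000.0 / service_time_ms
    print(f"~{ios_per_sec:.0f} random {block_kb}KB IOs/second")  # lands near ~170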

The other nice thing about this metric is that it doesn’t correlate that well with any of the other ESRP metrics we track, such as aggregate database transfers, database latencies, database backup throughput, etc. So it seems to measure a completely different dimension of Exchange performance.
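
That lack of correlation is easy to check once the metrics are tabulated: compute a correlation coefficient between DB transfers/second/spindle and each of the other metrics across the submissions. A quick sketch, assuming the metrics have been gathered into arrays (the values shown are placeholders, not real ESRP data):

    import numpy as np

    # Placeholder metric vectors, one entry per submission; not real ESRP data.
    per_spindle_xfers = np.array([182.0, 150.0, 141.0, 120.0, 95.0, 88.0])
    other_metrics = {
        "aggregate DB transfers":   np.array([2100.0, 3000.0, 1700.0, 4800.0, 2500.0, 3500.0]),
        "DB read latency (ms)":     np.array([12.0, 9.5, 14.0, 11.0, 16.0, 10.5]),
        "backup throughput (MB/s)": np.array([180.0, 220.0, 150.0, 300.0, 210.0, 260.0]),
    }

    # A Pearson r near zero would support the "different dimension" observation.
    for name, values in other_metrics.items():
        r = np.corrcoef(per_spindle_xfers, values)[0, 1]
        print(f"correlation with {name}: {r:+.2f}")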

The full ESRP report went out to our newsletter subscribers last month, and a copy will be up on the dispatches page of our website later this month. However, you can get this information now, and receive future full reports even earlier, by subscribing to our newsletter via email.

As always, we welcome any suggestions on how to improve our analysis of ESRP or any of our other storage system performance results.  This new chart was a result of one such suggestion.

ESRP results over 5K mbox – chart of the month

ESRP Results, over 5K mailbox, normalized (per 5Kmbx) read and write DB transfers as of 30 October 2009

In our quarterly study on Exchange Solution Reviewed Program (ESRP) results we show a number of charts to get a good picture of storage subsystem performance under Exchange workloads. The two that are of interest to most data centers are the normalized and un-normalized database transfer (DB xfer) charts. The problem with un-normalized DB xfer charts is that the subsystem supporting the largest mailbox count normally shows up best, and the rest of the results are highly correlated with mailbox count. In contrast, the normalized view of DB xfers tends to discount high mailbox counts and shows a more even-handed view of performance.
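
For reference, the normalization behind the chart simply scales each submission’s aggregate DB transfers to a common 5,000-mailbox basis, so a 5,400-mailbox configuration and a 100,000-mailbox configuration can sit on the same axis. A minimal sketch of that scaling (the submissions and figures are illustrative only, not actual results):

    # Normalize aggregate database transfers to a per-5,000-mailbox basis.
    # The submissions listed are illustrative placeholders, not actual results.
    NORMALIZATION_BASE = 5000  # mailboxes

    submissions = [
        {"system": "midrange array, 5,400 mailboxes",   "mailboxes": 5400,
         "db_xfers_per_sec": 1800.0},
        {"system": "high-end array, 100,000 mailboxes", "mailboxes": 100000,
         "db_xfers_per_sec": 21000.0},
    ]

    for s in submissions:
        normalized = s["db_xfers_per_sec"] * NORMALIZATION_BASE / s["mailboxes"]
        print(f'{s["system"]}: {normalized:.0f} DB transfers/sec per 5K mailboxes')

The same scaling is also what can make two essentially identical un-normalized results rank differently once their mailbox counts differ, which is the Pillar caveat discussed below.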

Above we show a normalized view of the ESRP results for this category that were available as of last month. A couple of caveats are warranted here:

  • Normalized results don’t necessarily scale – results shown in the chart range from 5,400 mailboxes (#1) to 100,000 mailboxes (#6). While normalization should allow one to see what a storage subsystem could do at any mailbox count, it is highly unlikely that one would configure the HDS AMS2100 to support 100,000 mailboxes, and equally unlikely that one would configure the HDS USP-V to support 5,400 mailboxes.
  • The higher mailbox-count results tend to cluster when normalized – with over 20,000 mailboxes, one can no longer use just one big Exchange server, and with multiple servers driving a single storage subsystem, results tend to shrink when normalized. So one should probably compare like mailbox counts rather than depend on normalization alone to iron out the mailbox-count differences.

There are a number of storage vendors in this Top 10, but no standouts: the midrange systems from HDS, HP, and IBM hold down the top 5, while the high-end subsystems from EMC, HDS, and 3PAR own the bottom 5 slots.

However, Pillar is fairly unusual in that their 8.5Kmbx result came in at #4 while their 12.8Kmbx result came in at #8, even though the un-normalized results for these two submissions appear exactly the same. This brings up yet another caveat: when running two benchmarks with the same system, normalization may show a difference where none exists.

The full report on the latest ESRP results will be up on our website later this month, but if you want this information earlier and would like to receive your own copy of our newsletter, just subscribe by emailing us.