Microsoft ESRP database transfer performance by storage interface – chart of the month

The above chart was included in our Microsoft Exchange Solution Reviewed Program (ESRP) performance report, which went out in our e-newsletter at the end of July. ESRP reports on a number of metrics, but one of the more popular is total (reads + writes) Exchange database transfers per second.

Categories reported on in ESRP include: over 5,000 mailboxes; 1,001 to 5,000 mailboxes; and 1,000 and under mailboxes. For the above chart we created our own category using all submissions up to 10,000 mailboxes. We then grouped the data by the storage interface between the host Exchange servers and the storage, and only included ESRP reports that used 10K RPM disk drives.

We then plotted the number of drives used in an ESRP submission on the x-axis against the total database transfers achieved on the y-axis, and added a linear regression line for each of the data groups. The slope of each regression line (see the regression formulas) gives us a relative drive-efficiency metric for each group: database transfers per second per drive.
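The drive-efficiency fit can be sketched in a few lines. This is an illustrative example with made-up data points, assuming a through-origin regression (y = slope · x), which matches the form of the slope figures quoted on the chart; it is not the actual ESRP data or our analysis code:

```python
import numpy as np

def fit_through_origin(drives, transfers):
    """Least-squares fit of y = slope * x (no intercept).
    Returns (slope, r_squared). The slope is the drive-efficiency
    metric: database transfers per second per drive."""
    x = np.asarray(drives, dtype=float)
    y = np.asarray(transfers, dtype=float)
    slope = np.sum(x * y) / np.sum(x * x)      # closed-form through-origin fit
    ss_res = np.sum((y - slope * x) ** 2)      # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return slope, r2

# Hypothetical submissions (drive counts, total DB transfers/sec) --
# NOT actual ESRP numbers:
drives = [24, 48, 96]
transfers = [2500, 5100, 10000]
slope, r2 = fit_through_origin(drives, transfers)
print(f"drive efficiency ~ {slope:.1f} transfers/sec per drive, R^2 = {r2:.2f}")
```

A fit like this is what produces the per-interface slopes and R² values discussed below.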

Note: these are disk-drive-only systems and we are measuring database transfer activity. We have similar charts for 15K RPM drives and for other ESRP activity (log transfers and database backup), but 10K RPM drives seem more pertinent these days, as 15K RPM drives are being displaced.

SAS does best …

Surprisingly, the SAS storage interface performed best for ESRP database IO against the number of drives used, with a regression slope of ~104.9 database transfers per second per drive; FC (~61.7) came in second and iSCSI (~46.0) came in last.

A couple of caveats are in order to put the above chart into the proper context:

  • There are only 3 SAS, 6 FC and 6 iSCSI interface ESRP submissions with 10K RPM drives and 10K mailboxes or under. So the results are preliminary at best, until more 10K RPM ESRP submissions at 10K mailboxes and under are available.
  • While the SAS and iSCSI coefficients of determination (R^2), at ~0.61 and ~0.75 respectively, are reasonable, the FC coefficient, at ~0.31, is not that great. This means the FC ESRP submissions show a lot of variability in database transfers per second against the number of disk drives, which may be confounding the (statistical) results.
  • The chart plots all current ESRP data for these submissions. Some (FC) submissions are much older than others (SAS), and more recent systems tend to be more efficient with disk drive use.
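The R^2 caveat above can be made concrete: with the same kind of through-origin fit, widely scattered points drive R^2 down sharply even when the slope looks similar. Both data sets here are made up for illustration, not actual ESRP submissions:

```python
import numpy as np

def r_squared(drives, transfers):
    """R^2 of a through-origin fit y = slope * x (illustrative helper)."""
    x = np.asarray(drives, dtype=float)
    y = np.asarray(transfers, dtype=float)
    slope = np.sum(x * y) / np.sum(x * x)
    ss_res = np.sum((y - slope * x) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical data sets (drive counts, total DB transfers/sec):
tight   = ([20, 40, 60, 80], [1300, 2550, 3900, 5100])  # low scatter
scatter = ([20, 40, 60, 80], [2600, 1800, 5500, 3600])  # high scatter

print(f"tight R^2 = {r_squared(*tight):.2f}, "
      f"scattered R^2 = {r_squared(*scatter):.2f}")
```

A low R², as with the FC group, means the fitted slope summarizes the submissions much less reliably.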

All that aside, what the chart tells us is that for ESRP total database IO activity (relatively random 4KB reads/writes), SAS storage interfaces are more efficient (with respect to disk drives used) than FC or iSCSI.

… but why SAS?

The question that needs to be asked is: why does SAS do better? FC beating iSCSI is understandable, given FC's lighter protocol stack versus iSCSI over Ethernet, but where does SAS fit in?

I think direct access to a SAS JBOD provides some inherent advantages (less protocol overhead, no hops, no RAID overhead) that works well for smaller systems (under 10K mailboxes) but may not work as well for bigger environments that require more databases, servers, caching, IO, HA, etc.

Want more?

The July 2016 and our other ESRP reports have much more information on ESRP performance. Moreover, there’s a lot more performance information, covering email and other (OLTP and throughput intensive) block storage workloads, in our SAN Storage Buying Guide, available for purchase on our website. More information on file and block protocol/interface performance is included in SCI’s SAN-NAS Buying Guide, also available from our website.


The complete ESRP performance report went out in SCI’s July 2016 Storage Intelligence e-newsletter. A copy of the report will be posted on our SCI dispatches (posts) page over the next quarter or so (if all goes well). However, you can get the latest storage performance analysis now, and subscribe to future free SCI Storage Intelligence e-newsletters, by using the signup form in the sidebar or by subscribing here.

As always, we welcome any suggestions or comments on how to improve our ESRP performance reports or any of our other storage performance analyses.