Latest ESRP results for 1K and under mailboxes – chart of the month

SCIESRP120724(004) (c) 2012 Silverton Consulting, All Rights Reserved

The above chart is from our July newsletter Exchange Solution Reviewed Program (ESRP) performance analysis for 1,000-and-under mailbox submissions. I have always liked response times as they seem to be mostly the result of tight engineering, coding and/or system architecture.  Exchange response times represent a composite of how long it takes to do a database transaction (whether a read, write or log write).  Latencies are measured at the application (Jetstress) level.

On the chart we show the top 10 database read response times for this class of storage.  We assume that DB reads are a bit more important than writes or log activity, but they are all probably important.  As such, we also show the response times for DB writes and log writes, but the ranking is based on DB reads alone.

In the chart above, I am struck by the variability in write and log write performance.  Writes range anywhere from ~8.6 msec down to almost 1 msec. The extreme variability here raises a bunch of questions.  My guess is that the wide variability signals something about caching; whether it’s cache size, cache sophistication or drive destage effectiveness is hard to say.

Why EMC seems to dominate DB read latency in this class of storage is also interesting. EMC’s Celerra NX4, VNXe3100, CLARiiON CX4-120, CLARiiON AX4-5i, Iomega ix12-300 and VNXe3300 placed in the top 6 slots, respectively.  They all had a handful of disks (4 to 8), mostly 600GB or larger and used iSCSI to access the storage.  It’s possible that EMC has a great iSCSI stack, better NICs or just better IO scheduling. In any case, they have done well here at least with read database latencies.  However, their write and log latency was not nearly as good.

We like ESRP because it simulates a real application that’s pervasive in the enterprise today, i.e., email.  As such, it’s less subject to gaming, and typically shows a truer picture of multi-faceted storage performance.

~~~~

The complete ESRP performance report with more top 10 charts went out in SCI’s July newsletter.  But a copy of the report will be posted on our dispatches page sometime next month (if all goes well).  However, you can get the ESRP performance analysis now and subscribe to future free newsletters by just using the signup form above right.

For a more extensive discussion of current SAN block system storage performance covering SPC (Top 30) results as well as ESRP results with our new ChampionsChart™ for SAN storage systems, please see SCI’s SAN Storage Buying Guide available from our website.

As always, we welcome any suggestions or comments on how to improve our analysis of ESRP results or any of our other storage performance analyses.


Latest ESRP 1K-5K mailbox DB xfers/sec/disk results – chart-of-the-month

(SCIESRP120429-001) 2012 (c) Silverton Consulting, All Rights Reserved

The above chart is from our April newsletter on Microsoft Exchange 2010 Solution Reviewed Program (ESRP) results for the 1,000 (actually 1001) to 5,000 mailbox category.  We have taken the database transfers per second, normalized them for the number of disk spindles used in the run and plotted the top 10 in the chart above.
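The normalization itself is straightforward; a minimal sketch in Python, using made-up submission figures rather than the actual chart data:

```python
# Normalize ESRP database transfers/sec by spindle count and rank the
# results, as done for the chart.  The submission figures below are
# illustrative placeholders, not the actual chart data.
submissions = [
    {"system": "System A", "db_xfers_per_sec": 5500.0, "spindles": 48},
    {"system": "System B", "db_xfers_per_sec": 1320.0, "spindles": 12},
    {"system": "System C", "db_xfers_per_sec": 2800.0, "spindles": 80},
]

for s in submissions:
    s["xfers_per_spindle"] = s["db_xfers_per_sec"] / s["spindles"]

# Highest normalized rate first, then keep the top 10 for the chart.
top10 = sorted(submissions, key=lambda s: s["xfers_per_spindle"],
               reverse=True)[:10]
for rank, s in enumerate(top10, start=1):
    print(f'#{rank} {s["system"]}: {s["xfers_per_spindle"]:.1f}')
```

Note how the ranking can differ markedly from a raw transfers/sec ranking: a small-spindle-count system can outrank a much bigger one on this measure.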

A couple of caveats first: we only chart disk-only systems in this and similar charts on disk-spindle performance. Although it probably doesn’t matter as much at this mid-range level, in other categories SSDs or flash cache can be used to support much higher performance on a per-spindle measure like the one above.  As such, submissions with SSDs or flash cache are strictly eliminated from these spindle-level performance analyses.

Another caveat, specific to this chart, is that ESRP database transaction rates are somewhat driven by the Jetstress parameters (specifically the simulated IO rate) used during the run.  For this mid-level category, this parameter can range from a low of 0.10 to a high of 0.60 simulated IO operations per second per mailbox, with a median of ~0.19.  But what I find very interesting is that in the plot above we have both the lowest rate (0.10 in #6, Dell PowerEdge R510 1.5Kmbx) and the highest (0.60 for #9, HP P2000 G3 10GbE iSCSI MSA 3.2Kmbx).  So the simulated IO rate doesn’t seem to matter much on this per-spindle metric.
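To see just how much this one parameter swings the simulated load, here is the arithmetic (total simulated IO/sec = mailboxes × IO/sec/mailbox) for the two extremes cited above; a quick sketch, not figures taken directly from the ESRP reports:

```python
# Total simulated IO load implied by the Jetstress IO/sec/mailbox setting
# for the two extremes cited above.  Simple arithmetic for illustration.
def total_simulated_iops(mailboxes, io_per_sec_per_mailbox):
    return mailboxes * io_per_sec_per_mailbox

dell_r510 = total_simulated_iops(1500, 0.10)  # lowest rate in the top 10
hp_p2000 = total_simulated_iops(3200, 0.60)   # highest rate in the top 10
print(dell_r510, hp_p2000)  # ~150 vs ~1,920 simulated IO/sec
```

Despite a nearly 13X difference in total simulated load, both systems still made the top 10 on the per-spindle metric.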

That being said, I always find it interesting that the database transactions per second per disk spindle varies so widely in ESRP results.  To me this says that storage subsystem technology, firmware and other characteristics can still make a significant difference in storage performance, at least in Exchange 2010 solutions.

Often we see spindle count and storage performance as highly correlated. This is definitely not the case for mid-range ESRP results (although that’s a different chart than the one above).

Next, disk speed (RPM) can have a high impact on storage performance, especially for OLTP-type workloads that look somewhat like Exchange.  However, in the above chart the middle 4 and last one (#4-7 & 10) used 10Krpm (#4,5) or slower disks.  So disk speed doesn’t seem to determine Exchange database transactions per second per spindle either.

Thus, I am left with my original thesis that storage subsystem design and functionality can make a big difference in storage performance, especially for ESRP in this mid-level category.  The range in the top 10 contenders, spanning from ~35 (Dell PowerEdge R510) to ~110 (Dell EqualLogic PS Server), speaks volumes on this issue: a multiple of over 3X from bottom to top on this measure.  In fact, the overall range (not shown in the chart above) spans from ~3 to ~110, a factor of almost 37 times from worst to best performer.

Comments?

~~~~

The full ESRP 1K-5K mailbox performance report went out in SCI’s April newsletter.  But a copy of the full report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the full ESRP performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.

For a more extensive discussion of current SAN or block storage performance covering SPC-1 (top 30), SPC-2 (top 30) and all three levels of ESRP (top 20) results, please see SCI’s SAN Storage Buying Guide available on our website.

As always, we welcome any suggestions or comments on how to improve our analysis of ESRP results or any of our other storage performance analyses.


Latest Microsoft ESRP v3 (Exchange 2010) 1K to 5K mailbox performance results – chart of the month

SCIESRP110726-004 (c) 2011 Silverton Consulting, All Rights Reserved

Microsoft specifies two different metrics on sequential read rates for database backup activity in their Exchange Solution Reviewed Program (ESRP) reports:

  • MB read/sec per database
  • MB read/sec total per server

Our problem with these metrics is that they don’t say much about the storage system’s performance.  Some ESRP submissions could have a single database while others can have 100s of databases.  And the same thing applies to servers, although 20 servers seems to be about the max we have seen.  So, as one can see, MB/s/DB or MB/s/server can vary all over the place depending on the Exchange configuration that one uses, even for the exact same storage system.

In the above chart, we have attempted to move beyond some of these problems and use the information supplied in the ESRP reports to aggregate DB backups across all databases.  As such, we have derived a new metric called “total database backup”.  (Pretty simple actually: just multiply the MB/s/DB by the number of databases in the Exchange configuration.)
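A sketch of the derivation, using a hypothetical submission’s figures (the 40.0 MB/sec/DB and 28 databases below are placeholders for illustration, not from any actual report):

```python
# "Total database backup" metric: MB/sec per database, as reported in an
# ESRP submission, times the number of databases in that configuration.
# The 40.0 MB/sec/DB and 28 databases are hypothetical placeholder values.
def total_db_backup(mb_per_sec_per_db, num_databases):
    return mb_per_sec_per_db * num_databases

print(total_db_backup(40.0, 28))  # 1120.0 MB/sec aggregate backup rate
```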

A couple of problems with our approach:

  • Current ESRP reports typically utilize a shadow storage system and shadow Exchange servers which host 50% of the databases and email activity. So what I am showing for those ESRP reports is what two storage systems can accomplish not one.
  • Another potential way to get the same result would be to use the number of servers times the MB/sec/server metric. (But try as I might, these two approaches didn’t produce the same answer, so I am using the computation above – it must be the way I am recording the number of [shadow] servers.)
  • Although ESRP reports the average MB/sec/database to backup a single database it’s not clear that these measurements were taken while backing up all active databases at the same time, especially for those submissions with 100s of databases.

Probably the last is the most problematic critique of our new measure, but it may not be that harmful for smaller configurations. Nonetheless, we produced the above chart and published it in last month’s review of ESRP results for the 1001 to 5000 mailbox category.

One item we discussed in our report was that numbers of disk drives didn’t seem to correlate well with high positions on this chart.  The number ten position (Fujitsu ETERNUS JX40) used over 140 disks, the number two position (Dell PowerEdge R510) had only 12 disk drives, and the number one solution (HP E5700) consisted of 56 drives, close to the average for this category.

One striking finding using this measure is that performance varies considerably from the top providing over 1600 MB/sec of database backup to the lowest of the group providing only ~800 MB/sec of backup performance. What with Exchange 2010 and lagged DAGs, some people feel that backup activity is no longer needed but we would disagree. We continue to believe that taking backups of Exchange data still makes a whole lot of sense and shouldn’t go away, ever.

It’s our hope that this or some similar follow-on metric will remove some of the Exchange configuration parameters from confounding ESRP reported storage system performance results.  We realize that this quixotic quest may never be entirely successful; nevertheless, we perform this duty in the hope that it will benefit today’s and future storage performance analysts everywhere.

Comments?

—–

The full ESRP report went out to our newsletter subscribers last month.  A copy of the full report will be up on the dispatches page of our website later next month. However, you can get this information now and subscribe to future newsletters to receive these reports even earlier by just emailing us at SubscribeNews@SilvertonConsulting.com or using the signup form above and to the right.

As always, we welcome any suggestions on how to improve our analysis of ESRP or any of our other storage system performance discussions.

Recent ESRP v3.0 (Exchange 2010) performance results – chart of the month

SCIESRP110127-003 (c) 2011 Silverton Consulting, Inc., All Rights Reserved

We return to our monthly examination of storage performance, and this month’s topic is Exchange 2010 performance, comparing results from the latest ESRP v3.0 (Exchange Solution Reviewed Program).  This latest batch is for the 1K-and-under mailbox category and the log playback chart above represents the time it takes to process a 1MB log file and apply it to the mailbox database(s).  Data for this report is taken from Microsoft’s ESRP v3.0 website published results and this chart is from our latest storage intelligence performance dispatch sent out in our free monthly newsletter.

Smaller is better on the log playback chart.  As one can see, it takes just under 2.4 seconds for the EMC Celerra NX4 to process a 1MB log file, whereas it takes over 7.5 seconds on an EMC Iomega IX12-300r storage subsystem.  To provide some perspective, in the next larger category, for storage supporting 1K-to-5K mailboxes, the top 10 log playback times range from ~0.3 to ~4.5 seconds.  As such, the Celerra NX4 system and the other top four subsystems here would be in the top 10 log playback times for that category as well.
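For additional perspective, these playback times can be inverted into an effective throughput on the fixed 1MB log file; a quick sketch using the approximate chart readings above:

```python
# Invert log playback time into an effective throughput on the fixed
# 1MB log file.  Times are approximate readings off the chart.
LOG_FILE_MB = 1.0

def playback_throughput(seconds_per_log):
    return LOG_FILE_MB / seconds_per_log

print(round(playback_throughput(2.4), 2))  # EMC Celerra NX4: 0.42 MB/sec
print(round(playback_throughput(7.5), 2))  # EMC Iomega IX12-300r: 0.13 MB/sec
```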

Why log playback

I have come to believe that log playback is an important metric in Exchange performance, for mainly one reason: it’s harder to game using Jetstress parameterization.  For instance, with Jetstress one must specify how much IO activity is generated on a per mailbox basis, thus generating more or fewer requests for email database storage. Such specifications will easily confound storage performance metrics such as database accesses/second when comparing storage. But with log playback, that parameter is immaterial and every system has the same 1MB sized log file to process as fast as possible, i.e., it has to be read and applied to the configured Exchange database(s).

One can certainly still use a higher performing storage system, and/or throw SSD, more drives or more cache at the problem to gain better storage performance but that also works for any other ESRP performance metric.  But with log playback, Jetstress parameters are significantly less of a determinant of storage performance.

In the past I have favored database access latency charts for posts on Microsoft Exchange performance, but there appears to be much disagreement as to the efficacy of that metric in comparing storage performance (e.g., see the 30+ comments on one previous ESRP post).  I still feel that latency is an important metric and one that doesn’t highly depend on the Jetstress IO/sec/mailbox parameter, but log playback is even more immune to that parameter and so should be less disputable.

Where are all the other subsystems?

You may notice that there are fewer than 10 subsystems on the chart. These six are the only subsystems that have published results in this 1K-and-under mailbox category.  One hopes that the next time we review this category there will be more subsystem submissions available to discuss here.  Please understand, ESRP v3.0 was only a little over a year old when our briefing came out.

—-

The full performance dispatch will be up on our website after month end but if one needs to see it sooner, please sign up for our free monthly newsletter (see subscription widget, above right) or subscribe by email and we’ll send you the current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of block storage performance please consider purchasing SCI’s SAN StorInt Briefing also available from our website.

As always, we welcome any constructive suggestions on how to improve any of our storage performance analyses.

Comments?

Latest ESRPv3 (Exchange 2010) results analysis for 1K-to-5Kmailboxes – chart of the month

(c) 2010 Silverton Consulting, Inc., All Rights Reserved

The chart is from SCI’s October newsletter/performance dispatch on Exchange 2010 Solution Reviewed Program (ESRP v3.0) and shows the mailbox database access latencies for read, write and log write.  For this report we are covering solutions supporting from 1001 up to 5000 mailboxes (1K-to-5Kmbx), larger and (a few) smaller configurations have been covered in previous performance dispatches.  On latency charts like this – lower is better.

We like this chart because in our view this represents a reasonable measure of email user experience.  As users read and create new emails they are actually reading Exchange databases and writing database and logs.  Database and log latencies should show up as longer or shorter delays in these activities.  (Ok, not exactly true, email client and Exchange server IO aren’t the same thing.  But ultimately every email sent has to be written to an Exchange database and log sometime and every new email read-in has to come from an Exchange database as well).

A couple of caveats are in order for this chart.

  • Xiotech’s top run (#1) did not use database redundancy or DAGs (Database Availability Groups) in their ESRPv3 run. Their feeling is that this technology is fairly new and it will take some time before it’s widely adopted.
  • There is quite the mix of SAS (#2,3,6,7,9&10), FC (#1,5&8) and iSCSI (#4) connected storage in this mailbox range.  Some would say that SAS connected storage should have an advantage here but that’s not obvious from the rankings.
  • Vendors get to select the workload intensity for any ESRPv3/Jetstress run, e.g. the solutions shown here used between 0.15 IO/sec/mailbox (#9&10) and 0.36 IO/sec/mailbox (#1).  IO intensity is just one of the myriad of Jetstress tweakable parameters that make analyzing ESRP so challenging.  Normally this would only matter with database and log access counts but heavier workloads can also impact latencies as well.

Wide variance between read and write latencies

The other thing of interest in this chart is the wide span between read latencies and write (database and log) latencies for the same solution. Take the #10 Dell PowerEdge system for example.  It showed a database read latency of ~18msec. but a database write latency of ~0.4msec.  Why?

It turns out this Dell system had only 6 disk drives (2TB/7200 RPM).  So few disk drives seem inadequate to support the read workload and, as a result, show up poorly in database read latencies.  However, write activity can mostly be masked with cache until it fills up, forcing write delays.  With only 1100 mailboxes and 0.15 IOs/sec/mailbox, the write workload apparently fits in cache well enough to be destaged over time, without delaying ongoing write activity.  Similar results appear for the other Dell PowerEdge (#6) and the HP Smart Array (#7), which had 12-2TB/7200 RPM and 24-932GB/7200 RPM drives respectively.
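A back-of-the-envelope check on why so small a drive count can still show sub-millisecond write latency; the mailbox count and IO rate come from the submission above, but the 50% write fraction is purely an assumption for illustration:

```python
# Rough load estimate for the #10 Dell PowerEdge run: with so few total
# IOs/sec, cache can plausibly absorb the write stream and destage it
# at leisure.  The write fraction is an assumed value, not from the report.
mailboxes = 1100
io_per_sec_per_mailbox = 0.15          # from the submission discussed above
assumed_write_fraction = 0.5           # assumption, not from the ESRP report

total_iops = mailboxes * io_per_sec_per_mailbox
write_iops = total_iops * assumed_write_fraction
print(round(total_iops), round(write_iops))  # ~165 total, ~82 write IO/sec
```

A write load that small is easily buffered by even a modest controller cache, which is consistent with the ~0.4msec write latency observed.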

On the other hand, Xiotech’s #1 position had 20-360GB/15Krpm drives and EMC’s Celerra #4 run had 15-400GB/10Krpm drives, both of which were able to sustain a more balanced performance across reads and writes (database and logs).  For Xiotech’s #5 run they used 40-500GB/10Krpm drives.

It seems there is a direct correlation between drive speed and read database latencies.  Most of the systems in the bottom half of this chart have 7200 RPM drives (except for #8, HP StorageWorks MSA) and the top 3 all had 15Krpm drives.  However, write latencies don’t seem to be as affected by drive speed and have more to do with the balance between workload, cache size and effective destaging.

The other thing that’s apparent from this chart is that SAS connected storage continues to be an effective solution for this range of Exchange configurations, following a trend first shown in ESRP v2 (Exchange 2007) results.  We reported on this in our January ESRPv2 analysis dispatch earlier this year.

The full dispatch will be up on our website in a couple of weeks but if you are interested in seeing it sooner just sign up for our free newsletter (see upper right) or subscribe by email and we will send you the current issue with download instructions for this and other reports.

As mentioned previously ESRP/Jetstress results are difficult to compare/analyze and we continue to welcome any constructive suggestions on how to improve.

Exchange 2010/ESRP 3.0 results – chart of the month

(c) 2010 Silverton Consulting, Inc., All Rights Reserved

Well, after last month’s performance reversals and revelations, we now return to the more typical review of the latest Exchange Solution Reviewed Program (ESRP 3.0) for Exchange 2010 results.  Microsoft’s new Exchange 2010 has substantially changed the efficiency and effectiveness of Exchange database I/O.  This will necessitate a new round of ESRP results for all vendors to once again show how many Exchange 2010 mail users their storage can support.  IBM was the first vendor to take this on with their XIV and SVC results.  But within the last quarter EMC and HP also submitted results.  This marks our first blog review of ESRP 3.0 results.

We show here a chart on database latency for current ESRP 3.0 results.  The three lines for each subsystem show the latency in milliseconds for an ESE database read, database write and log write.  In prior ESRP reviews, one may recall that write latency was impacted by the Exchange redundancy in use.  In this chart all four subsystems were using database availability group (DAG) redundancy, so write activity should truly show subsystem overhead and not redundancy options.

It’s unclear why IBM’s XIV showed up so poorly here.  The HP EVA 8400 is considered a high-end subsystem but all the rest are midrange.  If one considers the drives being used, the HP used 15Krpm FC disk drives, the SVC used 15Krpm SAS drives and both the CLARiiON and the XIV used 7.2Krpm SATA drives.  That still doesn’t explain the poor showing.

Of course the XIV had the heaviest user mail workload at 40,000 user mailboxes being simulated and it did perform relatively better from a normalized database transactions perspective (not shown).  Given all this perhaps this XIV submission was intended to show the top end of what the XIV could do from a mailbox count level rather than latency.

Which points up one failing in our analysis. In past ESRP reviews we have always split results into one of three categories: <1Kmbx, 1001..5Kmbx, and >5Kmbx.  As ESRP 3.0 is so new there are only 4 results to date and, as such, we have focused only on “normalized” quantities in our full newsletter analysis and here.  We believe database latency should not “normally” be impacted by the count of mail users being simulated and must say we are surprised by the XIV’s showing because of this.  But in all fairness, it sustained 8.80 times the workload that the CLARiiON did.

Interpreting ESRP 3.0 results

As discussed above, all 4 tested subsystems were operating with database availability group (DAG) redundancy and, as such, 1/2 of the simulated mail user workload was actually being executed on a subsystem while the other 1/2 was being executed as if it were a DAG copy being updated on the subsystem under test.  For example, the #1 HP EVA configuration requires 2-8400s to sustain a real 9K mailbox configuration with DAG in operation.  Such a configuration would support 2 mailbox databases (with 4500 mailboxes each), one active mailbox database residing on each 8400 and the inactive copy of this database residing on its brethren.  (Naturally, the HP ESRP submission also supported VSS shadow copies for the DAGs which added yet another wrinkle to our comparisons.)

A couple of concerns simulating DAGs in this manner:

  • Comparing DAG and non-DAG ESRP results will be difficult at best.  It’s unclear to me whether all future ESRP 3.0 submissions will be required to use DAGs or not.  But if not, comparing DAG to non-DAG results will be almost meaningless.
  • Vendors could potentially perform ESRP 3.0 tests with less server and storage hardware. By using DAGs, the storage under test need only endure 1/2 the real mail server I/O workload and 1/2 a DAG copy workload.  The other half of this workload simulation may not actually be present as it’s exactly equivalent to the first workload.
  • Hard to determine if all the hardware was present or only half.  It’s unclear from a casual skimming of the ESRP report whether all the hardware was tested or not.
  • 1/2 the real mail server I/O is not the same as 1/2 the DAG copy workload. As such, it’s unclear whether 1/2 the proposed configuration could actually sustain a non-DAG version of an equivalent user mailbox count.

All this makes for exciting times in interpreting current and future ESRP 3.0 results.  Look for more discussion on future ESRP results in about a quarter from now.

As always if you wish to obtain a free copy of our monthly Storage Intelligence newsletter please drop us a line. The full report on ESRP 3.0 results will be up on the dispatches section of our website later this month.

ESRP 1K to 5Kmbox performance – chart of the month

ESRP 1001 to 5000 mailboxes, database transfers/second/spindle

One astute reader of our performance reports pointed out that some ESRP results could be skewed by the number of drives that are used during a run.  So, we included a database transfers per second per spindle chart in our latest Exchange Solution Reviewed Program (ESRP) report on 1001 to 5000 mailboxes in our latest newsletter.  The chart shown here is reproduced from that report and shows the overall database transfers per second per spindle attained (total of reads and writes) for the top 10 storage subsystems reporting in the latest ESRP results.

This cut of the system performance shows a number of diverse systems:

  • Some storage systems had 20 disk drives and others had 4.
  • Some of these systems were FC storage (2), some were SAS attached storage (3), but most were iSCSI storage.
  • Mailbox counts supported by these subsystems ranged from 1400 to 5000 mailboxes.

What’s not shown is the speed of the disk spindles. Also none of these systems are using SSD or NAND cards to help sustain their respective workloads.

A couple of surprises here:

  • We expected iSCSI systems to show up much worse than FC storage. True, the number 1 system (NetApp FAS2040) is FC while numbers 2 & 3 are iSCSI, but the differences are not that great.  It would seem that protocol overhead is not a large determinant in spindle performance for ESRP workloads.
  • The number of drives used also doesn’t seem to matter much.  The FAS2040 had 12 spindles while the AX4-5i only had 4.  Although this cut of the data should minimize drive count variability, one would think that more drives would result in higher overall performance for all drives.
  • The top performance here approaches the limit of just what a 15Krpm drive can sustain.  No doubt some of this is helped by system caching, but no amount of cache can hold all the database write and read data for the duration of a Jetstress run.  It’s still pretty impressive, considering typical 15Krpm drives (e.g., Seagate 15K.6) can probably do ~172 random 64Kbyte block IOs/second. The NetApp FAS2040 hit almost 182 database transfers/second/spindle, perhaps not 64Kbyte blocks and maybe not completely random, but impressive nonetheless.
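A quick sanity check of that per-spindle result against the raw drive estimate quoted above:

```python
# Compare the FAS2040's per-spindle result with the quoted estimate of
# what a single 15Krpm drive can sustain on random 64KB IOs.
drive_limit_iops = 172       # est. for a 15Krpm drive (e.g., Seagate 15K.6)
fas2040_per_spindle = 182    # database transfers/sec/spindle from the chart

ratio = fas2040_per_spindle / drive_limit_iops
print(f"{ratio:.2f}x the raw drive estimate")  # slightly above 1x: caching helps
```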

The other nice thing about this metric, is that it doesn’t correlate that well with any other ESRP metrics we track, such as aggregate database transfers, database latencies, database backup throughput etc. So it seems to measure a completely different dimension of Exchange performance.

The full ESRP report went out to our newsletter subscribers last month and a copy of the report will be up on the dispatches page of our website later this month. However, you can get this information now and subscribe to future newsletters to receive future full reports even earlier, just subscribe by email.

As always, we welcome any suggestions on how to improve our analysis of ESRP or any of our other storage system performance results.  This new chart was a result of one such suggestion.