Latest ESRP results for 1K and under mailboxes – chart of the month

SCIESRP120724(004) (c) 2012 Silverton Consulting, All Rights Reserved

The above chart is from our July newsletter's Exchange Solution Reviewed Program (ESRP) performance analysis for submissions of 1,000 mailboxes and under. I have always liked response times because they seem to be mostly the result of tight engineering, coding and/or system architecture. Exchange response times represent a composite of how long it takes to complete a database transaction (whether a read, write or log write), and latencies are measured at the application (Jetstress) level.

On the chart we show the top 10 database read response times for this class of storage. We assume that DB reads matter a bit more than writes or log activity, though all three are important. As such, we also show the response times for DB writes and log writes, but the ranking is based on DB reads alone.

In the chart above, I am struck by the variability in write and log write performance. Write latencies range anywhere from ~8.6 msec down to almost 1 msec. Such extreme variability raises a number of questions. My guess is that it signals something about caching, though whether it's cache size, cache sophistication or drive destage effectiveness is hard to say.

Why EMC seems to dominate DB read latency in this class of storage is also interesting. EMC's Celerra NX4, VNXe3100, CLARiiON CX4-120, CLARiiON AX4-5i, Iomega ix12-300 and VNXe3300 placed in the top 6 slots, respectively. They all had a handful of disks (4 to 8), mostly 600GB or larger, and used iSCSI to access the storage. It's possible that EMC has a great iSCSI stack, better NICs or just better IO scheduling. In any case, they have done well here, at least with database read latencies. However, their write and log write latencies were not nearly as good.

We like ESRP because it simulates a real application that’s pervasive in the enterprise today, i.e., email.  As such, it’s less subject to gaming, and typically shows a truer picture of multi-faceted storage performance.

~~~~

The complete ESRP performance report, with more top 10 charts, went out in SCI's July newsletter. A copy of the report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the ESRP performance analysis now, and subscribe to future free newsletters, by using the signup form above right.

For a more extensive discussion of current SAN block system storage performance covering SPC (Top 30) results as well as ESRP results with our new ChampionsChart™ for SAN storage systems, please see SCI’s SAN Storage Buying Guide available from our website.

As always, we welcome any suggestions or comments on how to improve our analysis of ESRP results or any of our other storage performance analyses.


Latest SPECsfs2008 results – chart of the month

(SCISFS110318-003) (c) 2011 Silverton Consulting, Inc., All Rights Reserved

The above chart comes from last month's newsletter on the latest SPECsfs2008 file system performance benchmark results and depicts a scatter plot of system NFS throughput operations per second versus the number of disk drives in the system being tested. We eliminated from this chart any system that makes use of Flash Cache/SSDs or any other performance use of NAND (see below on why SONAS was still included).

One constant complaint about benchmarks is that system vendors can just throw hardware at the problem to attain better results. The scatter plot above is one attempt to get at the truth in that complaint.

The regression equation shows that NFS throughput operations per second = 193.68 * (number of disk drives) + 23,834. The coefficient of determination (R²) is 0.87, which is pretty good but not perfect. So given these results, one would have to conclude there is some truth to the complaint, but it doesn't tell the whole story (regardless of how much it pains me to admit it).
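To make the fitted line concrete, here is a minimal Python sketch that encodes the regression quoted above and checks it against the IBM SONAS data point discussed below; the function and constant names are purely illustrative, not part of any published tooling.

```python
# Sketch of the fitted SPECsfs2008 regression quoted above:
#   NFS throughput ops/sec = 193.68 * (number of disk drives) + 23,834   (R^2 = 0.87)
# Names here are illustrative only.

SLOPE = 193.68      # incremental NFS ops/sec per additional disk drive
INTERCEPT = 23834   # ops/sec intercept of the fitted line

def predicted_nfs_ops(disk_drives: int) -> float:
    """Predict NFS throughput (ops/sec) from drive count using the fitted line."""
    return SLOPE * disk_drives + INTERCEPT

# Sanity check against the IBM SONAS result discussed below (1,975 drives):
print(f"{predicted_nfs_ops(1975):,.0f} ops/sec")  # ~406K predicted vs. ~403K measured
```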

A couple of other interesting things about the chart:

  • IBM released a new SONAS benchmark with 1,975 disks, 16 interface nodes and 10 storage nodes to attain its 403K NFS ops/second. Now, the SONAS had 512GB of NV Flash, which I assume is being used for redundancy on writes rather than to speed up read activity. Also, the SONAS system complex had over 2.4TB of cache (which includes the NV Flash), so there was a lot of cache to throw at the problem.
  • HP BL860c results were from a system with 1,480 drives, 4 nodes (blades) and ~800GB of cache to attain its 333K NFS ops/second.

(Aside: we probably need to do a chart like this with the amount of cache as the x variable.)

In the same report we talked about the new #1-performing EMC VNX Gateway, which used 75TB of SAS SSDs and 4 VNX5700s as its backend and was able to reach 497K NFS ops/sec. It doesn't show up on this chart because of its extensive use of SSDs. But according to the equation above, one would need ~2,500 disk drives (and, I believe, a whole lot of cache) to attain similar performance without SSDs.
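For reference, that ~2,500-drive figure falls straight out of inverting the fitted line; here is an illustrative Python sketch (the helper name is made up for this example):

```python
# Invert the fitted line to estimate drives needed for a target NFS ops/sec:
#   drives ≈ (target ops/sec − 23,834) / 193.68

SLOPE = 193.68
INTERCEPT = 23834

def drives_needed(target_ops: float) -> float:
    """Estimate disk drives required to reach target_ops per the regression."""
    return (target_ops - INTERCEPT) / SLOPE

print(round(drives_needed(497_000)))  # ~2,443 drives, i.e. roughly 2,500
```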

—-

The full performance dispatch will be up on our website after the middle of next month (I promise). If you are interested in seeing it sooner, sign up for our free monthly newsletter (see the subscription request, above right) or subscribe by email, and we will send the current issue along with download instructions for this and other reports. If you need an even more in-depth analysis of NAS system performance, please consider purchasing SCI's NAS Buying Guide, also available from our website.

As always, we welcome any constructive suggestions on how to improve any of our storage performance analyses.

Comments?