Top Ten RayOnStorage Posts for 2012

Here are the top 10 blog posts for 2012 from RayOnStorage.com

1. Snow Leopard to Mountain Lion

We discuss our Mac OS X transition from Snow Leopard to Mountain Lion, with the good, bad, and ugly of Mountain Lion from a novice user’s perspective.

2. vSphere 5.1 storage enhancements and future vision

We detail some of the storage enhancements and future directions for the latest revision of VMware vSphere, 5.1.

3. Object Storage Summit wrap up

We discuss last month’s ExecEvent Object Storage Summit and some of the use cases driving customers to adopt object storage in their data centers.

4. EMCWorld2012 part 1 – VNX/VNXe

We analyze the first day of EMCWorld2012, which focused on EMC’s VNX/VNXe product enhancements.

5. Dell Storage Forum 2012 – day 2

We discuss the new Compellent and FluidFS systems coming out of Dell Storage Forum, along with their latest acquisition, RNA Networks, which enables a coherent Flash Cache network.

6. EMC buys XtremIO

Right before EMCWorld2012, EMC announced its purchase of XtremIO, which had been rumored for some time and signaled a new path to flash-only SAN storage systems.

7. HDS Influencer Summit wrap up

HDS held their Influencer Summit last month and rolled out their executive team to talk about their storage and service directions and successes.

8. Oracle finally releases StorageTek VSM6

After much delay, we finally get to see the latest-generation Virtual Storage Manager 6 (VSM6) for the System z mainframe marketplace.

9. Coraid, first thoughts

We got to meet with Coraid as part of a Storage Tech Field Day event and came away impressed but still wanting to learn more.

10. Latest SPC-1 results IOPS vs. drive counts – chart of the month

Every month (or so) we do a more detailed analysis of a chart that appears in our free monthly newsletter. This one, done earlier in the year, documented the correlation between IOPS and drive counts in SPC-1 results.

Happy New Year.

Latest SPC-1 results – IOPS vs drive counts – chart-of-the-month

[Figure: Scatter plot of SPC-1 IOPS against spindle count, with linear regression line Y = 186.18X + 10227 and R**2 = 0.96064 (SCISPC111122-004) (c) 2011 Silverton Consulting, All Rights Reserved]

[As promised, I am trying to get up to date on my performance charts from our monthly newsletters. This one brings us current through November.]

The above chart plots Storage Performance Council SPC-1 IOPS against spindle count.  On this chart, we have eliminated any SSD systems, systems with drives smaller than 140 GB and any systems with multiple drive sizes.

Alas, the coefficient of determination (R**2) of 0.96 tells us that SPC-1 IOPS performance is mainly driven by drive count. But what’s more interesting here is that as drive counts climb above, say, 1000, the variance around the linear regression line widens, implying that system sophistication starts to matter more.
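To make the fitted line concrete, here is a minimal sketch in Python that applies the chart’s regression coefficients to a few hypothetical drive counts (the drive counts below are ours, chosen purely for illustration):

```python
# Regression line from the SPC-1 scatter plot above:
#   IOPS = 186.18 * (drive count) + 10227, with R**2 = 0.96
SLOPE, INTERCEPT = 186.18, 10227

def predicted_iops(drive_count: int) -> float:
    """Predict SPC-1 IOPS from spindle count using the chart's fitted line."""
    return SLOPE * drive_count + INTERCEPT

# Hypothetical drive counts, for illustration only
for drives in (250, 500, 1000, 2000):
    print(f"{drives:>5} drives -> ~{predicted_iops(drives):,.0f} IOPS")
```

Of course, given the widening variance above ~1000 drives, these point predictions become less trustworthy exactly where system sophistication starts to matter.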

Processing power matters

For instance, if you look at the three systems centered around 2000 drives, they are (from lowest to highest IOPS) a 4-node IBM SVC 5.1, a 6-node IBM SVC 5.1, and an 8-node HP 3PAR V800 storage system. This tells us that the more processing (nodes) you throw at an IOPS workload with a similar spindle count, the more IOPS you can extract from those spindles.

System sophistication can matter too

The other interesting facet on this chart comes from examining the three systems centered around 250K IOPS that span from ~1150 to ~1500 drives.

  • The 1156 drive system is the latest HDS VSP 8-VSD (virtual storage directors, or processing nodes) running with dynamically (thinly) provisioned volumes – which is the first and only SPC-1 submission using thin provisioning.
  • The 1280 drive system is a (now HP) 3PAR T800 8-node system.
  • The 1536 drive system is an IBM SVC 4.3 8-node storage system.

One would think that thin provisioning would degrade storage performance, and maybe it did, but without a non-dynamically provisioned HDS VSP benchmark to compare against, it’s hard to tell. However, the fact that the HDS VSP performed as well as the other systems with much lower drive counts seems to tell us that either thin provisioning uses hard drives more efficiently than fat provisioning, the 8-VSD HDS VSP is more effective than an 8-node IBM SVC 4.3 or an 8-node (HP) 3PAR T800 system, or perhaps some combination of these.

~~~~

The full SPC performance report went out to our newsletter subscribers last November. [The one change to this chart from the full report is that the date in the chart’s title was wrong and is fixed here.] A copy of the full report will be up on the dispatches page of our website sometime this month (if all goes well). However, you can get performance information now and subscribe to future newsletters to receive these reports even earlier by sending us an email or using the signup form above right.

For a more extensive discussion of block or SAN storage performance covering SPC-1&-2 (top 30) and ESRP (top 20) results, please consider purchasing our recently updated SAN Storage Buying Guide, available on our website.

As always, we welcome any suggestions on how to improve our analysis of SPC results or any of our other storage system performance discussions.

Comments?

SCI May 2011, Latest SPC-1 results IOPS vs. drive count – chart of the month

SCISPC110527-004 (c) 2011 Silverton Consulting, Inc., All Rights Reserved

The above chart is from our May Storage Intelligence newsletter dispatch on system performance and shows the latest Storage Performance Council SPC-1 benchmark results in a scatter plot, with IO/sec [or IOPS(tm)] on the vertical axis and number of disk drives on the horizontal axis. We have tried to remove all results that used NAND flash as a cache or as SSDs. Also, this displays only results below $100/GB.

One negative view of benchmarks such as SPC-1 is that published results are almost entirely due to the hardware thrown at them, or in this case, the number of disk drives (or SSDs) in the system configuration. An R**2 of 0.93 shows a pretty good correlation of IOPS performance against disk drive count and would seem to bear this view out, but that is an incorrect interpretation of the results.

Just look at the wide variation beyond the 500-drive count versus below it, where there are only a few outliers and a much narrower variance. As such, we would have to say that up to some point (around 500 drives), most storage systems can attain a reasonable rate of IOPS as a function of the number of spindles present, but beyond that point the relationship starts to break down. There are certainly storage systems above the 500-drive level that perform much better than average for their drive configuration and some that perform worse.
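One could test this breakdown directly by fitting the line and comparing residual spread on either side of the 500-drive mark. Below is a minimal sketch in Python; the three large configurations are taken from the discussion that follows, while the smaller points are made up purely for illustration:

```python
import numpy as np

# (drive count, IOPS) points: the last three come from the systems
# discussed below; the smaller ones are made up for illustration.
drives = np.array([100, 200, 300, 400, 600, 1150, 2050, 2050])
iops   = np.array([25e3, 45e3, 62e3, 80e3, 110e3, 300e3, 315e3, 380e3])

# Fit IOPS = slope * drives + intercept, then examine the residuals
slope, intercept = np.polyfit(drives, iops, 1)
residuals = iops - (slope * drives + intercept)

# Compare residual spread below vs. above the 500-drive mark
low, high = drives <= 500, drives > 500
print(f"residual std, <=500 drives: {residuals[low].std():,.0f} IOPS")
print(f"residual std,  >500 drives: {residuals[high].std():,.0f} IOPS")
```

On the real SPC-1 data, the second number should come out much larger, which is exactly the widening variance described above.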

For example, consider the triangle formed by the three best-performing (IOPS) results on this chart. The one at ~300K IOPS with ~1150 disk drives is from Huawei Symantec and is their 8-node Oceanspace S8100 storage system, whereas another system with similar IOPS performance, at ~315K IOPS, used ~2050 disk drives and is a 4-node IBM SVC (5.1) system with DS8700 backend storage. In contrast, the highest performer on this chart, at ~380K IOPS, also had ~2050 disk drives and is a 6-node IBM SVC (5.1) with DS8700 backend storage.

Given the above analysis there seems to be much more to system performance than merely disk drive count, at least at the over 500 disk count level.

~~~~

The full performance dispatch will be up on our website after the middle of next month, but if you are interested in viewing it today, please sign up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we’ll send you the current issue. If you need a more extensive analysis of SAN storage performance, please consider purchasing SCI’s SAN Storage Briefing.

As always, we welcome all constructive suggestions on how to improve any of our storage performance analyses.

Comments?


SCI’s latest SPC-1&-1/E LRT results – chart of the month

(c) 2010 Silverton Consulting, Inc., All Rights Reserved

It’s been a while since we reported on Storage Performance Council (SPC) Least Response Time (LRT) results (see Chart of the month: SPC LRT[TM]).  This is one of the charts we produce for our monthly dispatch on storage performance (quarterly report on SPC results).

Since our last blog post on this subject there have been 6 new entries in the LRT Top 10 (#3-6 & 9-10). As can be seen in this chart, which combines SPC-1 and 1/E results, response times vary considerably. 7 of these top 10 LRT results come from subsystems which either have all SSDs (#1-4, 7 & 9) or have a large NAND cache (#5). The newest members on this chart were the NetApp FAS3270A and the Xiotech Emprise 5000 with 300GB disk drives, both published recently.

The NetApp FAS3270A, a mid-range subsystem with 1TB of NAND cache (512GB in each controller), seemed to do pretty well here, with four all-SSD systems doing better than it and a pair of all-SSD systems doing worse. Coming in under 1msec LRT is no small feat. We are certain the NAND cache helped NetApp achieve its superior responsiveness.

What the Xiotech Emprise 5000-300GB storage subsystem is doing here is another question. They have always done well on an IOPS/drive basis (see SPC-1&-1/E results IOPs/Drive – chart of the month) but placing in the top ten for LRT had not previously been their forte. How one coaxes a 1.47 msec LRT out of a 20-drive system that costs only ~$41K, roughly 12X less than the ~$509K median price of the other subsystems here, is a mystery. Of course, they were using RAID 1, but so were half of the subsystems on this chart.

It’s nice to see some turnover in this top 10 LRT list. I still contend that response time is an important performance metric for many storage workloads (see my IO throughput vs. response time and why it matters post), and this improvement over time validates my thesis. Also, I received many comments discussing the merits of database latencies for ESRP v3 (Exchange 2010) results (see my Microsoft Exchange Performance ESRP v3.0 results – chart of the month post). You can judge the results of that lengthy discussion for yourselves.

The full performance dispatch will be up on our website in a couple of weeks, but if you are interested in seeing it sooner, just sign up for our free monthly newsletter (see upper right) or subscribe by email and we will send you the current issue with download instructions for this and other reports.

As always, we welcome any constructive suggestions on how to improve our storage performance analysis.

Comments?