QoM1610: Will NVMe over Fabric GA in enterprise AFA by Oct’2017

NVMe over fabric (NVMeoF) was a hot topic at Flash Memory Summit last August. Facebook and others were showing off their JBOF (see my Facebook moving to JBOF post), but there were plenty of other NVMeoF offerings at the show.

NVMeoF hardware availability

When Brocade announced their Gen6 switches, they made a point of saying that both their Gen5 and Gen6 switches currently support NVMeoF protocols. In addition to Brocade’s support, QLogic announced support for NVMeoF for select HBAs in Dec 2015. Also, as of July 2016, Emulex announced support for NVMeoF in their HBAs.

From an Ethernet perspective, QLogic has an NVMe Direct NIC which supports NVMe protocol offload for iSCSI. But even without NVMe Direct, 40GbE & 100GbE Ethernet with iWARP, RoCE v1/v2, or iSER (iSCSI Extensions for RDMA) could readily support NVMeoF. The nice thing about Ethernet is that the same fabric that carries NVMeoF can also carry iSCSI & FCoE, as well as CIFS/SMB and NFS.

InfiniBand and Omni-Path Architecture already support native RDMA, so they should already support NVMeoF.

So the hardware/firmware is already available for any enterprise AFA customer that wants NVMeoF for their data center storage.

NVMeoF Software

Intel claims that ~90% of the NVMe software driver functionality is the same for NVMeoF. The primary differences between the two seem to be the NVMeoF discovery and queueing mechanisms.

There are two fabric methods that can be used to implement NVMeoF data and command transfers: capsule mode, where NVMe commands and data are encapsulated in normal fabric packets, or fabric-dependent mode, where drivers make use of native fabric memory transfer mechanisms (RDMA, …) to transfer commands and data.
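To make the distinction concrete, here is a minimal sketch contrasting the two approaches (my own illustration, not spec-accurate wire format): in capsule mode the 64-byte NVMe submission queue entry and any inline data travel together in one fabric message, while in the fabric-dependent (RDMA) case the capsule carries only a memory descriptor and the fabric moves the data itself.

```python
# Illustrative sketch only -- field layouts are simplified, not the actual
# NVMe over Fabrics wire format.
import struct

SQE_SIZE = 64  # an NVMe submission queue entry is 64 bytes

def build_sqe(opcode: int, cid: int) -> bytes:
    """Build a (grossly simplified) submission queue entry."""
    sqe = struct.pack("<BBH", opcode, 0, cid)   # opcode, flags, command id
    return sqe.ljust(SQE_SIZE, b"\x00")         # pad out to 64 bytes

def capsule_mode(opcode: int, cid: int, data: bytes) -> bytes:
    """Capsule mode: command + inline data ride in one fabric packet."""
    return build_sqe(opcode, cid) + data

def fabric_dependent_mode(opcode: int, cid: int, rkey: int, addr: int, length: int) -> bytes:
    """Fabric-dependent mode: the capsule carries only an RDMA descriptor
    (remote key, address, length); the fabric transfers the data itself."""
    return build_sqe(opcode, cid) + struct.pack("<IQI", rkey, addr, length)

if __name__ == "__main__":
    payload = b"A" * 4096
    print(len(capsule_mode(0x01, 7, payload)))                            # 64 + 4096 bytes on the wire
    print(len(fabric_dependent_mode(0x02, 8, 0x1234, 0xdeadbeef, 4096)))  # 64 + 16 bytes
```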

A (Linux) host driver for NVMeoF is currently available from Seagate. And as a result, support for NVMeoF for Linux is currently under development and not far from release in the next kernel (I think). (Mellanox has a tutorial on how to compile a Linux kernel with NVMeoF driver support.)

With Linux coming out, Microsoft Windows and VMware can’t be far behind. However, I could find nothing online, aside from base NVMe support, for either platform.

NVMeoF target support is another matter but with NICs/HBAs & switch hardware/firmware and drivers presently available, proprietary storage system target drivers are just a matter of time.

Boot support is another question. I could find no information on BIOS support for booting off of an NVMeoF AFA. Arguably, one may not need boot support for NVMeoF AFAs, as they are probably not a viable target for storing app code or OS software.

From what I could tell, normal fabric multi-pathing support should work fine with NVMeoF. This should allow for HA NVMeoF storage, a critical requirement for enterprise AFA storage systems these days.

NVMeoF advantages/disadvantages

Chelsio and others have shown that NVMeoF adds only ~8μsec of overhead beyond native NVMe SSD access, which, if true, would warrant implementation on all NVMe AFAs. It may or may not impact max IOPS, depending on the scalability of NVMeoF.

For instance, servers (PCIe bus hardware) typically limit the number of private NVMe SSDs to 255 or fewer. With NVMeoF, one could potentially have 1000s of shared NVMe SSDs accessible to a single server. At this scale, a single server attached to a scale-out NVMeoF AFA (cluster) could see ~4X the IOPS it could get from private NVMe storage.
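A quick back-of-the-envelope sketch of both points, using assumed numbers (the ~8μsec fabric overhead from above, plus a hypothetical 100μsec native NVMe read latency and a hypothetical 200K IOPS per SSD):

```python
# Back-of-the-envelope only: per-SSD latency and IOPS figures below are
# assumptions for illustration, not measured values.
FABRIC_OVERHEAD_US = 8        # ~8 usec added by NVMeoF (per Chelsio et al.)
NATIVE_READ_LATENCY_US = 100  # assumed native NVMe 4K read latency
IOPS_PER_SSD = 200_000        # assumed per-SSD random read IOPS

def fabric_latency(native_us: float) -> float:
    """Latency seen over the fabric = native latency + fabric overhead."""
    return native_us + FABRIC_OVERHEAD_US

def backend_iops(ssd_count: int) -> int:
    """Aggregate raw IOPS the backend could supply (ignoring controller limits)."""
    return ssd_count * IOPS_PER_SSD

print(f"latency: {NATIVE_READ_LATENCY_US} -> {fabric_latency(NATIVE_READ_LATENCY_US)} usec "
      f"(+{FABRIC_OVERHEAD_US / NATIVE_READ_LATENCY_US:.0%})")
print(f"private NVMe, 255 SSDs:   {backend_iops(255):,} IOPS available")
print(f"shared NVMeoF, 1000 SSDs: {backend_iops(1000):,} IOPS available "
      f"(~{backend_iops(1000) / backend_iops(255):.1f}x)")
```

With these assumed figures, 1000 shared SSDs versus 255 private ones works out to roughly the ~4X IOPS headroom mentioned above, at the cost of only a few percent more latency per IO.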

Base level NVMe SSD support and protocol stacks are starting to become available from most flash vendors and for operating systems such as Linux, FreeBSD, VMware, Windows, and Solaris. If Intel’s claim of 90% common software between NVMe and NVMeoF drivers is true, then providing host NVMeoF drivers should be a relatively easy development project.

The need for special Ethernet hardware that supports RDMA may delay some storage vendors from implementing NVMeoF AFAs quickly. The lack of BIOS boot support may be a minor irritant in comparison.

NVMeoF forecast

AFA storage systems, as far as I can tell, are all about selling high IOPS and very-low latency IOs. It would seem that NVMeoF would offer early adopter AFA storage vendors a significant performance advantage over slower paced competition.

In previous QoM/QoW posts we have established that there are about 13 new enterprise storage systems that come out each year. Probably 80% of these will be AFA, given the current market environment.

Of the ~10.4 AFA systems coming out over the next year, ~20% pride themselves on being the lowest latency solutions in the market, and thus command high margins. One would think these systems would be the first to adopt NVMeoF. But most of them use their own proprietary flash modules, with their own proprietary interfaces, rather than standard (NVMe) SSDs. This will delay any NVMeoF implementation until they can convert their flash storage to NVMe, which may take some time.

On the other hand, most (~70%) of the other AFA systems, which currently use SAS/SATA SSDs, could boost their IOPS and drastically reduce their IO response times by implementing NVMe SSDs and NVMeoF. But converting SAS/SATA backends to NVMe will take time and effort.

But there are a select few (~10%) of AFA systems that already use NVMe SSDs, and these few would seem to have a fast track towards implementing NVMeoF. The fact that NVMeoF is supported over all fabrics and all storage interface protocols makes it even easier.
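Pulling those percentages together, here is a quick sketch of the segment counts (using the figures above; the rounding is mine):

```python
# Segment counts implied by the percentages above (illustration only).
new_systems_per_year = 13
afa_share = 0.80

afa_systems = new_systems_per_year * afa_share          # ~10.4 AFAs per year
low_latency = afa_systems * 0.20    # proprietary flash modules, slower to convert
sas_sata    = afa_systems * 0.70    # SAS/SATA SSD backends, need NVMe conversion
nvme_ready  = afa_systems * 0.10    # already on NVMe SSDs, fast track to NVMeoF

print(f"new AFAs/year: {afa_systems:.1f}")
print(f"  low-latency proprietary flash: {low_latency:.1f}")
print(f"  SAS/SATA SSD based:            {sas_sata:.1f}")
print(f"  already NVMe SSD based:        {nvme_ready:.1f}")
```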

Moreover, NVMeoF has been under discussion since the summer of 2015, which tells me that astute AFA vendors have already had 18+ months to develop it. And with NVMeoF host drivers & hardware available since Dec 2015, there has been hardware and software to test and validate against.

I believe that NVMeoF will be GA’d within the next 12 months by at least one enterprise AFA system. So my QoM1610 forecast for NVMeoF is YES, with a 0.83 probability.

Comments?


What’s next for Nexenta

We talked with Nexenta at Storage Field Day 6 where they discussed their current and future software defined storage solutions. I highly encourage you to see the SFD6 videos of their sessions if you want to learn more about them.

Nexenta was an early adopter of software defined storage and has recently signed with Solinea to support Nexenta under OpenStack Cinder block storage. Nexenta is based on ZFS and supports inline deduplication and advanced performance functionality.

NexentaStor™

NexentaStor™ is their base storage software and comes as a download in both an Enterprise edition and a Community edition. NexentaStor can run on most industry standard, x86 server platforms.

  • The Community edition supports up to 18TB and uses DAS and/or SAS connected storage to supply NFS and SMB file services.
  • The Enterprise edition extends capacity into the PB and supports FC and iSCSI block storage services as well as file services. The Enterprise edition supports plugins for HA solutions and storage replication.

Nexenta mentioned that they had over 6500 customers for NexentaStor, of which 1500 are cloud service providers. But they have a whole lot more to offer than just NexentaStor, including NexentaConnect™ and, coming soon, NexentaEdge™ and NexentaFusion™.

NexentaConnect™

NexentaConnect software works with VMware or Citrix solutions to provide advanced storage services, such as file services, IO acceleration, and storage automation/analytics. There are three products in the NexentaConnect family:

  • NexentaConnect for VMware Virtual SAN – by combining NexentaConnect with VMware Virtual SAN software and DAS or SAS storage, one can offer NFS and SMB/CIFS file services. Prior to NexentaConnect, VMware Virtual SAN storage only provided VMware-dedicated SAN storage, but now that same infrastructure can be used for any NFS or SMB/CIFS file system storage.
  • NexentaConnect for VMware Horizon – by combining NexentaConnect with VMware Horizon and DAS plus local SSD storage, one can provide accelerated virtual desktop IO with state of the art write logging, inline deduplication, and GUI based storage automation/analytics.
  • NexentaConnect for Citrix XenDesktop (in beta now) – by combining NexentaConnect with Citrix XenDesktop software and DAS plus local SSD storage, one can accelerate XenDesktop IO and ease the management of XenDesktop storage.

Nexenta has teamed up with Dell to offer a Dell-Nexenta (and VMware) storage solution using NexentaConnect and VMware Virtual SAN software on Dell hardware.

NexentaEdge™

They spent a lot of time on NexentaEdge, a planned software defined object storage solution. Most object storage systems on the market either started as software only or currently support a software-only version. But Nexenta is the first I know of to come at it from a file services heritage.

NexentaEdge will offer iSCSI services as well as standard object storage services such as Amazon S3 and OpenStack SWIFT. Their solution splits up objects into chunks and replicates/distributes the object chunks across their software defined (object) storage cluster.

Cluster communications use UDP (not TCP) and so have less overhead. NexentaEdge uses its own Replicast protocol to send messages and data out across the cluster.
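As a rough illustration of the split-and-distribute idea, here is a toy sketch (my own, not Nexenta’s actual Replicast or chunk placement logic; the chunk size and replica count are assumptions): objects get cut into fixed-size chunks and each chunk is placed on several nodes keyed off its content hash.

```python
# Illustrative only: fixed-size chunking plus hash-based replica placement.
# Not NexentaEdge's actual algorithm; chunk size and replica count are assumed.
import hashlib

CHUNK_SIZE = 1 << 20   # assume 1MiB chunks
REPLICAS = 3           # assume 3 copies of each chunk

def split_into_chunks(obj: bytes):
    return [obj[i:i + CHUNK_SIZE] for i in range(0, len(obj), CHUNK_SIZE)]

def place_chunk(chunk: bytes, nodes: list) -> list:
    """Pick REPLICAS distinct nodes for a chunk, keyed off its content hash."""
    digest = int.from_bytes(hashlib.sha256(chunk).digest()[:8], "big")
    start = digest % len(nodes)
    return [nodes[(start + r) % len(nodes)] for r in range(REPLICAS)]

if __name__ == "__main__":
    cluster = [f"node{i}" for i in range(8)]
    obj = b"x" * (3 * CHUNK_SIZE + 1234)          # a ~3MiB object
    for n, chunk in enumerate(split_into_chunks(obj)):
        print(f"chunk {n}: {len(chunk)} bytes -> {place_chunk(chunk, cluster)}")
```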

They designed NexentaEdge to be able to support Shingled Magnetic Recording (SMR) disks, which are very dense storage but occasionally have to go “away” while they perform garbage collection/re-organization. I did two posts about SMR disks a while back (see Shingled magnetic recording disks and Sequential-only disk for more information on SMR).

I have to admit I had a BIG problem with support for iSCSI over eventually consistent storage. I don’t see how this can be used to support ACID database requests but I suppose Nexenta would argue that anyone using object storage for ACID database IO needs to have their head examined.

NexentaFusion™

Although this was not discussed as much, NexentaFusion is another future offering supplying software defined storage analytics and orchestration automation. The intent is to use NexentaFusion with NexentaStor, NexentaConnect and/or NexentaEdge. As you scale up your Nexenta storage cluster, automation/orchestration and storage analytics become a more pressing need. According to Nexenta’s website, NexentaFusion 1.0 will support multi-tenant storage monitoring and real time storage analytics, while NexentaFusion 2.0 will support storage provisioning and orchestration.

~~~~

Nexenta provided Converse all-star shoes to all the participants as well as pens and notebooks. I had to admit I liked the look of the new tennis shoes but my wife and kids thought I was crazy.

Different views on Nexenta from the other SFD6 bloggers can be found below:

SFD6 – Day 2 – Nexenta from PenguinPunk (Dan Firth, @PenguinPunk)

Nexenta – Back in da house by Nigel Poulton (@NigelPoulton)

Sorry Nexenta, but I don’t get it … and questions arise by Juku (Enrico Signoretti, @ESignoretti)

Day 2 at SFD6: Nexenta by Absolutely Windows (John Obeto, @JohnObeto)

SNWUSA Spring 2013 summary

For starters, the parties were a bit more subdued this year, although I heard Wayne’s suite was hopping until 4am last night (not that I would ever be there that late).

And a trend seen the past couple of years was even more evident this year: many large vendors and vendor spokespeople went missing. I heard that there were a lot more analyst presentations this SNW than at prior ones, although it was hard to quantify. But it did seem that the analyst community was pulling double duty in presentations.

I would say that SNW still provides a good venue for storage marketing across all verticals. But these days many large vendors find success elsewhere, leaving SNW Expo mostly to smaller vendors and niche products. Nonetheless, there were a few big vendors (Dell, Oracle and HP) still in evidence. But EMC, HDS, IBM and NetApp were not showing on the floor.

I would have to say the theme for this year’s SNW was hybrid storage. Last fall the products that impressed me were either cloud storage gateways or all flash arrays; this year there weren’t as many of those at the show, but hybrid storage certainly has arrived.

Best hybrid storage array of the show

It’s hard to pick just one hybrid storage vendor as my top pick, especially since there were at least 3 others talking to me under NDA, but from my perspective the hybrid vendor of the show had to be Tegile (pronounced, I think, as te’-jile). They seemed to have a fully functional system with snapshot, thin provisioning, deduplication and pretty good VMware support (the only time I have heard a vendor talk about VASA “stun” support for thin provisioned volumes).

They made the statement that SSD in their system is used as a cache, not a tier. This use is similar to NetApp’s FlashCache and has proven to be a particularly well performing approach to hybrid storage. (For more information on that, take a look at some of my NFS and recent SPC-1 benchmark review dispatches.) How well this is integrated with their home-grown dedupe logic is another question.

On the negative side, they seem to be lacking a true HA/dual controller version, but could use two separate systems with sync (I think) replication between them to cover this ground? They also claimed their dedupe had no performance penalty, a pretty bold claim that cries out for some serious lab validation and/or benchmarking to prove. They also offer an all flash version of their storage (but then how can it be used as a cache?).

The marketing team seemed pretty knowledgeable about the market, and they seem to be going after the mid-range storage space.

The product supports file (NFS and CIFS/SMB), iSCSI and FC with GigE, 10GbE and 8Gbps FC. They quote “effective” capacities with dedupe enabled but it can be disabled on a volume basis.

Overall, I was impressed by their marketing and the product (what little I saw).

Best storage tool of the show

Moving on to other product categories, it was hard to see anything that caught my eye. Perhaps I have just been to too many storage conferences, but I did get somewhat excited when I looked at SwiftTest. Essentially they offer an application profiling, storage modeling, and workload generating tool set.

The team seems to be branching out of their traditional vendor market focus and going after large service providers and F100 companies with large storage requirements.

Way back, when I was in Engineering, we were always looking for information on how customers actually used storage products. Well, what SwiftTest has is an appliance to instrument your application environment (through network taps/hardware port connections), monitor your storage IO and create a statistical operational profile of your I/O environment. You can then take that profile and play it against a storage configuration model to show how well it’s likely to perform. And if that’s not enough, the same appliance can be used to drive a simulated version of the operational profile back onto a storage system.

It offers NFS (v2, v3, v4), CIFS/SMB (SMB1, SMB2, SMB3), FC, iSCSI, and HTTP/REST (what, no FCoE?). They mentioned an $80K price tag for the base appliance (one protocol?), but the price grows pretty fast if you want all of them. They also seem to have three levels of appliances (my guess is more performance and more protocols come with the bigger boxes).

Not sure where they top out, but simulating an operational profile can be quite complex, especially when you have to control data patterns to match the deduplication potential in customer data, drive Markov chains with probability representations of operational profiles, and actually execute the IO operations. They said something about their ports having dedicated CPU cores to ensure adequate performance, or something similar, but it still seems too little to hit high IO workloads.
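To give a feel for what driving “Markov chains with probability representations of operational profiles” can mean in practice, here is a toy sketch (my own, not SwiftTest’s implementation): a small state machine whose transition probabilities would come from a captured profile, emitting a stream of IO operation types.

```python
# Toy Markov-chain workload generator (illustration only, not SwiftTest code).
# States are IO operation types; the transition probabilities below are
# assumptions standing in for a captured operational profile.
import random

TRANSITIONS = {
    "read":     {"read": 0.70, "write": 0.20, "metadata": 0.10},
    "write":    {"read": 0.50, "write": 0.40, "metadata": 0.10},
    "metadata": {"read": 0.60, "write": 0.20, "metadata": 0.20},
}

def next_op(current: str) -> str:
    ops, probs = zip(*TRANSITIONS[current].items())
    return random.choices(ops, weights=probs)[0]

def generate_workload(length: int, start: str = "read"):
    op = start
    for _ in range(length):
        yield op
        op = next_op(op)

if __name__ == "__main__":
    stream = list(generate_workload(10_000))
    for op in ("read", "write", "metadata"):
        print(f"{op}: {stream.count(op) / len(stream):.1%}")
```

A real tool would of course also have to control data content (for dedupe/compression realism) and actually issue the IOs at rate, which is where the hard engineering lives.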

Like I said, when I was in engineering we were searching for this type of solution back in the late 90s, and we would have probably bought it in a moment had it been available.

GoDaddy.com, the domain/web site services provider, was one of their customers that used the appliance to test storage configurations. They presented at SNW on some of their results, but I missed their session (the case study is available on SwiftTest’s website).

~~~~

In short, SNW had a diverse mixture of end-user customers but lacked a full complement of vendors to show off to them. The ratio of vendors to customers has definitely shifted to end-users the last couple of years and, if anything, has gotten more skewed to end-users (which paradoxically should appeal to more storage vendors?!).

I talked with lots of end-users, from companies like FedEx, Northrop Grumman and AOL to name just a few big ones. But there were plenty of smaller ones as well.

The show lasted three days and had sessions scheduled all the way to the end. I was surprised at the length and the fact that it started on Tuesday rather than Monday as in years past.  Apparently, SNIA and Computerworld are still tweaking the formula.

It seemed to me there were more cancelled sessions than in years past but again this was hard to quantify.

Some of the customers I talked with thought SNW should go to once a year and can’t understand why it’s still twice a year. Many mentioned VMworld as having taken the place of SNW as a showplace for storage vendors of all sizes and styles. That and the vendor specific shows from EMC, IBM, Dell and others.

The fall show is moving to Long Beach, CA. Probably, a further experiment to find a formula that works.  Let’s hope they succeed.

Comments?


Fall SNWUSA wrap-up

Attended SNWUSA this week in San Jose. It’s hard to see the show gradually change when you attend each one, but it does seem that the end-user content and attendance is increasing proportionally. This should bode well for future SNWs. There has always been a good number of end users at the show, but the bulk of the attendees in the past were from storage vendors.

Another large storage vendor dropped their sponsorship. HDS no longer sponsors the show, and the last large vendor still standing at the show is HP. Some of this is cyclical; perhaps the large vendors will come back for the spring show, next year in Orlando, FL. But EMC, NetApp and IBM seem to have pretty much dropped sponsorship for the last couple of shows at least.

SSD startup of the show

Skyhawk hardware (c) 2012 Skyera, all rights reserved (from their website)

The best new SSD startup had to be Skyera: a 48TB raw flash, dual controller system supporting the iSCSI block protocol and using real commercial-grade MLC. The team at Skyera seem to be all ex-SandForce executives and technical people.

Skyera’s team have designed a 1U box called the Skyhawk, with a phalanx of NAND chips, their own controller(s) and other logic as well. They support software compression and deduplication, as well as a specially designed RAID logic that they claim reduces extraneous writes to something just over 1 for RAID 6 (dual drive failure) equivalent protection.
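To see why “just over 1” extra write per host write is even plausible with dual-parity protection, here is a back-of-the-envelope sketch (my own arithmetic, not Skyera’s actual RAID design): if the controller accumulates writes and always destages full RAID 6 stripes, the overhead is just the two parity chunks spread across a wide stripe, versus the classic read-modify-write penalty for small in-place updates.

```python
# Back-of-the-envelope RAID 6 write overhead (illustration, not Skyera's design).
def full_stripe_write_amp(data_drives: int, parity_drives: int = 2) -> float:
    """Log-structured, full-stripe writes: amplification = (data+parity)/data."""
    return (data_drives + parity_drives) / data_drives

def small_write_ops(parity_drives: int = 2) -> int:
    """Classic RAID 6 small write: write the data block plus each parity block."""
    return 1 + parity_drives   # 3 physical writes per host write (plus reads)

for d in (8, 14, 20):
    print(f"{d}+2 stripe, full-stripe writes: {full_stripe_write_amp(d):.2f}x")
print(f"in-place small writes: {small_write_ops()}x writes per host write")
```

With a wide enough stripe, full-stripe destaging gets you to roughly 1.1-1.25x, i.e., “just over 1”, whereas in-place RAID 6 updates cost 3 writes per host write.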

Skyera’s underlying belief is that, just as consumer HDDs took over from the big monster 14″ and 11″ disk drives in the ’90s, sooner or later commercial NAND will take over from eMLC and SLC. And if one elects to stay with eMLC and SLC technology, you are destined to be one to two technology nodes behind. That is, commercial MLC (in USB sticks, SD cards, etc.) is currently manufactured with 19nm technology, while eMLC and SLC NAND technology is back at 24 or 25nm. But 80-90% of the NAND market is being driven by commercial MLC NAND. Skyera came out this past August.

Coming in second place was Arkologic, an all-flash NAS box using SSD drives from multiple vendors. In their case a fully populated rack holds about 192TB (raw?) with an active-passive controller configuration. The main concern I have with this product is that all their metadata is held in UPS-backed DRAM (??), and they have up to 128GB of DRAM in the controller.

Arkologic’s main differentiation is supporting QoS on a file system basis and having some connection with a NIC vendor that can provide end-to-end QoS. The other thing they have is a new RAID-AS, which is specially designed for flash.

I just hope their UPS is pretty hefty and they don’t sell it someplace where power is very flaky, because when that UPS gives out, kiss your data goodbye, as your metadata is held nowhere else – at least that’s what they told me.

Cloud storage startup of the show

There was more cloud stuff going on at the show. I talked to at least three or four cloud gateway providers. But the cloud startup of the show had to be Egnyte. They supply storage services that span cloud storage and on-premises storage with an in-band or out-of-band solution, and provide file synchronization services for file sharing across multiple locations. They have some hooks into NetApp and other major storage vendor products that allow them to be out-of-band for these environments, but they would need to be in-band for other storage systems. It seems an interesting solution that, if successful, may help accelerate the adoption of cloud storage in the enterprise, as it makes transparent whether the storage you access is local or in the cloud. How they deal with the response time differences is another question.

Different idea startup of the show

The new technology showplace had a bunch of vendors, some I had never heard of before, but one that caught my eye was Actifio. They were at VMworld but I never got time to stop by. They seem to be taking another shot at storage virtualization. Only in this case, rather than focusing on non-disruptive file migration, they are taking on the task of doing a better job of point-in-time copies for iSCSI and FC attached storage.

I assume they are in the middle of the data path in order to do this and they seem to be using copy-on-write technology for point-in-time snapshots.  Not sure where this fits, but I suspect SME and maybe up to mid-range.

Most enterprise vendors solved these problems a long time ago, but at the low end it’s a little more variable. I wish them luck, but although most customers use snapshots if their storage has them, those that don’t seem unable to understand what they are missing. And then there’s the matter of being in the data path?!

~~~~

If there was a hybrid startup at the show I must have missed them. Did talk with Nimble Storage and they seem to be firing on all cylinders.  Maybe someday we can do a deep dive on their technology.  Tintri was there as well in the new technology showcase and we talked with them earlier this year at Storage Tech Field Day.

The big news at the show was Microsoft purchasing StorSimple, a cloud storage gateway/cache. Apparently StorSimple did a majority of their business with Microsoft’s Azure cloud storage, and the acquisition seemed to make sense to everyone.

The SNIA suite was hopping as usual and the venue seemed to work well, although I would say the exhibit floor and lab area were a bit too big. But everything else seemed to work out fine.

On Wednesday, the CIO from Dish talked about what it took to completely transform their IT environment from a management and leadership perspective.  Seemed like an awful big risk but they were able to pull it off.

All in all, SNW is still a great show to learn about storage technology at least from an end-user perspective.  I just wish some more large vendors would return once again, but alas that seems to be a dream for now.

Latest ESRP results for 1K and under mailboxes – chart of the month

SCIESRP120724(004) (c) 2012 Silverton Consulting, All Rights Reserved

The above chart was from our July newsletter’s Exchange Solution Reviewed Program (ESRP) performance analysis for 1,000-and-under mailbox submissions. I have always liked response times, as they seem to be mostly the result of tight engineering, coding and/or system architecture. Exchange response times represent a composite of how long it takes to do a database transaction (whether read, write or log write). Latencies are measured at the application (Jetstress) level.

On the chart we show the top 10 database read response times for this class of storage. We assume that DB reads are a bit more important than writes or log activity, but they are all probably important. As such, we also show the response times for DB writes and log writes, but the ranking is based on DB reads alone.

In the chart above, I am struck by the variability in write and log write performance. Writes range anywhere from ~8.6 msec down to almost 1 msec. The extreme variability here raises a bunch of questions. My guess is the wide variability probably signals something about caching; whether it’s cache size, cache sophistication or drive destage effectiveness is hard to say.

Why EMC seems to dominate DB read latency in this class of storage is also interesting. EMC’s Celerra NX4, VNXe3100, CLARiiON CX4-120, CLARiiON AX4-5i, Iomega ix12-300 and VNXe3300 placed in the top 6 slots, respectively.  They all had a handful of disks (4 to 8), mostly 600GB or larger and used iSCSI to access the storage.  It’s possible that EMC has a great iSCSI stack, better NICs or just better IO scheduling. In any case, they have done well here at least with read database latencies.  However, their write and log latency was not nearly as good.

We like ESRP because it simulates a real application that’s pervasive in the enterprise today, i.e., email.  As such, it’s less subject to gaming, and typically shows a truer picture of multi-faceted storage performance.

~~~~

The complete ESRP performance report with more top 10 charts went out in SCI’s July newsletter.  But a copy of the report will be posted on our dispatches page sometime next month (if all goes well).  However, you can get the ESRP performance analysis now and subscribe to future free newsletters by just using the signup form above right.

For a more extensive discussion of current SAN block system storage performance covering SPC (Top 30) results as well as ESRP results with our new ChampionsChart™ for SAN storage systems, please see SCI’s SAN Storage Buying Guide available from our website.

As always, we welcome any suggestions or comments on how to improve our analysis of ESRP results or any of our other storage performance analyses.


Latest ESRP 1K-5K mailbox DB xfers/sec/disk results – chart-of-the-month

(SCIESRP120429-001) 2012 (c) Silverton Consulting, All Rights Reserved

The above chart is from our April newsletter on Microsoft Exchange 2010 Solution Reviewed Program (ESRP) results for the 1,000 (actually 1001) to 5,000 mailbox category.  We have taken the database transfers per second, normalized them for the number of disk spindles used in the run and plotted the top 10 in the chart above.
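The normalization itself is simple; here is a minimal sketch of what we do with the submission data (the systems and numbers below are hypothetical, not the actual results):

```python
# Minimal sketch of the per-spindle normalization (hypothetical submissions).
submissions = [                      # (system, DB xfers/sec, disk spindles)
    ("System A", 4400, 40),
    ("System B", 2750, 25),
    ("System C", 9900, 96),
]

per_spindle = sorted(
    ((name, xfers / spindles) for name, xfers, spindles in submissions),
    key=lambda t: t[1], reverse=True)

for rank, (name, value) in enumerate(per_spindle[:10], start=1):
    print(f"{rank}. {name}: {value:.1f} DB xfers/sec/spindle")
```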

A couple of caveats first: we chart disk-only systems in this and similar charts on disk spindle performance. Although it probably doesn’t matter as much at this mid-range level, for other categories SSDs or flash cache can be used to support much higher performance on a per-spindle measure like the above. As such, submissions with SSDs or flash cache are strictly eliminated from these spindle-level performance analyses.

Another caveat, specific to this chart, is that ESRP database transaction rates are somewhat driven by the Jetstress parameters (specifically the simulated IO rate) used during the run. For this mid-level category, this parameter can range from a low of 0.10 to a high of 0.60 simulated IO operations per second, with a median of ~0.19. But what I find very interesting is that in the plot above we have both the lowest rate (0.10 for #6, the Dell PowerEdge R510, 1.5K mailboxes) and the highest (0.60 for #9, the HP P2000 G3 10GbE iSCSI MSA, 3.2K mailboxes). So that doesn’t seem to matter much on this per-spindle metric.

That being said, I always find it interesting that the database transactions per second per disk spindle varies so widely in ESRP results.  To me this says that storage subsystem technology, firmware and other characteristics can still make a significant difference in storage performance, at least in Exchange 2010 solutions.

Often we see spindle count and storage performance as highly correlated. This is definitely not the case for mid-range ESRP (although that’s a different chart than the one above).

Next, we would expect disk speed (RPM) to have a high impact on storage performance, especially for OLTP-type workloads that look somewhat like Exchange. However, in the above chart the middle four and the last one (#4-7 & #10) used 10Krpm (#4, 5) or slower disks. So disk speed doesn’t seem to drive Exchange database transactions per second per spindle either.

Thus, I am left with my original thesis that storage subsystem design and functionality can make a big difference in storage performance, especially for ESRP in this mid-level category. The range in the top 10 contenders, spanning from ~35 (Dell PowerEdge R510) to ~110 (Dell EqualLogic PS Server), speaks volumes on this issue: over a 3X multiple from bottom to top on this measure. In fact, the overall range (not shown in the chart above) spans from ~3 to ~110, a factor of almost 37 times from worst to best performer.

Comments?

~~~~

The full ESRP 1K-5K mailbox performance report went out in SCI’s April newsletter. But a copy of the full report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the full ESRP performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.

For a more extensive discussion of current SAN or block storage performance, covering SPC-1 (Top 30), SPC-2 (Top 30) and all three levels of ESRP (Top 20) results, please see SCI’s SAN Storage Buying Guide available on our website.

As always, we welcome any suggestions or comments on how to improve our analysis of ESRP results or any of our other storage performance analyses.