SPC-1 IOPS performance per GB-NAND – chart of the month

Bar chart depicting SPC-1 IOPS/GB-NAND; #1 is DataCore Parallel Server
(c) 2016 Silverton Consulting, All Rights Reserved

The above is an updated chart from last month's SCI newsletter StorInt™ SPC Performance Report, depicting the top 10 SPC-1 submissions' IOPS™ per GB-NAND. We have been searching for a while now for a good way to depict storage system effectiveness when using SSD or other flash storage. We have used IOPS/SSD in the past, but IOPS/GB-NAND looks better.

Calculating IOPS/GB-NAND

SPC-1 does not report this metric, but it can be calculated by dividing SPC-1 IOPS by NAND storage capacity. One can find the NAND storage capacity by looking over an SPC-1 full disclosure report (FDR) and totaling up the NAND storage across all the SSDs and flash devices in the configuration. This is total NAND capacity, not Total ASU (used storage) Capacity. GB-NAND reflects just what's indicated for SSD/flash device capacity in the FDR's configuration section. This is not necessarily the device's physical NAND capacity when over-provisioned, but at least it's available in the FDR.
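
As a rough sketch of the arithmetic (the device counts and the ~5.12M IOPS figure below are approximations from this post, not a substitute for reading the FDR yourself):

```python
# Hypothetical sketch of the IOPS/GB-NAND calculation from an SPC-1 FDR.

def iops_per_gb_nand(spc1_iops: float, devices: list) -> float:
    """devices: (count, capacity_gb) pairs for each SSD/flash device type
    listed in the FDR's priced configuration."""
    total_gb_nand = sum(count * capacity_gb for count, capacity_gb in devices)
    return spc1_iops / total_gb_nand

# DataCore Parallel Server: ~5.12M SPC-1 IOPS on 54 x 480GB SSDs.
print(iops_per_gb_nand(5_120_000, [(54, 480.0)]))  # ~197.5, i.e. just under 200
```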

DataCore Parallel Server IOPS/GB-NAND explained

The DataCore Parallel Server generated over 5M IOPS (IOs/second) under an SPC-1 (OLTP-like) workload. And with its 54 480GB SSDs, totaling ~25.9TB of NAND capacity, that gives it just under 200 IOPS/GB-NAND. The chart in the original report was incorrect: there we used 36 480GB SSDs, or ~17.3TB of NAND, to compute IOPS/GB-NAND, which gave them just under 300 IOPS/GB-NAND. (The full report has since been corrected and is available for re-download by subscribers to our newsletter.)

The 480GB (Samsung SM863 MZ-7KM480E) SSDs were all SATA attached. Samsung lists these SSDs as V-NAND MLC drives, rated at 97K random read and 26K random write IOPS. At over 5M IOPS, the configuration should be running close to 100% of the SSDs' rated performance. However, DataCore's Parallel Server included 2 controllers with a total of 3TB of DRAM cache, which were then SAS connected to 4 Dell MD1220 storage arrays, each with 512GB of DRAM cache, so their total configuration had about 5TB of DRAM in it, most of which would have been used as an IO cache.
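
As a back-of-the-envelope check on that claim (ignoring the SPC-1 read/write mix, so this is very rough):

```python
SSD_COUNT = 54
RATED_READ_IOPS = 97_000    # Samsung SM863 rated random read IOPS
ACHIEVED_IOPS = 5_120_000   # approximate SPC-1 result

aggregate_rated = SSD_COUNT * RATED_READ_IOPS  # ~5.24M read IOPS across all SSDs
print(ACHIEVED_IOPS / aggregate_rated)         # ~0.98 of aggregate rated reads
```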

The SPC-1 submission only used 11.8TB (Total ASU capacity) of storage. All that DRAM cache helps to explain how they attained 5M IOPS. Having a multi-tiered cache like the DataCore-MD1220 configuration doesn't ensure that all the cache is effectively used, but even without cache tiering logic, there might not be much overlap between the MD1220 and Parallel Server caches. It would be more interesting to see how busy the SSDs were during this SPC-1 run.

How random the SPC-1 workload is, is subject to much speculation in the industry. Suffice it to say it's not 100% random, but then, what is? Non-random OLTP workloads would tend to favor larger caches.

SPC is coming out with a new version of their benchmark with supplementary information which may shed more light on device busyness.

All SPC-1 benchmark submissions are available at storageperformance.org.

Want more?

The August 2016 and our other SPC Performance reports have much more information on SPC-1 and SPC-2 performance. Moreover, there's a lot more performance information, covering email and other (OLTP and throughput intensive) block storage workloads, in our SAN Storage Buying Guide, available for purchase on our website. More information on file and block protocol/interface performance is included in SCI's SAN-NAS Buying Guide, also available from our website.

~~~~

The complete SPC performance report went out in SCI's August 2016 Storage Intelligence e-newsletter. A copy of the report will be posted on our SCI dispatches (posts) page over the next quarter or so (if all goes well). However, you can get the latest storage performance analysis now and subscribe to future free SCI Storage Intelligence e-newsletters by using the signup form in the sidebar, or you can subscribe here.

 

DDN unchains Wolfcreek, unleashes IME and updates WOS

It's not every day that a vendor claims 2.5X the top SPC-1 IOPS result (currently held by the Hitachi G1000 VSP all-flash array at ~2M IOPS), but that's what DataDirect Networks (DDN) has claimed for an all-flash version of their new Wolfcreek hyper converged appliance. DDN says their new 4U appliance is capable of 60GB/sec of throughput and over 5M IOPS. (See their press release for more information.) It's unclear whether these are SPC-1 IOPS, and I haven't seen any SPC-1 submission for it yet.

In addition to the new Wolfcreek appliance, DDN announced their new Infinite Memory Engine™ (IME) flash caching software and WOS® 360 V2.0, an enhanced version of their object storage.

DDN, if you haven't heard of them, has done well in Web 2.0 environments and is a leading supplier to high performance computing (HPC) sites. They have an object storage system (WOS), all-flash block storage (SFA12KXi), hybrid (disk-SSD) block storage (SFA7700X™ & SFA12KX™), a Lustre file appliance (EXAScaler), an IBM GPFS™ NAS appliance (GRIDScaler), a media server appliance (MEDIAScaler™) and software defined storage (Storage Fusion Accelerator [SFX™] flash caching software).

Wolfcreek hyper converged appliance

The converged solution comes in a 4U appliance using dual Intel Haswell microprocessors (with up to 18 cores each) and includes a PCIe fabric which supports 48 NVMe flash cards or 72 SFF SSDs. With the NVMe cards or SSDs, Wolfcreek will use their new IME software to accelerate IO activity.

Wolfcreek IME software supports either a burst-mode IO caching cluster or a storage cluster of nodes. I assume burst mode is a storage caching layer for backend file system storage, and that as a storage cluster the nodes would include some of their scale-out file system software. The Wolfcreek cluster interconnect is 40Gb InfiniBand or 10/40Gb Ethernet, and it will also support Intel's Omni-Path. The Wolfcreek appliance is compatible with HPC Lustre and IBM GPFS scale-out file systems.

The Wolfcreek appliance could be a great platform for OpenStack and Hadoop environments. But it also supports virtual machine hypervisors from VMware, Citrix and Microsoft. DDN says the Wolfcreek appliance can scale up to support 100K VMs. I've been told that IME will not be targeted to work with hypervisors in the first release.

Recall that with a hyper converged appliance, some portion of the system resources (memory and CPU cores) must be devoted to server and VM application activities and the remainder to storage activity. How this is divided up and whether this split is dynamic (changes over time) or static (fixed over time) in the Wolfcreek appliance is not indicated.

The hyper converged field is getting crowded of late, what with VMware EVO:RAIL, Nutanix, Scale Computing, SimpliVity and others coming out with solutions. But there aren't many that support all-flash storage, and it seems unusual that hyper converged customers would need that much IO performance. But I could be wrong, especially about HPC customers.

There's much more to hyper convergence than just having storage and compute in the same node. The software that links it all together, manages, monitors and deploys these combined hypervisor, storage and server systems is almost as important as any of the hardware. There wasn't much talk about the software that DDN is putting together for Wolfcreek, but it's still early yet. With their roots in HPC, it's likely that any DDN hyper converged solution will target this market first and broaden out from there.

Infinite Memory Engine (IME)

IME is an outgrowth of DDN's SFX software and seems to act as a caching layer for parallel file system IO. It makes use of NVMe cards or SSDs for its IO caching and, according to DDN, can offer up to 1000X IO acceleration to storage or 100X file system acceleration.

It does this primarily by providing an application-aware IO caching layer, supplying more effective IO to the file system layer and using PCIe NVMe or SSD flash storage for hardware IO acceleration. According to the information provided by DDN, IME can provide 50GB/sec of bandwidth to a host compute cluster while only driving 4GB/sec of throughput to a backend file system, presumably by better caching of file IO.
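
A quick bit of arithmetic (mine, not DDN's) shows what those two numbers imply about how much IO the caching layer absorbs:

```python
host_gb_per_sec = 50.0    # bandwidth IME delivers to the host compute cluster
backend_gb_per_sec = 4.0  # throughput IME drives to the backend file system

absorbed = 1 - backend_gb_per_sec / host_gb_per_sec
print(f"{absorbed:.0%} of host IO absorbed by the IME caching layer")  # 92%
```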

WOS 360 V2.0

The new WOS 360 V2.0 object storage system features include

  • Higher density storage package (98 8TB SATA drives, or 768TB raw capacity, in 4U) supporting 8B objects each and over 100B objects in a cluster.
  • Native SWIFT API support for OpenStack environments, which includes gateway or embedded deployments, up to 5,000 concurrent users and 5B objects per namespace.
  • Global ObjectAssure data encoding with lower storage overhead (1.5x, or a 20% reduction from their previous encoding option) for highly durable and available object storage, using a two-level hierarchical erasure code (a quick arithmetic sketch follows this list).
  • Enhanced network security with SSL, which provides end-to-end SSL network data transport between clients and WOS and between WOS storage nodes.
  • Simplified cluster installation, deployment and maintenance, which can now deploy a WOS cluster in minutes, with a simple point-and-click GUI for installation and cluster deployment plus automated non-disruptive software upgrades.
  • Performance improvements for better video streaming, content distribution and large file transfers with improved QoS for latency sensitive applications.
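
DDN doesn't disclose the actual erasure code parameters, so here's the overhead arithmetic with a purely hypothetical 8-data+4-parity layout that happens to match their 1.5x figure:

```python
def storage_overhead(data_chunks: int, parity_chunks: int) -> float:
    """Raw capacity consumed per byte of user data for a data+parity erasure code."""
    return (data_chunks + parity_chunks) / data_chunks

print(storage_overhead(8, 4))  # 1.5x -- one hypothetical layout matching DDN's figure
print(1.5 / 0.8)               # ~1.875x -- implied overhead of their previous option
```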

~~~~

There's probably more going on with DDN than covered here, but this hits the highlights. I wish there were more on their Wolfcreek appliance and its various configurations and performance benchmarks, but there's not.

Comments?

 Photo Credits: wolf-63503+1920 by _Liquid

 

All flash storage performance testing

There are some serious problems with measuring the IO performance of all-flash arrays using the methods we use on disk storage systems. Mostly, these are due to the inherent differences between current flash- and disk-based storage.

NAND garbage collection

First off, garbage collection is required by any SSD or NAND storage in order to keep writing data. Garbage collection coalesces free space by moving still-valid data to new pages/blocks, freeing up the space held by old, no-longer-current data so it can be erased and reused.

The problem is that NAND garbage collection only takes place after a suitable amount of write activity, so measuring all-flash array performance without taking garbage collection into account is misleading at best and dishonest at worst.

The only way to control for garbage collection is to write lots of data to an all-flash storage system and measure its performance over a protracted period of time. How long this takes depends on the amount of storage in the all-flash array, but filling it up to 75% of its capacity and then measuring IO performance as you fill up another 10-15% of its capacity with new data should suffice. Of course this would all have to be done consecutively, without any time off between runs (which would allow garbage collection to sneak in).
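
As a rough illustration of that methodology, here's a minimal sketch wrapping fio from Python (the device path, capacity and job parameters are hypothetical placeholders; a real test would match the array's block sizes and queue depths):

```python
import subprocess

DEVICE = "/dev/mapper/afa_test_lun"  # hypothetical all-flash array test LUN
CAPACITY_GB = 10_000                 # hypothetical array capacity

def run_fio(name: str, size_gb: int, offset_gb: int = 0, collect: bool = False) -> None:
    """Run one fio phase; phases run back-to-back so GC gets no idle time."""
    cmd = [
        "fio", f"--name={name}", f"--filename={DEVICE}",
        "--rw=randwrite", "--bs=4k", "--iodepth=32",
        "--ioengine=libaio", "--direct=1",
        f"--offset={offset_gb}G", f"--size={size_gb}G",
    ]
    if collect:
        cmd += ["--output-format=json", f"--output={name}.json"]
    subprocess.run(cmd, check=True)

# Phase 1: precondition by filling ~75% of the array's capacity.
run_fio("precondition", size_gb=int(CAPACITY_GB * 0.75))

# Phase 2: immediately write another ~15% of capacity while recording
# performance, forcing garbage collection to happen during measurement.
run_fio("measure", size_gb=int(CAPACITY_GB * 0.15),
        offset_gb=int(CAPACITY_GB * 0.75), collect=True)
```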

Flash data reduction

Second, many all-flash arrays offer data reduction such as compression or deduplication. Standard IO benchmarks today don't control for data reduction.

What we need is a standard corpus of reducible data for an IO workload. Such data would need to be both compressible and deduplicatable. It's unclear where such a data corpus could be found, but one is needed to properly measure all-flash system performance. What would help is some real-world data reduction statistics, from a large number of customer installations, to identify what real-world dedupe and compression ratios look like. Then we could use these statistics to construct a suitable data load that could be scaled and tailored to required performance needs.
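
To illustrate, here's a minimal sketch of how such a reducible data load might be generated once target ratios are known (the 2:1 dedupe and 50%-compressible figures below are placeholders, not real-world statistics):

```python
import os
import random

BLOCK = 4096  # bytes per block

def make_block(compress_fraction: float) -> bytes:
    """Build a block that's partly zeros (compressible) and partly random data."""
    zeros = int(BLOCK * compress_fraction)
    return bytes(zeros) + os.urandom(BLOCK - zeros)

def make_corpus(n_blocks: int, dedup_ratio: float, compress_fraction: float) -> list:
    """Only n_blocks/dedup_ratio blocks are unique; the rest are duplicates
    the array's dedupe engine should be able to reclaim."""
    uniques = [make_block(compress_fraction) for _ in range(int(n_blocks / dedup_ratio))]
    return [random.choice(uniques) for _ in range(n_blocks)]

# Placeholder targets: 2:1 dedupable, ~50% of each block compressible.
corpus = make_corpus(n_blocks=1000, dedup_ratio=2.0, compress_fraction=0.5)
```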

Perhaps SNIA or maybe a big (government) customer could support the creation of this data corpus that can be used for “standard” performance testing. With real world statistics and a suitable data corpus, standard IO benchmarks could control for data reduction on flash arrays and better measure system performance.

Block IO differences

Third, block heat maps (access patterns) need to become much more realistic. For disk-based systems it was important to randomize the IO stream to minimize the advantage of DRAM caching. But with all-flash storage arrays, cache is less useful, and because flash can't be rewritten in place, having IO repeatedly hit the same blocks (especially overwrites) causes NAND page fragmentation and more NAND write overhead.
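
For illustration, a more realistic heat map might look something like the hot/cold skew sketched below (the 80/20 split is a hypothetical placeholder; a real benchmark would derive the skew from traced workloads):

```python
import random

def skewed_block(total_blocks: int, hot_fraction: float = 0.2,
                 hot_hit_rate: float = 0.8) -> int:
    """Pick a block address where ~80% of IOs land on the 'hot' 20% of blocks,
    rather than choosing a uniformly random address."""
    hot_blocks = int(total_blocks * hot_fraction)
    if random.random() < hot_hit_rate:
        return random.randrange(hot_blocks)            # hot region
    return random.randrange(hot_blocks, total_blocks)  # cold region

# Example: a short IO trace over a 1M-block device.
trace = [skewed_block(1_000_000) for _ in range(1000)]
```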

~~~~

Only by controlling for garbage collection, using a standard data-reducible data load and returning to a cache-friendly (or at least write-cache-friendly) workload will we truly understand all-flash storage performance.

Comments?

Thanks to Larry Freeman (@Larry_Freeman) for the idea for today’s post.

Photo Credit(s): Race Faces by Jerome Rauckman

Has latency become the key metric? SPC-1 LRT results – chart of the month

I was at EMCworld a couple of months back and they were showing off a preview of the next version of VNX storage, which was trying to achieve a million IOPS with under a millisecond of latency. Then I attended NetApp's analyst summit, and the discussion at their flash seminar was how latency was changing the landscape of data storage and how flash latencies were going to enable totally new applications.

One executive at NetApp mentioned that IOPS was never the real problem. As an example, he mentioned one large oil & gas firm that had a peak IOPS of 35K.

Also, there was some discussion at NetApp of trying to come up with a way of segmenting customer applications by latency requirements.  Aside from high frequency trading applications, online payment processing and a few other high-performance database activities, there wasn’t a lot that could easily be identified/quantified today.

IO latencies have been coming down for years now. Sophisticated disk-only storage systems have been lowering latencies for a decade or more. But since the introduction of SSDs it's been a whole new ballgame. For proof, all one has to do is examine the top 10 SPC-1 LRT (least response time, measured at 10% of peak workload activity) results.

Top 10 SPC-1 LRT results, SSD system response times

 

In looking over the top 10 SPC-1 LRT benchmarks (see figure above) one can see a general pattern. These systems mostly use SSD or flash storage, except for the TMS RamSan-400 and RamSan-320 (now IBM FlashSystems) and Kaminario's K2-D, which primarily use DRAM for storage with persistent backup storage behind it.

Hybrid disk-flash systems seem to start with an LRT of around 0.9 msec (not on the chart above).  These can be found with DotHill, NetApp, and IBM.

Similarly, you almost have to get to as "slow" as 0.93 msec before you can find any disk-only storage systems, and most disk-only storage comes in with a latency of 1 msec or more. Between 1 and 2 msec LRT we see storage from EMC, HDS, HP, Fujitsu, IBM, NetApp and others.

There was a time when the storage world was convinced that to get really good response times you had to have a purpose-built storage system like TMS or Kaminario, or stripped-down functionality like IBM's Power 595. But it seems that the general-purpose HDS HUS, IBM Storwize, and even Huawei OceanStor are all capable of providing excellent latencies with all-SSD storage behind them, and all seem to perform at least in the same ballpark as the purpose-built TMS RamSan-620 SSD storage system. These general-purpose storage systems have just about every advanced feature imaginable, with the exception of mainframe attach.

It seems nowadays that there is a trifurcation of latency results going on, based on underlying storage:

  • DRAM-only systems from ~0.4 msec down to ~0.1 msec.
  • SSD/flash-only storage from ~0.7 msec down to ~0.2 msec.
  • Disk-only storage at 0.93 msec and above.

The hybrid storage systems are attempting to mix the economics of disk with the speed of flash storage and seem to be contending with all of these single-technology storage solutions.

It's a new IO latency world today. SSD-only storage systems are now available from every major storage vendor and many of them are showing pretty impressive latencies. Now, with fully functional storage latency below 0.5 msec, what's the next hurdle for IT?

Comments?

Image: EAB 2006 by TMWolf

 


Top Ten RayOnStorage Posts for 2012

Here are the top 10 blog posts for 2012 from RayOnStorage.com

1. Snow Leopard to Mountain Lion

We discuss our Mac OSX transition from Snow Leopard to Mountain Lion with the good, bad and ugly of Mountain Lion from a novice user’s perspective.

2. Vsphere 5.1 storage enhancements and future vision

We detail some of the storage enhancements and directions in the latest revision of VMware vSphere, 5.1.

3.  Object Storage Summit wrap up

We discuss last month's ExecEvent Object Storage Summit and some of the use cases driving customers to adopt object storage in their data centers.

4. EMCWorld2012 part 1 – VNX/VNXe

We analyze the first day of EMCWorld2012 focused on EMC’s VNX/VNXe product enhancements.

5. Dell Storage Forum 2012 – day 2

We discuss the new Compellent and FluidFS systems coming out of Dell Storage Forum, as well as their recent RNA Networks acquisition with its coherent flash cache network.

6. EMC buys ExtremeIO

Right before EMCWorld2012, EMC announced their purchase of ExtremeIO, which had been rumored for some time and signaled a new path to flash-only SAN storage systems.

7. HDS Influencer Summit wrap up

HDS held their Influencer Summit last month and rolled out their executive team to talk about their storage and service directions and successes.

8. Oracle finally releases StorageTek VSM6

Well, after much delay, we finally get to see the latest-generation Virtual Storage Manager 6 (VSM6) for the mainframe System z marketplace.

9. Coraid, first thoughts

We got to meet with Coraid as part of a Storage TechField Day event and we came away impressed but still wanting to learn more.

10. Latest SPC-1 results IOPS vs. drive counts – chart of the month

Every month (or so) we do a more detailed analysis of a chart that appears in our free monthly newsletter. This one was done earlier in the year and documented the correlation between IOPS and drive counts in SPC-1 results.

Happy New Year.

SCI SPC-1 results analysis: Top 10 $/IOPS – chart-of-the-month

Column chart showing the top 10 economically performing systems for SPC-1
(SCISPC120226-003) (c) 2012 Silverton Consulting, Inc. All Rights Reserved

Lower is better on this chart. I can't remember the last time we showed this Top 10 $/IOPS™ chart from the Storage Performance Council SPC-1 benchmark. Recall that we prefer our IOPS/$/GB metric, which factors in subsystem size, but this past quarter two new submissions ranked well on this one. The two new systems were the all-SSD Huawei Symantec Oceanspace™ Dorado2100 (#2) and the latest Fujitsu ETERNUS DX80 S2 (#7) storage subsystems.

Most of the winners on $/IOPS are SSD systems (#1-5 and #10) and most of these were all-SSD storage systems. These systems normally attain better $/IOPS by hitting high IOPS™ rates for the cost of their storage. But they often submit relatively small systems to SPC-1, reducing system cost and helping them place better on $/IOPS.

On the other hand, some disk-only storage systems do well by abandoning any form of protection, as with the Sun J4400 (#6) and J4200 (#8) storage systems, which used RAID 0 and also had smaller capacities, coming in at 2.2TB and 1.2TB respectively.

The other two disk-only storage systems here, the Fujitsu ETERNUS DX80 S2 (#7) and the Huawei Symantec Oceanspace S2600 (#9), also had relatively small capacities, at 9.7TB and 2.9TB respectively.

The ETERNUS DX80 S2 achieved ~35K IOPS at a cost of under $80K, generating $2.25/IOPS. Of course, the all-SSD systems blow that away; for example, the Oceanspace Dorado2100 (#2) all-SSD system hit ~100K IOPS but cost nearly $90K, for $0.90/IOPS.

Moreover, the largest capacity system here, with 23.7TB of storage, was the Oracle Sun ZFS (#10) hybrid SSD-and-disk system, which generated ~137K IOPS at a cost of ~$410K, hitting just under $3.00/IOPS.

We still prefer our own metric for economical performance, but each has its flaws. The SPC-1 $/IOPS metric is dominated by SSD systems and our IOPS/$/GB metric is dominated by disk-only systems. There's probably some way to do better on the cost of performance, but I have yet to see it.
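
For the record, here's a small sketch of both calculations using figures quoted above (the prices are approximations, and the IOPS/$/GB computation reflects my reading of our metric):

```python
def dollars_per_iops(price: float, iops: float) -> float:
    return price / iops

def iops_per_dollar_per_gb(iops: float, price: float, capacity_gb: float) -> float:
    # SCI's metric as I read it: IOPS divided by cost-per-GB, which
    # rewards larger-capacity systems at the same price and IOPS level.
    return iops / (price / capacity_gb)

print(dollars_per_iops(79_000, 35_000))    # ETERNUS DX80 S2: ~$2.25/IOPS
print(dollars_per_iops(90_000, 100_000))   # Dorado2100: ~$0.90/IOPS
print(iops_per_dollar_per_gb(35_000, 79_000, 9_700))  # DX80 S2: ~4,300
```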

~~~~

The full SPC performance report went out in SCI’s February newsletter.  But a copy of the full report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the full SPC performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.

For a more extensive discussion of current SAN or block storage performance covering SPC-1 (top 30), SPC-2 (top 30) and ESRP (top 20) results please see SCI’s SAN Storage Buying Guide available on our website.

As always, we welcome any suggestions or comments on how to improve our analysis of SPC results or any of our other storage performance analyses.

 

Latest SPC-1 results – IOPS vs drive counts – chart-of-the-month

Scatter plot of SPC-1 IOPS against spindle count, with linear regression line showing Y = 186.18X + 10227 and R**2 = 0.96064
(SCISPC111122-004) (c) 2011 Silverton Consulting, All Rights Reserved

[As promised, I am trying to get up-to-date on my performance charts from our monthly newsletters. This one brings us current up through November.]

The above chart plots Storage Performance Council SPC-1 IOPS against spindle count.  On this chart, we have eliminated any SSD systems, systems with drives smaller than 140 GB and any systems with multiple drive sizes.

Alas, the coefficient of determination (R**2) of 0.96 tells us that SPC-1 IOPS performance is mainly driven by drive count. But what's more interesting here is that as drive counts get higher than say 1000, the variance around the linear regression line widens, implying that system sophistication starts to matter more.
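
As a quick illustration of what that regression implies, plugging a few drive counts into the fitted line gives ballpark IOPS estimates (the coefficients come from the chart; the drive counts are just examples):

```python
def predicted_spc1_iops(spindles: int) -> float:
    """Fitted line from the chart: IOPS = 186.18 * spindles + 10227 (R**2 = 0.96)."""
    return 186.18 * spindles + 10227

for drives in (250, 1000, 2000):
    print(drives, round(predicted_spc1_iops(drives)))
# ~56,772 IOPS at 250 drives; ~196,407 at 1,000; ~382,587 at 2,000
```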

Processing power matters

For instance, if you look at the three systems centered around 2000 drives, they are (from lowest to highest IOPS) a 4-node IBM SVC 5.1, a 6-node IBM SVC 5.1 and an 8-node HP 3PAR V800 storage system. This tells us that the more processing power (nodes) you throw at an IOPS workload, given similar spindle counts, the more performance you can extract.

System sophistication can matter too

The other interesting facet on this chart comes from examining the three systems centered around 250K IOPS that span from ~1150 to ~1500 drives.

  • The 1156 drive system is the latest HDS VSP 8-VSD (virtual storage directors, or processing nodes) running with dynamically (thinly) provisioned volumes – which is the first and only SPC-1 submission using thin provisioning.
  • The 1280 drive system is a (now HP) 3PAR T800 8-node system.
  • The 1536 drive system is an IBM SVC 4.3 8-node storage system.

One would think that thin provisioning would degrade storage performance, and maybe it did, but without a non-dynamically provisioned HDS VSP benchmark to compare against, it's hard to tell. However, the fact that the HDS VSP performed as well as the other systems did with much lower drive counts seems to tell us that thin provisioning potentially uses hard drives more efficiently than fat provisioning, that the 8-VSD HDS VSP is more effective than an 8-node IBM SVC 4.3 or an 8-node (HP) 3PAR T800 system, or perhaps some combination of these.

~~~~

The full SPC performance report went out to our newsletter subscribers last November. [The one change to this chart from the full report is that the date in the chart's title was wrong and is fixed here.] A copy of the full report will be up on the dispatches page of our website sometime this month (if all goes well). However, you can get performance information now, and subscribe to future newsletters to receive these reports even earlier, by just sending us an email or using the signup form above right.

For a more extensive discussion of block or SAN storage performance covering SPC-1&-2 (top 30) and ESRP (top 20) results please consider purchasing our recently updated SAN Storage Buying Guide available on our website.

As always, we welcome any suggestions on how to improve our analysis of SPC results or any of our other storage system performance discussions.

Comments?