
SPC-1 IOPS performance per GB-NAND – chart of the month

Bar chart depicting SPC-1 IOPS/GB-NAND; #1 is the DataCore Parallel Server with just under 200 IOPS/GB-NAND
(c) 2016 Silverton Consulting, All Rights Reserved

The above is an updated chart from last month's SCI newsletter StorInt™ SPC Performance Report, depicting the top 10 SPC-1 submissions ranked by IOPS™ per GB-NAND. We have been searching for a while for a good way to depict storage system effectiveness when using SSDs or other flash storage. We have used IOPS/SSD in the past, but IOPS/GB-NAND looks better.

Calculating IOPS/GB-NAND

SPC-1 does not report this metric, but it can be calculated by dividing SPC-1 IOPS by NAND storage capacity. NAND storage capacity can be found in the SPC-1 full disclosure report (FDR) by totaling up the NAND storage across all the SSDs and flash devices in the configuration. This is total NAND capacity, not Total ASU (used storage) Capacity. GB-NAND reflects just what's indicated for SSD/flash device capacity in the configuration section of the FDR. This is not necessarily the device's physical NAND capacity when over-provisioned, but at least it's available in the FDR.

DataCore Parallel Server IOPS/GB-NAND explained

The DataCore Parallel Server generated over 5M IOPS (IOs/second) under an SPC-1 (OLTP-like) workload. With its 54 480GB SSDs, totaling ~25.9TB of NAND capacity, that works out to just under 200 IOPS/GB-NAND. The chart in the original report was incorrect: there we used 36 480GB SSDs, or ~17.3TB of NAND, to compute IOPS/GB-NAND, which gave them just under 300 IOPS/GB-NAND. (The full report has since been corrected and is available for re-download by subscribers to our newsletter.)
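
For readers who want to reproduce the arithmetic, here is a minimal sketch using the approximate figures above (the exact IOPS and SSD counts/capacities come from the FDR):

```python
# Reproducing the IOPS/GB-NAND arithmetic with the approximate figures above
# (exact IOPS and SSD counts/capacities come from the SPC-1 FDR).
spc1_iops = 5_000_000            # "over 5M IOPS"
ssd_gb = 480                     # Samsung SM863 480GB SSDs

corrected_nand_gb = 54 * ssd_gb  # ~25,920 GB, i.e. ~25.9TB of NAND
original_nand_gb = 36 * ssd_gb   # ~17,280 GB, the incorrect ~17.3TB used originally

print(spc1_iops / corrected_nand_gb)  # ~193 IOPS/GB-NAND, just under 200
print(spc1_iops / original_nand_gb)   # ~289 IOPS/GB-NAND, just under 300 (incorrect)
```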

The 480GB (Samsung SM863 MZ-7KM480E) SSDs were all SATA attached. Samsung lists these SSDs as V-NAND MLC drives, rated at 97K random reads and 26K random writes. At over 5M IOPS, the system should be running close to 100% of the SSDs' rated read performance. However, DataCore's Parallel Server included 2 controllers with a total of 3TB of DRAM cache, SAS connected to 4 Dell MD1220 storage arrays, each with 512GB of DRAM cache, so the total configuration had about 5TB of DRAM in it, most of which would have been used as an IO cache.

The SPC-1 submission only used 11.8TB (Total ASU capacity) of storage. All that DRAM cache helps explain how they attained 5M IOPS. Having a multi-tiered cache like the DataCore-MD1220 configuration doesn't ensure that all the cache is used effectively, but even without cache tiering logic there might not be much overlap between the MD1220 and Parallel Server caches. It would be more interesting to see how busy the SSDs actually were during this SPC-1 run.

How random the SPC-1 workload is remains subject to much speculation in the industry. Suffice it to say it's not 100% random, but then what workload is? Non-random OLTP workloads tend to favor larger caches.

SPC is coming out with a new version of their benchmark with supplementary information which may shed more light on device busyness.

All SPC-1 benchmark submissions are available at storageperformance.org.

Want more?

The August 2016 and our other SPC Performance Reports have much more information on SPC-1 and SPC-2 performance. Moreover, there's a lot more performance information, covering email and other (OLTP and throughput intensive) block storage workloads, in our SAN Storage Buying Guide, available for purchase on our website. More information on file and block protocol/interface performance is included in SCI's SAN-NAS Buying Guide, also available from our website.

~~~~

The complete SPC performance report went out in SCI's August 2016 Storage Intelligence e-newsletter. A copy of the report will be posted on our SCI dispatches (posts) page over the next quarter or so (if all goes well). However, you can get the latest storage performance analysis now, and subscribe to future free SCI Storage Intelligence e-newsletters, by using the signup form in the sidebar or by subscribing here.

 

Dell Storage Forum 2012 – day 2

On the second day of Dell Storage Forum in Boston, Dell announced:

  • New FluidFS (Exanet) FS8600 front-end NAS gateway for Dell Compellent storage. The new gateway can be scaled from 1 to 4 dual-controller configurations and can support a single file system/namespace of up to 1PB in size. The FS8600 is available with 1GbE or 10GbE options and supports 8Gbps FC attachment to backend storage.
  • New Dell Compellent SC8000 controllers based on Dell's 2U, 12th generation server hardware, which can now be cooled with ambient air (115°F?) and consume less power than the previous Series 40 whitebox server controllers. The new hardware comes with dual 6-core processors and supports 16 to 64GB of DRAM per controller, or up to 128GB with dual controllers. The new controllers, which GA this month, support PCIe slots for backend 6Gbps SAS and frontend connectivity of 1GbE or 10GbE iSCSI, 10GbE FCoE or 8Gbps FC, with 16Gbps FC coming in 2H2012.
  • New Dell Compellent SC200 and SC220 drive enclosures, 2U enclosures in 24-SFF-drive or 12-LFF-drive configurations, supporting 6Gbps SAS connectivity.
  • New Dell Compellent SC6.0 operating software, supporting a 64-bit O/S for larger memory and dual-/multi-core processing.
  • New FluidFS FS7600 (1GbE)/FS7610 (10GbE) 12th generation server front-end NAS gateways for Dell EqualLogic storage, which support asynchronous replication at the virtual file system level. The new gateways also support 10GbE iSCSI and can be scaled up to 507TB in a single namespace.
  • New FluidFS NX3600 (1GbE)/NX3610 (10GbE) 12th generation server front-end NAS gateways for PowerVault storage systems, which can support up to 576TB of raw capacity with a single gateway or scale to two gateways for up to 1PB of raw storage in a single namespace/file system.
  • AppAssure 5, which includes better performance based on a new backend object store to protect even larger datasets. At the moment AppAssure is a Windows-only solution, but with block deduplication/compression and change block tracking it is already WAN optimized. Dell announced Linux support will be available later this year.

Probably more interesting was the talk about, and demo of, a prototype from their RNA Networks acquisition, which supports cache-coherent PCIe SSD cards in Dell servers. The new capability is still on the drawing board but is intended to connect to Dell Compellent storage and move tier 1 out to the server. Lots more to come on this. They call it Project Hermes, for the Greek messenger god. Not sure, but something about having lightning bolts on his shoes comes to mind…

Comments?

 


Services and products, a match made in heaven

wrench rust by HVargas (cc) (from Flickr)

In all the hoopla about companies' increasing services revenues, what seems to be missing is that hardware and software sales automatically drive lots of services revenue.

A recent Wikibon post by Doug Chandler (see Can cloud pull services and technology together …) showed a chart of leading IT companies' percentage of revenue from services. The percentages ranged from a high of 57% for IBM to a low of 12% for Dell, with the median being ~26.5%.

In the beginning, …

It seems to me that services started out being an adjunct to hardware and software sales – i.e., maintenance, help to install the product, provide operational support, etc. Over time, companies like IBM and others went after service offerings as a separate distinct business activity, outside of normal HW and SW sales cycles.

This turned out to be a great revenue booster and practically turned IBM around in the 90s. However, one problem with hardware and software vendors' reporting of service revenue is that they also embed break-fix, maintenance and infrastructure revenue streams in these line items.

The Wikibon blog mentioned StorageTek's great service revenue business when Sun purchased them. I recall that, at the time, this was primarily driven by break-fix, maintenance and infrastructure revenues and not from other, non-product-related revenues.

Certainly companies like EDS (now with HP), Perot Systems (now with Dell), and other pure service companies generate all their revenue from services not associated with selling HW or SW.  Which is probably why HP and Dell purchased them.

The challenge for analysts is to try to extract the more ongoing maintenance, break-fix and infrastructure revenues from other service activity in order to understand how to delineate portions of service revenue growth:

  • IBM seems to break out their GBS (consulting and application mgmt) from their GTS (outsourcing, infrastructure, and maint) revenues (see IBM's 10K). However, extracting break-fix and maintenance revenues from the other GTS revenues is impossible outside IBM.
  • EMC has no breakdown whatsoever in their services revenue line item in their 10K.
  • HP, similarly, has no breakdown for their service revenues in their 10K.

Some of this may be discussed in financial analyst calls, but I could locate nothing but the above in their annual reports/10Ks.

IBM and Dell to the rescue

So we are all left to wonder how much of reported services revenue is ongoing maintenance and infrastructure business versus other services business. Certainly IBM, in reporting both GBS and GTS, gives us some inkling of what this might be in their annual report: GBS is $18B and GTS is $38B. So maintenance and break-fix must be some portion of that GTS line item.

Perhaps we could use Dell as a proxy to determine break-fix, maintenance and infrastructure service revenues. Not sure where Wikibon got the reported service revenue percentage for Dell, but their most recent 10K shows services are more like 19% of annual revenues.

Dell had a note in their “Results from operations” section that said Perot Systems was 7% of this. Which means their previous services, primarily break-fix, maintenance and other infrastructure support revenues, accounted for something like 12% (maybe this is what Wikibon is reporting).

It's unclear how representative Dell's revenue percentages are of the rest of the IT industry, but if we take their ~12% of revenues off the percentages reported by Wikibon, the new range runs from 45% for IBM down to 7% for Dell, with a median around 14.5%, for service revenues other than break-fix, maintenance and infrastructure.

Why is this important?

Break-fix, maintenance revenues and most infrastructure revenues are entirely associated with product (HW or SW) sales, representing an annuity once original product sales close. The remaining service revenues are special purpose contracts (which may last years), many of which are sold on a project basis, representing non-recurring revenue streams.

—-

So the next time some company tells you their service revenues are up 25% YoY, ask them how much of this is due to break-fix and maintenance. This may tell you whether product footprint expansion or service offering success is driving service revenue growth.

Comments?

One platform to rule them all – Compellent&EqualLogic&Exanet from Dell

Compellent drive enclosure (c) 2010 Compellent (from Compellent.com)

Dell and Compellent may be a great match because Compellent uses commodity hardware combined with specialized software to create their storage subsystem. If there's any company out there that can take advantage of commodity hardware, it's probably Dell. (Of course commodity hardware always loses in the end, but that's another story.)

Similarly, Dell's EqualLogic iSCSI storage system uses commodity hardware to provide its iSCSI storage services. It doesn't take a big leap of imagination to envision one storage system that combines the functionality of EqualLogic's iSCSI and Compellent's FC storage capabilities. Of course there are others already doing this, including Compellent themselves, who have their own iSCSI support already built into their FC storage system.

Which way to integrate?

Does EqualLogic survive such a merger? I think so. It's easy to imagine that EqualLogic may have the bigger market share today. If that's so, the right thing might be to merge Compellent FC functionality into EqualLogic. If Compellent has the larger market, the correct approach is the opposite. The answer probably lies with a little of both. It seems easier to add iSCSI functionality to an FC storage system than the converse, but the FC-into-iSCSI approach may be the optimum path for Dell because of the popularity of their EqualLogic storage.

What about NAS?

The only thing missing from this storage system is NAS. Of course, Compellent storage offers a NAS option through the use of a separate Windows Storage Server (WSS) front end. Dell's EqualLogic does much the same to offer NAS protocols for their iSCSI system. Neither of these is a bad solution, but they are not fully integrated NAS offerings such as those available from NetApp and others.

However, there is a little-discussed piece, the Dell-Exanet acquisition, which happened earlier this year. Perhaps the right approach is to integrate Exanet with Compellent first and target this at the high-end enterprise/HPC marketplace, keeping EqualLogic at the SMB end of the marketplace. It's been a while since I have heard anything about Exanet, and nothing since the acquisition earlier this year. Does it make sense to back-end a clustered NAS solution with FC storage? Probably.

—-

Much of this seems doable to me, but it all depends on making the right moves once the purchase is closed. If I look at where Dell is weakest (barring their OEM agreement with EMC), it's in the high-end storage space. Compellent probably didn't have much of a footprint there, possibly due to their limited distribution and support channels. A Dell acquisition could easily eliminate these problems and open up this space without having to do much other than start marketing, selling and supporting Compellent.

In the end, a storage solution supporting clustered NAS, FC, and iSCSI that combined functionality equivalent to Exanet, Compellent and EqualLogic based on commodity hardware (ouch!) could make a formidable competitor to what’s out there today if done properly. Whether Dell could actually pull this off and in a timely manner even if they purchase Compellent, is another question.

Comments?

Latest ESRPv3 (Exchange 2010) results analysis for 1K-to-5K mailboxes – chart of the month

(c) 2010 Silverton Consulting, Inc., All Rights Reserved

The chart is from SCI's October newsletter/performance dispatch on the Exchange 2010 Solution Reviewed Program (ESRP v3.0) and shows mailbox database access latencies for reads, writes and log writes. For this report we are covering solutions supporting from 1,001 up to 5,000 mailboxes (1K-to-5K mbx); larger and (a few) smaller configurations have been covered in previous performance dispatches. On latency charts like this, lower is better.

We like this chart because in our view it represents a reasonable measure of email user experience. As users read and create new emails they are actually reading Exchange databases and writing databases and logs. Database and log latencies should show up as longer or shorter delays in these activities. (Ok, not exactly true, email client and Exchange server IO aren't the same thing. But ultimately every email sent has to be written to an Exchange database and log at some point, and every new email read in has to come from an Exchange database as well.)

A few caveats are in order for this chart.

  • Xiotech’s top run (#1) did not use database redundancy or DAGs (Database Availability Groups) in their ESRPv3 run. Their feeling is that this technology is fairly new and it will take some time before it’s widely adopted.
  • There is quite the mix of SAS (#2,3,6,7,9&10), FC (#1,5&8) and iSCSI (#4) connected storage in this mailbox range.  Some would say that SAS connected storage should have an advantage here but that’s not obvious from the rankings.
  • Vendors get to select the workload intensity for any ESRPv3/Jetstress run; e.g., the solutions shown here used between 0.15 IO/sec/mailbox (#9&10) and 0.36 IO/sec/mailbox (#1). IO intensity is just one of the myriad Jetstress tweakable parameters that make analyzing ESRP so challenging. Normally this would only matter for database and log access counts, but heavier workloads can impact latencies as well.

Wide variance between read and write latencies

The other thing of interest in this chart is the wide span between read latencies and write (database and log) latencies for the same solution. Take the #10 Dell PowerEdge system, for example. It showed a database read latency of ~18msec but a database write latency of ~0.4msec. Why?

It turns out this Dell system had only 6 disk drives (2TB/7200 RPM). So few disk drives don't seem adequate to support the read workload and, as a result, show up poorly in database read latencies. However, write activity can mostly be masked with cache until it fills up, forcing write delays. With only 1,100 mailboxes and 0.15 IOs/sec/mailbox, the write workload apparently fits in cache well enough to be destaged over time without delaying ongoing write activity. Similar results appear for the other Dell PowerEdge (#6) and the HP Smart Array (#7), which had 12-2TB/7200 RPM and 24-932GB/7200 RPM drives respectively.
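
As a rough, back-of-the-envelope sketch (using only the round numbers quoted above, so purely illustrative), the offered load on this configuration is modest:

```python
# Back-of-the-envelope load estimate for the #10 Dell PowerEdge run,
# using only the round numbers quoted above (purely illustrative).
mailboxes = 1100
io_per_sec_per_mailbox = 0.15
drives = 6

total_io_per_sec = mailboxes * io_per_sec_per_mailbox  # ~165 IO/sec overall
io_per_drive = total_io_per_sec / drives                # ~27.5 IO/sec per drive if spread evenly

print(f"~{total_io_per_sec:.0f} IO/sec total, ~{io_per_drive:.1f} IO/sec/drive")
```

At that rate the aggregate write stream is small enough to destage from cache over time, while random reads that miss cache still pay the full 7200 RPM seek and rotation penalty, which is consistent with the ~18msec read versus ~0.4msec write latencies above.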

On the other hand, Xiotech’s #1 position had 20-360GB/15Krpm drives and EMC’s Celerra #4 run had 15-400GB/10Krpm drives, both of which were able to sustain a more balanced performance across reads and writes (database and logs).  For Xiotech’s #5 run they used 40-500GB/10Krpm drives.

It seems there is a direct correlation between drive speed and read database latencies.  Most of the systems in the bottom half of this chart have 7200 RPM drives (except for #8, HP StorageWorks MSA) and the top 3 all had 15Krpm drives.  However, write latencies don’t seem to be as affected by drive speed and have more to do with the balance between workload, cache size and effective destaging.

The other thing that's apparent from this chart is that SAS connected storage continues to be an effective solution for this range of Exchange configurations, following a trend first shown in ESRP v2 (Exchange 2007) results. We reported on this in our January ESRPv2 analysis dispatch this year.

The full dispatch will be up on our website in a couple of weeks, but if you are interested in seeing it sooner, just sign up for our free newsletter (see upper right) or subscribe by email and we will send you the current issue with download instructions for this and other reports.

As mentioned previously ESRP/Jetstress results are difficult to compare/analyze and we continue to welcome any constructive suggestions on how to improve.

Enterprise data storage defined and why 3PAR?

More SNW hall servers and storage

Recent press reports about a bidding war for 3PAR bring into focus the expanding need for enterprise class data storage subsystems.  What exactly is enterprise storage?

Defining enterprise storage is fraught with problems, but I will take a shot. Enterprise class data storage has:

  • Enhanced reliability, high availability and serviceability – meaning it hardly ever fails, it keeps operating (on redundant components) when it does fail, and repairing the storage when the rare failure occurs can be accomplished without disrupting ongoing storage services
  • Extreme data integrity – goes beyond just RAID storage, meaning that these systems lose data very infrequently, provide the latest data written to a location when read and will tell you when data cannot be accessed.
  • Automated I/O performance – meaning sophisticated caching algorithms that try to keep ahead of sequential I/O streams, buffer actively read data, and buffer write data in non-volatile cache before destaging to disk or other media.
  • Multiple types of storage – meaning the system supports SATA, SAS and/or FC disk drives and SSDs or Flash storage
  • PBs of storage – meaning behind one enterprise class storage (sub-)system one can support over 1PB of storage
  • Sophisticated functionality – meaning the system supports multiple forms of offsite replication, thin provisioning, storage tiering, point-in-time copies, data cloning, administration GUIs/CLIs, etc.
  • Compatibility with all enterprise O/Ss – meaning the storage has been tested and is on hardware compatibility lists for every major operating system in use by the enterprise today.

As for storage protocol, it seems best to leave this off the list.  I wanted to just add block storage, but enterprises today probably have as much if not more external file storage (CIFS or NFS) as they have block storage (FC or iSCSI).  And the proportion in file systems seems to be growing (see IDC report referenced below).

In addition, while I don't like the non-determinism of iSCSI or file access protocols, this doesn't seem to stop such storage from putting up pretty impressive performance numbers (see our performance dispatches). Anything that can crack 100K I/O or file operations per second probably deserves to be called enterprise storage, as long as it meets the other requirements. So maybe I should add high-performance storage to the list above.

Why the sudden interest in enterprise storage?

Enterprise storage has been around arguably since the 2nd half of last century (for mainframe systems) but lately has become even more interesting as applications deploy to the cloud and server virtualization (from VMware, Microsoft Hyper-V and others) takes over the data center.

Cloud storage and cloud computing services are lowering the entry points for storage and processing, enabling application deployments which were heretofore unaffordable.  These new cloud applications consume storage at increasing rates and don’t seem to be slowing down any time soon.  Arguably, some cloud storage is not enterprise storage but as service levels go up for these applications, providers must ultimately turn to enterprise storage.

In addition, server virtualization transforms the enterprise data center from a single application per server to easily 5 or more applications per physical server.  This trend is raising server utilization, driving more I/O, and requiring higher capacity.  Such “multi-application” storage almost always requires high availability, reliability and performance to work well, generating even more demand for enterprise data storage systems.

Despite all the demand, worldwide external storage revenues dropped 12% last year according to IDC. Now the economy had a lot to do with this decline, but another factor reducing external storage revenue is the ongoing drop in the price of storage on a $/GB basis. To this point, the same IDC report stated that external storage capacity increased 33% last year.

Why do Dell & HP want 3PAR storage?

Margins on enterprise storage are good, some would say very good.  While raw disk storage can be had at under $0.50/GB, enterprise class storage is often 10 or more times that price.  Now that has to cover redundant hardware, software/firmware engineering and other characteristics, but this still leaves pretty good margins.

In my mind, Dell would see enterprise storage as a natural extension of their current enterprise server business. They already sell to and support these customers; including enterprise class storage just adds another product to the mix. Developing enterprise storage from scratch is probably a 4-7 year journey with the right people; buying 3PAR puts them in the market today with a competitive product.

HP is already in the enterprise storage market today with their XP and EVA storage subsystems. However, having their own 3PAR enterprise class storage may get them better margins than their current XP storage, OEMed from HDS. But I think Chuck Hollis's post on HP's counter bid for 3PAR may have revealed another side to this discussion – sometimes M&A is as much about constraining your competition as it is about adding new capabilities to a company.

——

What do you think?

Primary storage compression can work

Dans la nuit des images (Grand Palais) by dalbera (cc) (from flickr)

Since IBM announced their intent to purchase StorWize, there has been much discussion on whether primary storage data compression can be made to work. As far as I know StorWize only offered primary storage compression for file data, but there is nothing that prohibits doing something similar for block storage, as long as you have some control over how blocks are laid down on disk.

Although secondary block data compression has been around for years in enterprise tape, and more recently in some deduplication appliances, primary storage compression pre-dates secondary storage compression. STK delivered primary storage data compression with Iceberg in the early 90's, but it wasn't until a couple of years later that they introduced compression on tape.

In both primary and secondary storage, data compression works to reduce the space needed to store data.  Of course, not all data compresses well, most notably image data (as it’s already compressed) but compression ratios of 2:1 were common for primary storage of that time and are normal for today’s secondary storage.  I see no reason why such ratios couldn’t be achieved for current primary storage block data.

Implementing primary block storage data compression

There is significant interest in implementing deduplication for primary storage, as NetApp has done, but supporting data compression is not much harder. I believe much of the effort to deduplicate primary storage lies in creating a method to address partial blocks out of order, which I would call data block virtual addressing and which requires some sort of storage pool. The remaining effort to deduplicate data involves implementing the chosen (dedupe) algorithm, indexing/hashing, and other administrative activities. These latter activities aren't readily transferable to data compression, but the virtual addressing and space pooling should be usable by data compression.
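
To make the virtual addressing idea a bit more concrete, here is a minimal sketch (purely illustrative, not any vendor's implementation) of a logical block map backed by a space pool, with zlib standing in for whatever compression algorithm a vendor might actually choose:

```python
import zlib


class CompressedBlockStore:
    """Illustrative sketch only: logical blocks are compressed and packed into
    a shared space pool; a map translates logical block numbers (LBNs) to
    (offset, length) extents, i.e. the data block virtual addressing above."""

    def __init__(self) -> None:
        self.pool = bytearray()   # space pool holding variable-length compressed extents
        self.block_map = {}       # LBN -> (offset, length) into the pool

    def write_block(self, lbn: int, data: bytes) -> None:
        compressed = zlib.compress(data)      # compression phase (could run in background)
        offset = len(self.pool)
        self.pool.extend(compressed)          # extents land wherever pool space is free
        self.block_map[lbn] = (offset, len(compressed))

    def read_block(self, lbn: int) -> bytes:
        offset, length = self.block_map[lbn]
        return zlib.decompress(bytes(self.pool[offset:offset + length]))


# Usage: a highly compressible 4KB block shrinks to a few dozen bytes in the pool
store = CompressedBlockStore()
store.write_block(0, b"A" * 4096)
assert store.read_block(0) == b"A" * 4096
print(len(store.pool), "bytes of pool used for one 4096-byte logical block")
```

A real implementation would also need free-space management and garbage collection of overwritten extents within the pool, which is where much of the engineering effort lies.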

Furthermore, block storage thin provisioning requires some sort of virtual addressing as does automated storage tiering.  So in my view, once you have implemented some of these advanced capabilities, implementing data compression is not that big a deal.

The one question that remains is whether to implement compression with hardware or software (see Better storage through hardware for more). Considering that most deduplication is done via software today, data compression in software should be doable. The compression phase could run in the background sometime after the data has been stored. Real-time decompression in software might take some work, but would cost considerably less than any hardware solution. Still, the intensive bit fiddling required to perform data compression/decompression may argue for some sort of hardware assist.

Data compression complements deduplication

The problem with deduplication is that it needs duplicate data.  This is why it works so well for secondary storage (backing up the same data over and over) and for VDI/VMware primary storage (with duplicated O/S data).

But data compression is an orthogonal, complementary technique which uses the inherent redundancy in information to reduce storage requirements. For instance, something like LZ compression takes advantage of the fact that in text some letters occur more often than others (see letter frequency). In English, 'e', 't', 'a', 'o', 'i', and 'n' represent over 50% of the characters in most text documents. By using shorter bit combinations to encode these letters, one can reduce the bit-length of any (English) text string substantially. Another example is run-length encoding, which takes any repeated character and substitutes a trigger character, the character itself, and a count of the number of repetitions for the repeated string.
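
As a toy illustration of run-length encoding along those lines (hypothetical code, with '~' assumed to be the trigger character and never part of the input):

```python
# Toy run-length encoder: runs of repeated characters become trigger + character
# + count. Hypothetical code; a real encoder would use a fixed-size count byte
# and escape any literal trigger characters in the data.
TRIGGER = "~"   # assumption: '~' never appears in the input text

def rle_encode(text: str, min_run: int = 4) -> str:
    out = []
    i = 0
    while i < len(text):
        run = 1
        while i + run < len(text) and text[i + run] == text[i]:
            run += 1
        if run >= min_run:
            out.append(f"{TRIGGER}{text[i]}{run}")   # trigger, char, count
        else:
            out.append(text[i] * run)                # short runs stay literal
        i += run
    return "".join(out)

print(rle_encode("aaaaaaaabcdddddddddd"))   # prints: ~a8bc~d10
```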

Moreover, the nice thing about data compression is that all these techniques can be readily combined to generate even better compression rates.  And of course compression could be applied after deduplication to reduce storage footprint even more.

Why would any vendor compress data?

For a few reasons:

  • Compression not only reduces storage footprint but with hardware assist it can also increase storage throughput. For example, if 10GB of data compresses down to 5GB, it should take ~1/2 the time to read.
  • Compression reduces the time it would take to clone, mirror or replicate data.
  • Compression increases the amount of data that can be stored on a system, which should incentivize customers to pay more for your storage.

In contrast, with data compression vendors might sell less storage. But the advantage of enterprise storage is in the advanced functionality/features and the higher reliability/availability/performance on offer. I see data compression as just another advantage of enterprise class storage and, as a feature, the user could enable or disable it and see how well it works for their data.

What do you think?