Hardware vs. software innovation – round 4

We, the industry and I, have had a long-running debate on whether hardware innovation still makes sense (see my Hardware vs. software innovation – rounds 1, 2, & 3 posts).

The news within the last week or so is that Dell-EMC cancelled their multi-million-dollar DSSD project, a hardware-innovation-intensive, Tier 0 flash storage solution offering 10 million IO/sec at 100µsec response times to a rack of servers.

DSSD required specialized hardware and software in the client or host server, specialized cabling between the client and the DSSD storage device and specialized hardware and flash storage in the storage device.

What ultimately did DSSD in was the emergence of the NVMe protocol, NVMe SSDs and RoCE (RDMA over Converged Ethernet) NICs.

Last week's post on Excelero (see my 4.5M IO/sec@227µsec … post) was just one example of what can be done with such "commodity" hardware. We also just finished a GreyBeardsOnStorage podcast (GreyBeards podcast with Zivan Ori, CEO & Co-founder, E8 storage) with E8 Storage, which takes yet another approach to NVMe-RoCE "commodity" hardware and delivers amazing performance.

Both Excelero and E8 Storage offer over 4 million IO/sec with ~120 to ~230µsec response times to multiple racks of servers. All this with off-the-shelf, commodity hardware and lots of software magic.

Lessons for future hardware innovation

What can be learned for future hardware innovation from the DSSD-to-NVMe (SSDs & protocol) plus RoCE technology transition:

  1. Closely track all commodity hardware innovations, especially ones that offer similar functionality and/or performance to what you are doing with your hardware.
  2. Intensely focus any specialized hardware innovation on the small subset of functionality that gives you the most benefit at minimum cost, and avoid unnecessary changes to other hardware.
  3. Speed up the hardware design-validation-prototype-production cycle as much as possible to get your solution to market faster, and try to stay ahead of commodity hardware innovation for as long as possible.
  4. When (not if) commodity hardware innovation emerges that provides similar functionality/performance, abandon your hardware approach as quickly as possible and adopt commodity hardware.

Of all the above, I believe the main problem is hardware innovation cycle time. Yes, hardware innovation costs too much (not discussed above), but I believe those costs are a concern only if the product doesn't succeed in the market.

When a storage (or any systems) company can start up and, in 18-24 months, produce a competitive product with only software development and aggressive hardware sourcing/validation/testing, a specialized hardware effort that takes 18 months to start and another 1-2 years to reach GA is way too long.

What’s the solution?

I think FPGAs have to be part of any solution to making hardware innovation faster. With FPGAs, hardware innovation can occur in days to weeks rather than months to years. Yes, ASICs cost much less per unit, but cycle time is THE problem from my perspective.

I'd like to think that ASIC development cycle times for design, validation, prototype and production could also be reduced, but I don't see how. Maybe AI can help reduce design-validation time. But independent fabs can only speed up the prototype and production phases for new ASICs so much.

ASIC failures also happen on a regular basis. There's got to be a way to fix ASIC and other hardware errors more quickly. Yes, some hardware bugs can be worked around in software, but occasionally the fix requires hardware changes. A quicker hardware-fix approach would help.

Finally, there must be an expectation that commodity hardware will catch up eventually, especially if the market is large enough. So an eventual changeover to commodity hardware should be baked in from the start.

~~~~

In the end, project failures like this happen. Hardware innovation needs to learn from them and move on. I commend Dell-EMC for making the hard decision to kill the project.

There will be a next time for specialized hardware innovation and it will be better. There are just too many problems that remain in the storage (and systems) industry and a select few of these can only be solved with specialized hardware.

Comments?

Picture credit(s): Gravestones by Sherry Nelson; Motherboard 1 by Gareth Palidwor; copy of a DSSD slide photo taken from an EMC presentation by the author, (c) Dell-EMC

4.5M IO/sec@227µsec 4KB Read on 100GBE with 24 NVMe cards #SFD12

At Storage Field Day 12 (SFD12) this week we talked with Excelero, a startup out of Israel. They offer software-defined block storage for Linux.

Excelero depends on NVMe SSDs in servers (hyperconverged or as a storage system), 100GbE and RDMA NICs. (At the time I wrote this post, videos from the presentation were not available, but the TFD team assures me they will be up on their website soon.)

I know, yet another software defined storage startup.

Well, yesterday they demoed a single storage system that generated 2.5M IO/sec of random 4KB writes or 4.5M IO/sec of random 4KB reads. I didn't record the random write average response time, but it was less than 350µsec; the random read average response time was 227µsec. They only did these 30-second test runs a couple of times, but the IO performance was staggering.

But they used lots of hardware, right?

No. The target storage system used during their demo consisted of:

  • 1-Supermicro 2028U-TN24RT+, a 2U dual-socket server with up to 24 NVMe 2.5″ drive slots;
  • 2-Mellanox ConnectX-5 2 x 100Gb/s Ethernet (R[DMA])-NICs; and
  • 24-Intel 2.5″ 400GB NVMe SSDs.

They also had a Dell Z9100-ON switch supporting 32 x 100Gb/s QSFP28 ports, and I think they were using 4 hosts, but none of this was part of the storage target system.
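As a quick back-of-the-envelope sanity check (my own math, not Excelero's): 4.5M 4KB reads per second is roughly 18GB/sec of payload, which fits, if barely, within a couple of 100Gb/s links.

```python
# Back-of-the-envelope: does 4.5M x 4KB reads/sec fit on the demo's 100GbE links?
io_per_sec = 4.5e6                      # random 4KB read IOPS from the demo
io_size_bytes = 4 * 1024
payload_GBps = io_per_sec * io_size_bytes / 1e9    # ~18.4 GB/sec of data moved

links = 2                               # conservatively, two 100Gb/s ports in use
link_GBps = links * (100e9 / 8) / 1e9   # ~25 GB/sec raw, before protocol overhead

print(f"payload ~{payload_GBps:.1f} GB/sec vs. ~{link_GBps:.0f} GB/sec of raw link")
# payload ~18.4 GB/sec vs. ~25 GB/sec -- tight, but it fits
```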

I don't recall the exact CPU used on the target, but it was a relatively low-end, cheap ($300 or so), dual-core, standard Intel CPU. I think they said the total target hardware cost $13K or so.

I priced out an equivalent system: 24 400GB 2.5″ Intel 750 NVMe SSDs would cost around $7.8K (Newegg); the 2 Mellanox ConnectX-5 cards, $4K (Neutron USA); and the Supermicro plus an Intel CPU, around $1.5K. So the total system comes in close to the ~$13K they quoted.
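For what it's worth, here's that bill-of-materials math in one place (street prices at the time, so treat them as approximations):

```python
# Rough equivalent-system pricing (street prices, approximate)
bom = {
    "24 x Intel 750 400GB NVMe SSDs": 7800,   # Newegg
    "2 x Mellanox ConnectX-5 NICs":   4000,   # Neutron USA
    "Supermicro 2028U + Intel CPU":   1500,
}
total = sum(bom.values())
print(f"total ~${total/1000:.1f}K")   # ~$13.3K, close to the ~$13K quoted
```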

But it burned out the target CPU, didn’t it?

During the 4.5M IO/sec random read benchmark, the storage target CPU was at 0.3% busy, and the highest-consuming process on the target CPU was the Linux top command used to display process status.

Excelero claims that the storage target system consumes absolutely no CPU processing to service a 4K read or write IO request. All IO processing is done in hardware (the R[DMA]-NICs, the NVMe drives and the PCIe bus), which bypasses the storage target CPU altogether.

We didn't look at host CPU utilization, but driving 4.5M IO/sec would take a high level of CPU power even if their client software does most of this via RDMA messaging magic.

How is this possible?

Their client software running in the Linux host is roughly equivalent to an iSCSI initiator, but it talks a special RDMA protocol (Excelero's patent-pending RDDA protocol). The client adds an IO request to the NVMe device's submission queue and then rings the doorbell on the target device; the SSD takes the request off the queue and executes it. In addition to the submission queue IO request, they pre-program the PCIe MSI interrupt request message to somehow (?) program the target system's R-NIC to send the read data (or write status) back to the client host.

So there's really no target CPU processing for any NVMe message handling or interrupt processing; it's all done by the client software and handled between the NVMe drive and the target and client R-NICs.

The result is that data is sent back to the requesting host automatically: from the drive to the target R-NIC over the target's PCIe bus, from the target system to the client system via RDMA across 100GbE and the R-NICs, and then from the client R-NIC into the client's IO memory data buffer over the client's PCIe bus.

Writes are a bit simpler, as the 4KB write data can be encapsulated into the submission queue command for the write operation that's sent to the NVMe device, and the write IO status is a relatively small amount of data that needs to be sent back to the client.
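Here's how I picture the read flow. To be clear, this is my own conceptual sketch with made-up class and function names, not Excelero's actual (patent-pending) RDDA implementation:

```python
# Conceptual simulation of the client-driven read flow, as I understand it.
# All names here are hypothetical -- this is NOT Excelero's actual RDDA code.

class TargetSSD:
    """Stand-in for an NVMe SSD on the storage target."""
    def __init__(self):
        self.submission_queue = []

    def ring_doorbell(self, rnic):
        # The SSD pulls the command off its queue and DMAs the data over the
        # target's PCIe bus to the target R-NIC; the pre-programmed R-NIC then
        # RDMAs it into the client's buffer. No target CPU is involved.
        cmd = self.submission_queue.pop(0)
        rnic.rdma_write(cmd["client_buffer"], bytes(cmd["length"]))

class TargetRNIC:
    """Stand-in for the target's RDMA NIC."""
    def rdma_write(self, client_buffer, data):
        client_buffer.extend(data)   # stands in for an RDMA write into client memory

def client_read(ssd, rnic, lba, length=4096):
    buf = bytearray()
    # The client builds the NVMe read command itself and RDMA-writes it straight
    # into the target SSD's submission queue, then "rings the doorbell" remotely.
    ssd.submission_queue.append({"op": "read", "lba": lba,
                                 "length": length, "client_buffer": buf})
    ssd.ring_doorbell(rnic)
    return buf

data = client_read(TargetSSD(), TargetRNIC(), lba=0)
print(len(data))   # 4096 bytes land in the client buffer, no target CPU cycles spent
```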

NVMe optimized for 4KB IO

Of course, the NVMe protocol is set up to transfer up to 4KB of data with a (write command) submission queue element. And the PCIe MSI interrupt return message can be programmed (I think) to write a command in the R-NIC that causes the data for a read command to be transferred directly into the client's memory using RDMA, with no CPU activity whatsoever in either operation. As long as your IO request is 4KB or less, this all works fine.

There is some minor CPU processing on the target to configure a LUN and set up the client to target connection. They essentially only support replicated RAID 10 protection across the NVMe SSDs.

They also showed another demo which used the same drive both across the 100Gb/s Ethernet network and locally, as direct-attached NVMe storage. The response times shown for local and remote access were within 5µsec of each other. This means the overhead of going over the Ethernet link rather than staying local costs an additional ~5µsec of response time.

Disaggregated vs. aggregated configuration

In addition to their standalone (disaggregated) storage target solution, they also showed an (aggregated) Linux-based, hyperconverged client-target configuration with a smaller number of NVMe drives in each node. This could be used in configurations where VMs and both the client and target Excelero software run on the same hardware.

Simply amazing

The product has no advanced data services: no high availability, snapshots, erasure coding, dedupe, compression, replication, or thin provisioning. But if I can clone a LUN at, let's say, 2.5M IO/sec, I can get by without snapshots. And with hardware this cheap, I'm not sure I care about thin provisioning, dedupe and compression. Remote site replication is never going to happen at these speeds. OK, HA is an important consideration, but I think they can make that happen, and they do support RAID 10 (data mirroring), so protection against an NVMe device failure is there.

But if you want 4.5M 4K random reads or 2.5M 4K random writes on <$15K of hardware and happen to be running Linux, I think they have a solution for you. They showed some volume provisioning software, but I was too overwhelmed trying to make sense of their performance to notice.

Yes, it really screams for 4KB IO. But that covers a lot of IO activity these days. And if you can do millions of them a second, splitting bigger IOs up into 4K chunks should not be a problem.

As far as I could tell, they are selling Excelero software as a standalone product and offering it to OEMs. They already have a few customers using Excelero's standalone software and will be announcing OEMs soon.

I really want one for my Mac office environment, although what I'd do with millions of IO/sec is another question.

Comments?

Intel’s Optane (3D Xpoint) SSD specs in the wild

Read an article the other day in Ars Technica (Specs for 1st Intel 3DX SSD…) previewing the Intel Optane specs for their 375GB 3D XPoint (3DX) flash card. The device is an NVMe-compliant, PCIe Gen3 add-in card in a half-height, half-length, low-profile form factor.

Intel’s Optane SSD vs. the competition

A couple of items from the Intel Optane spec sheet are of interest to me as a storage guru (I work through the ratio math in a short sketch right after this list):

  • 30 drive writes per day/12.3 PBW (written) – 3DX, at launch, advertised that it would have 1000 times the endurance of (2D-MLC?) NAND. Current flash cards (see Samsung SSD PRO NVMe 256GB flash card specs) offer about 200TBW (for the 256GB card) or 400TBW (for the 512GB card). The Samsung PRO is based on 3D (V-)NAND, so its endurance is much better than 2D-MLC at these densities. That being said, the Optane drive still has ~40X the write endurance (per GB) of the PRO 950. Not quite 1000X, but certainly significantly better.
  • Sequential (bandwidth) performance (R/W) of 2400/2000 MB/sec – 3DX advertised 1000 times the performance of (2D-MLC, non-NVMe?) NAND. Current 3D (V-)NAND cards (see the Samsung SSD PRO above) offer (R/W) 2200/900 MB/sec for an NVMe device. The Optane's read bandwidth is a slight improvement, but its write bandwidth is a 2.2X improvement over current competitive devices.
  • Random 4KB IOPS performance (R/W) of 550K/500K – Similar to the previous bullet, 3DX advertised 1000 times the performance of (2D-MLC, non-NVMe?) NAND. Current 3D (V-)NAND cards like the Samsung SSD PRO offer random 4KB IOPS performance (R/W) of 270K/85K IOPS (@4 threads). Optane's random 4KB read performance is 2X the PRO 950's and its write performance is ~5.9X better.
  • IO latency of <10 µsec – 3DX advertised 10X better latency than the then-current (2D-MLC, non-NVMe) flash drives. According to Storage Review (Samsung 950 Pro M.2), the Samsung PRO 950 had a latency of ~22 µsec. Optane has at least 2X better latency than the current competition.
  • Density of 375GB in an HH-HL-LP card – 3DX advertised 1000X the density of (then-current) DRAM. Today Micron offers a 4GiB DDR4 288-pin DIMM, which is probably 1/2 the size of the HH flash card, so in the same space that could be 8GiB. This says the Optane is roughly 50X denser than today's DRAM.
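For those who like to see the arithmetic, here's how I got the ratios above, using only the spec-sheet numbers cited and normalizing endurance per GB of capacity (my choice of normalization):

```python
# Ratio math behind the Optane vs. Samsung 950 PRO bullets above
optane = {"TBW": 12_300, "GB": 375, "wr_MBps": 2000, "wr_iops": 500_000,
          "rd_iops": 550_000, "latency_us": 10}
pro950 = {"TBW": 400, "GB": 512, "wr_MBps": 900, "wr_iops": 85_000,
          "rd_iops": 270_000, "latency_us": 22}

endurance_ratio = (optane["TBW"] / optane["GB"]) / (pro950["TBW"] / pro950["GB"])
print(f"endurance per GB: {endurance_ratio:.0f}X")                           # ~42X (the ~40X above)
print(f"write bandwidth:  {optane['wr_MBps'] / pro950['wr_MBps']:.1f}X")     # ~2.2X
print(f"write IOPS:       {optane['wr_iops'] / pro950['wr_iops']:.1f}X")     # ~5.9X
print(f"read IOPS:        {optane['rd_iops'] / pro950['rd_iops']:.1f}X")     # ~2.0X
print(f"latency:          {pro950['latency_us'] / optane['latency_us']:.1f}X better")  # ~2.2X
```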

Please note, when 3DX was launched, ~2 years ago, the then current NAND technology was 2D-MLC and NVMe was just a dream. So comparing launch claims against today’s current 3D-NAND, NVMe drives is not a fair comparison.

Nevertheless, the Optane SSD performs considerably better than current competitive NVMe drives and has significantly better endurance than current 3D (V-)NAND flash drives. All of which is a great step in the right direction.

What about DRAM replacement?

At launch, 3DX was also touted as a higher-density, potential replacement for DRAM. But so far we haven't seen any specs for what 3DX NVM looks like on a memory bus. It has much better density than DRAM, but we would need to see 3DX memory access times under 50ns for it to have a future as a DRAM replacement. Optane's NVMe SSD at <10 µsec is about 200X too slow, but then again it's not in a memory device configuration nor is it attached to a memory bus.

Comments?

Photo Credit(s): Intel Optane spec sheet from the Ars Technica article; DDR4 DRAM from Wikimedia user Dsimic

Microsoft ESRP database access latencies – chart of the month

[Chart: Microsoft ESRP (Exchange 2013) database access latencies, over-5,000-mailbox category]
The above chart was included in last month's SCI e-newsletter and depicts recent Microsoft Exchange (2013) Solution Reviewed Program (ESRP) results for database access latencies. Storage systems new to this 5,000-mailbox-and-over category include Oracle, Pure and Tegile. As these new systems are all-flash arrays (AFAs), we are starting to see significant reductions in database access latencies; the #4 system (Nimble Storage) is a hybrid (disk and flash) array.

As you recall, ESRP reports on three database access latencies: database read, database write and log write. All three are shown above, but we sort and rank the list based on database read latency alone.

It's hard to see above, but reading the ESRP reports one finds that the top 3 systems had average database read latencies of 1.04, 1.06 and 1.07 milliseconds. So the separation between the top 3 is less than 40 microseconds.

The same top 3 systems' database write access latencies were 1.75, 1.62 and 3.07 milliseconds, respectively. So if we were ranking the above on write response time, Pure would have come in #1.

Their log write access latencies were 0.67, 0.41 and 0.82 milliseconds, and once again, if we were ranking based on log response time, Pure would be #1.
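To make the ranking point concrete, here's a small sketch that re-sorts those top three systems on each latency metric, using the numbers quoted above (the systems are listed in their database-read ranking order; per the write and log rankings just mentioned, the read-rank #2 system has to be Pure):

```python
# Re-ranking the top three ESRP systems on each latency metric (msec),
# listed in their database-read ranking order from the chart
systems = [
    {"read_rank": 1, "db_read": 1.04, "db_write": 1.75, "log_write": 0.67},
    {"read_rank": 2, "db_read": 1.06, "db_write": 1.62, "log_write": 0.41},  # Pure, per the text
    {"read_rank": 3, "db_read": 1.07, "db_write": 3.07, "log_write": 0.82},
]

for metric in ("db_read", "db_write", "log_write"):
    best = min(systems, key=lambda s: s[metric])
    print(f"best {metric}: read-rank #{best['read_rank']} at {best[metric]} msec")
# Sorting on database writes or log writes moves the read-rank #2 system (Pure) to #1.
```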

It's unclear whether Exchange customers would want to deploy AFAs for their database and log files, but these three ESRP reports, plus Nimble's, show that there should be no problem with AFA performance in these environments.

What about data reduction?

What's unclear to me is how much data reduction technologies contributed to the AFA and hybrid solutions' ESRP performance. Data reduction advantages would most likely show up in database IOPS counts more than in response times, but if present, they may still reduce access latencies, as there would potentially be less data to transfer between the backend of the storage system and the storage system cache.

ESRP reports do not officially report on a vendor’s data reduction effectiveness, so we are left with whatever the vendor decides to say.

In that respect, Pure's FlashArray//m20 ESRP report indicated that their "data reduction is significantly higher" than what they normally see (4:1), because Jetstress (the ESRP benchmark program) generates lots of duplicated data.

I couldn't find anything similar from Tegile (T3800) or Nimble Storage indicating how well their data reduction technologies worked in Jetstress compared to normal. They did reference their compression effectiveness via database size, but I have found that number to be somewhat less useful: historically it mostly reflected the amount of over-provisioning used by disk-only systems, and for AFAs and hybrid storage it's unclear how much is data reduction effectiveness vs. over-provisioning.

For example, Pure, Tegile and Nimble reported a "database capacity utilization" of 4.2%, 60% and 74.8%, respectively. And Nimble did report that over their entire customer base, Exchange data has, on average, a 61.2% capacity savings.

So you tell me: what was the effective data reduction for Pure's, Tegile's and Nimble's respective Jetstress runs? From my perspective, Pure's 4.2% number looks about right (it says the actual database data fit in 4.2% of the SSD storage, for a ~23.8:1 reduction effectiveness on Jetstress/ESRP data). I find what Tegile and Nimble indicated harder to believe, as it would imply only a 1.7:1 and 1.33:1 reduction effectiveness, respectively, for Jetstress/ESRP data.
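Here's the simple math behind those numbers (my reading of the reports, not the vendors' own claim): a reported "database capacity utilization" of U implies a reduction ratio of roughly 1/U.

```python
# Reduction ratio implied by the reported "database capacity utilization"
for vendor, utilization in (("Pure", 0.042), ("Tegile", 0.60), ("Nimble", 0.748)):
    print(f"{vendor}: {1 / utilization:.1f}:1")
# Pure:   23.8:1
# Tegile:  1.7:1
# Nimble:  1.3:1
```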

The Oracle FS1-2 doesn't seem to have any data reduction capabilities and reported 100% storage capacity used by the Exchange database.

So that's it: Jetstress uses "significantly reducible" data for some AFA systems. But in the field, the advantage of data reduction techniques is much smaller.

I think it’s time that ESRP stopped using significantly reducible data in their Jetstress program and tried to more closely mimic real world data.

Want more?

The October 2016 and our other ESRP reports have much more information on Microsoft Exchange performance. Moreover, there's a lot more performance information, covering email and other (OLTP and throughput intensive) block storage workloads, in our SAN Storage Buying Guide, available for purchase on our website. More information on file and block protocol/interface performance is also included in SCI's SAN-NAS Buying Guide, also available from our website. And if you're interested in file system performance, please consider purchasing our NAS Buying Guide, also available on our website.

~~~~

The complete ESRP performance report went out in SCI's October 2016 Storage Intelligence e-newsletter. A copy of the report will be posted on our SCI dispatches (posts) page over the next quarter or so (if all goes well). However, you can get the latest storage performance analysis now and subscribe to future free SCI Storage Intelligence e-newsletters by using the signup form in the sidebar or by subscribing here.

As always, we welcome any suggestions or comments on how to improve our ESRP  performance reports or any of our other storage performance analyses.

 

QoM1610: Will NVMe over Fabric GA in enterprise AFA by Oct’2017

NVMe over fabric (NVMeoF) was a hot topic at Flash Memory Summit last August. Facebook and others were showing off their JBOFs (see my Facebook moving to JBOF post), but there were plenty of other NVMeoF offerings at the show.

NVMeoF hardware availability

When Brocade announced their Gen6 switches, they made a point of saying that both their Gen5 and Gen6 switches currently support NVMeoF protocols. In addition to Brocade's support, in Dec 2015 QLogic announced NVMeoF support for select HBAs. Also, as of July 2016, Emulex announced NVMeoF support in their HBAs.

From an Ethernet perspective, QLogic has an NVMe Direct NIC which supports NVMe protocol offload for iSCSI. But even without NVMe Direct, 40GbE & 100GbE Ethernet with iWARP, RoCE v1/v2, or iSER (iSCSI over RDMA) could all readily support NVMeoF. The nice thing about Ethernet as an NVMeoF fabric is that the same wire not only supports iSCSI & FCoE, but CIFS/SMB and NFS as well.

InfiniBand and Omni-Path Architecture already support native RDMA, so they should already support NVMeoF.

So hardware/firmware is already available for any enterprise AFA customer that wants NVMeoF for their data center storage.

NVMeoF Software

Intel claims that ~90% of the NVMe software driver functionality is the same for NVMeoF. The primary differences between the two seem to be the NVMeoF discovery and queueing mechanisms.

There are two fabric methods that can be used to implement NVMeoF data and command transfers: capsule mode, where NVMe commands and data are encapsulated in normal fabric packets; and fabric-dependent mode, where drivers make use of native fabric memory transfer mechanisms (RDMA, …) to transfer commands and data.
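A toy way to picture capsule mode (my own sketch, not the actual NVMeoF wire format): the fabric packet just wraps the standard 64-byte NVMe submission queue entry, optionally followed by in-capsule data.

```python
# Toy model of an NVMeoF command capsule: the fabric packet carries the same
# 64-byte NVMe submission queue entry, optionally with in-capsule data.
# (Conceptual only -- not the real wire format.)
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommandCapsule:
    sqe: bytes                                 # the standard 64-byte NVMe command
    in_capsule_data: Optional[bytes] = None    # e.g. a small write payload

    def to_fabric_packet(self) -> bytes:
        assert len(self.sqe) == 64
        return self.sqe + (self.in_capsule_data or b"")

# A 4KB write whose data rides along in the capsule itself:
pkt = CommandCapsule(sqe=bytes(64), in_capsule_data=bytes(4096)).to_fabric_packet()
print(len(pkt))   # 4160 bytes handed to the fabric transport (RDMA, FC, ...)
```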

A (Linux) host driver for NVMeoF is currently available from Seagate. And as a result, support for NVMeoF in Linux is currently under development and not far from release in an upcoming kernel (I think). (Mellanox has a tutorial on how to compile a Linux kernel with NVMeoF driver support.)

With Linux coming out, Microsoft Windows and VMware can’t be far behind. However, I could find nothing online, aside from base NVMe support, for either platform.

NVMeoF target support is another matter but with NICs/HBAs & switch hardware/firmware and drivers presently available, proprietary storage system target drivers are just a matter of time.

Boot support is a major concern. I could find no information on BIOS support for booting off of a NVMeoF AFA. Arguably, one may not need boot support for NVMeoF AFAs as they are probably not a viable target for storing App code or OS software.

From what I could tell, normal fabric multi-pathing support should work fine with NVMeoF. This should allow for HA NVMeoF storage, a critical requirement for enterprise AFA storage systems these days.

NVMeoF advantages/disadvantages

Chelsio and others have shown that NVMeoF adds only ~8μsec of overhead beyond native NVMe SSDs, which, if true, would warrant implementation on all NVMe AFAs. This may or may not impact max IOPS, depending on the scalability of NVMeoF.

For instance, servers (PCIe bus hardware) typically limit the number of private NVMe SSDs to 255 or less. With NVMeoF, one could potentially have 1000s of shared NVMe SSDs accessible to a single server. At this scale, a single server attached to a scale-out NVMeoF AFA (cluster) could be supplied with ~4X the IOPS that the same server could achieve using private NVMe storage.

Base-level NVMe SSD support and protocol stacks are starting to be available from most flash vendors and for operating systems such as Linux, FreeBSD, VMware, Windows, and Solaris. If Intel's claim of 90% common software between NVMe and NVMeoF drivers is true, then providing host NVMeoF drivers should be a relatively easy development project.

The need for special Ethernet hardware that supports RDMA may delay some storage vendors from implementing NVMeoF AFAs quickly. The lack of BIOS boot support may be a minor irritant in comparison.

NVMeoF forecast

AFA storage systems, as far as I can tell, are all about selling high IOPS and very-low latency IOs. It would seem that NVMeoF would offer early adopter AFA storage vendors a significant performance advantage over slower paced competition.

In previous QoM/QoW posts we have established that there are about 13 new enterprise storage systems that come out each year. Probably 80% of these will be AFA, given the current market environment.

Of the ~10.4 AFA systems coming out over the next year, ~20% pride themselves on being the lowest-latency solutions in the market and thus command high margins. One would think these systems would be the first to adopt NVMeoF. But most of them have their own proprietary flash modules, don't use standard (NVMe) SSDs, and can use their own proprietary interface to their proprietary flash storage. This will delay NVMeoF implementation for them until they can convert their flash storage to NVMe, which may take some time.

On the other hand, most (~70%) of the other AFA systems, which currently use SAS/SATA SSDs, could boost their IOPS counts and drastically reduce their IO response times by implementing NVMe SSDs and NVMeoF. But converting SAS/SATA backends to NVMe will take time and effort.

But there is a select few (~10%) of AFA systems that already use NVMe SSDs, and these would seem to have a fast track towards implementing NVMeoF. The fact that NVMeoF is supported over all fabrics and all storage interface protocols makes it even easier.
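Spelling out the segment arithmetic I'm using here (my own rough estimates, obviously):

```python
# Segment counts behind the forecast reasoning above (rough estimates)
new_systems_per_year = 13
afa_share = 0.80
afa_systems = new_systems_per_year * afa_share          # ~10.4 AFAs/year

segments = {"proprietary flash modules": 0.20,
            "SAS/SATA SSD backends":      0.70,
            "already NVMe SSD based":     0.10}
for name, share in segments.items():
    print(f"{name}: ~{afa_systems * share:.1f} systems/year")
# proprietary flash modules: ~2.1
# SAS/SATA SSD backends:     ~7.3
# already NVMe SSD based:    ~1.0
```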

Moreover, NVMeoF has been under discussion since the summer of 2015, which tells me that astute AFA vendors have already had 18+ months to develop it. And with NVMeoF host drivers & hardware available since Dec 2015, the hardware and software exist to test and validate against.

I believe that NVMeoF will be GA’d within the next 12 months by at least one enterprise AFA system. So my QoM1610 forecast for NVMeoF is YES, with a 0.83 probability.

Comments?

 

 

 

SPC-1 IOPS performance per GB-NAND – chart of the month

[Bar chart: top 10 SPC-1 submissions ranked by IOPS/GB-NAND; #1 is the DataCore Parallel Server]
(c) 2016 Silverton Consulting, All Rights Reserved

The above is an updated chart from last month's SCI newsletter StorInt™ SPC Performance Report, depicting the top 10 SPC-1 submissions by IOPS™ per GB-NAND. We have been searching for a while now for a way to depict storage system effectiveness when using SSD or other flash storage. We have used IOPS/SSD in the past, but IOPS/GB-NAND looks better.

Calculating IOPS/GB-NAND

SPC-1 does not report this metric, but it can be calculated by dividing IOPS by NAND storage capacity. One can find the NAND storage capacity by looking over the SPC-1 full disclosure report (FDR) and totaling up the NAND storage across all the SSDs and flash devices in the configuration. This is total NAND capacity, not Total ASU (used storage) Capacity. GB-NAND reflects just what's indicated for SSD/flash device capacity in the configuration section. This is not necessarily the device's physical NAND capacity when over-provisioned, but at least it's available in the FDR.

DataCore Parallel Server IOPS/GB-NAND explained

The DataCore Parallel Server generated over 5M IOPS (IOs/second) under an SPC-1 (OLTP-like) workload. With its 54 480GB SSDs, totaling ~25.9TB of NAND capacity, that gives it just under 200 IOPS/GB-NAND. The chart in the original report was incorrect: there we used 36 480GB SSDs, or ~17.3TB of NAND, to compute IOPS/GB-NAND, which gave them just under 300 IOPS/GB-NAND. (The full report has since been corrected and is available for re-download by subscribers to our newsletter.)
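The corrected arithmetic looks like this (using 5M IOPS as a round number for the "over 5M" reported):

```python
# IOPS/GB-NAND for the DataCore Parallel Server SPC-1 submission (corrected)
iops = 5_000_000            # "over 5M IOPS" per the FDR (rounded here)
ssds, ssd_GB = 54, 480      # 54 x 480GB SATA SSDs
nand_GB = ssds * ssd_GB     # 25,920 GB, i.e. ~25.9TB of NAND
print(f"{iops / nand_GB:.0f} IOPS/GB-NAND")   # ~193, i.e. just under 200
```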

The 480GB (Samsung SM863 MZ-7KM480E) SSDs were all SATA attached. Samsung lists these SSDs as V-NAND, MLC drives rated at 97K random reads and 26K random writes. At over 5M IOPS, the system should be running close to 100% of the SSDs' rated performance. However, DataCore's Parallel Server included 2 controllers with a total of 3TB of DRAM cache, which were then SAS-connected to 4 Dell MD1220 storage arrays, each with 512GB of DRAM cache, so the total configuration had about 5TB of DRAM in it, most of which would have been used as an IO cache.

The SPC-1 submission only used 11.8TB (Total ASU capacity) of storage. All that DRAM cache helps explain how they attained 5M IOPS. Having a multi-tiered cache like the DataCore-MD1220 configuration doesn't ensure that all the cache is used effectively, but even without cache-tiering logic there might not be much overlap between the MD1220 and Parallel Server caches. It would be more interesting to see how busy the SSDs were during this SPC-1 run.

How random the SPC-1 workload really is, is subject to much speculation in the industry. Suffice it to say it's not 100% random, but then, what is? Non-random OLTP workloads would tend to favor larger caches.

SPC is coming out with a new version of their benchmark with supplementary information which may shed more light on device busyness.

All SPC-1 benchmark submissions are available at storageperformance.org.

Want more?

The August 2016 and our other SPC Performance reports have much more information on SPC-1 and SPC-2 performance. Moreover, there's a lot more performance information, covering email and other (OLTP and throughput intensive) block storage workloads, in our SAN Storage Buying Guide, available for purchase on our website. More information on file and block protocol/interface performance is included in SCI's SAN-NAS Buying Guide, also available from our website.

~~~~

The complete SPC performance report went out in SCI's August 2016 Storage Intelligence e-newsletter. A copy of the report will be posted on our SCI dispatches (posts) page over the next quarter or so (if all goes well). However, you can get the latest storage performance analysis now and subscribe to future free SCI Storage Intelligence e-newsletters by using the signup form in the sidebar or by subscribing here.

 

NetApp updates their StorageGRID Webscale solution

NetApp announced a new version of their object storage solution, StorageGRID Webscale 10.3.

At a former employer, I first talked with StorageGRID (Bycast at the time) a decade or so ago. At that time, they were focused on medical and healthcare verticals and had a RAIN (redundant array of independent nodes) storage solution.  It has come a long way.

StorageGRID Business is booming

On the call, NetApp announced they sold 50PB of StorageGRID in FY’16 with 20PB of that in the last quarter and also reported 270% Y/Y revenue growth, which means they are starting to gain some traction in the marketplace. Are we seeing an acceleration of object storage adoption?

As you may recall, StorageGRID comes as a software-only solution that runs on just about any white-box server with DAS, or as two hardware appliances: the SG5612 (12 drive) and the SG5660 (60 drive) nodes. You can mix and match any appliance with any white-box, software-only node; they don't have to have the same capacity or performance. But all nodes need network and controller/admin node(s) access.

StorageGRID past

Somewhere during Bycast's journey they developed support for tape archives and information lifecycle management (ILM) for objects. The previous generation, StorageGRID 10.2, had a number of features, including:

  • S3 cloud archive support, which allowed objects to be migrated to AWS S3 as they were no longer actively accessed;
  • NAS bridge support, which allowed CIFS/SMB or NFS access to StorageGRID objects, which could also be read as S3 objects for easier migration to/from object storage;
  • A hierarchical erasure coding option, optimized for efficiently storing large objects;
  • Node-level erasure coding support, which can be used to rebuild data after node drive failures without having to go outside the node for data retrieval;
  • Object byte-granular range read support, which allowed users to read an object at any byte offset without having to retrieve the entire object;
  • Support for the OpenStack Swift API, which made StorageGRID objects natively available to any OpenStack service; and
  • Software support for running as Docker containers or as a VM under VMware ESX or OpenStack KVM, which allowed StorageGRID software to run just about anywhere.

StorageGRID present and future

But customers complained that StorageGRID was too complex to install and update, requiring too much hand-holding from NetApp professional services. StorageGRID Webscale 10.3 was targeted at addressing these deficiencies. Some of the features in StorageGRID 10.3 include:

  • A radically simplified, more modern UI, with a new dashboard and policy wizard/editor, so that it's a lot easier to manage StorageGRID. All features of the UI are also available via RESTful API access, and the UI is the same for white-box, software-only implementations as well as appliance configurations;
  • Simplified, automated installation scripts, so that installations that used to take multiple steps, separate software installs and professional services support now use a full-solution software stack install, take only minutes, and can be done by customers on their own;
  • S3 object versioning support, so that objects can have multiple versions (limited via the UI, if needed), providing a snapshot-like capability for S3 data that protects against accidental object deletion (see the sketch after this list);
  • ILM policy change predictions/modeling, so that admins can now see how changes to ILM policies will impact StorageGRID; and
  • Even more flexibility in DAS storage, so that future StorageGRID configurations can support 10TB drives and 6TB FIPS-140 drive encryption, adding to the drive capacity and data security options already available in StorageGRID.
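Since StorageGRID speaks the S3 API, enabling object versioning should look like any other S3 call. Below is a hedged boto3 sketch against a hypothetical StorageGRID endpoint: the endpoint URL, credentials and bucket name are all made up, and (per the bullet above) any cap on the number of versions is set via the StorageGRID UI, not via S3.

```python
# Hedged sketch: enabling S3 object versioning on an S3-compatible endpoint
# such as StorageGRID. Endpoint, credentials and bucket name are made up.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.com:8082",  # hypothetical gateway
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_bucket_versioning(
    Bucket="mail-archive",
    VersioningConfiguration={"Status": "Enabled"},
)

# Overwrites now create new versions instead of destroying the old object,
# giving the snapshot-like protection against accidental deletes noted above.
versions = s3.list_object_versions(Bucket="mail-archive")
print(versions.get("Versions", []))
```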

To top it all off, StorageGRID 10.3 improves performance for both small (30KB) and large (300MB) object GETs/PUTs:

  • Small S3 Load Data Router (LDR, 1-thread) object performance has improved ~4X for both PUTs and GETs; and
  • Large S3 LDR (1-thread) object performance has improved ~2X for PUTs and ~4X for GETs.

Object storage market heating up

Apparently, service providers are adopting object storage to provide competition to AWS, Azure and Google cloud storage for backup and storage archives, as well as for DR as a service. Also, many media and other customers managing massive data repositories are turning to object storage to support their multi-site, very large file libraries. And as more solution vendors support S3 object protocols for data access and archive, something like StorageGRID can become their onsite-offsite storage alternative.

And Amazon, Azure and Google are starting to realize that most enterprise customers are not going to leap to the cloud for everything they do. So, some sort of hybrid solution is needed for the long term. Having an on premises and off premises object storage solution that can also archive/migrate data to the cloud is a great hybrid alternative that takes enterprises one step closer to the cloud.

Comments?

Microsoft ESRP database transfer performance by storage interface – chart of the month

The above chart was included in our e-newsletter's Microsoft Exchange Solution Reviewed Program (ESRP) performance report, which went out at the end of July. ESRP reports on a number of metrics, but one of the more popular is total (reads + writes) Exchange database transfers per second.

Categories reported on in ESRP include: over 5,000 mailboxes; 1,001 to 5,000 mailboxes; and 1,000 and under mailboxes. For the above chart we created our own category using all submissions up to 10,000 mailboxes. Then we grouped the data by the storage interface between the host Exchange servers and the storage, and only included ESRP reports that used 10K RPM disk drives.
Continue reading “Microsoft ESRP database transfer performance by storage interface – chart of the month”