
At Storage Field Day 18 (SFD18) a couple of weeks ago, we heard from a new company, StorPool, that provides ultra-fast software-defined storage for MSPs and other cloud providers. You can watch the videos of their sessions here.

We didn’t know what to make of them at first, but when they started demoing their performance, we all woke up. They ran all-read and mixed read-write IO workloads that almost blew away any other storage, proprietary or not, that I’ve seen before.
[Updated 12Mar2019] What they were trying to achieve was to match the performance of a Windows Server 2019 Hyper-V benchmark, which hit 13.8M IOPS using 12 nodes, each with 384GB of DRAM, 1.5TB of Optane DC persistent memory, 32TB (4x8TB) of NVMe SSDs and Mellanox 25Gbps RDMA Ethernet, with each VM running on the server that stored its VHDX file.
Their demo ran a 70:30 R:W random 4KB mixed workload and achieved 1M IOPS with a read latency of 140µsec and a write latency of 100µsec (end to end, at the VM level). [Updated 12Mar2019] They were able to match the performance of the published benchmark without the 1.5TB of Optane memory, without the 25Gbps RDMA Ethernet and without having each VM and its storage running on the same node. They showed this performance running StorPool, KVM and CentOS 7 across 12 nodes running both VMs and storage services.
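If you want to approximate a workload like this yourself, here’s a minimal sketch (mine, not StorPool’s test harness) that drives a 70:30 random 4KB read-write mix with fio; the device path, queue depth and fio ≥3.x JSON field names are assumptions to adjust for your setup:

```python
# Sketch: drive a 70:30 R:W random 4KB workload with fio, similar in shape to
# StorPool's demo. This is NOT their harness; paths and depths are assumptions.
import json
import subprocess

def run_mixed_workload(device="/dev/nvme0n1", runtime_sec=60, iodepth=32):
    """Run a 70% read / 30% write random 4KB fio job; return IOPS and latency."""
    cmd = [
        "fio", "--name=mixed7030",
        f"--filename={device}",      # assumed device path
        "--rw=randrw",               # random mixed read/write
        "--rwmixread=70",            # 70:30 read:write ratio
        "--bs=4k",                   # 4KB blocks, as in the demo
        "--ioengine=libaio", "--direct=1",
        f"--iodepth={iodepth}",      # assumed queue depth
        f"--runtime={runtime_sec}", "--time_based",
        "--output-format=json",
    ]
    job = json.loads(subprocess.run(cmd, capture_output=True, text=True).stdout)["jobs"][0]
    return {
        "read_iops": job["read"]["iops"],
        "write_iops": job["write"]["iops"],
        "read_lat_usec": job["read"]["clat_ns"]["mean"] / 1000,   # fio >= 3.x fields
        "write_lat_usec": job["write"]["clat_ns"]["mean"] / 1000,
    }

if __name__ == "__main__":
    print(run_mixed_workload())
```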

They also showed results from a pgbench benchmark, which I was not familiar with. The chart had response time on the horizontal axis and TPS performance on the vertical axis.
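For those who, like me, hadn’t seen it before: pgbench is PostgreSQL’s bundled TPC-B-like benchmark. A curve like the one they charted can be produced by sweeping the client count and recording TPS and average latency at each step. A minimal sketch (database name, scale and client counts are my assumptions):

```python
# Sketch: sweep pgbench client counts to produce a TPS-vs-latency curve like
# the one StorPool showed. Database name and client counts are assumptions.
import re
import subprocess

def pgbench_point(clients, db="bench", seconds=30):
    """Run pgbench once; return (tps, avg_latency_ms) for this client count."""
    out = subprocess.run(
        ["pgbench", "-c", str(clients), "-j", str(min(clients, 8)),
         "-T", str(seconds), db],
        capture_output=True, text=True,
    ).stdout
    tps = float(re.search(r"tps = ([\d.]+)", out).group(1))
    lat = float(re.search(r"latency average = ([\d.]+) ms", out).group(1))
    return tps, lat

if __name__ == "__main__":
    # one-time setup: pgbench -i -s 100 bench
    for c in (1, 4, 16, 64, 128):
        tps, lat = pgbench_point(c)
        print(f"{c:4d} clients: {tps:10.1f} TPS at {lat:6.2f} ms avg latency")
```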

What’s even more amazing is that even with this great performance they still offer reasonable data services, such as CoW snapshots, asynchronous replication (with changed block tracking), thin provisioning, end-to-end data integrity, and iSCSI support.
Their target market is mostly MSPs and large customers moving to private cloud configurations. They mentioned deep support for OpenNebula, [updated 12Mar2019] OpenStack, OnApp and Kubernetes, which means each virtual disk is a volume/LUN. They support VMware and Windows Server/Hyper-V through iSCSI.
~~~~
The fact that they have a proprietary protocol is not that great, but if they can generate the IOPS and response times they showed here with snapshots, thin provisioning and async replication, I’m OK with it. [Updated 12Mar2019] The fact that they were able to match the performance of the more expensive system with standard Ethernet, no Optane memory and all VMs running remote from their storage made a significant impression on me.
Want to learn more? Check out these other discussions on StorPool (and other SFD18 vendors):
SFD18 – as intense as it gets by Max Mortillaro (@DarkAvenger), and
Podcast #3 review the SFD18 presenters by Chris M. Evans (@ChrisMEvans) and Matt Leib (@MBLeib).
[Updated 12Mar2019: Boyan Krosnov sent me an email indicating that the post had some mistakes, which were corrected via the updates above. The Editors]
During their development activity they tested various configurations. At the start of their development, they used a Windows server with their NVMeoF target device driver. With this configuration, on a bare metal server, they found that they could max out an Optane SSD at 550K 4K random write IOPS at 0.6msec latency to a single Optane drive.
The challenge that Infinidat has is how to perform as well as (or better than) an all-flash array when you have hybrid flash-disk storage.
Fortunately, DRAM has a random access time of ~100nsec, which is three orders of magnitude better than flash. A manager I once had said that everyone wants their data stored on tape but accessed out of memory.
So, for a storage system in open systems environments to average a 90% DRAM read cache hit rate is unheard of; such rates are seen only for brief intervals at best, with especially well-behaved applications, and not under virtualization. For a customer to see an average DRAM hit rate exceeding 90% over the course of multiple days was inconceivable.
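To see why that hit rate matters so much, here’s a quick back-of-the-envelope effective latency calculation (the ~100µsec flash read latency is my assumption):

```python
# Back-of-envelope: effective read latency vs. DRAM cache hit rate.
# DRAM ~100nsec as stated above; ~100usec flash read latency is an assumption.
DRAM_NSEC, FLASH_NSEC = 100, 100_000

def effective_latency_nsec(hit_rate):
    """Weighted average of DRAM hits and flash misses."""
    return hit_rate * DRAM_NSEC + (1 - hit_rate) * FLASH_NSEC

for h in (0.50, 0.90, 0.99):
    print(f"{h:.0%} hit rate -> {effective_latency_nsec(h) / 1000:.1f} usec average")
# 50% -> ~50.1usec, 90% -> ~10.1usec, 99% -> ~1.1usec: going from a 50% to a
# 90% hit rate cuts average read latency by roughly 5x.
```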
It all starts with writes. When data is written to Infinidat, it records a terrain map of all the other data that has been written recently. I suppose one could think of this as a two-dimensional map, with spots on the map corresponding to data in the storage system that was recently written. The map changes over time, so it’s more like a movie: a stream of frames showing, frame by frame, all the recently written data in the system at any point in time. Of course, the frame rate for this stream is the IO rate.
In any case, any system could implement this sort of caching algorithm, iff they had the processing power needed, had a metadata layout that made recording the IO stream frame by frame space efficient, had the metadata indexing that would enable them to locate the last frame a record was written in, AND had the IO parallelism required to do a whole lot of IO all the time to keep that DRAM cache filled with hit candidates. Did I mention that Infinidat uses a three-controller storage system, unlike the rest of the industry, which uses two-controller systems? This gives them 50% more horsepower and data paths to get data into cache.
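Infinidat hasn’t published the algorithm, but a toy sketch of the general idea, recording recent writes frame by frame and prefetching a missed block’s frame-mates, might look like this (the class, names and prefetch heuristic are all mine, and cache eviction is omitted for brevity):

```python
# Toy sketch of a frame-based write-locality map. This is my illustration of
# the concept, NOT Infinidat's published algorithm. Each "frame" records the
# set of blocks written during one interval; on a read miss, the blocks that
# were written in the same frame as the missed block get prefetched into DRAM.
from collections import deque

class FrameLocalityCache:
    def __init__(self, max_frames=1024):
        self.frames = deque(maxlen=max_frames)  # stream of recent write frames
        self.written_in = {}                    # block -> frame it was last written in
        self.cache = set()                      # blocks currently held in DRAM

    def record_writes(self, blocks):
        """Close out one frame of the write stream."""
        frame = frozenset(blocks)
        self.frames.append(frame)
        for b in blocks:
            self.written_in[b] = frame

    def read(self, block):
        """Return True on a cache hit; on a miss, prefetch the block's frame-mates."""
        if block in self.cache:
            return True
        frame = self.written_in.get(block)
        if frame is not None:
            self.cache |= frame  # blocks written together tend to be read together
        self.cache.add(block)
        return False
```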
They have some specially designed, optimized code paths. For example, standard RAID TP (triple parity) algorithms perform RAID protection at 2.3GB/s or 4.5GB/s, but the Huawei OceanStor Dorado 18000F can perform triple RAID calculations at 6.5GB/s. Similarly, standard LZ4 data compression algorithms can compress data at ~507MB/s (on email data), but Huawei’s data compression algorithm can perform compression (on email data) at ~979MB/s. Ditto for CRC16 (used to check block integrity): traditional CRC16 algorithms operate at ~2.3GB/s, but Huawei can sustain ~7.2GB/s.
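For a sense of how such throughput numbers are produced, a checksum or compressor is typically benchmarked by timing it over a large buffer. A minimal sketch, using Python’s stdlib CRC32 as a stand-in (not Huawei’s CRC16 code path):

```python
# Sketch: measuring checksum throughput in GB/s, the metric quoted above.
# Uses stdlib CRC32 as a stand-in for a CRC16 block-integrity check.
import time
import zlib

def crc_throughput_gb_per_sec(size_mb=256, rounds=10):
    buf = bytes(size_mb * 1024 * 1024)  # zero-filled test buffer
    start = time.perf_counter()
    for _ in range(rounds):
        zlib.crc32(buf)
    elapsed = time.perf_counter() - start
    return (size_mb / 1024) * rounds / elapsed

if __name__ == "__main__":
    print(f"CRC32 throughput: ~{crc_throughput_gb_per_sec():.1f} GB/s")
```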
NetApp announced this week that their latest generation AFF (All Flash FAS) systems will support FC NVMeoF. We asked if this was just for NVMe SSDs or if it applied to all AFF media. The answer was that it’s just another host interface, which the customer can license for NVMe SSDs (available only on the AFF A800) or SAS SSDs (A700s, A700, and A300). The only AFF not supporting the new host interface is their low-end AFF A220.
They also christened their new Data Visualization Center (DVC), and we had a multi-course meal at the Bistro at the center. The DVC had a wrap-around, 1.5-floor-tall screen which showed some of NetApp’s customer success stories. Inside the screen was a more immersive setting, with plenty of VR equipment in work spaces alongside customer conference rooms.
With Axellio using all NVMe SSDs, we expect high IO performance. Further, they are measuring IO performance internally, at the CPUs on the Axellio server nodes. X-IO says the Axellio can hit >12 million IO/sec at 35µsec latency with 72 NVMe SSDs.
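As a sanity check, Little’s Law (outstanding IOs = IOPS × latency) says that rate needs only about 420 IOs in flight across the box, or roughly 6 outstanding IOs per SSD:

```python
# Little's Law sanity check on the X-IO numbers: concurrency = IOPS x latency.
iops, latency_sec, ssds = 12_000_000, 35e-6, 72

outstanding = iops * latency_sec  # ~420 IOs in flight at any instant
per_ssd = outstanding / ssds      # ~5.8 outstanding IOs per NVMe SSD
print(f"{outstanding:.0f} concurrent IOs total, ~{per_ssd:.1f} per SSD")
```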
Other sessions at X-IO included: Richard Lary, CTO of X-IO Technologies, gave a very interesting presentation on a mathematically optimized way to do data dedupe (caution: some math involved); Bill Miller, CEO of X-IO Technologies, presented on edge computing’s new requirements; and Gavin McLaughlin, Strategy & Communications, talked about X-IO’s history and new approach to take the company into more profitable business.
The TWG identified a three-pronged approach to NVDIMM adoption: crawl, walk, run (see
