SCI’s latest SPC performance report as of August’17


This Storage Intelligence (StorInt™) dispatch covers Storage Performance Council (SPC) results[1]. There have been four new SPC-1 v3 submissions: the Fujitsu ETERNUS DX200 S4, Huawei OceanStor™ Dorado 6000 v3, Huawei FusionStorage, and Lenovo DS4200. There have been no new SPC-2 submissions since last quarter. Just about all of our SPC-1 (non-disk-only) top ten performance charts have changed.

SPC-1 results

We begin our discussion with the top ten SPC-1 IOPS™ results, shown in Figure 1.

Figure 1 Top 10 SPC-1 IOPS

In Figure 1, we can see the new Huawei FusionStorage came in at #2 with ~4.5M IOPS. The Huawei FusionStorage is the first benchmark submission to feature NVMe SSDs and shows up well in this raw IOPS workload.

The Huawei FusionStorage configuration had 132 1,600GB NVMe SSD cards across 36 nodes (33 storage and 3 storage-ZK nodes). They also had a few 400GB SAS SSDs in the storage-ZK nodes, but nothing compared to the NVMe cards.

It’s somewhat surprising that they weren’t able to generate more IOPS with that many NVMe cards, but they were using 144 10GbE interfaces to connect the 21 servers driving the IOPS to the 36 storage/storage-ZK nodes. I assume Huawei FusionStorage was using the iSCSI protocol in their benchmark submission and not NVMeF, but there were no details on this.
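For the curious, here’s the back-of-the-envelope math behind that observation (a quick Python sketch; the ~4.5M IOPS figure is approximate, and the ~8KiB average IO size is purely an illustrative assumption, not an SPC-published number):

```python
# Rough per-device and per-link math for the Huawei FusionStorage
# submission, using figures quoted in the text above.

iops = 4_500_000        # reported SPC-1 IOPS (approximate)
nvme_ssds = 132         # 1,600GB NVMe SSD cards
links_10gbe = 144       # 10GbE host interfaces

print(f"IOPS per NVMe SSD:   {iops / nvme_ssds:>9,.0f}")    # ~34,000
print(f"IOPS per 10GbE link: {iops / links_10gbe:>9,.0f}")  # ~31,000

# Assuming ~8KiB per IO (illustrative only), per-link bandwidth is:
io_bits = 8 * 1024 * 8
gbps_per_link = iops / links_10gbe * io_bits / 1e9
print(f"Approx. Gb/s per 10GbE link: {gbps_per_link:.1f}")  # ~2.0
```

At ~2Gb/s per link, the network isn’t obviously bandwidth-bound, which points more toward per-IO protocol overhead (iSCSI/TCP) than raw link speed as the limiter.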

In Figure 2, we examine the SPC-1 LRT™ (least response time) results reported by SPC.

Figure 2 Top 10 SPC-1 LRT performance

In Figure 2, the Fujitsu DX200 S4 storage system came in tied (three ways) for 8th place at a 0.22 msec LRT. Recall that LRT is the average IO response time at 10% of the achievable maximum workload for the submission.
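To make that definition concrete, here’s a minimal sketch of how an LRT-style number falls out of a benchmark’s measured load points (hypothetical data, not from any actual SPC submission):

```python
# Each tuple is (fraction of max achievable IOPS, avg response time in msec).
# LRT is simply the average response time measured at the 10% load point.
load_points = [
    (0.10, 0.22),   # 10% load -> this is the LRT
    (0.50, 0.35),
    (0.80, 0.60),
    (0.95, 1.10),
    (1.00, 1.80),
]

def lrt(points):
    """Return the average response time at 10% of maximum load."""
    return next(rt for frac, rt in points if frac == 0.10)

print(f"LRT: {lrt(load_points)} msec")  # LRT: 0.22 msec
```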

As far as I can tell, the Fujitsu ETERNUS DX200 S4 is just a midrange storage solution, which used 16GFC connections for hosts and (I think) FC connections for backend storage. They did have 128GB of DRAM across the dual controllers and 20 400GB SSDs, but I didn’t see anything else that might explain their superior (for midrange storage) LRT performance. It just goes to show you that good code makes a difference.

It’s somewhat surprising that the Huawei FusionStorage with NVMe SSDs didn’t do as well here, but it could have something to do with their use of the iSCSI stack rather than FC.

Figure 3 shows the SPC-1 reported $/IOPS™ metric or SPC-1 Price-Performance™.

Figure 3 Top 10 SPC-1 $/IOPS

In Figure 3, the new Lenovo DS4200 came in at 3rd place with $0.14/IOPS, and the Fujitsu ETERNUS DX200 S4 came in at #5 with $0.21/IOPS. Recall that lower is better here and that AFA systems dominate this category these days, just as they do LRT and IOPS.

The Lenovo DS4200 used 12Gb SAS connections between the hosts and the storage, and sported 12 400GB SSDs and 16GB of DRAM across its two controllers. But the surprise was its $13.6K price; at ~100K IOPS, it seems like a real steal.

The ~300K IOPS of the ETERNUS DX200 S4 at a price of $64K also doesn’t seem like a bad deal. With 8 more SSDs than the Lenovo DS4200 (20 vs. 12) and 3X the IOPS, the DX200 seems like an even better solution.
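The Price-Performance arithmetic is easy to check (a quick sketch using the approximate prices and IOPS quoted above; SPC-1 Price-Performance™ is the total price of the tested configuration divided by its SPC-1 IOPS):

```python
# SPC-1 Price-Performance = total tested-configuration price / SPC-1 IOPS,
# using the approximate figures quoted in the text above.
systems = {
    "Lenovo DS4200":            (13_600, 100_000),  # ~$13.6K, ~100K IOPS
    "Fujitsu ETERNUS DX200 S4": (64_000, 300_000),  # ~$64K,  ~300K IOPS
}

for name, (price_usd, iops) in systems.items():
    print(f"{name}: ${price_usd / iops:.2f}/IOPS")
# Lenovo DS4200: $0.14/IOPS
# Fujitsu ETERNUS DX200 S4: $0.21/IOPS
```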

Significance

The new SPC-1 results were for v3 of the benchmark. SPC reports v3 results separately from v1, but here we combine them. From our perspective, the differences between the two versions are not that significant and lie more in the block sizes used than in the workloads being generated.

Seeing NVMe SSDs starting to show up is exciting. They are fast, but behind an iSCSI interface there’s just not that much that can be done with them. Excelero, using NVMe SSDs and NVMeF, showed over 4M IOPS from 2 (small) NVMe SSDs at under 100 microseconds, but they were using 100GbE and RoCE/RDMA. Someday we may start seeing NVMeF storage systems benchmarked in SPC-1, and then we will see another step up in IOPS and step down in latencies; until then, we rely on FC and iSCSI.

As always, suggestions on how to improve any of our performance analyses are welcomed.

[Also, we offer more block storage performance information plus our OLTP, Email and Throughput ChampionsCharts™ in our recently updated (November 2017) SAN Storage Buying Guide[2], or for more information on protocol performance results please see our recently updated (October 2017) SAN-NAS Storage Buying Guide, both of which are available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in July of 2017. If you would like to receive this information via email, please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]

[1] All SPC results available from http://www.storageperformance.org/home/ as of 27Aug2017

[2] Please see http://silvertonconsulting.com/cms1/product/san-storage-buying-guide/