SCI’s latest SPEC sfs2014 performance report as of December ’17


This Storage Intelligence (StorInt™) dispatch only covers SPEC sfs2014 benchmark[1] results. There have been two new SPEC sfs2014 submissions since our last report: one for SWBUILD, the NetApp FAS8200 with FlexGroup, and the other for VDA, the Cisco UCS S3260 with MapR XD. As we still don't have more than 10 submissions for any of these workloads, we continue to plot all submissions in our charts below.

SPEC SFS2014_swbuild (software build) results

The SWBUILD workload simulates software release builds; essentially, one build is a Unix “make” process run against tens of thousands of source files, which makes it a metadata-intensive workload. Figure 1 shows the SWBUILD concurrent build counts.
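To make the “metadata intensive” point concrete, here is a purely illustrative sketch of a make-style dependency pass over a source tree, where most of the I/O a filer sees is stats and small opens rather than large sequential transfers. This is not SPEC’s SWBUILD code, and the path and file-type filter are placeholders.

```python
# Illustrative only -- not the SPEC SFS2014 SWBUILD workload code.
import os

def scan_source_tree(root):
    """Walk a source tree the way a 'make' dependency check might:
    stat every file and read only the small, source-like ones."""
    stats = reads = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            info = os.stat(path)  # metadata op: the bulk of SWBUILD-style load
            stats += 1
            if name.endswith((".c", ".h")) and info.st_size < 64 * 1024:
                with open(path, "rb") as f:  # small read, not a streaming transfer
                    f.read()
                reads += 1
    return stats, reads

if __name__ == "__main__":
    print(scan_source_tree("/path/to/source/tree"))  # placeholder path
```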

Figure 1 SPEC SFS2014_swbuild number of concurrent software builds

Figure 1 shows the number of concurrent software builds completed at the maximum load on a system. The new number one is the NetApp FAS8200 with FlexGroup, with 520 SW builds. The NetApp was a hybrid solution using 72 4TB 7,200rpm disk drives, 4TB of FlashCache and 256GB of DRAM per HA pair, with two HA pairs in the submission. The FAS8200 cluster supported 0.3PB of file storage and used 16 10GbE links for intercluster traffic and 36 10GbE links for host IO.
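As a rough back-of-the-envelope check (our arithmetic, not NetApp’s published math, and assuming the drive, FlashCache and DRAM counts are each per HA pair), the raw disk capacity and the 0.3PB of exported file storage line up plausibly once RAID-DP parity, spares and other overheads are taken out:

```python
# Back-of-the-envelope capacity check.
# Assumption (ours): 72 x 4TB drives, 4TB FlashCache and 256GB DRAM per HA pair,
# two HA pairs in the cluster.
ha_pairs = 2
drives_per_pair, drive_tb = 72, 4
flashcache_tb_per_pair, dram_gb_per_pair = 4, 256

raw_tb = ha_pairs * drives_per_pair * drive_tb   # 576 TB of raw disk
exported_tb = 0.3 * 1000                         # 0.3 PB of file storage reported

print(f"raw disk: {raw_tb} TB, exported: {exported_tb:.0f} TB "
      f"({exported_tb / raw_tb:.0%} of raw)")    # ~52%: plausible after RAID-DP, spares, etc.
print(f"cluster DRAM: {ha_pairs * dram_gb_per_pair} GB, "
      f"FlashCache: {ha_pairs * flashcache_tb_per_pair} TB")
```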

This SWBUILD result was pretty impressive considering that the NetApp solution was a standard NetApp ONTAP cluster. ONTAP FlexGroup allows a single namespace to span 20PB of storage and supports up to 400 billion files.

Next, we show the Minimum and Overall Response Time for SWBUILD submissions in Figure 2.

Figure 2 SPEC sfs2014_swbuild Minimum and Overall Response Times

Figure 2 clearly shows the NetApp FAS8200 came in #2 in ORT, which is the average latency over the entire benchmark run, ranging from 10% of peak workload all the way up to 100% of peak workload. In contrast, the #1 ORT solution, the SPEC sfs reference system, only managed a maximum of ~5% of the NetApp FAS8200 cluster’s builds. The WekaIO was the only submission that came close to the build ops/sec that the NetApp achieved, coming in with 500 builds, and it only managed a 3.06 msec ORT.
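For readers unfamiliar with the metric, here is a minimal sketch of how an ORT-style figure could be computed from the ten load points. We are assuming an ops-weighted average of the per-load-point latencies; SPEC’s exact weighting may differ, and the numbers below are made up for illustration.

```python
# Hypothetical ORT-style calculation: ops-weighted average latency across the
# benchmark's load points (10% of peak up to 100%). Illustrative values only.

def overall_response_time(load_points):
    """load_points: list of (achieved_ops_per_sec, avg_latency_msec) tuples."""
    total_ops = sum(ops for ops, _ in load_points)
    return sum(ops * latency for ops, latency in load_points) / total_ops

# Ten illustrative load points stepping from 10% to 100% of peak.
points = [(1000 * step, 0.5 + 0.2 * step) for step in range(1, 11)]
print(f"ORT: {overall_response_time(points):.2f} msec")
```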

SPEC SFS2014_vda (video data acquisition) results

In Figure 3 we show the maximum number of concurrent VDA streams.

Figure 3 SPEC SFS2014_vda Streams

In Figure 3, the Cisco UCS S3260 with the MapR XD file system came in as the new #1, supporting 2,070 VDA streams. The new Cisco solution had 12 storage servers with a total of 3TB (256GB/server) of DRAM cache and 192 (16/server) 8TB 7.2Krpm disk drives, with a couple of SSDs for boot/OS devices. The Cisco MapR XD submission used 48 40GbE networking links. The MapR XD file system supported 1.3PB of file space.
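Again as a rough sanity check (our arithmetic, not Cisco’s published math), the per-server numbers roll up to the cluster totals reported:

```python
# Back-of-the-envelope roll-up of the Cisco UCS S3260 / MapR XD configuration
# as described above (our arithmetic, not Cisco's).
servers = 12
dram_gb_per_server = 256
drives_per_server, drive_tb = 16, 8

dram_tb = servers * dram_gb_per_server / 1024    # ~3 TB of DRAM cache in total
raw_tb = servers * drives_per_server * drive_tb  # 1536 TB (~1.5 PB) of raw disk
exported_tb = 1.3 * 1000                         # 1.3 PB of MapR XD file space reported

print(f"DRAM: {dram_tb:.1f} TB, raw disk: {raw_tb} TB, "
      f"exported: {exported_tb:.0f} TB ({exported_tb / raw_tb:.0%} of raw)")
```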

As discussed in prior reports, VDA activity is heavily sequential-write oriented, which favors disk-only solutions.

In Figure 4, we show the minimum and overall response time metrics for VDA submissions.

Figure 4 SPEC sfs2014_vda Minimum and Overall Response Time Results

In Figure 4, for disk-only storage, the Cisco UCS with the MapR XD file system has the best minimum response time (6.7 msec) and ORT (12.94 msec), but these are still much worse than the two AFA solutions (IBM Spectrum Scale with Deep Flash or with FlashSystem 900).

However, both disk-only UCS solutions managed to beat all the AFA systems in maximum VDA streams attained, and for VDA activity high throughput is probably more critical than response time.

Significance

We are pleased to see the new Cisco UCS and NetApp submissions. At this point, there are three Cisco UCS solutions in the VDA benchmark. The NetApp FAS8200 (SWBUILD) together with the Oracle ZFS (VDA) are the first enterprise NAS submissions in SPEC sfs2014 benchmarks. Enterprise NAS solutions were much more prominent in the previous generation SPEC sfs2008 and sfs97_R1 benchmarks.

There’s a new SPEC sfs2014 benchmark workload that showed up this quarter for the first time: EDA (electronic design automation). So far there are no submissions, but it would seem to cater to the types of systems we have already seen in SWBUILD and VDA. We still have no idea why there are no new submissions for the DATABASE and VDI workloads. Both of these are more enterprise types of workloads, so you’d think the major storage vendors would submit their enterprise NAS solutions for these workloads, but so far there’s only one submission for each (the SPEC reference).

We are still trying to determine the best way to report SPEC sfs2014 results. At some point, when there are enough submissions, we plan to show top ten charts like we use for our other performance reports. But in the meantime, we may experiment with a few variants of the above charts. If you have any ideas on other metrics of interest to report, please do let us know.

Any suggestions on how to improve any of our performance analyses are always welcomed.

[Also we offer more file storage performance information plus our NFS and CIFS/SMB ChampionsCharts™ in our recently updated (September 2018) NAS Storage Buying Guide available for purchase on our website.]

[This performance dispatch was originally sent out to our newsletter subscribers in December of 2017.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) and we will send our current issue along with download instructions for this and other reports. Dispatches are posted to our website at least a quarter or more after they are sent to our subscribers, so if you are interested in current results please consider signing up for our newsletter.]  

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.

[1] All SPEC SFS2014 information is available at https://www.spec.org/sfs2014/  as of 20Dec2017
