SMB2.2 (CIFS) screams over InfiniBand

Microsoft MVP Summit 2010 by David McCarter (cc) (From Flickr)

I missed the MVP summit last month in Redmond, but I heard there was more discussion of Server Message Block v2.2 (SMB2.2; SMB was previously known as CIFS) coming in Windows Server 8.

The big news is that SMB2.2 now supports RDMA and can run over InfiniBand (announced at the SNIA Developer Conference last fall). It also supports RDMA over Ethernet via RoCE (see my Intel buys Qlogic’s Infiniband post) and iWARP.

SMB2.2 over InfiniBand performance

As reported last fall at the SNIA Developer Conference, SMB2.2 using RDMA over InfiniBand reached over 3.7GB/sec with no server configuration changes, using two QDR cards, and also hit 160K IOPS (the IOPS are from an SQLIO run using 8KB IOs, not SPECsfs2008). The pre-beta SMB2.2 code was running on commodity server hardware over 32Gbps InfiniBand links. I couldn’t find any performance numbers for RoCE or iWARP, but I suspect that, running over 10GbE, these would be considerably slower than over InfiniBand.
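
To put those figures in perspective, here’s a quick back-of-the-envelope check in Python (my own sketch; the QDR data-rate assumption and the comparison are mine, not from the SNIA presentation):

```python
# Back-of-the-envelope check on the numbers quoted above (a rough sketch;
# the transfer sizes behind the 3.7GB/sec figure weren't reported here).

QDR_DATA_RATE_GBPS = 32   # QDR InfiniBand carries ~32Gbps of data per 4x link
LINKS = 2                 # two QDR cards were used in the demo

# Aggregate link ceiling in GB/sec (8 bits per byte)
link_ceiling_gb_s = LINKS * QDR_DATA_RATE_GBPS / 8
print(f"aggregate link ceiling: {link_ceiling_gb_s:.1f} GB/sec")        # 8.0 GB/sec

# Bandwidth implied by the 160K IOPS SQLIO run at 8KB per IO
iops, io_size = 160_000, 8 * 1024
print(f"160K IOPS @ 8KB: {iops * io_size / 1e9:.2f} GB/sec")            # ~1.31 GB/sec

# So the 3.7GB/sec result must come from much larger transfers, and it
# lands at a bit under half of the two-link data-rate ceiling.
print(f"3.7GB/sec is {3.7 / link_ceiling_gb_s:.0%} of that ceiling")    # 46%
```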

The hints are that performance gets even better with the released version of the code coming out in Windows Server 8.

SMB2.2 gets even faster than NFS

We have noted in the past that SMB (CIFS), on average, shows better throughput (IOPS) performance than NFS in SPECsfs2008 results (for example, see our latest Chart-of-the-Month post on SPECsfs results). However, those results were at best SMB2, or even SMB1, and they commonly used Ethernet links.

NFS already supports InfiniBand, but I am unsure whether it makes use of RDMA. Nevertheless, the significant speed-up shown here will potentially take SPECsfs2008 SMB2.2 performance to a whole new level.

Why InfiniBand?

As you may recall, InfiniBand is primarily deployed as a server-to-server interconnect and has been used extensively in the past for high-performance computing environments. Nowadays, however, we find storage clusters, such as EMC Isilon, HP X9000 (Ibrix), IBM XIV and others, using InfiniBand for their inter-node communications. The use of InfiniBand in these storage clusters is probably due primarily to its lower latency compared to Ethernet.

But InfiniBand has another advantage: fast data throughput. Using RDMA, it can transfer data faster than almost any other networking protocol available today. SMB2.2 takes advantage of this throughput boost by using RDMA only for large blocks of data and avoiding it for smaller blocks. I’m not sure what the cutoff is, but this would certainly help with large SQL database queries, disk copies, and any other large file data transfer operations.
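
As an illustration of that large-transfer-only pattern (purely a sketch of the idea described above, not SMB2.2’s actual logic; the cutoff value below is a made-up placeholder):

```python
# Illustrative sketch only -- not SMB2.2's actual implementation.
# The real RDMA cutoff isn't public here, so this threshold is hypothetical.
RDMA_CUTOFF_BYTES = 64 * 1024

def choose_transport(io_size_bytes: int) -> str:
    """Pick a transfer path the way the text describes: RDMA for large
    transfers, where zero-copy placement pays off, and the ordinary
    send/receive path for small ones, where RDMA setup overhead isn't worth it."""
    if io_size_bytes >= RDMA_CUTOFF_BYTES:
        return "RDMA read/write"       # bulk data placed directly in registered memory
    return "in-band send/receive"      # small payload rides the regular SMB channel

for size in (4 * 1024, 64 * 1024, 1024 * 1024):
    print(f"{size:>8} bytes -> {choose_transport(size)}")
```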

Of course, with 56Gbps FDR InfiniBand available today and faster transfer rates coming (see the IBTA roadmap), there appears to be every reason to believe that this superior throughput will continue for the foreseeable future. The latency advantage seems likely to be retained as well.

Now that Intel is pushing it, Mellanox continues to push InfiniBand, and storage clusters are using it more frequently, we may start to see more storage protocols supporting it.

We thought that FC only had Ethernet to worry about. With SMB2.2 moving to InfiniBand and NFS already supporting it, can a fully functional FCoIB be far behind?

 

Storage throughput vs. IO response time and why it matters

Fighter Jets at CNE by lifecreation (cc) (from Flickr)

Lost in much of the discussion of storage system performance is the need for both throughput and response time measurements.

  • By IO throughput I generally mean data transfer speed in megabytes per second (MB/s or MBPS); however, another definition of throughput is IO operations per second (IO/s or IOPS).  I prefer the MB/s designation for storage system throughput because it’s very complementary with respect to response time, whereas IO/s can often be confounded with response time.  Nevertheless, both metrics qualify as storage system throughput (a short numerical sketch follows this list).
  • By IO response time I mean the time it takes a storage system to perform an IO operation from start to finish, usually measured in milliseconds, although lately some subsystems have dropped below the 1 msec threshold.  (See last year’s post on SPC LRT results for information on some top response time results.)
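
Here’s that short numerical sketch (invented figures, not from any benchmark) of how the two throughput metrics and response time relate:

```python
# Illustrative numbers only -- not from any benchmark result.
iops = 50_000                 # IO operations per second
io_size_bytes = 8 * 1024      # 8KB average transfer size
resp_time_s = 0.002           # 2 millisecond average response time

# MB/s and IO/s are tied together by the average transfer size...
mb_per_s = iops * io_size_bytes / 1e6
print(f"{iops} IO/s at 8KB each = {mb_per_s:.0f} MB/s")                  # ~410 MB/s

# ...while IO/s and response time are tied together by concurrency
# (Little's Law): the same IO/s can come from a few fast IOs in flight
# or from many slow ones.
in_flight = iops * resp_time_s
print(f"{iops} IO/s at {resp_time_s * 1000:.0f}ms needs ~{in_flight:.0f} IOs in flight")
```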

Benchmark measurements of response time and throughput

Both the Standard Performance Evaluation Corporation’s SPECsfs2008 and the Storage Performance Council’s SPC-1 provide response time measurements, although they measure substantially different quantities.  The problem with SPECsfs2008’s ORT (overall response time) is that it’s calculated as a mean across the whole benchmark run rather than as a strict measurement of least response time at low file request rates.  I believe any response time metric should measure the minimum response time achievable from a storage system, although I can understand SPECsfs2008’s point of view.

On the other hand, SPC-1’s measurement of LRT (least response time) is just what I would like to see in a response time measurement: SPC-1 reports the time it takes to complete an IO operation at very low request rates.
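
A toy illustration of why the two figures can differ (made-up numbers; neither benchmark computes its metric exactly this way):

```python
# Hypothetical response times (ms) observed at increasing request rates.
samples = {"10% load": 1.2, "50% load": 2.8, "90% load": 7.5, "100% load": 14.0}

# An ORT-style figure is a mean across the whole run, so the heavily loaded
# points pull it up; an LRT-style figure looks only at the lightly loaded end.
ort_like = sum(samples.values()) / len(samples)
lrt_like = samples["10% load"]
print(f"ORT-style mean: {ort_like:.1f} ms vs LRT-style low-load value: {lrt_like:.1f} ms")
```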

In regard to throughput, once again SPECsfs2008’s measurement leaves something to be desired, as it’s strictly a measurement of NFS or CIFS operations per second.  Of course, this includes a sizable share (>40%) of non-data-transfer requests as well as data transfers, so it confounds any measurement of how much data can be transferred per second.  But from their perspective a file system needs to do more than just read and write data, which is why they mix these other requests in with their measurement of NAS throughput.

The Storage Performance Council’s SPC-1 reports throughput results as IOPS and provides no direct measure of MB/s; for that one has to look to its SPC-2 benchmark results.  SPC-2 reports a direct measure of MBPS, which is an average across three data-intensive workloads: large file access, video on demand, and a large database query workload.
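
A quick sketch of the difference between the two styles of throughput report (all figures invented, not actual SPC results):

```python
# An SPC-1 style result is IOPS only; turning it into MB/s requires an
# assumed average transfer size, which the IOPS figure doesn't carry.
spc1_iops = 200_000
for assumed_kb in (4, 8, 16):
    print(f"{spc1_iops} IOPS at {assumed_kb}KB avg -> {spc1_iops * assumed_kb / 1000:.0f} MB/s")

# An SPC-2 style result reports MBPS directly, averaged across the three
# data-intensive workloads named above (hypothetical per-workload numbers).
workload_mbps = {"large file": 900.0, "video on demand": 750.0, "large DB query": 1200.0}
print(f"SPC-2 style composite: {sum(workload_mbps.values()) / len(workload_mbps):.0f} MBPS")
```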

Why response time and throughput matter

Historically, we used to say that OLTP (online transaction processing) performance was entirely dependent on response time: the better the storage system’s response time, the better your OLTP systems performed.  Nowadays it’s a bit more complex, as some of today’s database queries can depend as much on sequential database transfers (i.e., throughput) as on individual IO response time.  Nonetheless, I feel there is still a large set of response-time-critical workloads out there that perform much better with shorter response times.

On the other hand, high throughput has its growing gaggle of adherents as well.  When it comes to high sequential data transfer workloads such as data warehouse queries, video or audio editing/download or large file data transfers, throughput as measured by MB/s reigns supreme – higher MB/s can lead to much faster workloads.

The only question that remains is who needs higher throughput as measured by IO/s rather than MB/s.  I would contend that mixed workloads which contain components of random as well as sequential IOs and typically smaller data transfers can benefit from high IO/s storage systems.  The only confounding matter is that these workloads obviously benefit from better response times as well.   That’s why throughput as measured by IO/s is a much more difficult number to understand than any pure MB/s numbers.
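
For instance, here’s a small sketch (invented workloads) of why a bare IO/s figure is harder to interpret than MB/s: two mixes with identical IO/s can have very different data rates.

```python
# Invented workload mixes: lists of (IO/s, IO size in bytes) components.
workloads = {
    "mostly small random":      [(95_000, 4 * 1024), (5_000, 64 * 1024)],
    "with big sequential tail": [(90_000, 4 * 1024), (10_000, 256 * 1024)],
}
for name, parts in workloads.items():
    total_iops = sum(iops for iops, _ in parts)
    total_mb_s = sum(iops * size for iops, size in parts) / 1e6
    # Both mixes deliver 100,000 IO/s, yet their MB/s differ by roughly 4x.
    print(f"{name}: {total_iops} IO/s, {total_mb_s:.0f} MB/s")
```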

—-

There is a contingent of performance gurus today who believe that IO response time no longer matters.  In fact, if one looks at SPC-1 results, it takes some effort to find the LRT measurement; it’s not included in the summary report.

Also, in the post mentioned above there appears to be a definite bifurcation of storage subsystems with respect to response time, i.e., some subsystems are focused on response time while others are not.  I would have liked to see more of the top enterprise storage subsystems represented among the top LRT subsystems, but alas, they are missing.

1954 French Grand Prix - Those Were The Days by Nigel Smuckatelli (cc) (from Flickr)

Call me old-fashioned, but I feel that response time represents a very important performance measure, orthogonal to throughput, for any storage subsystem, and as such it should be much more widely disseminated than it is today.

For example, there is a substantive difference between a fighter jet’s or race car’s top speed and its maneuverability.  I would compare top speed to storage throughput and maneuverability to IO response time.  Perhaps this doesn’t matter as much for a jet liner or a family car, but it can matter a lot in the right domain.

Now, do you want your storage subsystem to be a jet fighter or a jet liner?  You decide.