68: GreyBeards talk NVMeoF/TCP with Ahmet Houssein, VP of Marketing & Strategy @ Solarflare Communications

In this episode we talk with Ahmet Houssein, VP of Marketing and Strategic Direction at Solarflare Communications (@solarflare_comm). Ahmet's been in the industry forever and has a unique view on where NVMeoF needs to go. Howard had talked with Ahmet at last year's FMS. Ahmet will also be speaking at this year's FMS (this week in Santa Clara, CA).

Solarflare Communications sells Ethernet communication gear, mostly to the financial services market, and has developed a software plugin for the standard TCP/IP stack on Linux that supports both target and client mode NVMeoF/TCP. That is, their software plugin provides a complete implementation of NVMeoF over TCP/Ethernet that extends the TCP protocol but doesn't require RDMA (RoCE or iWARP) or data center bridging.

Implementing NVMeoF/TCP

Solarflare's NVMeoF/TCP is a free plugin that, once approved by the NVMe(oF) standards committee, anyone can use to create an NVMeoF storage system and consume that storage from almost anywhere. The standards committee is expected to approve the protocol extension soon, and sometime after that the plugin will be added to the Linux kernel. After standards approval, VMware and Microsoft may adopt it as well, but that could take more work.

Over the last year plus, most of the NVMeoF/Ethernet we've encountered requires sophisticated RDMA hardware. When we talked with Pavilion Data Systems, a month or so ago, they had designed a more networking-like approach to NVMeoF using RoCE and TCP [updated 8/8/18 after VR's comment, the ed.]. When we talked with Attala Systems, they were using standard RDMA NICs and Mellanox switches to support their NVMeoF/Ethernet storage [updated 8/8/18 after VR's comment, the ed.].

Solarflare is taking a different tack.

One problem with NVMeoF/Ethernet over RDMA is compatibility. You can use either RoCE or iWARP RDMA NICs, but at the moment you can't mix the two. With a TCP/IP plugin there's no hardware compatibility issue. (Yes, there's still software compatibility to sort out at both ends of the pipe.)

Solarflare recently measured latencies for their NVMeoF/TCP (using Iometer/FIO), which show that running the protocol adds about a 5-10% increase in latency versus NVMeoF over RDMA (RoCE or iWARP).

Performance measurements were taken using a server running Red Hat Linux + their TCP plugin with NVMe SSDs on the storage side, and a similar configuration on the client side without the SSDs.

If they add 10% latency to a 10 microsec. IO (for Optane), latency becomes 11 microsec. Similarly, for flash NVMe SSDs, latency moves from 100 microsec. to 110 microsec.

Ahmet did mention that their NICs have some hardware optimizations which bring this added latency down closer to 5%. (Later we discuss the immense parallelism opportunities of running the TCP stack in user space.) Their hardware also better supports many threads doing IO in parallel.

Why TCP

Ahmet's on a mission. He says there's this misbelief that Ethernet RDMA hardware is required to achieve lightning fast response times using NVMeoF, but it's not true. Standard TCP, with proper protocol enhancements, is more than capable of performing at very close to the same latencies as RDMA, without special NICs and DCB switch configurations.

Furthermore, TCP/IP already has multipathing support, so the current high availability characteristics of TCP are readily applicable to NVMeoF/TCP.

Parallelism through user space

NVMeoF/TCP was the subject of the 1st half of our discussion, but we spent the 2nd half talking about scaling, or parallelism. Even if you can do IO at 11 or 110 microsecond latency, at some point, if you do enough of these IOs, the kernel overhead of processing blocks and transferring control between kernel space and user space will become a bottleneck.

However, there's nothing stopping IT from running the TCP/IP stack in user space and eliminating any kernel control transfer whatsoever. By doing so, data centers could parallelize all this IO across as many cores as are available.

Running the plugin in a TCP/IP stack in user space allows you to scale NVMeoF's lightning fast IO to as many users as you have user spaces or cores, and the kernel doesn't even break a sweat.
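To make that fan-out pattern concrete, here's a minimal Python sketch of one IO worker per core, each with its own TCP connection (and hence its own NVMeoF queue). The target address and the "read request" payload are placeholders, not real NVMe PDUs; and note that Python's sockets still go through the kernel, whereas a real deployment would use Solarflare's plugin or a user space TCP stack to take the kernel out of the data path entirely.

```python
# Sketch only: one IO worker per core, each with its own TCP connection.
# The target address/port and payload are hypothetical placeholders; a real
# NVMeoF/TCP setup would use the kernel driver or a user space TCP stack.
import os
import socket
from multiprocessing import Process

TARGET = ("192.168.1.100", 4420)   # 4420 is the IANA port for NVMe over Fabrics
NUM_CORES = os.cpu_count() or 1

def io_worker(core: int) -> None:
    os.sched_setaffinity(0, {core})          # pin this worker to one core (Linux)
    with socket.create_connection(TARGET) as conn:
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(1000):                # stand-in for an IO submission loop
            conn.sendall(b"READ 4096\n")     # placeholder, not a real NVMe command
            conn.recv(4096)

if __name__ == "__main__":
    workers = [Process(target=io_worker, args=(c,)) for c in range(NUM_CORES)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```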

Anyone could simply download Solarflare's plugin, configure a white box server with Linux and 24 NVMe SSDs, and support ~8.4M IOPS (350K x 24) at ~110 microsec. latency. And with user space scaling, one could easily have 1000s of user spaces connected to it.
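Working that arithmetic through also explains the "faster pipes" comment that follows; the 4KB block size below is my assumption, not a figure from the discussion.

```python
# Back-of-the-envelope math behind the ~8.4M IOPS figure and the "faster
# pipes" remark. The 4KB block size is assumed for illustration.
IOPS_PER_SSD = 350_000
NUM_SSDS = 24
BLOCK_SIZE = 4 * 1024          # bytes, assumed

total_iops = IOPS_PER_SSD * NUM_SSDS            # 8,400,000 IOPS
throughput_bytes = total_iops * BLOCK_SIZE      # ~34.4 GB/s
throughput_gbps = throughput_bytes * 8 / 1e9    # ~275 Gb/s on the wire

print(f"{total_iops:,} IOPS of 4KB reads is roughly {throughput_gbps:.0f} Gb/s")
```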

They're going to need faster pipes!

The podcast runs ~39 minutes. Ahmet was very knowledgeable about NVMe, NVMeoF and TCP.  He was articulate and easy to talk with.  Listen to the podcast to learn more.

Ahmet Houssein, VP of Marketing and Strategic Direction at Solarflare Communications 

Ahmet Houssein is responsible for establishing marketing strategies and implementing programs to drive revenue growth, enter new markets and expand brand awareness to support Solarflare’s continuous development and global expansion.

He has over twenty-five years of experience in the server, storage, data center and networking industry, and has held senior level executive positions in product development, marketing and business development at Intel and Honeywell. Most recently, Houssein was SVP/GM at QLogic, where he successfully delivered first-to-market 25Gb Ethernet products, securing design wins at HP and Dell.

One of the key leaders in the creation of the InfiniBand and PCI-Express industry standards, Houssein is a recipient of the Intel Achievement Award and was a founding board member of the Storage Networking Industry Association (SNIA), a global organization of 400 companies in the storage market. He was educated in London, UK and holds the equivalent of an Electrical Engineering degree.

67: GreyBeards talk infrastructure monitoring with James Holden, Sr. Prod. Mgr. NetApp

Sponsored by NetApp: Howard and I first talked with James Holden, NetApp Senior Product Manager for OnCommand Insight and Cloud Insights, last month at Storage Field Day 16 (SFD16) in Waltham, MA. At the time, we thought it would be great to also have him on the show.

James has been with the NetApp OnCommand Insight (OCI) team for quite a while now and is very knowledgeable about the product and its technology. NetApp Cloud Insights is a new SaaS offering that provides some of the same services as OCI without the footprint, focused on newer, non-traditional applications and available on a pay-as-you-go model.

NetApp OnCommand Insight (OCI)

NetApp OCI is sort of a stripped-down, souped-up enterprise SRM tool, without storage and server configuration-provisioning (see James's introduction video from SFD15 for more info). It supports NetApp and just about anyone's storage, including Dell EMC, IBM, Hitachi Vantara (HDS), HPE, Infinidat, and Pure Storage, as well as most major OSs such as VMware vSphere, Microsoft Hyper-V, RHEL, etc. Other storage can easily be added to OCI through a patch/minor update, typically at customer request.

NetApp OCI currently runs in some of the biggest enterprises in the world today, including top F500 companies and one of the world's largest banks. OCI is agentless, but does use a data collector server/VM, onprem or in the cloud, that takes advantage of storage and system APIs to gather data.

OCI provides extensive end-to-end infrastructure monitoring and troubleshooting (see James's SFD16 OCI monitoring & troubleshooting session). OCI monitors application workloads from VMs down to the storage supporting them.

OCI also supplies extensive chargeback capabilities (see his SFD16 OCI cost control/chargebacks session). In times like these, when IT competes with public cloud offerings every day, chargebacks can be very illuminating.

Also, OCI has extensive integration with ServiceNow and similar offerings (see SFD16 OCI ecosystem session). With this level of integration, OCI can provide seamless tracking of service requests from initiation through completion and verification.

In addition, OCI can monitor public cloud infrastructure as well as onprem. For example, with Amazon Web Services (AWS), customers can use OCI to monitor EC2 instances' EBS IO activity. OCI reports on AWS IOPS rates by EC2-EBS connection. Customers paying for EBS IOPS can use OCI to monitor and tailor their EBS costs. OCI also supports Microsoft Azure environments.
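For context, the AWS side of this boils down to CloudWatch EBS metrics. Below is a minimal sketch, assuming boto3 credentials are configured and using a placeholder volume ID, of how one could compute a volume's recent IOPS independently of OCI; it illustrates the raw metrics involved, not NetApp's actual collector.

```python
# Minimal sketch: compute an EBS volume's recent IOPS from CloudWatch.
# Assumes boto3 credentials are configured; the volume ID is a placeholder.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
volume_id = "vol-0123456789abcdef0"   # placeholder
period = 300                          # seconds per datapoint

end = datetime.utcnow()
start = end - timedelta(hours=1)

iops = 0.0
for metric in ("VolumeReadOps", "VolumeWriteOps"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric,
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
        StartTime=start,
        EndTime=end,
        Period=period,
        Statistics=["Sum"],
    )
    datapoints = stats["Datapoints"]
    if datapoints:
        latest = max(datapoints, key=lambda d: d["Timestamp"])
        iops += latest["Sum"] / period   # ops per second over that period

print(f"{volume_id}: ~{iops:.0f} IOPS in the latest {period}s sample")
```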

NetApp Cloud Insights

NetApp Cloud Insights is a new SaaS offering that is currently in Public Preview status but is expected to release in October 2018 (check out his SFD16 Cloud Insights session video).

Customers can currently register to use the preview version at Cloud.netapp.com/Cloud Insights. There's a registration wall, but that's all it takes to get started.

The minimum Cloud Insights instance is a single server and 5TB of storage. Unlike OCI, Cloud Insights is tailored to support smaller shops without significant infrastructure. However, Cloud Insights offers standard onprem enterprise infrastructure monitoring as well.

Cloud Insights is also focused on modern, cloud-native applications, whether they operate onprem or in the cloud. The problem with cloud-native, container apps is that they come and go in seconds, and there are thousands of them. Cloud Insights was designed specifically for containers and other cloud-native applications and, as such, should provide more accurate monitoring of operations for these systems.

We talked about Cloud Insights' development cadence. James said that because it's a SaaS offering, new Cloud Insights functionality can be released daily, if not more frequently. Contrast that with OCI, where they schedule 3-4 releases a year.

Cloud Insights currently supports the Kubernetes container ecosystem, but more are on the way. Again, customers determine which container or other cloud-native ecosystems will be supported next.

The podcast runs ~22 minutes. James was very knowledgeable about OCI, Cloud Insights and infrastructure monitoring in general and he was easy to talk with. Howard and I had a great time at SFD16 and enjoyed our time talking with him again on the podcast.  Listen to the podcast to learn more.

James Holden, Senior Product Manager NetApp OCI and Cloud Insights 


James Holden is a Senior Manager of Product Management at NetApp, and for the last 5 years has been building the infrastructure monitoring and reporting tool OnCommand Insight.

Today he is working across NetApp’s Cloud Analytics portfolio, including Cloud Insights, a new SaaS offering currently in preview.

Prior to NetApp, James worked for 14 years at CSC in both the US and the UK on their storage, compute and automation solutions.