097: GreyBeards talk open source S3 object store with AB Periasamy, CEO MinIO

Ray was at SFD19 a few weeks ago, and the last session of the week (usually a dead slot) was with MinIO, who just blew us away (see videos of MinIO’s session here). Ray thought Anand Babu (AB) Periasamy (@ABPeriasamy), CEO of MinIO and the main presenter at the session, would be a great invite for our GreyBeards podcast. Keith and I had a ball talking with AB.

Why object store

There’s something afoot in the object storage space over the last year or so. It seems everybody is looking to deploy object stores, whether on prem, in CoLo facilities or in the cloud. It could be just the mass of data coming online, but that trend has remained the same for years now. No, it’s something else.

It all starts with AWS and S3. Over the last couple of years AWS has been rolling out new functionality that only works with S3 and this has been driving even more adoption of S3 as well as other object storage solutions.

S3 compatible object stores are available in just about every cloud service, available from major (and minor) storage vendors and in open source from MinIO.

Why S3 is so popular

Object stores are accessed via RESTful interfaces, and traditionally each implementation used its own API. But when AWS created S3 (Simple Storage Service) with its own API/SDK, that API somehow became the de facto standard interface for all other object stores. S3 compatibility became a significant feature that every object store had to support.

Sometime after that, MinIO came into existence. MinIO provides a 100% open source, fully AWS S3 compatible object store that you can run anywhere: on prem, in CoLo facilities and indeed in the cloud. In fact, there are customers that run MinIO in AWS. AB says this is probably just customers using a packaged software solution that happens to include MinIO, but it’s nonetheless more expensive than AWS S3 itself, as it uses EC2 instances and EBS storage to create an object store.

Customers can access MinIO object stores with either the AWS S3 SDK or the MinIO SDK, and likewise can access AWS S3 storage with either SDK. Occasionally, AWS S3 updates have broken MinIO’s SDK, but these breaks were later fixed by AWS. It seems AWS and MinIO are on good terms.
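To make the SDK interchangeability concrete, here’s a minimal sketch (ours, not MinIO’s) using boto3, the AWS S3 SDK for Python. The endpoint URL and credentials are placeholders; pointing the same code at real AWS S3 is just a matter of dropping the endpoint_url override.

```python
# Hedged sketch: the same AWS S3 SDK (boto3) calls work against a MinIO
# server simply by overriding the endpoint. The endpoint and credentials
# below are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.example.com:9000",  # omit for real AWS S3
    aws_access_key_id="MINIO_ACCESS_KEY",
    aws_secret_access_key="MINIO_SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object world")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```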

AB mentioned that as customers get up to a few PBs of AWS S3 storage, they often find the costs to be too high. It’s at this point that they start looking at other object storage solutions. And because MinIO is 100% S3 compatible and open source, many of these customers deploy it in their own data center facilities or in CoLo environments.

For those customers that want it, MinIO also offers an S3 gateway. With the gateway, on prem customers can use S3 or standard file services to access S3 object storage located in the cloud. The gateway also works in the public cloud and can support both AWS S3 and Microsoft Azure Blob storage as a backend.

MinIO matches AWS S3 features

AWS S3 has a number of great features, and MinIO has matched or exceeded them all, step by step. AWS S3 has cross-region replication options, where customers can replicate S3 data from one region to another. MinIO supports both asynchronous replication of S3 data and synchronous replication (using RADIO).

But MinIO adds support for erasure coding within a fault domain. The default is Nx2 erasure coding, which effectively duplicates all your data, so as long as half of your servers and storage are available you continue to have access to all of it. But this can be configured down to something like 12+4, where data is split across 16 servers, any four of which can fail while you can still access your data.
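To see what these erasure coding configurations mean for capacity, here’s some back-of-the-envelope arithmetic (our sketch of the math, not MinIO’s implementation):

```python
# Capacity and fault-tolerance arithmetic for N data + M parity erasure coding.
def ec_profile(data_shards: int, parity_shards: int):
    total = data_shards + parity_shards
    overhead = total / data_shards  # raw capacity consumed per usable unit
    return total, parity_shards, round(overhead, 2)

# Nx2 default, e.g. 8 data + 8 parity across 16 servers: 2.0x raw capacity,
# survives the loss of any 8 servers (half of them).
print(ec_profile(8, 8))    # (16, 8, 2.0)

# 12+4 across 16 servers: ~1.33x raw capacity, survives any 4 failures.
print(ec_profile(12, 4))   # (16, 4, 1.33)
```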

AWS customers can use a Snowball (standalone storage device) to transfer data to or from S3 storage. AWS Snowball implements a subset of the S3 API and requires a NAS staging area of equivalent size to migrate data out of S3. MinIO supports Snowball’s limited S3 API and, as such, Snowballs can be used to migrate data into or out of MinIO. MinIO has a blog post which describes their support for AWS Snowball.

AWS also offers S3 Lambda services, or serverless computing, where compute services can be invoked when data is loaded into a bucket and then turned off when no longer needed. AWS Lambda depends on AWS messaging and other services to work properly. But MinIO supports Lambda-like functionality using other open source services; AB mentioned MQTT and Kafka. MinIO has another blog post discussing their Lambda-like services based on Kafka.
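The Kafka/MQTT event targets are configured on the MinIO server, but to illustrate the Lambda-like pattern from the client side, here’s a hedged sketch using the MinIO Python SDK’s bucket notification listener (endpoint, credentials and bucket name are placeholders):

```python
# Hedged sketch: react to object-created events, much as an AWS Lambda
# function reacts to S3 PUT events. Connection details are placeholders.
from minio import Minio

client = Minio(
    "minio.example.com:9000",
    access_key="MINIO_ACCESS_KEY",
    secret_key="MINIO_SECRET_KEY",
    secure=False,
)

with client.listen_bucket_notification(
        "demo-bucket", events=["s3:ObjectCreated:*"]) as events:
    for event in events:
        for record in event["Records"]:
            key = record["s3"]["object"]["key"]
            print(f"new object landed: {key}")  # invoke your 'function' here
```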

Then there’s Snowflake, a SQL data warehouse that runs on AWS and uses S3 storage to hold its data. Ray and Keith almost choked on that statement, as unstructured (object) data and databases never used to be uttered in the same breath. But what Snowflake has shown is that you can use an object store for database data, as long as you are willing to load a table into memory, process it there, and then unload any modified table data back into the object store. Indexing of the object data seems to be done as the data is being loaded, also in a (random IO) cache or in memory, and once done it too can be unloaded into the object store.
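Here’s a sketch of that load-process-unload pattern (the bucket, key and column names are made up for illustration):

```python
# Hedged sketch: pull a table out of the object store into memory, modify
# it there, then write the modified table back to the object store.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Load the table into memory.
obj = s3.get_object(Bucket="warehouse-bucket", Key="tables/sales.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Process it entirely in memory (columns are hypothetical).
df["net"] = df["gross"] - df["discount"]

# Unload the modified table back to the object store.
s3.put_object(
    Bucket="warehouse-bucket",
    Key="tables/sales.csv",
    Body=df.to_csv(index=False).encode("utf-8"),
)
```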

Now Snowflake uses S3, but it’s not available on prem. MinIO has a number of database partners that use its object store as a backend to host Snowflake-like services on prem. AB mentioned Spark and Splunk, but there are others as well.

We ended the discussion with what it means to have 20K stars on GitHub. AB said that if you wrote a JavaScript library, getting 20K stars would be easy, but you just don’t see this sort of open source popularity for storage systems. He said the number is interesting, but the growth rate is even more interesting.

The podcast runs ~47 minutes. AB was great to talk tech with, and Keith and I could have talked with him all afternoon. It was very hard to stop the recording, as we could have gone on for another hour or more. AB said he doesn’t like to do podcasts or videos, but he had no problem with us firing away questions. Listen to the podcast to learn more.


Anand Babu Periasamy, CEO MinIO

AB Periasamy is the CEO and co-founder of MinIO. One of the leading thinkers and technologists in the open source software movement, AB was a co-founder and CTO of GlusterFS which was acquired by RedHat in 2011. Following the acquisition, he served in the office of the CTO at RedHat prior to founding MinIO in late 2015. AB is an active angel investor and serves on the board of H2O.ai and the Free Software Foundation of India.

He earned his BE in Computer Science and Engineering from Annamalai University.

90: GreyBeards talk K8s containers storage with Michael Ferranti, VP Product Marketing, Portworx

At VMworld2019 USA there was a lot of talk about integrating Kubernetes (K8s) into vSphere’s execution stack and operational model. We had heard that Portworx was a leader in K8s storage services or persistent volume support and thought it might be instructive to hear from Michael Ferranti (@ferrantiM), VP of Product Marketing at Portworx about just what they do for K8s container apps and their need for state information.

Early on, Michael worked for Rackspace in their SaaS team and over time saw how developers and system engineers just loved container apps. But they had great difficulty using them for mission critical applications, as containers of the time completely lacked storage support. Michael joined Portworx to help address these and other limitations in using containers for mission critical workloads.

Portworx is essentially a SAN, specifically designed for containers. It’s a software defined storage system that creates a cluster of storage nodes across K8s clusters and provides standard storage services at container-level granularity.

As a software defined storage system, Portworx sits right in the middle of the data path, so it must provide high availability, RAID protection and other standard storage system capabilities. But we talked only a little about basic storage functionality on the podcast.

Portworx was designed from the start for containers, so it can easily handle provisioning and de-provisioning 100s to 1000s of volumes without breaking a sweat. Not many storage systems, software defined or not, can handle this level of operations without impacting storage services.
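To show what container-granular provisioning looks like in practice, here’s a hedged sketch of a Portworx-backed storage class plus a per-container persistent volume claim. The provisioner name follows the in-tree Portworx volume plugin; the parameter values are illustrative, not a recommendation:

```yaml
# Illustrative only: a Portworx StorageClass and a PVC that a single
# container app would claim. Names and values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-repl2
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"                  # number of synchronous replicas
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: px-repl2
  resources:
    requests:
      storage: 10Gi
```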

Portworx supports both synchronous and asynchronous (snapshot based) replication. As with all synchronous replication, system write performance depends on how far apart the storage nodes are, but it can provide RPO=0 (recovery point objective) for mission critical container applications.

Portworx takes this another step beyond just data replication. They also replicate container configuration (YAML) files. We’re no experts, but YAML files encapsulate everything needed to run containers and container apps in a K8s cluster. When one combines replicated container YAML files, replicated persistent volume data AND an appropriate external registry, one can start running mission critical container apps at a disaster site in minutes.

Their asynchronous replication of container data and configuration files uses Portworx snapshots, which are sent to an alternate site. They also support async replication to any S3 compatible storage via CloudSnap.

Portworx also supports KubeMotion, which replicates/copies namespaces, container app volume data and container configuration YAML files from one K8s cluster to another. This way customers can move their K8s namespaces and container apps to any other Portworx K8s cluster site. This works across on prem K8s clusters, cloud K8s clusters, between public cloud providers’ K8s clusters, or between on prem and cloud K8s clusters.

Michael also mentioned that data at rest encryption, for Portworx, is merely a tick box on a storage class specification in the container’s YAML file. They make use of KMIP services to provide customer generated keys for encryption.
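As an illustration of that tick box, Portworx documents encryption as just one more storage class parameter. Extending the StorageClass sketch above (parameter name per Portworx docs; treat the exact values as an assumption on our part):

```yaml
parameters:
  repl: "2"
  secure: "true"             # encrypt this class's volume data at rest
```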

This is all offered as part of their Data Security/Disaster Recovery (DSDR) service, which supports any K8s cluster service, whether AWS, Azure, GCP, OpenShift, bare metal, or VMware vSphere running K8s VMs.

Like any software defined storage system, customers needing more performance can add nodes to the Portworx (and K8s) cluster, or add more/faster storage, to speed up IO.

It appears they have most if not all of the standard storage system capabilities covered, but their main differentiator, besides container app DR, is that they support volumes on a container-by-container basis. Unlike other storage systems that tend to use a VM or higher level of granularity to hold container state, with Portworx each persistent volume in use by a container maps to a provisioned volume.

Michael said their focus from the start was to provide high performing, resilient and secure storage for container apps. They ended up with a K8s native storage and backup/DR solution to support mission critical container apps running at scale. Licensing for Portworx is on a per host (K8s node) basis.

The podcast ran long, ~48 minutes. Michael was easy to talk with and knew K8s and their technology/market very well. Matt and I had a good time discussing K8s and Portworx’s unique features made for K8s container apps. Listen to the podcast to learn more.


Michael Ferranti, VP of Product Marketing, Portworx

Michael (@ferrantiM) is VP of Product Marketing at Portworx, where he is responsible for communicating the value of containerization and digital transformation to global architects and CIOs.

Prior to joining Portworx, Michael was VP of Marketing at ClusterHQ, an early leader in the container storage market, and spent five years at Rackspace in a variety of product and marketing roles.

83: GreyBeards talk NVMeoF/TCP with Muli Ben-Yehuda, Co-founder & CTO and Kam Eshghi, VP Strategy & Bus. Dev., Lightbits Labs

This is the first time we’ve talked with Muli Ben-Yehuda (@Muliby), Co-founder & CTO, and Kam Eshghi (@KamEshghi), VP of Strategy & Business Development, Lightbits Labs. Keith and I first saw them at Dell Tech World 2019 in Vegas, as they are a Dell Ventures funded organization. The company has 70 (mostly engineering) employees and is based in Israel, with offices in NY and the Valley as well as elsewhere around the world. Kam was previously with (Dell) EMC DSSD, and Muli spent years as a Master Inventor with IBM Research.

[This was Keith Townsend’s (@CTOAdvisor & The CTO Advisor), first time as a GreyBeard co-host and we had a great time with him on the show.]

I would have to say it was a far ranging discussion, but it focused on their software defined NVMeoF/TCP storage. As you may recall, we talked with Solarflare Communications last year, who were also working on NVMeoF/TCP, only in their case it was an accelerator board. After the recording, Muli said the hardware accelerator they have is their own design.

Why NVMeoF/TCP?

Most NVMeoF today that uses Ethernet requires RoCE or iWARP compatible NICs and switches. Lightbits Labs has long been active in the NVMeoF/RoCE-iWARP marketplace. Early on, they noticed that enterprise and cloud service providers were reluctant to adopt NVMeoF technology because of the need to change out all their networking equipment to use it. This is what brought about their focus on NVMeoF/TCP.

The advantage of NVMeoF/TCP is that it can be run on any Ethernet NIC and switch available today. From Muli’s perspective, NVMeoF/TCP is going to become the next SAN of choice for the data center. They were active, early on, in the standards committee to push for NVMeoF/TCP adoption.

How does it work?

Their software defined solution runs LightOS® storage software, a Linux based package, on off-the-shelf server hardware with persistent storage (Optane DC PM/SSDs, NVDIMMs, V-NAND, etc.). They use persistent memory as a fast write buffer and a place where they can “mold” the written data into something that can be better written to the backend NVMe SSDs.

One surprise about the Lightbits solution is that it offers a decent set of data services. These include erasure coding, thin provisioning, wire-speed inline compression, QoS and wide striping. It seems any of these can be disabled if a customer wants, but they add very little overhead. I think Muli mentioned one Lightbits customer with encrypted data that disabled compression.

Lightbits also offers a global FTL (flash translation layer), meaning they control SSD addressing, mapping data to physical/raw NAND locations at the storage system level. If done well, a global FTL can help improve flash endurance and may offer better write performance (through increased parallelism).

Lightbits’ claim of inline, wire speed data compression is premised on the use of current CPUs with high (>=28) core counts in the storage server. If the storage server has older CPUs (<28 cores), they suggest installing their LightField™ hardware accelerator add-in card. LightField offers a number of hardware based performance accelerations in addition to compression speedups.

LightOS requires no host (client) software. Muli’s a long time Linux kernel contributor and indicated that the only thing LightOS needs is a current Linux kernel (5.0 or later), which has the NVMeoF/TCP driver software (and persistent memory support). Lightbits believes that it’s only a matter of time until other OSs also implement NVMeoF/TCP drivers.

Lightbits business considerations

Long term, Lightbits sees a need for compute-storage disaggregation in hyperscaler and enterprise cloud environments. Early on, it was relatively easy to replicate servers with DAS storage, but as NVMe SSDs came out, the expense of doing this throughout a >>1000 server environment became exorbitant. If only they had an easy way to disaggregate their storage from compute and still enjoy all the performance advantages of DAS NVMe SSDs. With LightOS, they can do just that.

Lightbits can be sold today through Dell as a partner solution, which means that Dell can integrate, test and validate their servers with the LightField accelerator card and deliver that package to your data center. I believe you still need to purchase and install the LightOS software yourself.

Lightbits charges for LightOS software on a per storage node basis, with different charges based on the maximum number of NVMe SSD slots available in a server. There is no capacity charge. They also offer worldwide service and support for LightOS software and LightField hardware.

It’s all about performance

From a performance perspective, one Fortune 500 hyperscaler benchmarked their storage solution against a DAS NVMe server and found it added only about 30 µsec to IO latency compared to DAS NVMe SSDs. From their perspective, the added data services, better endurance, and disaggregated compute-storage environment provided by LightOS more than made up for the additional overhead.

Finally, I asked whether multiple LightOS storage servers could be clustered together. Muli, after stating some legal caveats, said they are working on the next generation of LightOS, and it will support clustered storage servers and local data replication, as well as distributed (across storage servers) erasure coding.

The podcast is a long one, running ~47 minutes. There was a lot to talk about, and Kam and Muli seem to know it all. It was interesting to hear the history of their pivot to TCP. They seem to have the right technology to address the market. Listen to the podcast to learn more.

Muli Ben-Yehuda, Co-founder and CTO, Lightbits Labs

Muli Ben-Yehuda is the CTO and Co-Founder of Lightbits Labs, where he leads technological developments.

Prior to founding Lightbits, he was chief scientist at Stratoscale and a researcher and Master Inventor at IBM Research.

He holds an M.Sc. in Computer Science (summa cum laude) from the Technion — Israel Institute of Technology and a B.A. (cum laude) from the Open University of Israel.

He is a long time Linux kernel contributor and his code and ideas are most likely included in an operating system or hypervisor running near you. He is also one of the authors of the NVMe/TCP standard and technology. 

Kam Eshghi, VP Strategy & Business Development, Lightbits Labs

Kam joined Lightbits Labs from Dell EMC and has over 20 years of experience in strategic marketing and business development with startups and public companies.

Most recently as VP of strategic alliances at startup DSSD, Kam led business development with technology partners and developed DSSD’s partnership with EMC, leading to EMC’s acquisition of DSSD.

Previously as Sr. Director of Marketing & Business Development at IDT, Kam built their NVMe Controller business from scratch. Previous to that, Kam worked in data center storage, compute and networking markets at HP, Intel, and Crosslayer Networks. 

Kam is a U.C. Berkeley and MIT graduate with a BS and MS in Electrical Engineering and Computer Science and an MBA.

75: GreyBeards talk persistent memory IO with Andy Grimes, Principal Technologist, NetApp

Sponsored By: NetApp
In this episode we talk new persistent memory IO technology with Andy Grimes, Principal Technologist, NetApp. Andy presented at the NetApp Insight 2018 TechFieldDay Extra (TFDx) event (video available here). If you get a chance, we encourage you to watch the videos, as Andy did a great job describing their new MAX Data persistent memory IO solution.

The technology for MAX Data came from NetApp’s Plexistor acquisition. Prior to the acquisition, Plexistor had also presented at SFD9 and TFD11.

Unlike NVMeoF storage systems, MAX Data is not sharing NVMe SSDs across servers. What MAX Data does is supply an application-neutral way to use persistent memory as a new, ultra fast storage tier, together with a backing store.

MAX Data performs a write or an “active” (persistent memory tier) read in single digit µseconds for a single core/single thread server. Their software runs in user space and, as such, for multi-core servers access can take up to 40 µseconds. Access times for backend storage reads are the same as for NetApp AFF, but once read, data is automatically promoted to persistent memory, and while there, reads are ultra fast.

One of the secrets of MAX Data is that they have completely replaced the Linux POSIX file IO stack with their own software. Their software is streamlined and bypasses much of the overhead present in today’s Linux file stack. For example, MAX Data doesn’t support metadata journaling.

MAX Data works with many different types of (persistent) memory, including DRAM (non-persistent memory), NVDIMMs (DRAM+NAND persistent memory) and Optane DIMMs (Intel 3D XPoint memory, slated to be GA by the end of this year). We suspect it would work with anyone else’s persistent memory as soon as it comes on the market.

Even though the (Optane and NVDIMM) memory is persistent, server issues can still lead to access loss. In order to provide data availability across server outages, MAX Data also supports MAX Snap and MAX Recovery.

With MAX Snap, MAX Data uploads all persistent memory data to ONTAP backing storage and has ONTAP snapshot it. This way you have a complete version of MAX Data storage that can then be backed up or SnapMirrored to other ONTAP storage.

With MAX Recovery, MAX Data synchronously replicates persistent memory writes to a secondary MAX Data system. This way, if the primary MAX Data system goes down, you still have an RPO=0 copy of the data on another MAX Data system that can be used to restore the original data, if needed. Synchronous mirroring adds 3-4 µseconds to the write access times quoted above.

Given the extreme performance of MAX Data, it’s opening up a whole new set of customers to talking with NetApp, specifically high frequency traders (HFT) and high performance computing (HPC) shops. HFT companies are attempting to reduce their stock transaction access times to as fast as possible. HPC vendors have lots of data, and processing all of it in a timely manner is almost impossible. Anything that can improve throughput/access times should be very appealing to them.

To configure MAX Data, one uses a 1:25 ratio of persistent memory capacity to backing store. MAX Data also supports multiple LUNs.
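As a quick worked example of that ratio (our arithmetic, not a NetApp sizing tool): 1 TB of persistent memory would front 25 TB of backing store.

```python
# Illustrative tier-sizing helper for the 1:25 persistent memory to
# backing store ratio mentioned above.
def max_data_backing_tb(pmem_tb: float, ratio: int = 25) -> float:
    """Return the backing store capacity (TB) for a given pmem tier size."""
    return pmem_tb * ratio

print(max_data_backing_tb(1.0))  # 25.0 TB behind 1 TB of persistent memory
```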

MAX Data only operates on Linux and supports (IBM) RedHat and CentOS, but Andy said it’s not that difficult to add support for other Linux distros, and customer demand will dictate which ones are supported over time.

As discussed above, MAX Data works with NetApp ONTAP storage, but it also works with SSDs/NVMe SSDs as backend storage. In addition, MAX Data has been tested with NetApp HCI (with SolidFire storage; see our prior podcasts on NetApp HCI with Gabriel Chapman and Adam Carter) as well as E-Series storage. The Plexistor application has already been available on the AWS Marketplace for use with EC2 DRAM and EBS backing store. It’s not much of a stretch to replace this with MAX Data.

MAX Data is expected to be GA released before the end of the year.

A key ability of the MAX Data solution is that it requires no application changes to use persistent memory for ultra-fast IO. This should help accelerate persistent memory adoption in data centers when the hardware becomes more available. Speaking to that, at Insight 2018, Lenovo, Cisco and Intel were all on stage when NetApp announced MAX Data.

The podcast runs ~25 minutes. Andy’s an old storage hand (although no grey beard) who talks the talk and walks the walk of storage religion. Andy is new to TFD, but we doubt it will be the last time we see him there. Andy was very conversant on the MAX Data technology and the market it is apparently opening up. Listen to our podcast to learn more.

Andy Grimes, Principal Technologist, NetApp

Andy has been in the IT industry for 17 years, working in roles spanning development, technology architecture, strategic outsourcing and healthcare.

For the past 4 years Andy has worked with NetApp on taking the NetApp Flash business from #5 to #1 in the industry (according to IDC). During this period NetApp also became the fastest growing Flash and SAN vendor in the market and regained leadership in the Gartner quadrant.

Andy also works on NetApp’s product vision, competitive analysis and future technology direction, and works with the team bringing the MAX Data PMEM product to market.

Andy has a BS degree in psychology, a BPA in management information systems, and an MBA. He currently works as a Principal Technologist for the NetApp Cloud Infrastructure Business Unit, with a focus on PMEM, HCI and cloud strategy. Andy lives in Apex, NC with his beautiful wife and 2 children, a 4 year old and a 22 year old (yes, don’t let this happen to you). For fun, Andy likes to mountain bike, rock climb, hike and scuba dive.

69: GreyBeards talk HCI with Lee Caswell, VP Products, Storage & Availability, VMware

Sponsored by:

For this episode we preview VMworld by talking with Lee Caswell (@LeeCaswell), Vice President of Product, Storage and Availability, VMware.

This is the third time Lee’s been on our show, the previous one was back in August of last year. Lee’s been at VMware for a couple of years now and, among other things, is leading the HCI journey at VMware.

The first topic we discussed was VMware’s expanded HCI software defined data center (SDDC) solution, which now includes compute, storage, networking and enhanced operations with alerts/monitoring/automation that ties it all together.

We asked Lee to explain VMware’s SDDC:

  • HCI operates at the edge – in ROBO and 2-server environments, VMware’s HCI can be deployed in a closet and remotely operated by a VI admin from the central site.
  • HCI operates in the data center – with vSphere-vSAN-NSX-vRealize and other software, VMware modernizes data centers for the pace of digital business.
  • HCI operates in the public cloud – with VMware Cloud (VMC) on AWS, IBM Cloud and over 400 service providers, VMware HCI also operates in the public cloud.
  • HCI operates for containers and cloud native apps – with support for containers under vSphere, vSAN and NSX, developers are finding VMware HCI an easy option to run container apps in the data center, at the edge, and in the public cloud.

The importance of the edge will become inescapable, as 50B edge connected devices are expected to power IoT by 2020. Lee has heard Pat (Gelsinger) say that compute processing is moving to the edge because of 3 laws:

  1. the law of physics, light/information only travels so fast;
  2. the law of economics, doing all processing at central sites would take too much bandwidth and cost; and
  3. the law(s) of the land, data sovereignty and control is ever more critical in today’s world.

VMware SDDC is a full stack option that executes just about anywhere a data center wants to go. Howard mentioned one customer he talked with at FMS18 who just wanted to take their 16 node VMware HCI rack and clone it forever, to supply infinite infrastructure.

Next, we turned our discussion to Virtual Volumes (VVols). Recently, VMware added replication support for VVols. Lee said VMware intends to provide an SRM SRA for VVols. But the real question is why field VVol adoption hasn’t been higher. We concluded it just takes time.

VVols wasn’t available until after vSphere 5.5 (it arrived with vSphere 6.0), and nowadays three or more years have to go by before a significant amount of the field moves to a new release. Howard also said early storage systems didn’t implement VVols right. Moreover, VMware vSphere 5.5 is just now (9/16/18) going EoGS.

Lee said 70% of all current vSAN deployments are AFA (all-flash arrays). With AFA, hand tuning storage performance is no longer something admins need to worry about. It used to be we all spent time defragging/compressing data to squeeze more effective capacity out of storage, but hand capacity optimization like this has become a lost art. Just like capacity, hand tuning AFA performance no longer makes sense.

We then talked about the coming flash SSD supply glut. Howard sees flash pricing ($/GB) dropping by 40-50%, regardless of interface. This should drive AFA shipments above 70%, as long as the glut continues.

The podcast runs ~21 minutes. Lee’s always great to talk with and is very knowledgeable about the IT industry, HCI in general and, of course, VMware HCI in particular. Listen to the podcast to learn more.

Lee Caswell, V.P. of Product, Storage & Availability, VMware

Lee Caswell leads the VMware storage marketing team driving vSAN products, partnerships, and integrations. Lee joined VMware in 2016 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Prior to VMware, Lee was vice president of Marketing at NetApp and vice president of Solution Marketing at Fusion-IO. Lee was a founding member of Pivot3, a company widely considered to be the founder of hyper-converged systems, where he served as the CEO and CMO. Earlier in his career, Lee held marketing leadership positions at Adaptec, and SEEQ Technology, a pioneer in non-volatile memory. He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.