61: GreyBeards talk composable storage infrastructure with Taufik Ma, CEO, Attala Systems

In this episode, we talk with Taufik Ma, CEO, Attala Systems (@AttalaSystems). Howard had met Taufik at last year’s Flash Memory Summit (FMS17) and was intrigued by their architecture, which he thought was a harbinger of future trends in storage. From my perspective, the fact that Attala Systems is innovating with new, proprietary hardware made for an interesting discussion in its own right.

Taufik has worked at startups and major hardware vendors over his career and seems to have always been at the intersection of breakthrough solutions and hardware technology.

Attala Systems is based out of San Jose, CA.  Taufik has a class A team of executives, engineers and advisors making history again, this time in storage with JBoFs and NVMeoF.

Ray’s written about JBoF (just a bunch of flash) before (see his Facebook moving to JBoF post). This is essentially a hardware box, filled with lots of flash storage and drive interfaces, that directly connects to servers. Attala Systems storage is JBoF on steroids.

Composable Storage Infrastructure™

Essentially, their composable storage infrastructure is a JBoF that connects to hosts with NVMeoF (NVMe over Fabrics) over Ethernet to provide direct host access to NVMe SSDs. They have implemented special-purpose, proprietary hardware in the form of an FPGA, used in a proprietary host network adapter (HNA), to support their NVMeoF storage.

Their HNA comes in a host-side and a storage-side version, both utilizing Attala Systems’ proprietary FPGA(s). With these HNAs, Attala has implemented its own NVMeoF-over-UDP stack in hardware. It supports multi-path IO and highly available, dual- or single-ported NVMe SSDs in a storage shelf. They use standard RDMA-capable 25/50/100GbE Ethernet switches (read: Mellanox) to connect hosts to storage JBoFs.

They also support RDMA over Converged Ethernet (RoCE) NICs for additional host access. However, I believe this requires host software (their NVMeoF-over-UDP stack) to connect to their storage.

From the host, Attala Systems storage on HNAs looks like directly attached NVMe SSDs, only hot pluggable and physically located across an Ethernet network. In fact, Taufik mentioned that they already support VMware vSphere servers accessing Attala Systems composable storage infrastructure.

Okay, on to the good stuff. Taufik said they measured their overhead and can perform an IO with only an additional 5 µsec of latency over a native NVMe SSD. Current NVMe SSDs respond in 90 to 100 µsec, so with Attala Systems Composable Storage Infrastructure you should see 95 to 105 µsec response times from a JBoF full of NVMe SSDs! And with Intel Optane SSDs’ 10 µsec response times, Taufik said they see ~16 µsec (the extra µsec appears to be network switch delay)!!

Managing composable storage infrastructure

They also use a management “entity” (running on a server or as a VM) to manage their JBoF storage and configure NVMe namespaces (like a SCSI LUN/volume). Hosts use NVMe namespaces to access and carve up the JBoF NVMe storage space. That is, multiple Attala Systems namespaces can be configured over a single NVMe SSD, each one corresponding to a single (virtual-to-real) host NVMe SSD.

The management entity has a GUI, but it just uses their RESTful APIs underneath. They also support QoS, on an IOPS or bandwidth-limiting basis per namespace, to manage noisy neighbors.

Attala Systems architected their management system to support scale-out storage. This means they could support many JBoFs in a rack, and possibly multiple racks of JBoFs, connected to swarms of servers. Nothing was said that would limit the number of Attala JBoFs attached to a single server or under a single (dual for HA) management entity. I thought host software might have a problem with this (e.g., 256 NVMe namespace SSDs PCIe-connected to the same server) but Taufik said this isn’t a problem for modern OSes.

Taufik mentioned that with their RESTful APIs, namespaces can be quickly created and torn down, on the fly. They envision their composable storage infrastructure as a great complement to cloud compute and container execution environments.
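Attala didn’t detail their API on the podcast, so the following is just a sketch of what namespace lifecycle calls against their management entity might look like. The endpoint paths, field names and QoS parameters are all my own assumptions, for illustration only.

```python
"""Hypothetical sketch of namespace lifecycle calls against an
Attala-style management entity. Endpoint paths, field names and QoS
parameters are illustrative assumptions, not Attala's published API."""
import requests

MGMT = "https://mgmt.example.com/api/v1"  # hypothetical management entity URL

def create_namespace(ssd_id: str, size_gb: int, iops_limit: int) -> str:
    """Carve a namespace out of one NVMe SSD, with an IOPS QoS cap."""
    resp = requests.post(f"{MGMT}/namespaces", json={
        "backing_ssd": ssd_id,       # physical SSD the namespace maps onto
        "size_gb": size_gb,          # multiple namespaces can share one SSD
        "qos": {"max_iops": iops_limit},
    })
    resp.raise_for_status()
    return resp.json()["namespace_id"]

def delete_namespace(namespace_id: str) -> None:
    """Tear the namespace down again -- cheap enough to do on the fly."""
    requests.delete(f"{MGMT}/namespaces/{namespace_id}").raise_for_status()

if __name__ == "__main__":
    ns = create_namespace("ssd-07", size_gb=256, iops_limit=50_000)
    print("host now sees", ns, "as a local, hot-pluggable NVMe device")
    delete_namespace(ns)
```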

For storage hardware, they use storage shelves from OEM vendors. One recent configuration from Supermicro has 32 hot-pluggable, dual-ported NVMe slots in a 1U chassis, which at today’s ~16TB capacities is ~1/2PB of raw flash. Taufik mentioned 32TB NVMe SSDs are being worked on as we speak. Imagine that: 1PB of NVMe SSD flash storage in 1U!!

The podcast runs ~47 minutes. Taufik took a while to get warmed up, but once he got going, my jaw dropped. Listen to the podcast to learn more.

Taufik Ma, CEO, Attala Systems

Tech-savvy business executive with a track record of commercializing disruptive data center technologies. After a short stint as an engineer at Intel following college, Taufik jumped to the business side, where he led a team to define Intel’s crown jewels – CPUs & chipsets – during the ascendancy of the x86 server platform.

He honed his business skills as Co-GM of Intel’s Server System BU before leaving for a storage/networking startup. The acquisition of this startup put him on the executive team at Emulex where, as SVP of product management, he grew their networking business from scratch to deliver the industry’s first million units of 10Gb Ethernet product.

These accomplishments draw on his ability to engage and acquire customers, and partners when necessary, at all stages of product maturity.

60: GreyBeards talk cloud data services with Eiki Hrafnsson, Technical Director, NetApp

Sponsored by:

In this episode, we talk with Eiki Hrafnsson (@Eirikurh), Technical Director, NetApp Cloud Data Services. Eiki gave a great talk at Cloud Field Day 3 (CFD3), although neither Howard nor I were in attendance. I first met Eiki at a NetApp Spring Analyst event earlier this month, and after that Howard and I had a chance to talk with him about what’s new in NetApp Cloud Data Services.

This is the fourth time NetApp has been on our show (see our podcast with Lee Caswell and Dave Wright,  podcast with Andy Banta, & last month’s sponsored podcast with Adam Carter) and this is their second sponsored podcast.

Eiki came from GreenQloud, a company NetApp acquired last year, whose product was QStack. Since then, QStack has become an integral part of their Cloud Data Services.

NetApp has a number of solutions under their Cloud Data Services umbrella, and Eiki’s area of specialty is NetApp Cloud Data Volumes: soon to be available in the AWS Marketplace, already in public preview as Microsoft Azure Enterprise NFS, and, as of 7 May 2018, in private preview as NetApp Cloud Volumes for Google Cloud Platform.

NetApp Cloud Data Volumes

NetApp’s Cloud Data Volumes is a public-cloud-based storage-as-a-service that supplies enterprise-class NFS and SMB (CIFS) storage on a pay-as-you-go model for major public cloud providers. That way your compute instances can have access to predictable-performance, highly available file storage in the cloud.

One advantage that Cloud Data Volumes adds to the public cloud is performance SLAs. That is, customers can purchase Low, Medium or High performance file storage. Eiki said they measured Cloud Data Volumes IO performance and it achieved almost 10X normal public cloud (file) storage performance. I assume this was High performance Cloud Data Volumes storage, and there’s no information on which storage type was used as the cloud alternative.

Cloud Data Volumes customers also get access to NetApp Snapshot services, which can create space-efficient, quick, read-only copies of their cloud file storage. Cloud Data Volumes storage can be purchased on a $/GB/month basis. Other purchase options are also available for customers who prefer a pre-billed amount rather than a consumption model.

Behind the scenes, Cloud Data Volumes is actually NetApp ONTAP storage. They won’t say what kind or how much, but they do say that NetApp storage is located in public cloud data centers and is fully managed by NetApp.

Customers can use the public cloud’s native services portal to purchase Cloud Data Volumes storage (for Microsoft Azure and GCP) or the NetApp Cloud web portal (for AWS). Once purchased, customers can use an extensive set of native cloud APIs to provision, access and tear down Cloud Volumes storage.

Other NetApp Cloud Data Services

Eiki mentioned that Cloud Data Volumes is just one of many offerings from NetApp’s Cloud Data Services business unit, including:

  • NetApp Private Storage – colocated, customer-owned NetApp storage that sits adjacent to public clouds.
  • ONTAP Cloud – a software-defined ONTAP storage system that runs in the cloud on compute services, using cloud storage to provide block storage.
  • Cloud Sync – a data-synchronization-as-a-service offering used to replicate data from on-prem NAS and object storage to the public cloud.

There are probably a few others I’m missing here, and my bet is more offerings are on the way.

Another item Eiki mentioned was the open source NetApp Trident plugin (GitHub repo). Containers are starting to need persistent state information, and this means they need access to storage.

Trident provides dynamic, API-driven provisioning of storage volumes for containers under Kubernetes. Container developers define environmental characteristics that dictate the operational environment and now, with Trident, can also specify needed storage volumes. That way, when Kubernetes fires up a container for execution, NetApp storage is provisioned just in time to support stateful container execution.
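To make that concrete, here’s a rough sketch of how a developer might request a Trident-provisioned volume, using the official Kubernetes Python client. The storage class name (netapp-trident) and namespace are assumptions of mine, not NetApp defaults.

```python
"""Sketch: requesting a Trident-provisioned volume from Kubernetes.
Assumes a StorageClass named 'netapp-trident' already exists; that
name (and the namespace) are illustrative, not NetApp defaults."""
from kubernetes import client, config

config.load_kube_config()  # use local kubeconfig credentials

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        storage_class_name="netapp-trident",   # hypothetical Trident class
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"}
        ),
    ),
)

# When a pod references this claim, Trident provisions NetApp storage
# just in time and Kubernetes binds it to the container.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```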

The podcast runs ~25 minutes. Eiki was very knowledgeable and was easy to talk with especially on cloud technologies and how NetApp fits in.  Listen to the podcast to learn more.

Erikur (Eiki) Hrafnsson, Technical Director, NetApp Cloud Data Services

Erikur (Eiki) Hrafnsson is an entrepreneur, dad, singer, founder of GreenQloud and maker of QStack, the hybrid cloud platform, now part of NetApp Cloud Data Services. Eiki brings deep public cloud integration knowledge and broad experience in cloud automation and APIs.

59: GreyBeards talk IT trends with Marc Farley, Sr. Product Manager, HPE

In Episode 59, we talk with Marc Farley, Senior Product Manager at HPE, discussing trends in the storage industry today. Marc’s been on our show before (GreyBeards talk Cloud Storage…, GreyBeards video discussing file analytics, Greybeards talk cars, storage and IT…) and has been a longtime friend and associate of both Howard and me.

Marc’s been at HPE for a while now but couldn’t publicly discuss what he’s working on, so we spent our time discussing industry trends rather than HPE products.

We discussed the public cloud and its impact on enterprise IT. Although the cloud has arguably been alive and well for almost a decade now, its impact is still being felt today and will be for the foreseeable future.

We next discussed AI and data storage. HPE’s acquisition of Nimble brought InfoSight into their product family; InfoSight was arguably one of the first products to use big data analytics to improve field support and ongoing operations.

Howard mentioned that a logical next step is to apply AI to storage performance: using AI to fingerprint application workloads and thereby help determine when an app’s data is needed in cache. We also mentioned that AI could be used to help with workload optimization/orchestration in almost real time, rather than after the fact.

We talked about containerization as the next big thing. Howard and Marc said sometimes it’s less risky to just keep chugging away with what IT has always done rather than risk a move to a new paradigm/platform, AKA containers. As further evidence, Marc had seen a survey (by an unnamed research firm) comparing customers’ pre-purchase expectations for new storage with what they actually used it for post-purchase. Pre-purchase, customers expected to use the storage for server virtualization, but post-purchase, a majority used it for more traditional, non-virtualized applications.

We returned to a perennial theme: when will SSDs supplant disk? Howard talked about a vendor’s recent introduction of a dual-head disk drive, which he thought was overreach. But all agreed the key metric is $/GB and getting the difference between rotating media and SSDs below 10X. Howard believes when it’s more like 4X, SSDs will kill off disk technology. Although some of us felt disks will never completely go away; witness tape.

The podcast runs ~38 minutes. Marc’s always a gas to talk with and is currently the most frequent guest we’ve had on our show (although Jim Handy was tied with him up until now). It’s great to hear from him again. Listen to the podcast to learn more.

Marc Farley, Senior Product Manager, HPE

Marc is a storage greybeard who has worked for many storage companies and is currently providing product strategy for HPE. He has written three books on storage, including his most recent, Rethinking Enterprise Storage: A Hybrid Cloud Model, and his previous books, Building Storage Networks and Storage Networking Fundamentals.

In addition to writing books, he has been a blogger and podcaster on storage topics while working for EqualLogic, Dell, 3PAR, HP, StorSimple, Microsoft, HPE and others.

When he is not working, Marc likes to ride bicycles, listen to music, spend time with his family and dote on his cats. Of course there’s that car video curation…

57: GreyBeards talk midrange storage with Pierluca Chiodelli, VP of Prod. Mgmt. & Cust. Ops., Dell EMC Midrange Storage

Sponsored by:

Dell EMC Midrange Storage

In this episode we talk with Pierluca Chiodelli (@chiodp), Vice President of Product Management & Customer Operations at Dell EMC Midrange Storage. Howard talked with Pierluca at SFD14 and I talked with him at SFD13. He started at EMC as a customer engineer and has worked his way up to VP since then.

This is the second time (Dell) EMC has been on our show (see our EMC World 2015 summary podcast with Chad Sakac), but this is the first sponsored podcast from Dell EMC. Pierluca seems to have been with (Dell) EMC forever.

You may recall that Dell EMC has two product families in their midrange storage portfolio. Pierluca provides a number of reasons why both continue to be invested in, enhanced and sold on the market today.

Dell EMC Unity and SC product lines

Dell EMC Unity storage is the outgrowth of the unified block and file storage first released in the EMC VNXe series storage systems. Unity continues that tradition, providing both file and block storage in a dense, 2U rack configuration with dual controllers, high availability, and both AFA and hybrid storage models. The other characteristic of Unity storage is its tight integration with VMware virtualization environments.

Dell EMC SC series storage continues the long tradition of Dell Compellent storage systems, which support block storage and invented data progression technology. Data progression is storage tiering on steroids, with support for multiple tiers of rotating disk (even across the same drive), flash, and now cloud storage. The SC series is also considered a set-it-and-forget-it storage system that just takes care of itself, without the need for operator/admin tuning or extensive monitoring.
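Dell EMC doesn’t publish data progression’s internals, but to illustrate the general idea of policy-driven tiering, here’s a toy sketch. The tier names, thresholds and scoring rule are all invented for the example.

```python
"""Toy illustration of policy-driven tiering in the spirit of data
progression. Tier names, thresholds, and the scoring rule are invented
for this sketch; the real SC series algorithm is not public."""
from dataclasses import dataclass

TIERS = ["flash", "fast_disk", "slow_disk", "cloud"]  # hot -> cold

@dataclass
class Extent:
    id: int
    tier: str
    accesses_last_week: int

def retier(extent: Extent, hot_threshold: int = 100, cold_threshold: int = 5) -> str:
    """Promote busy extents one tier up, demote idle ones one tier down."""
    i = TIERS.index(extent.tier)
    if extent.accesses_last_week >= hot_threshold and i > 0:
        extent.tier = TIERS[i - 1]          # promote toward flash
    elif extent.accesses_last_week <= cold_threshold and i < len(TIERS) - 1:
        extent.tier = TIERS[i + 1]          # demote toward cloud
    return extent.tier

print(retier(Extent(1, "fast_disk", 500)))  # -> flash
print(retier(Extent(2, "fast_disk", 1)))    # -> slow_disk
```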

Dell EMC is bringing both of these storage systems together in CloudIQ, their cloud-based storage analytics engine, and plans to have both systems supported under the Unisphere management engine.

Unity storage can also tier files to the cloud and copy LUN snapshots to the public cloud using their Cloud Tiering Appliance software. With the UnityVSA software-defined storage appliance and VMware vSphere running in AWS, that file and snapshot data can then be accessed in the cloud. SC series storage will have similar capabilities available soon.

At the end of the podcast, Pierluca talks about Dell EMC’s recently introduced Customer Loyalty Programs, which include: Never Worry Data Migrations, Built-in VirtuStream Storage Cloud, 4:1 Storage Efficiency Guarantee, All-inclusive Software pricing, 3-year Satisfaction Guarantee, Hardware Investment Protection, and Predictable Support Pricing.

The podcast runs ~27 minutes. Pierluca is a very knowledgeable individual and although he has a beard, it’s not grey (yet). He’s been with EMC storage forever and has a long, extensive history in midrange storage, especially with Dell EMC’s storage product families. It’s been a pleasure for Howard and me to talk with him again. Listen to the podcast to learn more.

Pierluca Chiodelli, V.P. of Product Management & Customer Operations, Dell EMC Midrange Storage

Pierluca Chiodelli is currently the Vice President of Product Management for Dell EMC’s suite of midrange solutions, including Unity, VNX, and VNXe from heritage EMC storage, and Compellent, EqualLogic, and Windows Storage Server from heritage Dell Storage.

Pierluca’s organization is comprised of four teams: Product Strategy, Performance & Competitive Engineering, Solutions, and Core & Strategic Account engineering. The teams are responsible for ensuring Dell EMC’s mid-range solutions enable end users and service providers to transform their operations and deliver information technology as a service.

Pierluca has been with EMC since 1999, with experience in field support and core engineering across Europe and the Americas. Prior to joining EMC, he worked at Data General and as a consultant for HP Corporation.

Pierluca holds one degree in Chemical Engineering and a second in Information Technology.


56: GreyBeards talk high performance file storage with Liran Zvibel, CEO & Co-Founder, WekaIO

This month we talk high performance, cluster file systems with Liran Zvibel (@liranzvibel), CEO and Co-Founder of WekaIO, a new software-defined, scale-out file system. I first heard of WekaIO when it showed up on SPEC sfs2014 with a new SWBUILD benchmark submission. They had a 60-node EC2 AWS cluster running the benchmark and achieved, at the time, the highest SWBUILD number (500) of any solution.

At the moment, WekaIO is targeting the HPC and media & entertainment verticals, and their solution is sold on an annual capacity-subscription basis.

By the way, a Wekabyte is 2**100 bytes of storage, or ~1 trillion exabytes (an exabyte is 2**60 bytes, so 2**100/2**60 = 2**40 ≈ 1.1 trillion).

High performance file storage

The challenge with HPC file systems is that they need to handle large numbers of files and large amounts of storage, with high-throughput access to all this data. Where WekaIO comes into the picture is that they do all that plus support high file IOPS. That is, they can open, read or write a large number of relatively small files at impressive speed, with low latency. Such workloads are becoming more popular with AI/machine learning and life sciences/genomic microscopy image processing.

Most file system developers will tell you that they can supply high throughput OR high file IOPS, but doing both is a real challenge. WekaIO is able to do both, while at the same time supporting billions of files per directory and trillions of files in a file system.

WekaIO supports up to 64K cluster nodes and has tested up to 4000 cluster nodes. WekaIO announced an OEM agreement with HPE last year and is starting to build out bigger clusters.

Media & entertainment file storage requirements are mostly just high throughput with large (media) file sizes. Here WekaIO has more competition from other cluster file systems, but their ability to support extra-large data repositories with great throughput is another advantage.

WekaIO cluster file system

WekaIO is a software-defined storage solution. Whereas many HPC cluster file systems have separate metadata and storage nodes, WekaIO’s cluster nodes are combined metadata and storage nodes. So as one scales capacity (by adding nodes), one not only scales large-file throughput (via more IO parallelism) but also scales small-file IOPS (via more metadata processing capability). There’s also some secret sauce to their metadata sharding (if that’s the right word) that allows WekaIO to support more metadata activity as the cluster grows.

One secret to WekaIO’s ability to support both high throughput and high file IOPS lies in their performance load balancing across the cluster. Apparently, WekaIO can be configured to constantly monitor all cluster nodes for performance and balance all file IO activity (data transfers and metadata services) across the cluster, to ensure that no one node is overburdened with IO.

Liran says that performance load balancing was one reason they were so successful with their EC2 AWS SPEC sfs2014 SWBUILD benchmark. One problem with AWS EC2 is a lot of unpredictability in node performance: when running EC2 instances, “noisy neighbors” impact node performance. With WekaIO’s performance load balancing running on EC2 node instances, they can just redirect IO activity around slower nodes to faster nodes that can handle the work, in real time.

WekaIO performance load balancing is a configurable option. The other alternative is for WekaIO to “cryptographically” spread the workload across all the nodes in a cluster.
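The mechanics are WekaIO secret sauce, but the two placement modes described above can be sketched roughly as follows. The load metric and hash choice are my assumptions, purely to illustrate the contrast.

```python
"""Rough sketch of the two placement modes described above: a static
'cryptographic' spread vs. load-aware redirection. The load metric and
hash choice are assumptions; WekaIO's actual mechanics are not public."""
import hashlib

nodes = ["node-a", "node-b", "node-c", "node-d"]
inflight_ios = {"node-a": 12, "node-b": 3, "node-c": 48, "node-d": 7}

def place_by_hash(object_id: str) -> str:
    """Static spread: hash the object id so work lands evenly on average."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

def place_by_load() -> str:
    """Load-aware mode: route around slow/noisy nodes in real time."""
    return min(nodes, key=lambda n: inflight_ios[n])

print(place_by_hash("file-42/block-7"))  # deterministic placement
print(place_by_load())                   # -> node-b (least loaded)
```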

WekaIO uses a host driver for POSIX access to the cluster. WekaIO’s frontend also natively supports (without the host driver) the NFSv3, SMB 3.1, HDFS and AWS S3 protocols.

WekaIO also offers configurable file system data protection that can span hundreds of failure domains (racks), supporting from 4 to 16 data stripes with 2 to 4 parity stripes. Liran said this was erasure-code-like but wouldn’t specifically state what they are doing differently.
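Whatever the underlying scheme, those configurable stripe widths imply the usual capacity-vs-resilience trade-off of an N data + M parity layout. Here’s a quick sketch of that arithmetic, assuming a conventional erasure-code-style layout, which is more than Liran confirmed.

```python
"""Capacity/resilience arithmetic for an N data + M parity stripe, over
the configurable range Liran described (4-16 data, 2-4 parity). This
assumes a conventional erasure-code-style layout, which WekaIO only
hints at."""

def stripe_efficiency(data: int, parity: int) -> float:
    """Fraction of raw capacity available for user data."""
    assert 4 <= data <= 16 and 2 <= parity <= 4
    return data / (data + parity)

for d, p in [(4, 2), (8, 2), (16, 4)]:
    print(f"{d}+{p}: {stripe_efficiency(d, p):.0%} usable, "
          f"survives {p} concurrent failure-domain losses")
```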

They also support both high performance storage and inactive storage, with automated tiering of inactive data to object storage through policy management.

WekaIO creates a global namespace across the cluster, which can be subdivided into anywhere from one to thousands of file systems.

Snapshotting, cloning & moving work

WekaIO also has file system snapshots (read-only) and clones (read-write) using a redirect-on-write methodology. After the first snapshot/clone, subsequent snapshots/clones are only differential copies.
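To make the redirect-on-write idea concrete, here’s a toy block-map model: a snapshot freezes the current logical-to-physical mapping, a clone copies it, and new writes always land in fresh blocks, which is why subsequent copies stay differential. This is an illustrative model of the technique, not WekaIO’s implementation.

```python
"""Toy redirect-on-write model: a volume is a map from logical block ->
physical block. Snapshots freeze the map, clones copy it, and writes go
to fresh physical blocks, so copies are differential. Purely
illustrative, not WekaIO's implementation."""
import itertools

_next_phys = itertools.count()
STORE = {}  # physical block id -> data

class Volume:
    def __init__(self, block_map=None):
        self.block_map = dict(block_map or {})

    def write(self, lba: int, data) -> None:
        phys = next(_next_phys)        # redirect: never overwrite in place
        STORE[phys] = data
        self.block_map[lba] = phys

    def snapshot(self) -> dict:
        return dict(self.block_map)    # read-only frozen mapping

    def clone(self) -> "Volume":
        return Volume(self.block_map)  # read-write copy, shares old blocks

vol = Volume()
vol.write(0, "v1")
snap = vol.snapshot()
vol.write(0, "v2")                     # snap still maps lba 0 -> "v1"
print(STORE[snap[0]], STORE[vol.block_map[0]])  # v1 v2
```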

Another feature Howard and I thought was interesting was their DR-as-a-service-like capability. That is, using an on-prem WekaIO cluster to clone a file system/directory and tiering that clone to an S3 storage object. An AWS EC2 WekaIO cluster can then import the object(s), re-constituting that file system/directory in the cloud. Once on AWS, work can occur in the cloud, and the process can be reversed to move any updates back to the on-prem cluster.

This way, if you had work needing more compute than is available on-prem, you could move the data and workload to AWS, do the work there, and then move the data back on-prem again.
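A workflow like that could be scripted along the following lines. Note that the WekaCluster class and every method on it are hypothetical placeholders; WekaIO’s actual commands for these steps weren’t discussed on the podcast.

```python
"""Hypothetical end-to-end sketch of the cloud-bursting flow described
above. WekaCluster and all its methods are invented placeholders, not
WekaIO's real commands or APIs."""

class WekaCluster:
    def __init__(self, name: str):
        self.name = name

    def clone_filesystem(self, fs: str) -> str:
        print(f"[{self.name}] clone {fs}")           # read-write RoW copy
        return f"{fs}-clone"

    def tier_to_object(self, fs: str, bucket: str) -> None:
        print(f"[{self.name}] tier {fs} -> s3://{bucket}")

    def import_from_object(self, bucket: str) -> str:
        print(f"[{self.name}] import s3://{bucket}")
        return "imported-fs"

onprem, cloud = WekaCluster("onprem"), WekaCluster("aws-ec2")
clone = onprem.clone_filesystem("genomics")
onprem.tier_to_object(clone, "burst-bucket")         # push data set up
work_fs = cloud.import_from_object("burst-bucket")   # run compute in AWS
cloud.tier_to_object(work_fs, "burst-bucket")        # push results back
onprem.import_from_object("burst-bucket")            # updates land on-prem
```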

WekaIO’s RtOS, network stack, & NVMeoF

WekaIO runs under Linux as a user-space application. WekaIO has implemented their own realtime OS (RtOS) and a high performance network stack, both of which run in user space.

With their own network stack, they have also implemented NVMeoF support for (non-RDMA) Ethernet as well as InfiniBand networks. This is probably another reason they can offer such low-latency file IO operations.

The podcast runs ~42 minutes. Liran has been around data storage systems for 20 years and as a result was very knowledgeable and interesting to talk with. Liran almost qualifies as a GreyBeard, if not for the fact that he is clean shaven ;/. Listen to the podcast to learn more.

Liran Zvibel, CEO and Co-Founder, WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long-term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design and development for a portfolio of rich social media applications.


Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration of the XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a BSc in Mathematics and Computer Science from Tel Aviv University.