65: GreyBeards talk new FlashSystem storage with Eric Herzog, CMO and VP WW Channels IBM Storage

Sponsored by:

In this episode, we talk with Eric Herzog, Chief Marketing Officer and VP of Worldwide Channels for IBM Storage, about the FlashSystem 9100 storage series. This is the 2nd time we have had Eric on the show (see our Violin podcast) and the 2nd time we have had a guest from IBM on our show (see our CryptoCurrency talk). However, it’s the first time we have had IBM as a sponsor for a podcast.

Eric’s a 32-year storage industry veteran who’s worked for many major storage companies, including Seagate, EMC and IBM, plus 7 startups over his career. He’s been predominantly in marketing but was CFO at one company.

New IBM FlashSystem 9100

IBM is introducing the new FlashSystem 9100 storage series, using new NVMe FlashCore Modules (FCMs) that have been re-designed to fit a small form factor (SFF, 2.5″) drive slot; the 2U appliance also supports standard NVMe SFF SSDs. The new storage has dual active-active RAID controllers running the latest generation of IBM Spectrum Virtualize software, which runs on over 100K storage systems in the field today.

FlashSystem 9100 supports up to 24 NVMe FCMs or SSDs, which can be intermixed. The FCMs offer up to 19.2TB of usable flash apiece and have onboard hardware compression and encryption.

With FCM media, the FlashSystem 9100 can sustain 2.5M IOPS at 100µsec response times with 34GB/sec of data throughput. Spectrum Virtualize is a clustered storage system, so one could cluster together up to 4 FlashSystem 9100s into a single storage system and support 10M IOPS and 136GB/sec of throughput.

Spectrum Virtualize just introduced block data deduplication within a data reduction pool. With thin provisioning, data deduplication, pattern matching, SCSI Unmap support, and data compression, the FlashSystem 9100 can offer up to a 5:1 ratio of effective capacity to usable flash capacity. That means that with 24 19.2TB FCMs, a single FlashSystem 9100 offers over 2PB of effective capacity.
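That effective capacity claim is easy to sanity check. Here’s the back-of-the-envelope math in Python; the 5:1 data reduction ratio is IBM’s quoted maximum, so treat the result as a best case rather than a guarantee:

```python
# Best-case effective capacity for a fully populated FlashSystem 9100
fcm_count = 24             # NVMe FlashCore Modules in the 2U appliance
fcm_usable_tb = 19.2       # usable TB per FCM
data_reduction = 5         # IBM's quoted up-to 5:1 effective:usable ratio

usable_tb = fcm_count * fcm_usable_tb             # 460.8 TB of usable flash
effective_pb = usable_tb * data_reduction / 1000  # ~2.3 PB effective capacity
print(f"{usable_tb:.1f} TB usable -> {effective_pb:.2f} PB effective")
```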

In addition to the appliance’s 24 NVMe FCMs or NVMe SSDs, FlashSystem 9100 storage can also attach up to 20 SAS SSD drive shelves for additional capacity. Moreover, Spectrum Virtualize offers storage virtualization, so customers can attach external storage arrays behind a FlashSystem 9100 solution.

With FlashSystem 9100, IBM has bundled additional Spectrum software, including

  • Spectrum Virtualize for Public Cloud – which allows customers to migrate  data and workloads from on premises to the cloud and back again. Today this only works for IBM Cloud, but plans are to support other public clouds soon.
  • Spectrum Copy Data Management – which offers a simple way to create and manage copies of data while enabling controlled self-service for test/dev and other users to use snapshots for secondary use cases.
  • Spectrum Protect Plus – which provides data backup and recovery for FlashSystem 9100 storage, tailor made for smaller, virtualized data centers.
  • Spectrum Connect – which allows Docker and Kubernetes container apps to access persistent storage on FlashSystem 9100.

To learn more about the IBM FlashSystem 9100, join the virtual launch experience on July 24, 2018, here.

The podcast runs ~43 minutes. Eric has always been knowledgeable about the enterprise storage market, past, present and future. He had a lot to say about the FlashSystem 9100 and seems to have mellowed lately. His grey mustache is forcing the GreyBeards to consider a name change (GreyHairsOnStorage, anyone?). Listen to the podcast to learn more.

Eric Herzog, Chief Marketing Officer and VP of Worldwide Channels for IBM Storage

Eric’s responsibilities include worldwide product marketing and management for IBM’s award-winning family of storage solutions, software defined storage, integrated infrastructure, and software defined computing, as well as responsibility for global storage channels.

Herzog has over 32 years of product management, marketing, business development, alliances, sales, and channels experience in the storage software, storage systems, and storage solutions markets, managing all aspects of marketing, product management, sales, alliances, channels, and business development in both Fortune 500 and start-up storage companies.

Prior to joining IBM, Herzog was Chief Marketing Officer and Senior Vice President of Alliances for all-flash storage provider Violin Memory. Herzog was also Senior Vice President of Product Management and Product Marketing for EMC’s Enterprise & Mid-range Systems Division, where he held global responsibility for product management, product marketing, evangelism, solutions marketing, communications, and technical marketing with a P&L of over $10B. Before joining EMC, he was vice president of marketing and sales at Tarmin Technologies. Herzog has also held vice president of business line management and vice president of marketing positions at IBM’s Storage Technology Division, where he had P&L responsibility for its over $300M OEM RAID and storage subsystems business, and at Maxtor (acquired by Seagate).

Herzog has held vice president positions in marketing, sales, operations, and acting-CFO roles at Asempra (acquired by BakBone Software), ArioData Networks (acquired by Xyratex), Topio (acquired by Network Appliance), Zambeel, and Streamlogic.

Herzog holds a B.A. degree in history from the University of California, Davis, where he graduated cum laude, studied towards a M.A. degree in Chinese history, and was a member of the Phi Alpha Theta honor society.

62: GreyBeards talk NVMeoF storage with VR Satish, Founder & CTO Pavilion Data Systems

In this episode, we continue our NVMeoF track by talking with VR Satish (@satish_vr), Founder and CTO of Pavilion Data Systems (@PavilionData). Howard had talked with Pavilion Data over the last year or so, and I just had a briefing with them this past week.

Pavilion Data is taking a different tack to NVMeoF, innovating in software and hardware design but using merchant silicon for their NVMeoF-accelerated array solution. They offer Ethernet-based NVMeoF block storage.

VR is a storage “lifer“, having worked at Veritas on their Volume Manager and other products for a long time. Moreover, Pavilion Data has a number of execs from Pure Storage (including their CEO, Gurpreet Singh) and other storage technology companies, and is located in San Jose, CA.

VR says there were 5 overriding principles for Pavilion Data as they were considering a new storage architecture:

  1. The IT industry is moving to rack scale compute and hence, there is a need for rack scale storage.
  2. Great merchant silicon was coming online, so there was less of a need to design their own silicon/ASICs/FPGAs.
  3. Rack scale storage needs to provide “local” (within the rack) resiliency/high availability and let modern applications manage “global” (outside the rack) resiliency/HA.
  4. Rack scale storage needs to support advanced data management services.
  5. Rack scale storage has to be easy to deploy and run.

Pavilion Data’s key insight was that, in order to meet all those principles and deal with high performance NVMe flash and up-and-coming SCM SSDs, storage had to be redesigned to look more like a network switch.

Controller cards?

One can see this new networking approach in their bottom of rack, 4U storage appliance. Their appliance has up to 20 controller cards creating a heavy compute/high bandwidth cluster attached via an internal PCIe switch to a backend storage complex made up of up to 72 U.2 NVMe SSDs.

The SSDs fit into an interposer that plugs into their PCIe switch and maps single (or dual ported) SSDs to the appliance’s PCIe bus. Each controller card has an Intel Xeon D microprocessor and 2 100GbE ports, for up to 40 100GbE ports per appliance. The controller cards are configured in an active-active, auto-failover mode for high availability. They don’t use memory caching or have any NVRAM.

On their website, Pavilion Data shows 117 µsec response times and 114 GB/sec of throughput for IO performance.
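Just to put that throughput number in context, the raw network bandwidth of a fully populated appliance works out as below. This is simple arithmetic from the port counts quoted above, not a Pavilion spec:

```python
# Raw wire bandwidth implied by the port counts described above
controllers = 20
ports_per_controller = 2      # 100GbE ports per controller card
port_speed_gbps = 100

total_ports = controllers * ports_per_controller     # 40 x 100GbE ports
raw_gb_per_sec = total_ports * port_speed_gbps / 8   # ~500 GB/sec of raw wire bandwidth
print(f"{total_ports} ports, ~{raw_gb_per_sec:.0f} GB/sec raw vs ~114 GB/sec quoted for real IO")
```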

Data management for NVMeoF storage

Pavilion Data storage supports widely striped RAID6 data protection (16+2), thin provisioning, space efficient read-only (redirect-on-write) snapshots and space efficient read-write clones. With RAID6, it takes more than 2 drive failures to lose data.
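To make the 16+2 geometry concrete, here’s the generic stripe math (an illustration of RAID6 overhead in general, not Pavilion’s actual layout code):

```python
# Generic RAID6 (16+2) stripe math -- illustrative only, not Pavilion's implementation
data_strips, parity_strips = 16, 2
stripe_width = data_strips + parity_strips

efficiency = data_strips / stripe_width   # ~0.889 of raw capacity is usable
print(f"{efficiency:.1%} usable; data is lost only if more than "
      f"{parity_strips} drives in a stripe fail concurrently")
```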

Like traditional storage, volumes (NVMe namespaces) are assigned to RAID groups. The backend layout appears to be a log-structured file. VR mentioned that they don’t do garbage collection, and with no NVRAM and no memory caching, there’s a bit of secret sauce here.

Pavilion Data storage offers two NVMeoF/Ethernet protocols:

  • A standard, off-the-shelf NVMeoF/RoCE interface that makes use of v1.x of the Linux kernel NVMeoF/RoCE drivers and special NIC/switch hardware
  • A new NVMeoF/TCP interface that doesn’t need special networking hardware and, as such, offers NVMeoF over standard NICs/switches. I assume this requires host software to work.

In addition, Pavilion Data has developed their own Multi-path IO (MPIO) driver for NVMeoF high availability which they have contributed to the current Linux kernel project.

Their management software uses RESTful APIs (documented on their website). They also offer a CLI and GUI, both built using these APIs. Bottom-of-rack storage appliances are managed as separate storage units, so they don’t support clusters of more than one appliance. However, there are only a few cluster storage systems we know of that support 20 controllers for block storage today.
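Since the CLI and GUI sit on top of the documented REST APIs, scripting against the array should look roughly like the sketch below; the base URL, endpoint and JSON fields are hypothetical placeholders, not Pavilion’s actual API:

```python
# Hypothetical sketch of driving a REST-managed array from Python.
# The base URL, endpoint and payload fields are illustrative placeholders only;
# check Pavilion's published API documentation for the real calls.
import requests

BASE_URL = "https://pavilion-array.example.com/api/v1"   # hypothetical
AUTH = ("admin", "secret")                                # hypothetical credentials

# Create a volume (NVMe namespace) in a RAID group -- field names are assumptions
payload = {"name": "vol01", "size_gb": 1024, "raid_group": "rg0"}
resp = requests.post(f"{BASE_URL}/volumes", json=payload, auth=AUTH)
resp.raise_for_status()
print(resp.json())
```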

Market

VR mentioned that they are going after new applications like MongoDB, Cassandra, CouchBase, etc. These applications are designed around rack scaling and provide “global”, off-rack/cross datacenter availability themselves. But VR also mentioned Oracle and other, more traditional applications. Pavilion Data storage is sold on a ($/GB) capacity basis.

The system comes in a minimum configuration of 5 controller cards and 18 NVMe SSDs, and can be extended in increments of 5 controllers and 18 NVMe SSDs up to the full 20 controller cards and 72 NVMe SSDs.
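If it really does scale in those fixed increments, the valid configurations fall out directly (a simple enumeration of the numbers quoted above):

```python
# Configurations implied by the quoted 5-controller/18-SSD scaling increment
base_controllers, base_ssds = 5, 18
for step in range(1, 5):
    print(f"{base_controllers * step} controller cards / {base_ssds * step} NVMe SSDs")
# -> 5/18, 10/36, 15/54, 20/72
```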

The podcast runs ~42 minutes. VR was very knowledgeable about the storage industry, NVMeoF storage protocols, NVMe SSDs and advanced data management capabilities. We had a good talk with VR on what Pavilion Data does and how well it works.   Listen to the podcast to learn more.

VR Satish, Founder and CTO, Pavilion Data Systems

VR Satish is the Chief Technology Officer at Pavilion Data Systems and brings more than 20 years of experience in enterprise storage software products.

Prior to joining Pavilion Data, he was an Entrepreneur-in-Residence at Artiman Ventures. Satish was an early employee of Veritas and later served as the Vice President and the Chief Technology Officer for the Information & Availability Group at Symantec Corporation prior to joining Artiman.

His current areas of interest include distributed computing, information-centric storage architectures and virtualization.

Satish holds multiple patents in storage management, and earned his Master’s degree in computer science from the University of Florida.

61: GreyBeards talk composable storage infrastructure with Taufik Ma, CEO, Attala Systems

In this episode, we talk with Taufik Ma, CEO of Attala Systems (@AttalaSystems). Howard had met Taufik at last year’s Flash Memory Summit (FMS17) and was intrigued by their architecture, which he thought was a harbinger of future trends in storage. The fact that Attala Systems was innovating with new, proprietary hardware made for an interesting discussion in its own right, from my perspective.

Taufik’s worked at startups and major hardware vendors in his past life and seems to have always been at the intersection of breakthrough solutions using hardware technology.

Attala Systems is based out of San Jose, CA.  Taufik has a class A team of executives, engineers and advisors making history again, this time in storage with JBoFs and NVMeoF.

Ray’s written about JBoF (just a bunch of flash) before (see the FaceBook moving to JBoF post). This is essentially a hardware box, filled with lots of flash storage and drive interfaces, that directly connects to servers. Attala Systems storage is JBoF on steroids.

Composable Storage Infrastructure™

Essentially, their composable storage infrastructure JBoF connects over NVMeoF (NVMe over Fabrics) using Ethernet to provide direct host access to NVMe SSDs. They have implemented special purpose, proprietary hardware in the form of an FPGA, using this in a proprietary host network adapter (HNA) to support their NVMeoF storage.

Their HNA comes in a host-side and a storage-side version, both utilizing Attala Systems’ proprietary FPGA(s). With Attala HNAs, they have implemented their own NVMeoF over UDP stack in hardware. It supports multi-path IO and highly available, dual- or single-ported NVMe SSDs in a storage shelf. They use standard RDMA-capable 25/50/100GbE Ethernet switches (read: Mellanox) to connect hosts to storage JBoFs.

They also support RDMA over Converged Ethernet (RoCE) NICs for additional host access. However, I believe this requires host software (their NVMeoF over UDP stack) to connect to their storage.

From the host, Attala Systems storage on HNAs looks like directly attached NVMe SSDs, only they’re hot pluggable and physically located across an Ethernet network. In fact, Taufik mentioned that they already support VMware vSphere servers accessing Attala Systems composable storage infrastructure.

Okay, on to the good stuff. Taufik said they measured their overhead and were able to perform an IO with only an additional 5 µsec of overhead over native NVMe SSD latencies. Current NVMe SSDs operate with response times of 90 to 100 µsec, so with Attala Systems Composable Storage Infrastructure, this means you should see 95 to 105 µsec response times over a JBoF(s) full of NVMe SSDs! Taufik said that with Intel Optane SSDs’ 10 µsec response times, they see response times of ~16 µsec (the extra µsec seems to be network switch delay)!
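The latency arithmetic is simple enough to lay out; the numbers below are just the figures quoted above added together:

```python
# Adding Attala's measured ~5 usec fabric overhead to the quoted native SSD latencies
fabric_overhead_us = 5

for media, native_us in [("NAND NVMe SSD (fast)", 90),
                         ("NAND NVMe SSD (slow)", 100),
                         ("Intel Optane SSD", 10)]:
    print(f"{media}: {native_us} + {fabric_overhead_us} = {native_us + fabric_overhead_us} usec over the fabric")
# Taufik quoted ~16 usec for Optane; the extra ~1 usec appears to be switch delay
```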

Managing composable storage infrastructure

They also use a management “entity” (running on a server or as a VM) to manage their JBoF storage and configure NVMe namespaces (like a SCSI LUN/volume). Hosts use NVMe namespaces to access and carve up the JBoF NVMe storage space. That is, multiple Attala Systems namespaces can be configured over a single NVMe SSD, each one corresponding to a single (virtual to real) host NVMe SSD.

The management entity has a GUI, but it just uses their RESTful APIs. They also support QoS, on an IOPS or bandwidth limiting basis per namespace, to control noisy neighbors.
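As a rough illustration of what an IOPS cap on a namespace typically involves, here’s a generic token-bucket limiter; this is a textbook mechanism, not Attala’s actual QoS implementation:

```python
# Generic token-bucket rate limiter -- one common way to cap a namespace's IOPS.
# Illustrative only; Attala's actual QoS mechanism wasn't described at this level.
import time

class IopsLimiter:
    def __init__(self, iops_limit):
        self.rate = iops_limit           # tokens (IOs) replenished per second
        self.tokens = float(iops_limit)  # start with a full bucket
        self.last = time.monotonic()

    def allow_io(self):
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                  # IO proceeds now
        return False                     # IO must be queued/delayed

limiter = IopsLimiter(iops_limit=50_000)  # hypothetical 50K IOPS cap on one namespace
```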

Attala Systems architected their management system to support scale-out storage. This means they could support many JBoFs in a rack, and possibly multiple racks of JBoFs, connected to swarms of servers. And nothing was said that would limit the number of Attala storage system JBoFs attached to a single server or under a single (dual for HA) management entity. I thought the software might have a problem with this (e.g., 256 NVMe (namespace) SSDs PCIe-connected to the same server), but Taufik said this isn’t a problem for modern OSs.

Taufik mentioned that with their RESTful APIs,  namespaces can be quickly created and torn down, on the fly. They envision their composable storage infrastructure to be a great complement to cloud compute and container execution environments.

For storage hardware, they use storage shelves from OEM vendors. One recent configuration from Supermicro has 32 hot-pluggable, dual-ported NVMe slots in a 1U chassis, which at today’s ~16TB capacities is ~1/2PB of raw flash. Taufik mentioned 32TB NVMe SSDs are being worked on as we speak. Imagine that, 1PB of NVMe SSD flash storage in 1U!!
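The capacity math behind those 1U claims is straightforward:

```python
# Raw flash per 1U shelf at the quoted slot counts and drive capacities
slots = 32
for ssd_tb in (16, 32):                  # today's ~16TB drives vs the coming 32TB drives
    raw_pb = slots * ssd_tb / 1000
    print(f"{slots} x {ssd_tb}TB = {raw_pb:.2f}PB of raw flash in 1U")
# -> ~0.51PB today, ~1.02PB with 32TB NVMe SSDs
```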

The podcast runs ~47 minutes. Taufik took a while to get warmed up but once he got going, my jaw dropped away.  Listen to the podcast to learn more.

Taufik Ma, CEO Attala Systems

Tech-savvy business executive with track record of commercializing disruptive data center technologies.  After a short stint as an engineer at Intel after college, Taufik jumped to the business side where he led a team to define Intel’s crown jewels – CPUs & chipsets – during the ascendancy of the x86 server platform.

He honed his business skills as Co-GM of Intel’s Server System BU before leaving for a storage/networking startup.  The acquisition of this startup put him into the executive team of Emulex where as SVP of product management, he grew their networking business from scratch to deliver the industry’s first million units of 10Gb Ethernet product.

These accomplishments draw from his ability to engage and acquire customers at all stages of product maturity including partners when necessary.

58: GreyBeards talk HCI with Adam Carter, Chief Architect NetApp Solidfire #NetAppHCI

Sponsored by:

In this episode we talk with Adam Carter (@yoadamcarter), Chief Architect, NetApp SolidFire & HCI (hyper converged infrastructure) solutions. Howard talked with Adam at TFD16 and I have known Adam since before the acquisition. Adam did a tour de force session on HCI architectures at TFD16 and we would encourage you to view the videos of his session.

This is the third time NetApp has been on our show (see our podcast with Lee Caswell and Dave Wright and our podcast with Andy Banta), but this is the first sponsored podcast from NetApp. Adam has been Chief Architect for SolidFire for as long as I have known him.

NetApp has FAS/AFF series storage, E-Series storage and SolidFire storage. Their HCI solution is based on their SolidFire storage system.

NetApp SolidFire HCI Appliance

 

NetApp’s HCI solution is built around a 2U, 4-server configuration, where 3 of the nodes are actual, denser, new SolidFire storage nodes and the 4th node is a VMware ESXi host. That is, they have a real, fully functional SolidFire AFA SAN storage system built directly into their HCI solution.

There’s probably a case to be made that this isn’t technically an HCI system from an industry perspective and looks more like a well architected CI (converged infrastructure) solution. However, they do support VMs running on their system, it’s all packaged together as one complete system, and they offer end-to-end (one throat to choke) support over the complete system.

In addition, they spent a lot of effort improving SolidFire’s already great VMware software integration, to offer a full management stack that fully supports both the vSphere environment and the embedded SolidFire AFA SAN storage system.

By using a full SolidFire storage system in their solution, NetApp gave up on the low-end (<$30K-$50K) portion of the HCI market. But to supply the high IO performance, multi-tenancy, and QoS services of current SolidFire storage systems, they felt they had to embed a full SAN storage system.

With other HCI solutions, the storage activity must contend with other VMs and kernel processing on the server. And in these solutions, the storage system doesn’t control CPU/core/thread allocation and as such, can’t guarantee IO service levels that SolidFire is known for.

Also, by configuring their system with a real AFA SAN system, new additional ESXi servers can be added to the complex without needing to purchase additional storage software licenses. Further, customers can add bare metal servers to this environment and there’s still plenty of IO performance to go around. On the other hand, if a customer truly needs more storage performance/capacity, they can always add an additional, standalone SolidFire storage node to the cluster.

The podcast runs ~23 minutes. Adam was very easy to talk with and had deep technical knowledge of their new solution, industry HCI solutions and SolidFire storage. It was a great pleasure for Howard and me to talk with him again. Listen to the podcast to learn more.

Adam Carter, Chief Architect, NetApp SolidFire

Adam Carter is the Chief Product Architect for SolidFire and HCI at NetApp. Adam is an expert in next generation data center infrastructure and storage virtualization.

Adam has led product management at LeftHand Networks, HP, VMware, SolidFire, and NetApp bringing revolutionary products to market. Adam pioneered the industry’s first Virtual Storage Appliance (VSA) product at LeftHand Networks and helped establish VMware’s VSA certification category.

Adam brings deep product knowledge and broad experience in the software defined data center ecosystem.

56: GreyBeards talk high performance file storage with Liran Zvibel, CEO & Co-Founder, WekaIO

This month we talk high performance, cluster file systems with Liran Zvibel (@liranzvibel), CEO and Co-Founder of WekaIO, a new software defined, scale-out file system. I first heard of WekaIO when it showed up on SPEC sfs2014 with a new SWBUILD benchmark submission. They had a 60-node AWS EC2 cluster running the benchmark and achieved, at the time, the highest SWBUILD number (500) of any solution.

At the moment, WekaIO is targeting the HPC and Media & Entertainment verticals for their solution, which is sold on an annual capacity subscription basis.

By the way, a Wekabyte is 2**100 bytes of storage, or ~1 trillion exabytes (an exabyte being 2**60 bytes).
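For the curious, the arithmetic checks out:

```python
# A Wekabyte is 2**100 bytes; an exabyte is 2**60 bytes
wekabyte = 2**100
exabyte = 2**60
print(wekabyte // exabyte)   # 1,099,511,627,776 -> roughly a trillion exabytes per Wekabyte
```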

High performance file storage

The challenge with HPC file systems is that they need to handle a large number of files and large amounts of storage, with high throughput access to all this data. Where WekaIO comes into the picture is that they do all that plus support high file IOPS. That is, they can open, read or write a high number of relatively small files at an impressive speed, with low latency. Such workloads are becoming more popular with AI/machine learning and life sciences/genomic microscopy image processing.

Most file system developers will tell you that they can supply high throughput OR high file IOPS, but doing both is a real challenge. WekaIO is able to do both, while at the same time supporting billions of files per directory and trillions of files in a file system.

WekaIO has support for up to 64K cluster nodes and has tested up to 4000 cluster nodes. WekaIO announced an OEM agreement with HPE last year and is starting to build out bigger clusters.

Media & Entertainment file storage requirements are mostly just high throughput with large (media) file sizes. Here WekaIO has more competition from other cluster file systems, but its ability to support extra-large data repositories with great throughput is an advantage.

WekaIO cluster file system

WekaIO is a software defined storage solution. Whereas many HPC cluster file systems have separate metadata and storage nodes, WekaIO’s cluster nodes are combined metadata and storage nodes. So as one scales capacity (by adding nodes), one not only scales large file throughput (via more IO parallelism) but also scales small file IOPS (via more metadata processing capability). There’s also some secret sauce to their metadata sharding (if that’s the right word) that allows WekaIO to support more metadata activity as the cluster grows.
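WekaIO didn’t reveal how that metadata sharding works, but hash-distributing metadata ownership across all cluster nodes is the classic way to make metadata performance scale with node count. The sketch below is that generic idea, not WekaIO’s code:

```python
# Generic hash-sharded metadata placement -- illustrative only, not WekaIO's actual scheme
import hashlib

def metadata_shard(path: str, num_nodes: int) -> int:
    """Map a file path to the cluster node that owns its metadata."""
    digest = hashlib.sha256(path.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

# Adding nodes spreads metadata ownership wider, so metadata IOPS scale with the cluster
print(metadata_shard("/projects/genomics/sample_001.bam", num_nodes=60))
```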

One secret to WekaIO’s ability to support both high throughput and high file IOPS lies in their performance load balancing across the cluster. Apparently, WekaIO can be configured to constantly monitor all cluster nodes for performance and to balance all file IO activity (data transfers and metadata services) across the cluster, to ensure that no one node is overburdened with IO.

Liran says that performance load balancing was one reason they were so successful with their AWS EC2 SPEC sfs2014 SWBUILD benchmark. One problem with AWS EC2 nodes is a lot of unpredictability in node performance; when running EC2 instances, “noisy neighbors” impact node performance. With WekaIO’s performance load balancing running on AWS EC2 instances, they can just redirect IO activity around slower nodes to faster nodes that can handle the work, in real time.

WekaIO performance load balancing is a configurable option. The other alternative is for WekaIO to “cryptographically” spread the workload across all the nodes in a cluster.
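In spirit, the two placement policies look something like the sketch below: load balancing steers work to the node with the most headroom, while the “cryptographic” mode spreads requests deterministically by hash. This is our illustration of the concepts, not WekaIO’s implementation:

```python
# Two placement policies in the spirit of what's described above -- illustrative only
import hashlib

def least_loaded(node_loads: dict) -> str:
    """Performance load balancing: send work to the node with the lowest current load."""
    return min(node_loads, key=node_loads.get)

def hashed_spread(request_id: str, nodes: list) -> str:
    """'Cryptographic' spreading: deterministically fan requests out across all nodes."""
    h = int.from_bytes(hashlib.sha256(request_id.encode()).digest()[:8], "big")
    return nodes[h % len(nodes)]

loads = {"node-a": 0.92, "node-b": 0.40, "node-c": 0.65}   # e.g. node-a hit by a noisy neighbor
print(least_loaded(loads))                                  # -> node-b
print(hashed_spread("req-123", list(loads)))
```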

WekaIO uses a host driver for POSIX access to the cluster. WekaIO’s frontend also natively supports (without a host driver) the NFSv3, SMB3.1, HDFS and AWS S3 protocols.

WekaIO also offers configurable file system data protection that can span 100s of failure domains (racks), supporting from 4 to 16 data stripes with 2 to 4 parity stripes. Liran said this was erasure-code-like but wouldn’t specifically state what they are doing differently.
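Those configurable stripe widths imply a fairly wide range of capacity efficiencies; here’s the simple math (generic erasure-coding overhead, not WekaIO’s exact scheme):

```python
# Capacity efficiency across the quoted protection range (generic stripe math)
for data in (4, 8, 16):
    for parity in (2, 4):
        eff = data / (data + parity)
        print(f"{data}+{parity}: {eff:.0%} usable, survives {parity} concurrent failures")
# Ranges from 4+4 (50% usable) up to 16+2 (~89% usable)
```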

They also support both high performance storage and inactive storage, with automated, policy-managed tiering of inactive data to object storage.

WekaIO creates a global namespace across the cluster, which can be sub-divided into anywhere from one to thousands of file systems.

Snapshotting, cloning & moving work

WekaIO also has file system snapshots (read-only) and clones (read-write) using a redirect-on-write methodology. After the first snapshot/clone, subsequent snapshots/clones are only differential copies.

Another feature Howard and I thought was interesting was their DR-as-a-Service-like capability. That is, an onprem WekaIO cluster clones a file system/directory and tiers that clone to an S3 storage object. Then an AWS EC2 WekaIO cluster imports the object(s) and re-constitutes that file system/directory in the cloud. Once on AWS, work can occur in the cloud and the process can be reversed to move any updates back to the onprem cluster.

This way, if you had work needing more compute than is available onprem, you could move the data and workload to AWS, do the work there, and then move the data back down to onprem again.
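The flow boils down to a handful of steps; the sketch below is a hypothetical orchestration of that snapshot-to-S3-to-EC2 round trip, with made-up function names, since WekaIO’s actual API calls weren’t covered on the show:

```python
# Hypothetical orchestration of the snapshot-to-S3-to-cloud workflow described above.
# Cluster objects and method names are placeholders, not WekaIO's real API.

def burst_to_cloud(onprem_cluster, cloud_cluster, fs_name, bucket):
    snap = onprem_cluster.snapshot(fs_name)          # 1. snapshot/clone the file system
    obj = onprem_cluster.tier_to_s3(snap, bucket)    # 2. tier the snapshot to an S3 object
    return cloud_cluster.import_from_s3(obj)         # 3. re-constitute it on the EC2 cluster

def return_results(onprem_cluster, cloud_cluster, cloud_fs, bucket):
    snap = cloud_cluster.snapshot(cloud_fs)          # 4. snapshot the updated cloud copy
    obj = cloud_cluster.tier_to_s3(snap, bucket)
    return onprem_cluster.import_from_s3(obj)        # 5. bring the updates back onprem
```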

WekaIO’s RtOS, network stack, & NVMeoF

WekaIO runs under Linux as a user space application. WekaIO has implemented its own Realtime O/S (RtOS) and a high performance network stack, both of which run in user space.

With their own network stack, they have also implemented NVMeoF support for (non-RDMA) Ethernet as well as InfiniBand networks. This is probably another reason they can deliver such low latency file IO operations.

The podcast runs ~42 minutes. Liran has been around data storage systems for 20 years and, as a result, was very knowledgeable and interesting to talk with. Liran almost qualifies as a Greybeard, if not for the fact that he was clean shaven ;/. Listen to the podcast to learn more.

Liran Zvibel, CEO and Co-Founder, WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at a social startup and at Fortune 100 organizations, including Fusic, where he managed product definition, design and development for a portfolio of rich social media applications.

 

Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a B.Sc. in Mathematics and Computer Science from Tel Aviv University.