65: GreyBeards talk new FlashSystem storage with Eric Herzog, CMO and VP WW Channels IBM Storage

Sponsored by:

In this episode, we talk with Eric Herzog, Chief Marketing Officer and VP of Worldwide Channels for IBM Storage, about the FlashSystem 9100 storage series. This is the 2nd time we have had Eric on the show (see our Violin podcast) and the 2nd time we have had a guest from IBM on the show (see our CryptoCurrency talk). However, it’s the first time we have had IBM as a sponsor for a podcast.

Eric’s a 32-year storage industry veteran who’s worked for many major storage companies, including Seagate, EMC and IBM, plus 7 startups, over his career. He’s been predominantly in marketing but was acting CFO at one company.

New IBM FlashSystem 9100

IBM is introducing the new FlashSystem 9100 storage series, a 2U appliance using new NVMe FlashCore Modules (FCM) that have been re-designed to fit a small form factor (SFF, 2.5″) drive slot, and which also supports standard NVMe SFF SSDs. The new storage has dual active-active RAID controllers running the latest generation of IBM Spectrum Virtualize software, which runs on over 100K storage systems in the field today.

FlashSystem 9100 supports up to 24 NVMe FCMs or SSDs, which can be intermixed. The FCMs offer up to 19.2TB of usable flash and have onboard hardware compression and encryption.

With FCM media, the FlashSystem 9100 can sustain 2.5M IOPS at 100µsec response times with 34GB/sec of data throughput. Spectrum Virtualize is a clustered storage system, so one could cluster together up to 4 FlashSystem 9100s into a single storage system and support 10M IOPS and 136GB/sec of throughput.

Spectrum Virtualize just introduced block data deduplication within a data reduction pool. With thin provisioning, data deduplication, pattern matching, SCSI UNMAP support, and data compression, the FlashSystem 9100 can offer up to a 5:1 ratio of effective capacity to usable flash capacity. That means with 24 19.2TB FCMs, a single FlashSystem 9100 offers over 2PB of effective capacity.
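To put those numbers in perspective, here’s a quick back-of-the-envelope sketch in Python using the figures quoted above. Note the 5:1 reduction is an “up to” figure that varies by workload, so treat the effective capacity as a best case:

```python
# Back-of-the-envelope math for the FlashSystem 9100 figures quoted above.
# The 5:1 data reduction ratio is an "up to" number; real-world results
# vary with workload, so treat the effective capacity as a best case.
FCM_USABLE_TB = 19.2       # largest FlashCore Module usable capacity
SLOTS = 24                 # NVMe drive slots per 2U appliance
DATA_REDUCTION = 5.0       # up to 5:1 effective:usable

usable_tb = FCM_USABLE_TB * SLOTS                 # 460.8 TB usable flash
effective_pb = usable_tb * DATA_REDUCTION / 1000  # ~2.3 PB effective

# Spectrum Virtualize clustering: up to 4 FlashSystem 9100s in one
# storage system, with performance scaling roughly linearly.
iops_per_box, gbps_per_box, boxes = 2.5e6, 34, 4
print(f"usable: {usable_tb:.1f} TB, effective: ~{effective_pb:.1f} PB")
print(f"4-way cluster: {iops_per_box * boxes / 1e6:.0f}M IOPS, "
      f"{gbps_per_box * boxes} GB/sec")
```

Running this reproduces the numbers above: 460.8TB usable and ~2.3PB effective per appliance, and 10M IOPS with 136GB/sec for a 4-way cluster.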

In addition to the appliance’s 24 NVMe FCMs or NVMe SSDs, FlashSystem 9100 storage can also attach up to 20 SAS SSD drive shelves for additional capacity. Moreover, Spectrum Virtualize offers storage virtualization, so customers can attach external storage arrays behind a FlashSystem 9100 solution.

With FlashSystem 9100, IBM has bundled additional Spectrum software, including

  • Spectrum Virtualize for Public Cloud – which allows customers to migrate  data and workloads from on premises to the cloud and back again. Today this only works for IBM Cloud, but plans are to support other public clouds soon.
  • Spectrum Copy Data Management – which offers a simple way to create and manage copies of data while enabling controlled self-service for test/dev and other users to use snapshots for secondary use cases.
  • Spectrum Protect Plus – which provides data backup and recovery for FlashSystem 9100 storage, tailor made for smaller, virtualized data centers.
  • Spectrum Connect – which allows Docker and Kubernetes container apps to access persistent storage on FlashSystem 9100.

To learn more about the IBM FlashSystem 9100, join the virtual launch experience July 24, 2018 here.

The podcast runs ~43 minutes. Eric has always been knowledgeable about the enterprise storage market, past, present and future. He had a lot to say about the FlashSystem 9100 and seems to have mellowed lately. His grey mustache is forcing the GreyBeards to consider a name change – GreyHairsOnStorage, anyone? Listen to the podcast to learn more.

Eric Herzog, Chief Marketing Officer and VP of Worldwide Channels for IBM Storage

Eric’s responsibilities include worldwide product marketing and management for IBM’s award-winning family of storage solutions, software defined storage, integrated infrastructure, and software defined computing, as well as responsibility for global storage channels.

Herzog has over 32 years of product management, marketing, business development, alliances, sales, and channels experience in the storage software, storage systems, and storage solutions markets, managing all aspects of marketing, product management, sales, alliances, channels, and business development in both Fortune 500 and start-up storage companies.

Prior to joining IBM, Herzog was Chief Marketing Officer and Senior Vice President of Alliances for all-flash storage provider Violin Memory. Herzog was also Senior Vice President of Product Management and Product Marketing for EMC’s Enterprise & Mid-range Systems Division, where he held global responsibility for product management, product marketing, evangelism, solutions marketing, communications, and technical marketing, with a P&L of over $10B. Before joining EMC, he was vice president of marketing and sales at Tarmin Technologies. Herzog has also held vice president of business line management and vice president of marketing positions at IBM’s Storage Technology Division, where he had P&L responsibility for the over-$300M OEM RAID and storage subsystems business, and at Maxtor (acquired by Seagate).

Herzog has held vice president positions in marketing, sales, operations, and acting-CFO roles at Asempra (acquired by BakBone Software), ArioData Networks (acquired by Xyratex), Topio (acquired by Network Appliance), Zambeel, and Streamlogic.

Herzog holds a B.A. degree in history from the University of California, Davis, where he graduated cum laude, studied towards an M.A. degree in Chinese history, and was a member of the Phi Alpha Theta honor society.

57: GreyBeards talk midrange storage with Pierluca Chiodelli, VP of Prod. Mgmt. & Cust. Ops., Dell EMC Midrange Storage

Sponsored by:

Dell EMC Midrange Storage

In this episode we talk with Pierluca Chiodelli (@chiodp), Vice President of Product Management & Customer Operations for Dell EMC Midrange Storage. Howard talked with Pierluca at SFD14 and I talked with him at SFD13. He started at EMC as a customer engineer and has worked his way up to VP since then.

This is the second time (Dell) EMC has been on our show (see our EMCWorld2015 summary podcast with Chad Sakac) but this is the first sponsored podcast from Dell EMC. Pierluca seems to have been with (Dell) EMC forever.

You may recall that Dell EMC has two product families in their midrange storage portfolio. Pierluca provides a number of reasons why both continue to be invested in, enhanced and sold on the market today.

Dell EMC Unity and SC product lines

Dell EMC Unity storage is the outgrowth of the unified block and file storage first released in the EMC VNXe series storage systems. Unity continues that tradition of providing both file and block storage in a dense, 2U, dual-controller, highly available configuration, with both all-flash (AFA) and hybrid models. The other characteristic of Unity storage is its tight integration with VMware virtualization environments.

Dell EMC SC series storage continues the long tradition of Dell Compellent storage systems, which support block storage and which invented data progression technology. Data progression is storage tiering on steroids, with support for multiple tiers of rotating disk (even across regions of the same drive), flash, and now cloud storage. The SC series is also considered a set-it-and-forget-it storage system that takes care of itself without the need for operator/admin tuning or extensive monitoring.

Dell EMC is bringing both of these storage systems together under CloudIQ, their cloud-based storage analytics engine, and plans to have both systems supported under the Unisphere management engine.

Unity storage can also tier files to the cloud and copy LUN snapshots to the public cloud using their Cloud Tiering Appliance software. With their UnityVSA software defined storage appliance and VMware vSphere running in AWS, the file and snapshot data can then be accessed in the cloud. SC series storage will have similar capabilities available soon.

At the end of the podcast, Pierluca talks about Dell EMC’s recently introduced Customer Loyalty Programs, which include: Never Worry Data Migrations, Built-in Virtustream Storage Cloud, 4:1 Storage Efficiency Guarantee, All-inclusive Software pricing, 3-year Satisfaction Guarantee, Hardware Investment Protection, and Predictable Support Pricing.

The podcast runs ~27 minutes. Pierluca is a very knowledgeable individual and although he has a beard, it’s not grey (yet). He’s been with EMC storage forever and has a long, extensive history in midrange storage, especially with Dell EMC’s storage product families. It’s been a pleasure for Howard and me to talk with him again. Listen to the podcast to learn more.

Pierluca Chiodelli, V.P. of Product Management & Customer Operations, Dell EMC Midrange Storage

Pierluca Chiodelli is currently the Vice President of Product Management for Dell EMC’s suite of midrange solutions, including Unity, VNX, and VNXe from heritage EMC storage, and Compellent, EqualLogic, and Windows Storage Server from heritage Dell Storage.

Pierluca’s organization is comprised of four teams: Product Strategy, Performance & Competitive Engineering, Solutions, and Core & Strategic Account engineering. The teams are responsible for ensuring Dell EMC’s mid-range solutions enable end users and service providers to transform their operations and deliver information technology as a service.

Pierluca has been with EMC since 1999, with experience in field support and core engineering across Europe and the Americas. Prior to joining EMC, he worked at Data General and as a consultant for HP Corporation.

Pierluca holds one degree in Chemical Engineering and a second in Information Technology.


56: GreyBeards talk high performance file storage with Liran Zvibel, CEO & Co-Founder, WekaIO

This month we talk high performance, cluster file systems with Liran Zvibel (@liranzvibel), CEO and Co-Founder of WekaIO, a new software defined, scale-out file system vendor. I first heard of WekaIO when it showed up on SPEC sfs2014 with a new SWBUILD benchmark submission. They had a 60-node AWS EC2 cluster running the benchmark and achieved, at the time, the highest SWBUILD number (500) of any solution.

At the moment, WekaIO is targeting the HPC and Media & Entertainment verticals, and the solution is sold on an annual capacity subscription basis.

By the way, a Wekabyte is 2**100 bytes of storage, or ~1 trillion exabytes (an exabyte being 2**60 bytes, so 2**100/2**60 = 2**40 ≈ 1.1 trillion).

High performance file storage

The challenge with HPC file systems is that they need to handle large numbers of files and large amounts of storage, with high throughput access to all this data. Where WekaIO comes into the picture is that they do all that plus support high file IOPS. That is, they can open, read or write a high number of relatively small files at impressive speed, with low latency. Such workloads are becoming more popular with AI/machine learning and life sciences/genomic microscopy image processing.

Most file system developers will tell you that they can supply high throughput OR high file IOPS, but doing both is a real challenge. WekaIO is able to do both, while at the same time supporting billions of files per directory and trillions of files in a file system.

WekaIO has support for up to 64K cluster nodes and has tested up to 4000 cluster nodes. WekaIO announced an OEM agreement with HPE last year and is starting to build out bigger clusters.

Media & Entertainment file storage requirements are mostly just high throughput with large (media) file sizes. Here WekaIO has more competition from other cluster file systems, but its ability to support extra-large data repositories with great throughput is an advantage.

WekaIO cluster file system

WekaIO is a software defined storage solution. Whereas many HPC cluster file systems have separate metadata and storage nodes, WekaIO’s cluster nodes are combined metadata and storage nodes. So as one scales capacity (by adding nodes), one not only scales large file throughput (via more IO parallelism) but also scales small file IOPS (via more metadata processing capability). There’s also some secret sauce to their metadata sharding (if that’s the right word) that allows WekaIO to support more metadata activity as the cluster grows, as sketched below.
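WekaIO hasn’t disclosed how its metadata sharding actually works, but a minimal, generic sketch of hash-based sharding shows why adding nodes adds metadata capacity (all names here are hypothetical, and a real system would likely use consistent hashing to limit rebalancing):

```python
# Generic hash-based metadata sharding sketch -- NOT WekaIO's actual scheme,
# which is undisclosed. Hashing a file's path picks the cluster node that
# owns its metadata, so metadata load spreads out as nodes are added.
import hashlib

def metadata_owner(path: str, nodes: list[str]) -> str:
    """Deterministically map a file path to the node owning its metadata."""
    h = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

nodes = [f"node{i}" for i in range(8)]
print(metadata_owner("/genomics/sample42/image_0001.tiff", nodes))
```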

One secret to WekaIO’s ability to support both high throughput and high file IOPS lies in their performance load balancing across the cluster. Apparently, WekaIO can be configured to constantly monitor all cluster nodes for performance and can balance all file IO activity (data transfers and metadata services) across the cluster, to ensure that no one node is overburdened with IO.

Liran says that performance load balancing was one reason they were so successful with their AWS EC2 SPEC sfs2014 SWBUILD benchmark. One problem with AWS EC2 nodes is unpredictable node performance: when running EC2 instances, “noisy neighbors” impact node performance. With WekaIO’s performance load balancing running on AWS EC2 instances, they can redirect IO activity around slower nodes to faster nodes that can handle the work, in real time.

WekaIO performance load balancing is a configurable option. The alternative is for WekaIO to “cryptographically” spread the workload across all the nodes in a cluster.
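As an illustration of the load-balanced option, here’s a conceptual sketch, not WekaIO’s proprietary implementation; all names and the load metric are hypothetical:

```python
# Conceptual sketch of load-aware IO routing -- not WekaIO's code. Each node
# reports a load metric (in-flight IOs here); new IOs are steered to the
# least-loaded node holding the data, so a node slowed by a "noisy neighbor"
# naturally receives less work in real time.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    inflight_ios: int = 0   # slow nodes complete IOs slowly, so this grows

def route_io(replica_nodes: list[Node]) -> Node:
    """Send the IO to the least-loaded node that holds a copy of the data."""
    target = min(replica_nodes, key=lambda n: n.inflight_ios)
    target.inflight_ios += 1
    return target

nodes = [Node("ec2-a", 12), Node("ec2-b", 3), Node("ec2-c", 40)]
print(route_io(nodes).name)   # -> ec2-b, steering around the busy ec2-c
```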

WekaIO uses a host driver for POSIX access to the cluster. WekaIO’s frontend also natively supports (without the host driver) the NFSv3, SMB 3.1, HDFS and AWS S3 protocols.

WekaIO also offers configurable file system data protection that can span 100s of failure domains (racks), supporting from 4 to 16 data stripes with 2 to 4 parity stripes. Liran said this was erasure-code-like but wouldn’t state specifically what they are doing differently.
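Since Liran wouldn’t reveal the details, here’s just the standard stripe-overhead arithmetic for the geometries he did mention, to show the capacity/protection tradeoff:

```python
# Standard data+parity stripe efficiency math for the geometries mentioned
# above (4-16 data stripes, 2-4 parity stripes). WekaIO's exact scheme is
# undisclosed, so this only illustrates the capacity/protection tradeoff.
def efficiency(data: int, parity: int) -> float:
    """Fraction of raw capacity left for user data in a data+parity stripe."""
    return data / (data + parity)

for d, p in [(4, 2), (8, 2), (16, 2), (16, 4)]:
    print(f"{d}+{p}: {efficiency(d, p):.0%} usable, "
          f"tolerates {p} failure-domain losses")
# 4+2: 67%, 8+2: 80%, 16+2: 89%, 16+4: 80%
```

Wider stripes give better capacity efficiency, while more parity stripes tolerate more simultaneous failure-domain losses.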

They also support both high performance storage and inactive storage, with automated, policy-managed tiering of inactive data to object storage.

WekaIO creates a global name space across the cluster, which can be sub-divided into anywhere from one to thousands of file systems.

Snapshotting, cloning & moving work

WekaIO also has file system snapshots (read-only) and clones (read-write) using a redirect-on-write methodology. After the first snapshot/clone, subsequent snapshots/clones are only differential copies.
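For those unfamiliar with redirect-on-write, here’s a toy sketch of the technique; a generic illustration, not WekaIO’s implementation:

```python
# Toy redirect-on-write sketch -- a generic illustration, not WekaIO's code.
# The live volume keeps a logical->physical block map. A snapshot just
# freezes a copy of that map (cheap). Writes always go to NEW physical
# blocks, so old data stays intact for snapshots, and later snapshots only
# "cost" whatever changed since -- the differential behavior noted above.
class RoWVolume:
    def __init__(self):
        self.blocks = {}     # physical addr -> data
        self.live_map = {}   # logical block number -> physical addr
        self.next_addr = 0

    def write(self, lbn: int, data: bytes):
        self.blocks[self.next_addr] = data   # redirect: fresh physical block
        self.live_map[lbn] = self.next_addr  # old block kept for snapshots
        self.next_addr += 1

    def snapshot(self) -> dict:
        return dict(self.live_map)           # frozen, read-only view

vol = RoWVolume()
vol.write(0, b"v1")
snap = vol.snapshot()
vol.write(0, b"v2")                          # live sees v2, snapshot keeps v1
print(vol.blocks[snap[0]], vol.blocks[vol.live_map[0]])   # b'v1' b'v2'
```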

Another feature Howard and I thought was interesting was their DR-as-a-Service-like capability. That is, using an onprem WekaIO cluster to clone a file system/directory and tiering that clone to S3 object storage. An AWS EC2 WekaIO cluster can then import the object(s) and re-constitute that file system/directory in the cloud. Once on AWS, work can occur in the cloud, and the process can be reversed to move any updates back to the onprem cluster.

This way, if you had work needing more compute than was available onprem, you could move the data and workload to AWS, do the work there, and then move the data back onprem again.

WekaIO’s RtOS, network stack, & NVMeoF

WekaIO runs under Linux as a user space application. WekaIO has implemented its own realtime O/S (RtOS) and a high performance network stack, both running in user space.

With their own network stack, they have also implemented NVMeoF support for (non-RDMA) Ethernet as well as InfiniBand networks. This is probably another reason they can deliver such low latency file IO operations.

The podcast runs ~42 minutes. Liran has been around data storage systems for 20 years and as a result was very knowledgeable and interesting to talk with. Liran almost qualifies as a Greybeard, if not for the fact that he was clean shaven ;/. Listen to the podcast to learn more.

Liran Zvibel, CEO and Co-Founder, WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design and development for a portfolio of rich social media applications.


Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for the XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a B.Sc. in Mathematics and Computer Science from Tel Aviv University.

54: GreyBeards talk scale-out secondary storage with Jonathan Howard, Dir. Tech. Alliances at Commvault

This month we talk scale-out secondary storage with Jonathan Howard, Director of Technical Alliances at Commvault. Both Howard and I attended Commvault GO2017 for Tech Field Day this past month in Washington DC. We had an interesting overview of their Hyperscale secondary storage solution, and Jonathan was the one answering most of our questions, so we thought he would make a good guest for our podcast.

Commvault has been providing data protection solutions for a long time, using anyone’s secondary storage, but recently they have released a software defined, scale-out secondary storage solution that combines their software with a clustered file system.

Hyperscale secondary storage

They call their solution Hyperscale secondary storage, and it’s available both as a hardware-software appliance and as a software-only configuration for compatible off-the-shelf commercial hardware. Hyperscale uses the Red Hat Gluster cluster file system, which together with the Commvault Data Platform provides a highly scalable secondary storage cluster that can meet anyone’s secondary storage needs while providing high availability and high throughput performance.

Commvault’s Hyperscale secondary storage system operates onprem in customer data centers. Hyperscale uses flash storage for system metadata but most secondary storage resides on local server disk.

Combined with Commvault Data Platform

With the sophistication of the Commvault Data Platform, one can have all the capabilities of a standalone Commvault environment with software defined storage. This allows just about any RTO/RPO needed by today’s enterprise, and includes Live Sync secondary storage replication, IntelliSnap for on-storage snapshot management, Live Mount for instant recovery (using secondary storage directly to boot your VMs without having to wait for data recovery), and all the other recovery sophistication available from Commvault.

Hyperscale storage is capable of performing up to 5 Live Mount recoveries simultaneously per node without a problem, but more are possible depending on performance requirements.

We also talked about Commvault’s cloud secondary storage solution which can make use of AWS S3 storage to hold backups.

Commvault’s organic growth

Most of the other data protection companies have come about through mergers, acquisitions or spinoffs. Commvault has continued along, enhancing its solution while basing everything on an underlying centralized metadata database. So their codebase was grown from the bottom up and supports pretty much any and all data protection requirements.

The podcast runs ~50 minutes. Jonathan was very knowledgeable about the technology and was great to talk with. Listen to the podcast to learn more.

Jonathan Howard, Director, Technical and Engineering Alliances, Commvault

Jonathan Howard is Director of Technology & Engineering Alliances for Commvault. A 20-year veteran of the IT industry, Jonathan has worked at Commvault for the past 8 years in various field, product management, and now alliance-facing roles.

In his present role with Alliances, Jonathan works with business and technology leaders to design and create numerous joint solutions that have empowered Commvault alliance partners to create and deliver their own new customer solutions.

52: GreyBeards talk software defined storage with Kiran Sreenivasamurthy, VP Product Management, Maxta

This month we talk with an old friend from Storage Field Day 7 (videos), Kiran Sreenivasamurthy, VP of Product Management for Maxta. Maxta has a software defined storage solution which currently works on VMware vSphere, Red Hat Virtualization and KVM to supply shared, scale-out storage and HCI solutions for enterprises across the world.

Maxta is similar to VMware’s vSAN software defined storage, and its licenses can be transferred from one server to another as you upgrade your data center over time. As software defined storage, Maxta runs on any standard Intel x86 hardware. Indeed, Maxta has one customer running two Super Micro servers and one Cisco server in the same cluster.

Maxta advantages

One item that makes Maxta unique is that all of its storage properties are assignable at VM granularity. That is, replication, deduplication, compression and even block size can all be enabled/set at the VMDK-VM level. This could be useful for environments supporting diverse applications, such as having a 64K block size for Microsoft Exchange and a 4K block size for web servers.
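To make the idea concrete, per-VM policies might look something like the following. This is a hypothetical illustration; Maxta’s actual management interface and parameter names aren’t public:

```python
# Hypothetical per-VM storage policies -- illustrative only; Maxta's real
# interface and parameter names are not public. The point is that each
# VM/VMDK carries its own settings rather than inheriting pool-wide ones.
policies = {
    "exchange-vm": {"block_size_kb": 64, "replication": 3,
                    "dedupe": True, "compress": True},
    "web-vm":      {"block_size_kb": 4,  "replication": 2,
                    "dedupe": False, "compress": True},
}
for vm, p in policies.items():
    print(f"{vm}: {p['block_size_kb']}K blocks, "
          f"{p['replication']}-way replication")
```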

Another advantage is their multi-hypervisor support. Maxta’s support for RH Virtualization, VMware and KVM offers the unique ability to migrate storage, and even powered-off VMs, from one hypervisor to another. Maxta’s file system is the same for both VMware and KVM clusters.

Maxta clusters

Their software must be licensed on all servers in a vSphere or KVM cluster with access to Maxta storage. The minimum Maxta cluster size is 3 nodes for 2-way replication and 5 nodes for 3-way replication. Most Maxta systems run on 8 to 12 node clusters, but Maxta has customer deployments with 20 to 24 nodes.
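Here’s a quick sketch of what those replication factors mean for usable capacity. It assumes capacity scales with the replication factor only; real overheads for metadata, caches and rebuild headroom aren’t public, and the 10TB-per-node figure is hypothetical:

```python
# Rough usable-capacity math for the cluster sizes mentioned above. Assumes
# only replication overhead; metadata/cache/rebuild overheads aren't public.
def usable_tb(nodes: int, raw_tb_per_node: float, replicas: int) -> float:
    """N-way replication stores every block N times across the cluster."""
    return nodes * raw_tb_per_node / replicas

RAW = 10.0  # hypothetical 10 TB raw capacity per node
print(usable_tb(3, RAW, 2))    # minimum 2-way cluster  -> 15.0 TB usable
print(usable_tb(5, RAW, 3))    # minimum 3-way cluster  -> ~16.7 TB usable
print(usable_tb(12, RAW, 2))   # typical 12-node, 2-way -> 60.0 TB usable
```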

Maxta supports SSD-only as well as SSD-disk hybrid storage, and the SSDs can be NVMe as well as SATA. In hybrid configurations, Maxta uses the SSDs as read and write-back caches for disk storage.

Maxta supports compute-only nodes, compute-storage nodes and witness-only nodes (nodes with one storage device). In addition, besides heterogeneous server support, Maxta clusters can have nodes with different storage capacities. Maxta will optimize VM data placement to balance IO activity across heterogeneous nodes.

Maxta provides a vCenter plugin so VMware admins can manage and monitor their storage from inside the vSphere environment. Maxta also offers Cloud Connect MX, a cloud based system allowing for management of all your Maxta clusters throughout an enterprise, wherever they reside.

Even HCI, through partners

For customers wanting an HCI solution, Maxta partners can supply pre-tested HCI appliances or can configure Maxta software on servers at customer data centers. Maxta has done well OEMing their solution, and one significant success has been their OEM deal with Lenovo in China and East Asia, where Lenovo sells HCI appliances with Maxta software.

Maxta has also found success with managed service providers (that want to deploy the software on their own hardware) and in SME & ROBO environments. Maxta also seems to be doing very well in Latin America, as well as in China, as previously mentioned.

The podcast runs ~42 minutes. Kiran is a knowledgeable individual and has worked with some of the leading storage companies of the last two decades. Listen to the podcast to learn more.

Kiran Sreenivasamurthy, VP Product Management, Maxta

Kiran Sreenivasamurthy is the Vice President of Product Management for Maxta Inc. He has developed and managed storage hardware and software products for more than 20 years with leading storage companies and startups including HP 3PAR, NetApp and Mendocino Software.

Kiran manages all aspects of Maxta’s hyperconvergence product portfolio from inception through revenue.