73: GreyBeards talk HCI with Gabriel Chapman, Sr. Mgr. Cloud Infrastructure NetApp

Sponsored by: NetApp

In this episode we talk HCI with Gabriel Chapman (@Bacon_Is_King), Senior Manager, Cloud Infrastructure, NetApp. Gabriel presented at the NetApp Insight 2018 Tech Field Day Extra (TFDx) event (video available here). Gabriel also presented last year at the VMworld 2017 TFDx event (video available here). If you get a chance, we encourage you to watch the videos, as Gabriel did a great job providing some design intent and descriptions of NetApp HCI capabilities. Our podcast was recorded after the TFDx event.

NetApp HCI consists of NetApp SolidFire storage re-configured as a small, enterprise-class AFA storage node occupying one blade of a four-blade system, where the other three blades are dedicated compute servers. NetApp HCI runs VMware vSphere but uses enterprise-class iSCSI storage supplied by the NetApp SolidFire AFA.

On our podcast, we talked a bit about SolidFire storage. It’s not well known, but the first few releases of SolidFire (before the NetApp acquisition) didn’t have a GUI and were entirely dependent on the API/CLI for operations. That heritage continues today, as the NetApp HCI management console is basically a GUI front end for NetApp HCI API calls.

Another advantage of SolidFire storage was its extensive QoS support, which included state-of-the-art service credits as well as service limits. All that QoS sophistication is also available in NetApp HCI, so customers can more effectively limit noisy-neighbor interference on HCI storage.
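
SolidFire-style QoS exposes per-volume minimum, maximum and burst IOPS settings, with burst capacity earned as credits while a volume runs below its max. Here is a toy burst-credit limiter that illustrates the idea; the class, the numbers and the credit-accrual rule are invented for illustration and are not NetApp's implementation:

```python
class QosBucket:
    """Toy burst-credit QoS limiter (illustrative only, not SolidFire's code).

    While a volume runs below its max IOPS it banks the unused headroom as
    credits (up to a cap); it can later spend those credits to burst above
    max, but never above the burst ceiling.
    """

    def __init__(self, max_iops, burst_iops, credit_cap):
        self.max_iops = max_iops
        self.burst_iops = burst_iops
        self.credit_cap = credit_cap
        self.credits = 0

    def tick(self, demand_iops):
        """Admit IOs for a one-second interval; return IOPS actually allowed."""
        if demand_iops <= self.max_iops:
            # Under the limit: bank the unused headroom as burst credits.
            self.credits = min(self.credit_cap,
                               self.credits + (self.max_iops - demand_iops))
            return demand_iops
        # Over the limit: spend credits, but never exceed the burst ceiling.
        extra = min(demand_iops - self.max_iops,
                    self.credits,
                    self.burst_iops - self.max_iops)
        self.credits -= extra
        return self.max_iops + extra

vol = QosBucket(max_iops=1000, burst_iops=1500, credit_cap=2000)
print(vol.tick(400))    # quiet second: 400 IOPS allowed, 600 credits banked
print(vol.tick(2000))   # noisy second: bursts to 1500 using banked credits
print(vol.tick(2000))   # credits nearly exhausted: only 1100 allowed
```

The noisy-neighbor benefit falls out of the model: a volume that hammers the system is clipped back to its max once its credits run dry, so other volumes keep their share.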

Although NetApp HCI runs VMware vSphere as its preferred hypervisor, it’s also possible to run other hypervisors in bare metal clusters with NetApp HCI storage and compute servers. In contrast to other HCI solutions, with NetApp HCI, customers can run different hypervisors, all at the same time, sharing access to NetApp HCI storage.

On our podcast and the Insight TFDx talk, Gabriel mentioned some future deliveries and roadmap items such as:

  • Extending NetApp HCI hardware with a new low-end, 2U configuration designed specifically for RoBo and SMB customers;
  • Adding NetApp Cloud Volume support so that customers can extend their data fabric out to NetApp HCI; and
  • Adding (NFS) file services support so that customers using NFS data stores /VVols could take advantage of NetApp HCI storage.

Another thing we discussed was the new HCI development cadence. In the past, they typically delivered new functionality about once a year. With the new development cycle they’re able to deliver functionality much faster, but they have settled on a two-releases-per-year cycle, which seems about as fast as their customer base can adopt new functionality.

The podcast runs ~22 minutes. We apologize for any quality issues with the audio. It was recorded at the show and we were novices with the onsite recording technology. We promise to do better in the future. Gabriel has almost become a TFDx regular these days and provides a lot of insight on both NetApp HCI and SolidFire storage.  Listen to our podcast to learn more.

Gabriel Chapman, Senior Manager, Cloud Infrastructure, NetApp

Gabriel is the Senior Manager for NetApp HCI Go to Market. Today he is mainly engaged with NetApp’s top tier customers and partners with a primary focus on Hyper Converged Infrastructure for the Next Generation Data Center.

As a 7-time vExpert who transitioned to the vendor side after spending 15 years working in the end-user Information Technology arena, Gabriel specializes in storage and virtualization technologies. Today his primary area of expertise revolves around storage, data center virtualization, hyper-converged infrastructure, rack scale/hyper scale computing, cloud, DevOps, and enterprise infrastructure design.

Gabriel is a Prime Mover, Technologist, Unapologetic Randian, Social Media Junky, Writer, Bacon Lover, and Deep Thinker, whose goal is to speak truth on technology and make complex ideas sound simple. In his free time, Gabriel is the host of the In Tech We Trust podcast and enjoys blogging as well as public speaking.

Prior to joining SolidFire, Gabriel was a storage technologies specialist covering the United States with Cisco, focused on the Global Service Provider customer base. Before Cisco, he was part of the go-to-market team at SimpliVity, where he concentrated on crafting the customer facing messaging, pre-sales engagement, and evangelism efforts for the early adopters of Hyper Converged Infrastructure.

72: GreyBeards talk Computational Storage with Scott Shadley, VP Marketing NGD Systems

For this episode the GreyBeards talked with another old friend, Scott Shadley, VP Marketing, NGD Systems. As we discussed on our FMS18 wrap-up show with Jim Handy, computational storage had sort of a coming-out party at the show.

NGD Systems started in 2013 and has been working toward a solution that goes generally available at the end of this year. Their computational storage SSD supplies general-purpose processing power sitting inside an SSD. NGD shipped their first prototypes in 2016, shipped an FPGA version of their smart SSD in 2017, and already have their field-upgradable ASIC prototypes in customer hands.

NGD’s smart SSDs have a 4-core ARM processor and run an Ubuntu distro on 3 of them. Essentially, anything that can run on Ubuntu Linux, including Docker containers and Kubernetes, could be run on their smart SSDs.

NGD sells standard (storage-only) SSDs as well as their smart SSDs. The smart hardware ships with all of their SSDs but is only enabled after customers purchase a software license key. They currently offer their smart SSD solutions in America and Europe, with APAC coming later.

They offer smart SSDs in both 2.5” and M.2 form factors. NGD Systems is following the flash technology roadmap and currently offers a 16TB SSD in the 2.5” FF.

How applications work on smart SSDs

They offer an open-source SDK which creates a TCP/IP tunnel across the NVMe bus that attaches their smart SSD. This allows the host and the smart SSD to communicate and send (RPC) work back and forth between them.

A normal smart SSD workflow could be:

  1. Host server writes data onto the smart SSD;
  2. Host signals the smart SSD to perform work on the data on the smart SSD;
  3. Smart SSD processes the data that has been sent to the SSD; and
  4. When smart SSD work is done, it sends a response back to the host.

I assume somewhere before #2 above, you load application software onto the device.
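
The tunnel-plus-RPC flow above can be sketched with ordinary sockets. This toy stands in localhost TCP for the SDK's TCP/IP-over-NVMe tunnel and a thread for the SSD's ARM cores; the message format and the image-tagging "work" are invented for illustration, not NGD's actual SDK:

```python
import json
import socket
import threading

def ssd_side(server_sock):
    """Plays the smart SSD: receive data + a work request, process, respond."""
    conn, _ = server_sock.accept()
    with conn:
        msg = json.loads(conn.recv(4096).decode())      # steps 1+2: data arrives with a work request
        if msg["op"] == "tag":                          # step 3: process on the SSD's own cores
            tags = ["cat"] if "cat" in msg["payload"] else []
            result = {"image": msg["payload"], "tags": tags}
        else:
            result = {"error": "unknown op"}
        conn.sendall(json.dumps(result).encode())       # step 4: respond to the host

# "SSD" listens on an ephemeral localhost port (standing in for the NVMe tunnel).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=ssd_side, args=(server,), daemon=True).start()

# Host side: write the data, request the work, read back the result.
host = socket.create_connection(server.getsockname())
host.sendall(json.dumps({"op": "tag", "payload": "photo_of_a_cat"}).encode())
reply = json.loads(host.recv(4096).decode())
host.close()
print(reply)   # {'image': 'photo_of_a_cat', 'tags': ['cat']}
```

The point of the pattern is that only the small request and result cross the link; the bulky data stays on the SSD where the processing happens.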

The work to be done could be the same for every attached smart SSD, and that work could easily be distributed across all the attached smart SSDs and the host processor. For example, for image processing, a host processor would write images to be processed across all the SSDs, have each perform image recognition and append tag (or other results) metadata onto the images, and then respond back to the host. Or for media transcoding, video streams could be written to a smart SSD and have it perform transcoding completely outboard.

The smart SSD processors access the data just like the host processor, or they can use services available in the SDK which access the data much faster. Just about any data processing you could do on the host processor could be done outboard, on smart SSD processor elements. Scott mentioned that memory-intensive applications are probably not a good fit for computational storage.

He also said that their (ARM) processing elements were specifically designed for low-power operation. So although AI training and inference processing might be much faster on GPUs, GPU power consumption is much higher. As a result, AI training and inference power-performance could well be better on smart SSDs.

Markets for smart SSDs?

One target market for NGD’s computational storage SSDs is hyperscalers. At FMS18, Microsoft Research published a report on running FAISS software on NGD smart SSDs that led to a significant speedup. Scott also brought up one company they’re working with that was testing to find out just how many 4K video streams can be processed on a gaggle of smart SSDs. There was also talk of three-letter (gov’t) organizations interested in smart SSDs to encrypt data and perform other outboard processing of (intelligence) data.

Highly distributed applications and data remind me of a lot of HPC customers I know. But bandwidth is also a major concern for HPC. NVMe is fast, but there’s a limit to how many SSDs can be attached to a server.

However, with NVMeoF, NGD Systems could support a lot more “attached”  smart SSDs. Imagine a scoop of smart SSDs, all attached to a slurp of servers,  performing data intensive applications on their processing elements in a widely distributed fashion. Sounds like HPC to me.

The podcast runs ~39 minutes. Scott’s great to talk with and is very knowledgeable about the Flash/SSD industry and NGD Systems. His talk on their computational storage was mind expanding. Listen to the podcast to learn more.

Scott Shadley, VP Marketing, NGD Systems

Scott Shadley, Storage Technologist and VP of Marketing at NGD Systems, has more than 20 years of experience with storage and semiconductor technology. Working at STEC, he was part of the team that enabled and created the world’s first enterprise SSDs.

He spent 17 years at Micron, most recently leading the SATA SSD product line with record-breaking revenue and growth for the company. He is active on social media, a lover of all things high tech, and a self-proclaimed geek around mobile technologies who enjoys educating and sharing.

71: GreyBeards talk DP appliances with Sharad Rastogi, Sr. VP & Ranga Rajagopalan, Sr. Dir., Dell EMC DPD

Sponsored by:

In this episode we talk data protection appliances with Sharad Rastogi (@sharadrastogi), Senior VP Product Management,  and Ranga Rajagopalan, Senior Director, Data Protection Appliances Product Management, Dell EMC Data Protection Division (DPD). Howard attended Ranga’s TFDx session (see TFDx videos here) on their new Integrated Data Protection Appliance (IDPA) the DP4400 at VMworld last month in Las Vegas.

This is the first time we have had anyone from Dell EMC DPD on our show. Ranga and Sharad were both knowledgeable about the data protection industry, industry trends and talked at length about the new IDPA DP4400.

Dell EMC IDPA DP4400

The IDPA DP4400 is the latest member of the Dell EMC IDPA product family.  All IDPA products package secondary storage, backup software and other solutions/services to make for a quick and easy deployment of a complete backup solution in your data center.  IDPA solutions include protection storage and software, search and analytics, system management — plus cloud readiness with cloud disaster recovery and long-term retention — in one 2U appliance. So there’s no need to buy any other secondary storage or backup software to provide data protection for your data center.

The IDPA DP4400 grows in place from 24 to 96TB of usable capacity, and at an average 55:1 dedupe ratio, it could support over 5PB of backup storage on the appliance. The full capacity always ships with the appliance. Customers can select how much or how little they use by just purchasing a software license key.

In addition to the on-appliance capacity, the IDPA DP4400 can use up to 192TB of cloud storage as a native Cloud Tier. Cloud tiering takes place after a specified appliance-residency interval, after which backup data is moved from the appliance to the cloud. IDPA Cloud Tier works with AWS, Azure, IBM Cloud Object Storage, Ceph and Dell EMC Elastic Cloud Storage. With the 192TB of cloud and 96TB of on-appliance usable storage, together with a 55:1 dedupe ratio, a single IDPA DP4400 can support over 14PB of logical backup data.
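
A quick back-of-the-envelope check of those capacity claims, using the quoted figures:

```python
DEDUPE_RATIO = 55          # quoted average dedupe ratio (55:1)
appliance_tb = 96          # max usable on-appliance capacity, TB
cloud_tier_tb = 192        # max Cloud Tier capacity, TB

# Logical (pre-dedupe) backup capacity = physical capacity x dedupe ratio.
logical_on_box_pb = appliance_tb * DEDUPE_RATIO / 1000
logical_total_pb = (appliance_tb + cloud_tier_tb) * DEDUPE_RATIO / 1000

print(f"on-appliance logical: {logical_on_box_pb:.2f} PB")   # 5.28 PB, i.e. "over 5PB"
print(f"with cloud tier:      {logical_total_pb:.2f} PB")    # 15.84 PB, consistent with "over 14PB"
```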

Furthermore, IDPA supports Cloud DR. With Cloud DR, backed up VMs are copied to the public cloud (AWS) on a scheduled basis. In case of a disaster, there is an orchestrated failover with the VMs spun up in the cloud. The cloud workloads can then easily be failed back on site once the disaster is resolved.

The IDPA DP4400 also comes with native DD Boost™ support. This means Oracle, SQL server and other applications that already support DD Boost can also use the appliance to backup and restore their application data. DD Boost customers can make use of native application services such as Oracle RAC to manage their database backups/restores with the appliance.

Dell EMC also offers their Future-Proof Loyalty Program guarantees for the IDPA DP4400, including a Data Protection Deduplication guarantee, under which, if best practices are followed, Dell EMC will guarantee the appliance dedupe ratio for backup data. Additional guarantees from the Dell EMC Future-Proof Program for the IDPA DP4400 include a 3-Year Satisfaction guarantee, a Clear Price guarantee, which ensures predictable pricing for future maintenance and service, as well as a Cloud Enabled guarantee. These are just a few of the Dell EMC guarantees provided for the IDPA DP4400.

The podcast runs ~16 minutes. Ranga and Sharad were both very knowledgeable about the DP industry, DP trends and the new IDPA DP4400. Listen to the podcast to learn more.

Sharad Rastogi, Senior V.P. Product Management, Dell EMC Data Protection Division

Sharad Rastogi is a global technology executive with strong track record of transforming businesses and increasing shareholder value across a broad range of leadership roles, in both high growth and turnaround situations.

As SVP of Product Management at Dell EMC, Sharad is responsible for all products for the $3B Data Protection business.  He oversees a diverse portfolio, and is currently developing next generation integrated appliances, software and cloud based data protection solutions. In the past, Sharad has held senior roles in general management, products, marketing, corporate development and strategy at leading companies including Cisco, JDSU, Avid and Bain.

Sharad holds an MBA from the Wharton School at the University of Pennsylvania, an MS in engineering from Boston University and a B.Tech in engineering from the Indian Institute of Technology in New Delhi.

He is an advisor to the Boston University College of Engineering, and a Board member at Edventure More, a non-profit providing holistic education. Sharad is a world traveler, always seeking new adventures and experiences.

Rangaraaj (Ranga) Rajagopalan, Senior Director Data Protection Appliances Product Management, Dell EMC Data Protection Division

Ranga Rajagopalan is Senior Director of Product Management for Data Protection Appliances at Dell EMC. Ranga is responsible for driving the product strategy and vision for Data Protection Appliances, setting and delivering the multi-year roadmap for Data Domain and Integrated Data Protection Appliance.

Ranga has 15 years of experience in data protection, business continuity and disaster recovery, in both India and USA. Prior to Dell EMC, Ranga managed the Veritas Cluster Server and Veritas Resiliency Platform products for Veritas Technologies.

70: GreyBeards talk FMS18 wrap-up and flash trends with Jim Handy, General Dir. Objective Analysis

In this episode we talk about Flash Memory Summit 2018 (FMS18) and recent trends affecting the flash market with Jim Handy, General Director, Objective Analysis. This is the 4th time Jim’s been on our show; he’s been our go-to guy on flash technology forever.

NAND supply?

Talking with Jim is always a far-reaching discussion. We quickly centered on recent spot NAND pricing trends. Jim said the market is seeing a 10 to 12% drop, quarter over quarter, in NAND spot pricing, almost 60% since the year started, which is starting to impact long-term contracts. During supply gluts like this, DRAM spot prices typically drop 40-60% Q/Q, so maybe there are more NAND price reductions on the way.

A new player in the NAND fab business was introduced at FMS18: Yangtze Memory Technologies from China. Jim said they were one generation behind the leaders, which suggests their product costs ($/NAND bit) are likely 2X the industry’s. But apparently, China is prepared to lose money until they can catch up.

I asked Jim if they have a hope of catching up: yes. For example, there have been some shenanigans with DRAM technology and a Chinese DRAM fab. They have (allegedly) stolen technology from Micron’s Taiwan DRAM fab. They in turn have sued Micron for patent infringement and won, locking Micron out of the Chinese DRAM market. With the DRAM market tightening, Micron’s absence will hurt Chinese electronics producers. Others will step in, but Micron will have to focus DRAM sales elsewhere.

3D XPoint/Optane?

There wasn’t much discussion of 3D XPoint. Intel did announce some customers for Optane SSDs and that they are starting to produce 3D XPoint in DIMMs. The Intel-Micron 3D XPoint partnership has dissolved. Intel seems willing to continue to price their Optane and 3D XPoint DIMMs below cost and make it up selling microprocessors.

Jim predicted years back that there would be little to no market for 3D XPoint SSDs. With Optane SSDs at 5X the cost of NAND SSDs and only 5X faster, it’s not a significant enough advantage to generate the volumes needed to make a profitable product. But in a DIMM form factor, hanging off the memory bus, it’s 1000X faster than NAND, and with that much performance, it shouldn’t have a problem generating sufficient volume to become profitable.

Other NAND/SCM news

We talked about the emergence of QLC NAND. With 3D NAND, there appear to be sufficient electrons to make QLC viable. The write speeds are still horrible, ~1000X slower than SLC. But vendors are now adding SLC NAND (as a write cache) in their SSDs to sustain faster writes.
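
The SLC write-cache idea can be pictured with a toy model: a burst of writes lands in a small, fast SLC region and completes at SLC speed until the cache fills, after which the host is gated by the much slower QLC drain rate. The function and all the numbers below are invented for illustration, not any vendor's design:

```python
def simulate_burst(burst_mb, cache_mb, slc_mbps, qlc_mbps):
    """Seconds to absorb a write burst behind an SLC front-end cache (toy model).

    Writes land in SLC at slc_mbps until the cache fills; once full, the
    host is throttled to the QLC drain rate. Ignores concurrent draining
    during the fill, so it slightly overstates the penalty.
    """
    if burst_mb <= cache_mb:
        return burst_mb / slc_mbps                 # whole burst fits in SLC
    t_fill = cache_mb / slc_mbps                   # time to fill the SLC cache
    # Remaining writes are gated by how fast QLC can accept the overflow.
    return t_fill + (burst_mb - cache_mb) / qlc_mbps

print(simulate_burst(burst_mb=100, cache_mb=200, slc_mbps=2000, qlc_mbps=100))  # 0.05 s
print(simulate_burst(burst_mb=400, cache_mb=200, slc_mbps=2000, qlc_mbps=100))  # 2.1 s
```

As long as bursts stay smaller than the SLC region, the host never sees QLC write speeds; sustained writes eventually do.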

The other new technology at FMS18 was computational storage. Computational storage vendors are putting compute near (inside) an SSD to better perform IO-intensive workloads. Some computational storage vendors talked about their technology and how it could speed up select workloads.

There’s SCM beyond 3D XPoint. These vendors have been quietly shipping for some time now; they just aren’t at the capacities/bit densities needed to challenge NAND. Jim mentioned a few that are in production: EverSpin/MRAM, Adesto/ReRAM and Crossbar/FeRAM.

Jim said IBM is using EverSpin/MRAM technology in the latest FlashCore Modules for their FlashSystem 9100, and EverSpin MRAM is being used in satellites. Adesto/ReRAM is being used in the medical instrument market.

The podcast runs ~42 minutes. We apologize for the audio quality, we promise to do better next time. Jim’s been the GreyBeards memory and flash technology guru before our hair turned grey and is always enlightening about the flash market and technology trends.  Listen to the podcast to learn more.

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. He posts blogs at www.TheMemoryGuy.com and www.TheSSDguy.com.

69: GreyBeards talk HCI with Lee Caswell, VP Products, Storage & Availability, VMware

Sponsored by:

For this episode we preview VMworld by talking with Lee Caswell (@LeeCaswell), Vice President of Product, Storage and Availability, VMware.

This is the third time Lee’s been on our show, the previous one was back in August of last year. Lee’s been at VMware for a couple of years now and, among other things, is leading the HCI journey at VMware.

The first topic we discussed was VMware’s expanded HCI software defined data center (SDDC) solution, which now includes compute, storage, networking and enhanced operations with alerts/monitoring/automation that ties it all together.

We asked Lee to explain VMware’s SDDC:

  • HCI operates at the edge – with 2-server ROBO environments, VMware’s HCI can be deployed in a closet and remotely operated by a VI admin from the central site.
  • HCI operates in the data center – with vSphere-vSAN-NSX-vRealize and other software, VMware modernizes data centers for the pace of digital business.
  • HCI operates in the public cloud – with VMware Cloud (VMC) on AWS, IBM Cloud and over 400 service providers, VMware HCI also operates in the public cloud.
  • HCI operates for containers and cloud native apps – with support for containers under vSphere, vSAN and NSX, developers are finding VMware HCI an easy option to run container apps in the data center, at the edge, and in the public cloud.

The importance of the edge will become inescapable, as 50B edge-connected devices power IoT by 2020. Lee heard Pat saying compute processing is moving to the edge because of 3 laws:

  1. the law of physics, light/information only travels so fast;
  2. the law of economics, doing all processing at central sites would take too much bandwidth and cost; and
  3. the law(s) of the land, data sovereignty and control is ever more critical in today’s world.

VMware SDDC is a full-stack option that executes just about anywhere the data center wants to go. Howard mentioned one customer he talked with at FMS18 who just wanted to take their 16-node VMware HCI rack and clone it forever, to supply infinite infrastructure.

Next, we turned our discussion to Virtual Volumes (VVols). Recently, VMware added replication support for VVols. Lee said VMware intends to provide an SRM SRA for VVols. But the real question is why there hasn’t been higher field VVol adoption. We concluded it takes time.

VVols wasn’t available in vSphere 5.5, and nowadays three or more years have to go by before a significant amount of the field moves to a new release. Howard also said early storage systems didn’t implement VVols right. Moreover, VMware vSphere 5.5 is just now (9/16/18) going EoGS (end of general support).

Lee said 70% of all current vSAN deployments are AFA. With AFA, hand-tuning storage performance is no longer something admins need to worry about. It used to be we all spent time defragging/compressing data to squeeze more effective capacity out of storage, but hand capacity optimization like this has become a lost art. Just like with capacity, hand-tuning AFA performance doesn’t make sense anymore.

We then talked about the coming flash SSD supply glut. Howard sees flash pricing ($/GB) dropping by 40-50%, regardless of interface. This should drive AFA shipments above 70%, as long as the glut continues.

The podcast runs ~21 minutes. Lee’s always great to talk with and is very knowledgeable about the IT industry, HCI in general, and of course, VMware HCI in particular.  Listen to the podcast to learn more.

Lee Caswell, V.P. of Product, Storage & Availability, VMware

Lee Caswell leads the VMware storage marketing team driving vSAN products, partnerships, and integrations. Lee joined VMware in 2016 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Prior to VMware, Lee was vice president of Marketing at NetApp and vice president of Solution Marketing at Fusion-IO. Lee was a founding member of Pivot3, a company widely considered to be the founder of hyper-converged systems, where he served as the CEO and CMO. Earlier in his career, Lee held marketing leadership positions at Adaptec, and SEEQ Technology, a pioneer in non-volatile memory. He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.