104: GreyBeards talk new cloud defined (shared) storage with Siamak Nazari, CEO Nebulon

Ray has known Siamak Nazari (@NebulonInc), CEO of Nebulon, across three companies now but has rarely had a one (two) on one discussion with him. With Nebulon just emerging from stealth (a gutsy move during the pandemic), the GreyBeards felt it was a good time to get Siamak on the show to tell us what he’s been up to. Turns out he and Nebulon decided it was time to completely rethink/rearchitect shared storage for the new data center.

At his prior company, Siamak spent a lot of time with many customers discussing the problems they had dealing with the complexity of managing, provisioning and maintaining multiple shared storage arrays. Somewhere in all those discussions Siamak saw this as a problem that needed a radical solution. If we could just redo shared storage from the ground up, there might be a solution to all these problems.

Redefining shared storage

Nebulon’s new approach to shared storage starts with an SPU (services processing unit) card which replaces SAS RAID cards in a server. But instead of creating SAS RAID groups, the SPU creates a shareable, enterprise class pool of storage across a throng of servers.

Nebulon calls a collection of servers with SPUs Cloud Defined Storage (CDS), and such a collection makes up a Nebulon nPod. An nPod essentially consists of multiple servers with SPU cards, with or without attached SSD storage, that are provisioned, managed and monitored via the cloud. Nebulon nPod servers are elements or nodes of a shared storage pool that spans all interconnected SPU servers in a data center.

In an SPU server with local (SAS, SATA, NVMe) SSD storage, the SPU creates an erasure coded pool of storage which can be used to serve (SAS) LUNs to this or any other SPU-attached server in the nPod. In an SPU server without local SSD storage, the SPU provides access to any other SPU server’s shared storage in the nPod. Nebulon nPods only work with flash storage; they don’t support spinning media.
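
Nebulon hasn’t published the internals of its erasure coding scheme, but the basic idea behind any erasure coded pool is the same: split data into chunks, add parity, and spread the chunks across servers so a lost chunk can be rebuilt from the survivors. A minimal single-parity sketch in Python (the 4+1 layout and function names are ours, purely illustrative; real systems typically use Reed-Solomon codes to survive multiple failures):

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, n_data: int = 4) -> list[bytes]:
    """Split data into n_data chunks plus one XOR parity chunk.

    Single parity (RAID5-style) tolerates the loss of any one chunk,
    i.e., any one server/SPU in this toy nPod."""
    size = -(-len(data) // n_data)              # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(n_data)]
    return chunks + [reduce(xor, chunks)]       # one chunk per server

def rebuild(chunks: list[bytes], lost: int) -> bytes:
    """Recompute a lost chunk by XORing all the surviving ones."""
    return reduce(xor, (c for i, c in enumerate(chunks) if i != lost))

stripe = encode(b"customer LUN data spread across the nPod")
assert rebuild(stripe, lost=2) == stripe[2]     # lose server 2, rebuild it
```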

The SPU can supply boot storage for its server. There’s no need for the CPU to be running OS code to use nPod shared storage. Yes, the SPU needs power and an active PCIe bus to work, but SPU functionality doesn’t require an operational OS. The SPU presents a SAS LUN interface to server CPUs.

Each SPU has dual port access to an inter-cluster (25GbE) interconnect that connects all SPUs in the nPod. The nPod inter-cluster protocol is proprietary but takes advantage of standard TCP/IP services across the network, with standard 25GbE switching.

The SPU firmware ensures that it stays connected as long as power is available to the server. Customers can have more than one SPU in a server, but the additional card would be used for more IO performance. Each SPU also has 32GB of NVRAM, used for caching and for power-fail fault tolerance.

In the unlikely case that the server and SPU are completely down (e.g., a power outage), clients can still access that SPU’s data storage, if it was mirrored (see below). When the SPU server comes back up, it will be resynced with any data that had changed.

Other Nebulon storage features

Nebulon supports data-at-rest encryption, compression and deduplication for customer data. That way customer data is never in plain text as it travels across the nPod, or even within the server from the SPU to SSD storage. Also, any customer data written to an nPod can be optionally mirrored and, as noted above, is protected via erasure coding.

The SPU also supports snapshotting of customer LUN data, so clients can take copies of LUNs and use them for backups, test, dev, etc. SPUs also support asynchronous or synchronous replication between nPods. For synchronous replication and mirrored data, the originating host only sees the IO complete after the data has been received at the target SPU or nPod.
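
Nebulon’s replication protocol is proprietary, so the sketch below only illustrates the completion semantics just described, with invented class and method names: a synchronous write doesn’t complete back to the host until the target acknowledges the data, while an asynchronous write completes immediately and ships in the background (at the cost of possible data loss on failure).

```python
import queue, threading

class ReplicatedLUN:
    """Toy write path contrasting sync vs. async replication semantics."""

    def __init__(self, sync: bool):
        self.sync = sync
        self.backlog = queue.Queue()   # async shipping queue to the target
        threading.Thread(target=self._ship, daemon=True).start()

    def write(self, block: int, data: bytes) -> None:
        self._local_write(block, data)
        if self.sync:
            # Synchronous: the host's IO only completes after the target
            # SPU/nPod acknowledges receipt of the data.
            self._send_to_target(block, data)
        else:
            # Asynchronous: complete immediately, replicate later. Faster,
            # but a failure can lose whatever is still queued (RPO > 0).
            self.backlog.put((block, data))

    def _ship(self):
        while True:
            self._send_to_target(*self.backlog.get())

    def _local_write(self, block, data): ...    # persist via the local SPU
    def _send_to_target(self, block, data): ... # network hop to remote nPod
```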

Metadata for the nPod, defining LUN configurations and which server holds which LUN data, is kept across the cluster in each SPU. But metadata on the location of user data within a server is only kept in that server’s SPU.
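
Put another way, there are two tiers of metadata: a small, cluster-wide catalog replicated on every SPU, and a much larger per-server block map kept only on the SPU that owns the data. A toy illustration (the structures and field names are ours, not Nebulon’s):

```python
# Replicated on every SPU in the nPod: which servers hold each LUN.
npod_catalog = {
    "lun-42": {"owners": ["server-a", "server-c"], "mirrored": True},
    "lun-77": {"owners": ["server-b"], "mirrored": False},
}

# Kept only on server-a's SPU: where lun-42's blocks live locally.
local_block_map = {
    ("lun-42", 0): ("ssd-0", 0x1A000),  # (LUN, logical block) -> (drive, offset)
    ("lun-42", 1): ("ssd-2", 0x0C800),
}
```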

We asked Siamak whether nPods support SCM (storage class memory). He said not yet, but they’re looking at SCM NVMe storage for use as a potential metadata and data cache for SPUs.

Nebulon Application Centric storage

All the above storage features are present in most enterprise class storage systems. But what sets Nebulon apart from all other shared storage arrays is that its control plane lives entirely in the cloud. That is, customers point their browser at Nebulon’s control plane and use it to configure, provision and manage the nPod storage pool. Nebulon supports application templates that can be used to configure nPod storage for standardized applications, such as VMware VMs, MongoDB, persistent storage for K8S containers, bare metal Linux apps, etc.
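
Nebulon hasn’t published its template format, so what follows is only a conceptual sketch, with invented fields and function names, of how an application template might map onto nPod provisioning calls:

```python
# Hypothetical template: what a "MongoDB" profile might encode.
mongodb_template = {
    "luns_per_server": 1,
    "lun_size_gib": 512,
    "mirrored": True,            # survive a server/SPU outage
    "compression": True,
    "snapshot_schedule": "hourly",
}

def apply_template(npod, servers, template):
    """Carve identical LUNs on each server's SPU per the template."""
    for server in servers:
        for _ in range(template["luns_per_server"]):
            npod.create_lun(server,
                            size_gib=template["lun_size_gib"],
                            mirrored=template["mirrored"],
                            compression=template["compression"])
        npod.schedule_snapshots(server, template["snapshot_schedule"])
```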

With the nPod’s control plane in the cloud, provisioning, managing and monitoring storage services becomes much more agile. Nebulon can literally roll out new control plane updates to their installed base on an almost daily basis, just like any other cloud based or SaaS application. Customers receive updated nPod control plane functionality by simply refreshing their browser page.

Nebulon’s Go-To-Market

Near the end of our podcast, we asked Siamak how Nebulon was going to access the market. Nebulon’s go-to-market is to use server OEMs. That is, they have signed agreements with two server vendors (and are working on a third) to sell SPU cards with Nebulon control plane access.

During server purchases, customers configure their servers, but now alongside the SAS RAID card options they will see a Nebulon SPU option. OEM server vendors will bundle SPU hardware and Nebulon control plane access along with all other server components such as CPUs, SSDs, NICs, etc. This way, the customer receives a pre-installed SPU card in their server and is ready to configure nPod LUNs as soon as the server powers on in their network.

Nebulon will go GA in the 3rd quarter.

The podcast ran ~43 minutes. Siamak has always been a pleasure to talk with and is very knowledgeable about the problems customers have in today’s data center environments. Nebulon has given him and his team the way to rethink storage and address these serious issues. Matt and I had a good time talking with Siamak. Listen to the podcast to learn more.


Siamak Nazari, CEO Nebulon

Siamak Nazari is the CEO and Co-founder of Nebulon. Siamak has over 25 years of experience working on distributed and highly available systems.

In his position as HPE Fellow and VP, he was responsible for setting technical direction for HPE 3PAR and its portfolio of software and hardware. He worked on HPE 3PAR technology from 2000 to 2018, responsible for designing and implementing distributed memory management and the high availability features of the system.

Prior to joining 3PAR, Siamak was the technical lead for distributed highly available Proxy Filesystem (pxfs) of Sun Cluster 3.0.

0100: GreyBeards talk with Colin Gallagher, VP Dig. Infra. Prod. Mkt. @ Hitachi Vantara

Sponsored By: Hitachi Vantara

We have known Colin Gallagher (@worldc3), VP, Digital Infrastructure Product Marketing at Hitachi Vantara, for a long time and he has always been an all around smart storage guy. Colin’s team at Hitachi Vantara are bringing out a brand new, midrange storage system and we thought it would be a good time to catch up with him and learn about it.

The new Hitachi Vantara VSP E990 Storage System is an all NVMe SSD array for medium sized enterprises that need predictable, high IOPS, low latency performance with enterprise class functionality and world class reliability/availability. We asked Colin why they needed all NVMe levels of performance. Colin replied that many of these data centers are starting to use advanced HPC, AI, and data analytics applications together with their standard Oracle, SAP and Microsoft solutions. These combined workloads have an acute need for predictable, high end performance and enterprise class functionality in order to work well.

The VSP E990 comes from a long heritage of enterprise storage at Hitachi, most recently embodied in the Hitachi VSP 5000. In fact, the VSP E990 uses the same storage OS as the VSP 5000, with changes made to streamline it for use with higher performing, all NVMe storage on a dual controller architecture.

This means all the advanced storage functionality of the high end enterprise VSP 5000 is available on the VSP E990 midrange system, minus some items not pertinent to midrange, such as mainframe attach.

Many of the software changes involved cache and cache management. In the VSP E990, cache is now automatically shared and distributed across controllers, reducing the performance impact of mirroring. Further, Hitachi has added more cores and higher performing processors as well. As a result, the VSP E990 all NVMe array can provide up to 5.8M IOPS and, best among networked storage systems, IO response times as low as 64 µsec. Colin also mentioned that they have reduced flash drive rebuild times by 80%.

The VSP E990 comes in a 4U base configuration and can offer from ~6TB to over 6PB of virtual capacity with drive expansion. In 8U plus controller (on the audio it was incorrectly stated as 6U, The Eds.), the VSP E990 provides slots for up to 96 NVMe SSDs. Just like all VSP storage, the VSP E990 also offers the Hitachi 100% Data Availability Guarantee, the world’s oldest. Further, the VSP E990 supports 6-9s (99.9999%) reliability.
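
For a sense of what six nines buys you, the downtime arithmetic is straightforward:

```python
# Allowed downtime per year at a given availability level.
minutes_per_year = 365.25 * 24 * 60
for nines, availability in [(4, 0.9999), (5, 0.99999), (6, 0.999999)]:
    downtime = minutes_per_year * (1 - availability)
    print(f"{nines}-9s: {downtime:8.2f} min/yr ({downtime * 60:7.1f} sec)")
# 6-9s works out to roughly half a minute of downtime per year.
```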

In addition the VSP E990 also supports Hitachi Adaptive Data Reduction, which compresses and deduplicates data to increase virtual capacity and reduce physical footprint. In the VSP E990, Adaptive Data Reduction uses AI to determine the best time to deduplicate data while at the same time optimizing host IO performance and effective storage capacity.

Hitachi Ops Center

During the last year or so Hitachi Vantara introduced its new Hitachi Ops Center solution to better administer and manage storage and other digital infrastructure. Ops Center now comes with 4 components: Administrator, Protector (copy data management), Automator and Analyzer.

  • Administrator supplies an element manager for VSP, other storage, and digital infrastructure in the data center.
  • Protector provides enterprise class, copy data management to protect, migrate, and archive VSP data storage.
  • Analyzer supports AI analysis of the data center’s storage operations to monitor SLAs, troubleshoot problems, and improve storage performance, covering 3rd party compute, network and storage as well.
  • Automator supplies a series of templates and services to automate mundane, manual storage and other digital infrastructure tasks required to configure, operate and manage these systems in the data center. Automator provides a number of templates which customers can tailor to automate infrastructure operations such as provisioning an ESXi data store. The templates together with Automator services automatically carry out all the OS, fabric and storage/digital infrastructure tasks and activities required to perform these functions.

Hitachi EverFlex consumption models

Hitachi Vantara is also introducing EverFlex, a new series of consumption models that any customer can use to gain more financial flexibility in their data center digital infrastructure acquisitions, deployments, and management.

EverFlex offers customers the option to purchase, lease, or buy on a pay-as-you-go, cloud-like basis any Hitachi Vantara storage or digital infrastructure. Colin mentioned there are two ways that pay-as-you-go can operate:

  1. Customers pay on a pure capacity-over-time basis. Here the customer contracts for a certain capacity, and Hitachi Vantara installs the storage/digital infrastructure capacity and bills them monthly for it.
  2. Customers pay on an SLA-over-time basis. Here they contract for a specific SLA, such as IOPS or another performance characteristic, and Hitachi Vantara installs and maintains whatever storage/digital infrastructure is needed to meet that SLA and bills them monthly for it. (Both billing models are sketched below.)
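
The difference between the two models is simply what gets metered. A toy monthly billing sketch (all rates and figures are invented for illustration; actual EverFlex terms come from Hitachi Vantara):

```python
def capacity_bill(used_tib: float, rate_per_tib: float) -> float:
    """Model 1: pay for the capacity consumed this month."""
    return used_tib * rate_per_tib

def sla_bill(base_fee: float, sla_met: bool, credit: float) -> float:
    """Model 2: pay a fixed fee for the SLA, credited back if it was missed."""
    return base_fee - (0.0 if sla_met else credit)

print(capacity_bill(used_tib=250, rate_per_tib=20.0))            # 5000.0
print(sla_bill(base_fee=8000.0, sla_met=False, credit=1200.0))   # 6800.0
```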

Colin said that all of Hitachi’s world-class services are also now available for purchase under EverFlex.

The podcast ran ~24 minutes. Colin has always been easy to talk with and very knowledgeable about storage. We were very impressed with the performance and innovation in the VSP E990 as well as Ops Center and EverFlex. Keith and I had fun discussing these solutions with Colin. Listen to the podcast to learn more.


Colin Gallagher, VP Digital Infrastructure Product Marketing at Hitachi Vantara

Colin is Vice President for Digital Infrastructure Product Marketing at Hitachi Vantara where he leads product marketing for storage systems, storage software, and converged/hyper-converged solutions.

Over his 25-year career he has led marketing and product management teams at several major storage companies. Colin has a passion for telling compelling stories about technical products that help customers solve both business and personal pain – and he enjoys the challenge of telling them in creative ways.

He holds a bachelor’s degree from Georgetown University and an MBA from Northeastern University. Colin tries to put as many miles on his bike as possible, “hangs out” on twitter as @worldc3, and (unlike the GreyBeards) is team Oxford comma.

096: GreyBeards YE2019 IT Industry Trends podcast

In this, our year-end industry wrap up episode, the GreyBeards discuss trends and technologies impacting the IT industry in 2019 and what’s ahead for 2020. This year we have Matt and Keith on the podcast along with Ray. Just like last year, we start off with NVMeoF.

NVMeoF unleashed

This year just about every major storage vendor announced new systems that either have support for NVMeoF or currently offer NVMeoF on their storage systems. Most offer FC based NVMeoF, but a few offer NVMeoF/Ethernet, and fewer still offer both.

All of the NVMeoF/Ethernet offerings seem to use RoCE or iWARP. It’s unclear whether one is more often used than the other, so for now both continue to be used in the market. Some storage vendors are offering NVMeoF as an internal fabric to access backend storage while still using iSCSI or FC/SCSI for host access to the data. This works better than SAS but won’t provide all the performance you can get from end-to-end NVMeoF.

NVMeoF is all about increasing IOPS and reducing response times. That, and getting ready for SCM SSDs. In the meantime the SSD industry has introduced some very attractive NVMe (NAND) SSDs that, in an NVMeoF storage system, can increase IOPS and reduce latencies.

We talked last year about NVMeoF standards finally stabilizing and this year the rollout across enterprise storage systems is testament to that.

SCM hits the enterprise

Most of us attended an Intel Data Center Event earlier this past year, where Optane DC PM was introduced. Optane DC PM is the memory version of Optane SCM (3D XPoint) technology. Intel offers two distinct modes of accessing Optane DC PM as memory: 1) App Direct mode, where data in Optane DC PM persists across power cycles but requires use of a special API; and 2) Memory mode, where Optane DC PM is cleared during a power cycle (see our RayOnStorage post Need memory, Intel’s Optane DC PM…).
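
App Direct mode is normally programmed against Intel’s PMDK libraries in C; the closest quick illustration from Python is memory-mapping a file on a DAX-mounted persistent memory filesystem. The mount point below is hypothetical, and this is only a sketch of the idea, not a production persistence scheme:

```python
import mmap, os

# App Direct illustration: map a file on a DAX-mounted pmem filesystem
# (hypothetical path; requires a real pmem region mounted with -o dax).
# Unlike Memory mode, stores made this way survive power cycles.
PMEM_FILE = "/mnt/pmem/appdirect.bin"

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, 4096)
buf = mmap.mmap(fd, 4096)

buf[0:5] = b"hello"   # a plain store into persistent memory
buf.flush()           # flush to the persistence domain
buf.close()
os.close(fd)
```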

Vendors seem to be using Optane both memory and SCM technology differently. Pure is using Optane SSDs plugged into their FlashArray as sort of a read cache for customer IO. They suggest for well behaved applications this can reduce IO response times considerably.

Dell EMC introduced SCM as a storage tier and are using their automated storage tiering to move the hottest data to SCM. Oracle’s latest Exadata appliance uses Optane DC PM as both a read and write caching layer.

It won’t be long before every enterprise vendor offers SCM drives in their storage systems with a few offering Optane DC PM as in memory caching technology.

Of course, the big news for Optane DC PM is its use in in-memory databases, specifically SAP HANA. HANA can take advantage of the (6)TB of memory to handle larger databases. Keith mentioned that even Microsoft SQL Server can take advantage of the additional memory to provide faster responses to queries.

Keith also mentioned that there are some systems out there that can be configured to share Optane memory (or storage). When SAP or other databases use this solution they are able to amortize the cost of the technology over more use cases.

Of course, Optane DC PM is only available on the latest generation of Intel processors. None of us have heard anything from AMD (or Micron) on providing a second source for support of Optane DC PM (or the memory technology itself). Presumably most customers would want a second source for Optane DC PM processor support (as well as the technology).

Cloud enterprise storage hits mainstream

The other thing we saw more of this year is enterprise vendors offering versions of storage in public cloud environments. NetApp was an early proponent of doing this.

We saw at Pure that they have a new Cloud Block Store, which is a re-architected version of FlashArray//X storage using AWS hardware and networking services. We were very impressed with what they have accomplished and it was the subject of more than one late night discussion. Listen to the Keith & Ray show at Pure//Accelerate2019 podcast to learn more.

Matt mentioned Nimble’s cloud volume storage, which is cloud adjacent. Most enterprise vendors offer something similar today. They differentiate on how easy it is to configure and use, and where (which regions) it’s available.

NetApp has arguably been at this the longest and has the deepest offerings available from cloud adjacent file and block storage, to offering native enterprise file services for all public cloud environments, to supplying a suite of dedicated data services to surround all of their storage technology operating in public clouds and on premises.

While Dell EMC may have missed the turn to the cloud, they are quickly trying to catch up. Keith mentioned Faction, a Dell partner that offers cloud storage services using VMware with VMC. With Faction and vSAN customers have access to software defined storage that uses cloud hardware to support data services.

What’s driving data growth

There seems to be no end for the need for storage to store data. The GreyBeards point to three trends driving data growth today.

  1. IoT seems to have no bounds. A recent RayOnStorage post Internet of Tires discussed how tire companies were tying their tires to the internet. And that’s just the start, pretty soon every artifact, every device, every manufactured item will have a number of sensors attached all of which will be creating massive amounts of data.
  2. AI ML DL has an insatiable appetite for data. IoT is being used largely to optimize products and services. But it’s DL, with a large dollop of data, that is behind much of that optimization.
  3. SaaS is a relatively new application approach that’s being rolled out to more arenas, and as it’s online and user oriented, it seems to generate lots of data.

Containers storage debate

We closed the podcast with a heavy debate on whether container applications have need for storage. Keith was adamant that containers by their very nature are stateless and that Kubernetes’ ability to stop and start container applications at will almost requires stateless operation.

Ray was a bit more theoretical on the topic and believed that most container applications today take advantage of some sort of database or other services to store state and that state is just another word for storage.

Keith mentioned encoding as a typical container app. Encoding containers can be fired up and taken down at will without hurting anything but throughput. Yes, but those encoder container apps must access some database or other state information to find out what work is left to do, and as they complete their work they update this data as well as store their newly encoded segments. This all involves the use of state information.
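
Ray’s point is easy to see in code: even a “stateless” encoder container leans on stateful services at both ends. A schematic worker loop (the queue, bucket and key names are invented, and encode() stands in for a real codec):

```python
import redis, boto3

def encode(data: bytes) -> bytes:
    return data                 # stand-in for the real video codec

work = redis.Redis(host="state-store")   # stateful service #1: work queue
s3 = boto3.client("s3")                  # stateful service #2: object store

def encoder_worker():
    """'Stateless' worker: every bit of state lives outside the container."""
    while True:
        job = work.blpop("encode-jobs", timeout=30)  # what work is left?
        if job is None:                              # queue drained, exit
            break
        key = job[1].decode()                        # redis returns bytes
        raw = s3.get_object(Bucket="raw-video", Key=key)["Body"].read()
        s3.put_object(Bucket="encoded-video", Key=key, Body=encode(raw))
        work.hset("progress", key, "done")           # update shared state
```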

In the end, I think we were talking about the same thing but using different terminology. Keith believes that persistent state information is needed and Ray says that this is just another word for (containers) storage. Matt said we probably need Nigel (@NigelPoulton) on the podcast to straighten us both out.

The podcast ran a bit long and could have run longer. Keith and Matt bring a systems level perspective to what’s happening in the storage market, but they come at it from different sides. Ray seems to frame everything from a storage perspective. Diverse perspectives lead to a fuller and more interesting discussion. Listen to the podcast to learn more.



Ray Lucchesi (@RayLucchesi) is the host of GreyBeardsOnStorage, President/Founder of Silverton Consulting, and a prominent blogger at RayOnStorage.com.

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor and blogs at Virtualized Geek.

Matt Leib (@MBLeib), one of our co-hosts, has been blogging in the storage space for over 10 years, with work experience in both engineering and presales/product marketing. His blog is at Virtually Tied to My Desktop.


93: GreyBeards talk HPC storage with Larry Jones, Dir. Storage Prod. Mngmt., and Mark Wiertalla, Dir. Storage Prod. Mkt., at Cray, an HPE company

Supercomputing Conference 2019 (SC19) is coming to Denver next week, and in anticipation of that show we thought it would be a good time to talk with an HPC storage group. We contacted HPE and, given their recent acquisition of Cray, they offered up Larry and Mark to talk about their new ClusterStor E1000 storage system.

There are a number of components that go into Cray supercomputers, and besides ClusterStor, Larry and Mark mentioned their new SlingShot cluster interconnect, which is Ethernet based with significant enhancements to congestion handling. But the call focused on ClusterStor.

What is ClusterStor

ClusterStor is a Lustre file system hardware appliance. Lustre has always been popular with the HPC crowd as it offers high bandwidth file services. But Lustre often took a team of (PhD) scientists to configure, deploy and run properly because of all the parameters that had to be set up for optimum performance.

Cray’s ClusterStor was designed to make configuring, deploying and running Lustre a lot simpler with a GUI and system defaults that provided an optimal running environment. But if customers still want access to all Lustre features and functionality, all the Lustre parameters can still be tweaked to personalize it.

What sort of appliance

The ClusterStor team has created a Lustre storage appliance using two systems: a 2U-24 NVMe SSD system and a 4U-106 disk drive system. Both systems use PCIe Gen 4 buses, which offer 2X the bandwidth of Gen 3, and NVMe Gen 4 SSDs. Each ClusterStor E1000 appliance comes with 2 servers for HA plus the storage behind them.

Larry said the 2U NVMe Gen 4 appliance offers 80GB/sec of read and 60GB/sec of write data bandwidth, and a full rack of these could support ~2.5TB/sec of data bandwidth. One TB/sec seems like an awful lot to the GreyBeards; 2.5TB/sec is out of this world.
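
The rack-level figure checks out with simple multiplication if you assume around 18 appliances per rack, which is our guess rather than Cray’s published configuration:

```python
# Per 2U NVMe appliance (Cray's figures): 80 GB/s read, 60 GB/s write.
read_gbs, write_gbs = 80, 60
appliances = 18   # assumed per 42U rack, after switches/management (our guess)

rack_combined = appliances * (read_gbs + write_gbs) / 1000  # TB/s
rack_read = appliances * read_gbs / 1000                    # TB/s
print(f"combined: {rack_combined:.2f} TB/s, read-only: {rack_read:.2f} TB/s")
# combined ~= 2.52 TB/s, matching the ~2.5TB/sec per-rack figure above
```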

We asked if it supported InfiniBand interconnects. Yes, they said, it supports the latest generation of InfiniBand, but it also offers Cray’s own (SlingShot) Ethernet interconnect, unusual for HPC environments. And as in any Lustre parallel file system, servers accessing storage use Lustre client software.

ClusterStor Data Services

But on the backend, where normally one would see only LDISKFS for backend storage, ClusterStor also offers ZFS. Larry and Mark said that LDISKFS is faster but ZFS offers more functionality like snapshots and data compression.

Many of the Top 100 & Top 500 supercomputing environments are starting to deploy ML DL (machine learning-deep learning) workloads along with their normal HPC activities. But whereas HPC work has historically depended on bandwidth to read, write and move large files around, ML DL deals with small files and needs high IOPS. ClusterStor was designed to satisfy both high bandwidth and high IOPS workloads.

In previous HPC Lustre flash solutions, customers had to deal with the complexity of deciding where to place data, such as on flash or on disk. But with the new ClusterStor E1000, the system can do all this for you. That is, it will move data from disk to flash when it sees an advantage to doing so and move it back again when that advantage is gone. But, just as with the Lustre configuration parameters above, customers can still pre-stage data to flash.
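
Cray didn’t detail the placement heuristics on the call, so here is only a toy temperature-based tiering loop, with invented thresholds, to convey the flash/disk movement idea:

```python
from dataclasses import dataclass

@dataclass
class Extent:
    id: int
    tier: str              # "flash" or "disk"
    pinned: bool = False   # user pre-staged data stays on flash

def retier(extents, heat, promote_at=100, demote_at=10):
    """Toy policy: promote hot extents to flash, demote cold ones to disk."""
    for ext in extents:
        ios = heat.get(ext.id, 0)          # recent IOs against this extent
        if ext.tier == "disk" and ios >= promote_at:
            ext.tier = "flash"             # advantage seen: promote
        elif ext.tier == "flash" and ios <= demote_at and not ext.pinned:
            ext.tier = "disk"              # advantage gone: demote

exts = [Extent(1, "disk"), Extent(2, "flash"), Extent(3, "flash", pinned=True)]
retier(exts, heat={1: 500, 2: 3, 3: 0})
assert [e.tier for e in exts] == ["flash", "disk", "flash"]
```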

The other challenge for HPC environments is extreme size. Cray and others are starting to see requirements for exascale (exabyte, 10**18 bytes) storage systems. In fact, Cray already has a couple of ClusterStor E1000 configurations of 400PB or more. As these systems age they may indeed grow to exceed an exabyte.

With an exabyte of data, systems need to support billions of files/inodes along with better metadata services and indexing. ClusterStor offers optimized inode indexing and search to enable HPC users to quickly find the data they are looking for. Further, ClusterStor offers data at rest encryption and supports virtual file systems for multi-tenancy.

With a ZFS backend, ClusterStor can supply data compression and snapshots. Cray has tested ZFS compression on HPC scientific (mostly already application compressed) data and still sees ~30% reduction in storage footprint. At an exabyte of storage, 30% can be a significant cost reduction.
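
At exabyte scale even a modest reduction ratio is real money. A quick worked example (the price per PB is purely hypothetical):

```python
raw_pb = 1000          # one exabyte = 1,000 PB of scientific data
reduction = 0.30       # ~30% footprint reduction seen with ZFS compression
cost_per_pb = 100_000  # hypothetical $ per PB of backend capacity

saved_pb = raw_pb * reduction
print(f"capacity avoided: {saved_pb:.0f} PB, "
      f"~${saved_pb * cost_per_pb / 1e6:.0f}M at our assumed price")
# 300 PB avoided, ~$30M at $100K/PB, before power/space/cooling savings
```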

The podcast ran long, ~46 minutes. Larry and Mark had a good knowledge of the HPC storage space and were easy to talk with. Matt’s an old ZFS hand, so he wanted to talk even more about ZFS. I had a good time discussing ClusterStor and Lustre features/functionality and how HPC workloads are changing. Listen to the podcast to learn more. [The podcast was recorded on November 6th, not the 5th as mentioned in the lead in, Ed.]


Larry Jones, Director Storage Product Management

Larry Jones is a director of storage product management for Cray, a Hewlett Packard Enterprise company.

Jones previously held senior product management roles at Seagate, DDN and Panasas.

Mark Wiertalla, Director Storage Product Marketing

Mark Wiertalla is a product marketing director for Cray, a Hewlett Packard Enterprise company.

Prior to Cray, Wiertalla held product manager roles at EMC and SGI.

92: Ray talks AI with Mike McNamara, Sr. Manager, AI Solution Mkt., NetApp

Sponsored By: NetApp

NetApp’s been working in the AI DL (deep learning) space for a long time now and announced their partnership with NVIDIA on DGX systems back in August of 2018. At NetApp Insight this week, they were showing off their new NVIDIA DGX reference architectures. These architectures use NetApp AFF A800 storage (for more info on AI DL, check out Ray’s Learning Machine (deep) Learning posts – part 1, part 2 and part 3).

Besides the ONTAP AI systems, NetApp also offers

  • FlexPod AI solution based on their partnership with Cisco, using UCS C480 ML M5 rack servers which include 8 NVIDIA Tesla V100 GPUs and also feature NetApp AFF A800 storage, for use in core AI DL.
  • NetApp HCI has two configurations with 2 or 3 NVIDIA GPUs that come in 1U or 2U rack servers and run VMware vSphere or Red Hat OpenStack/OpenShift software hypervisors, suitable for edge or core AI DL.
  • E-series reference architecture that uses the BeeGFS parallel file system and offers InfiniBand data access for HPC or core AI DL.

On the conference floor, NetApp showed AI DL demos for automotive, financial services, Public Sector and healthcare verticals. They also had a facial recognition application running that could estimate your age and emotional state (I didn’t try it, but Mike said they were hedging the model so it predicted a lower age).

Mike said one healthcare solution was focused on radiological image scans, to identify pathologies from X-ray, MRI, or CAT scan images. Mike mentioned there is a lot of radiological technologist burn-out due to the volume of work caused by the medical imaging explosion over the last decade or so. Mike said image analysis is something that AI DL can perform very effectively, and doing so would improve accuracy and reduce the volume of work being done by technologists.

He also mentioned another healthcare application that uses an AI DL app to count TB cells in blood samples and estimate the extent of TB infections. Historically, this has been time consuming, error prone and hard to do in the field. The app uses a microscope with a smart phone and can be deployed and run anywhere in the world.

Mike mentioned a genomics AI DL application that examines DNA sequences and tries to determine their functionality. He also mentioned a retail AI DL facial recognition application that would help women “see” what they would look like with different makeup on.

There was a lot of discussion on NetApp Cloud services at the show, such as Cloud Volumes Service and Azure NetApp Files (ANF). Both of these could easily be used to implement an AI DL application or be part of an edge to core to cloud data flow for an AI DL application deployment using NetApp Data Fabric.

NetApp also announced a new, all flash StorageGRID appliance that was targeted at heavy IO intensive uses of object store like AI DL model training and data analytics.

Finally, Mike mentioned NetApp’s ecosystem of partners working in the AI space to help customers deploy AI DL algorithms in their industries. Some of these include:

  1. Flexential, Try and Buy AI, so that customers could bring them in to supply AI DL expertise to generate an AI DL application using customer data and deploy it on customer cloud or on-prem infrastructure.
  2. Core Scientific, AI-as-a-Service, so that customers could purchase a service to implement an AI DL application using customer data and running on Core Scientific infrastructure.
  3. Scale Matrix, Mobile data center AI, so that customers could create an AI DL application and run it on Scale Matrix infrastructure that was transported to wherever the customer wanted it to be run.

We recorded the podcast on the show floor, in a glass booth, so there’s some background noise (sorry about that, but can’t be helped). The podcast is ~27 minutes. Mike is a long time friend and NetApp product expert, recently working in AI DL solutions at NetApp. When I saw Mike at Insight, I just had to ask him about what NetApp’s been doing in the AI DL space. Listen to the podcast to learn more.


Mike McNamara, Senior Manager AI Solution Marketing, NetApp

With over 25 years of data management product and solution marketing experience, Mike’s background includes roles of increasing responsibility at NetApp (10+ years), Adaptec, EMC and Digital Equipment Corporation. 

In addition to his past role as marketing chairperson for the Fibre Channel Industry Association, he was a member of the Ethernet Technology Summit Conference Advisory Board, a member of the Ethernet Alliance, a regular contributor to industry journals, and a frequent speaker at events.