099: GreyBeards talk Folding@Home with Mike Harsch, a longtime enthusiast


Mike Harsch (@harschness) is a personal friend and a computer enthusiast with a particular and enduring interest in distributed systems and GPU computing. Mike's been a longtime user and proponent of Folding@Home, a distributed computing project focused on protein dynamics that anyone can download and run on their personal computer(s) or gaming devices.

We started the discussion with the history of distributed processing on home computers. Mike first ran across these systems in 1997, running one in his college dorm room. At the time, a project called distributed.net was attempting to crack the RC5-56[bit] encryption keys used for computer security and offered a $10K prize for solving it. The key was cracked in 250 days (source: Wikipedia article on distributed.net). Distributed.net is still up and running, but it has since moved on to ever larger keys.

Next came SETI@Home, a second-generation distributed system. SETI@Home sent out slices of recorded radio telescope spectrum and tasked people's computers (while the screen saver ran) with analyzing that spectrum for alien signals. SETI@Home painted a nice image of the analysis in progress. It also used some gamification: users gained points for analyzing spectrum, and over time a leader board tracked the top users. Recently, SETI@Home shut down its distributed system and changed focus to analyzing all the results received from its users. I was a SETI@Home user for a while.

Folding@Home

Folding@Home is a third-generation distributed computing solution built along the same lines, but rather than searching for aliens, with Folding@Home you are running a simulation of what a protein molecule does over time. Mike mentioned that a typical Folding@Home work unit simulates a few nanoseconds in the life of a protein, which can take an hour or more on an x86-class multi-core CPU (less on a GPU).
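
To make "simulating a protein" a bit more concrete, here's a toy molecular dynamics sketch in Python. This is purely illustrative, not Folding@Home's actual core (which wraps production MD engines); the particle count, potential, and parameters are all made up. But it shows the shape of the computation inside a work unit: millions of tiny time steps, each advancing atom positions and velocities.

```python
import numpy as np

def lennard_jones_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces between atoms (toy parameters)."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            # -dU/dr for U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
            f_mag = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
            forces[i] += f_mag * r_vec / r
            forces[j] -= f_mag * r_vec / r
    return forces

def run_work_unit(pos, vel, dt=1e-3, steps=10_000):
    """Velocity-Verlet integration: the inner loop of an MD work unit."""
    f = lennard_jones_forces(pos)
    for _ in range(steps):
        pos += vel * dt + 0.5 * f * dt ** 2      # advance positions
        f_new = lennard_jones_forces(pos)
        vel += 0.5 * (f + f_new) * dt            # advance velocities
        f = f_new
    return pos, vel

# Five "atoms" placed at random; a real protein has tens of thousands.
rng = np.random.default_rng(0)
pos, vel = run_work_unit(rng.uniform(0.0, 5.0, (5, 3)), np.zeros((5, 3)))
```

That O(n²) pairwise force loop is also why GPUs help so much: the interactions parallelize naturally across thousands of GPU cores.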

Mike mentioned a recent Ask Me Anything (AMA) event on Reddit where the Folding@Home team answered questions. And on March 15th, the Folding@Home team clarified how they are helping to combat the COVID-19 pandemic.

Keith has used Folding@Home in the past. And my son was an early user as well.

What Folding@Home does

Folding@Home uses idle CPU or GPU time on home gaming platforms/computers/servers or data center servers. Initially, in October of 2000, it was used to understand protein folding. Nowadays it's gone beyond just folding, to simulating the life of a protein.

Prior to their turn to concentrate on COVID-19, they usually had ~30K active users, supplying ~100PFlops (100 quadrillion x86 double precision floating point operations per second) of compute power.

You get points for doing Folding@Home work. When Folding@Home launched, it was designed to use a single core of a single CPU. Sometime in 2006, they released an SMP version of the code, which could use multiple cores. Later they released a multi-threaded version that worked better on multi-core CPUs. And within the last few years, they have added GPU support that can take advantage of the massive numbers of GPU cores available today.

Mike said that a Folding@Home work unit generally runs 10 to 100X faster on a GPU than on a multi-core/multi-threaded CPU system.

Around Feb 27, Folding@Home announced they were going to focus all their efforts on understanding how to combat the COVID-19 coronavirus. After the announcement, their user count went through the roof, to now ~400K active users/day. This led to throttled work requests and delays in handling responses. Over the ensuing weeks (as of 3/18), they seem to have added enough resources to support their current level of users.

The architecture of the old Folding@Home system was two tiered: a set of front-end servers handled web traffic and distributed work requests/responses to a set of backend servers that supplied work units to users and combined work results. In the latest rush they have had to add servers, networking, and storage to both tiers.

Sometime around March 25th, Folding@Home became the first and only exaFLOP supercomputer, achieving 1.56 (x86) ExaFlops (10**18 FLOPS, source: Wikipedia article on Folding@Home) with over 1 million active computing devices (GPUs & CPUs) in their network (see Greg Bowman's status tweet).

Deploying Folding@Home on your systems

Folding@Home runs on any number of endpoint device OSs and gaming console systems. It comes as two software packages: one logs into the Folding@Home servers to fetch the next work unit, and the other performs the simulation work. There's an option to paint a picture of what is happening, but most users disable this feature to devote 100% of any idle CPU/GPU resources to the simulation. They also have a support forum if you have any questions or need assistance deploying their software.
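
Conceptually, the two packages cooperate in a simple fetch/compute/return loop. Here's a hedged Python sketch of that loop; the endpoint URL, file names, and core binary name are all invented for illustration, and the real client speaks Folding@Home's own protocol rather than plain HTTP.

```python
import subprocess
import time
import urllib.request

# Hypothetical work-server endpoint -- the real client uses F@H's own protocol.
WORK_SERVER = "https://work.example.org/assign"

def fetch_work_unit() -> str:
    """Package 1: contact the project's servers and download the next work unit."""
    path = "work_unit.dat"
    urllib.request.urlretrieve(WORK_SERVER, path)
    return path

def run_simulation(work_unit: str) -> str:
    """Package 2: hand the work unit to the separate simulation-core executable."""
    result = "results.dat"
    subprocess.run(["./fah_core", work_unit, "-o", result], check=True)  # made-up core name
    return result

def upload_results(result: str) -> None:
    """Return completed results to the server (an HTTP POST here), earning points."""
    with open(result, "rb") as f:
        urllib.request.urlopen(WORK_SERVER, data=f.read())

while True:
    wu = fetch_work_unit()
    res = run_simulation(wu)   # an hour or more of CPU time, less on a GPU
    upload_results(res)
    time.sleep(5)              # then ask for the next slice of work
```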

Keith mentioned that someone at VMware asked VMware users to devote their home server CPUs/GPUs to the project. I checked their website, and they have a vSphere appliance (Fling) that will run Folding@Home and register itself as joining the VMware team. Mike mentioned that GitHub (announced on Twitter) was going to supply up to 60K CPU core hours a day to the project. The project recently reported that it is shifting work units from understanding COVID-19 to screening compounds for therapeutic potential against the coronavirus.

The world needs your help to solve the COVID-19 pandemic, so join up with Folding@Home to do your part. Downloading the software and installing it on a Mac was easy. Just don't forget to reboot afterwards, and then run FAHControl and FAHViewer in the "Applications/Folding@home" folder to see what's going on.

The podcast runs a little under 40 minutes. Mike was very knowledgeable about the IT side of Folding@Home, but was less knowledgeable about the biological side of what they are doing.  Listen to the podcast to learn more.


Mike Harsch, a computer enthusiast

Mike is a longtime computer enthusiast with particular interests in distributed systems and GPU computing. He lives in Colorado and has a basement full of (GPUs &) computers.

Mike and I have co-coached a local high school FTC robotics team for the last 4 years. And Mike has been involved with FTC robotics for much longer than that.

097: GreyBeards talk open source S3 object store with AB Periasamy, CEO MinIO

Ray was at SFD19 a few weeks ago and the last session of the week (usually dead) was with MinIO and they just blew us away (see videos of MinIO’s session here). Ray thought Anand Babu (AB) Periasamy (@ABPeriasamy), CEO MinIO, who was the main presenter at the session, would be a great invite for our GreyBeards podcast. Keith and I had a ball talking with AB.

Why object store

There’s something afoot in object storage space over the last year or so. It seems everybody is looking to deploy object store whether that be on prem, in CoLo facilities and in the cloud. It could be just the mass of data coming online but that trend has remained the same for years no. No it’s something else.

It all starts with AWS and S3. Over the last couple of years AWS has been rolling out new functionality that only works with S3 and this has been driving even more adoption of S3 as well as other object storage solutions.

S3 compatible object stores are available in just about every cloud service, available from major (and minor) storage vendors and in open source from MinIO.

Why S3 is so popular

Object stores are accessed via RESTful interfaces, and traditionally most implementations used their own API. But when AWS created S3 (Simple Storage Service) with its own API/SDK, that API somehow became the de facto standard interface for all other object stores. S3 compatibility became a significant feature that every object store had to support.

Sometime after that, MinIO came into existence. MinIO provides a 100% open source, fully AWS S3 compatible object store that you can run anywhere: on prem, in colo facilities, and indeed in the cloud. In fact, there are customers that run MinIO in AWS. AB says this is probably just customers using a packaged software solution that happens to include MinIO, but it's nonetheless more expensive than AWS S3, as it uses EC2 instances and EBS storage to create the object store.

Customers can access MinIO object stores with either the AWS S3 SDK or the MinIO SDK, and the same two SDKs also work against AWS S3 storage. Occasionally, AWS S3 updates have broken MinIO's SDK, but these have later been fixed by AWS. It seems AWS and MinIO are on good terms.
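
Here's a minimal sketch of what that S3 compatibility means in practice, using Python's boto3 (the standard AWS SDK). The only difference between targeting MinIO and targeting AWS S3 is the endpoint URL and credentials; the endpoint address, keys, and bucket name below are made-up examples, and the bucket is assumed to already exist on both targets.

```python
import boto3

# Point boto3 at a MinIO server instead of AWS S3 -- only the endpoint
# and credentials change. (Endpoint, keys, and bucket name are examples.)
minio = boto3.client(
    "s3",
    endpoint_url="http://minio.example.local:9000",   # hypothetical on-prem MinIO
    aws_access_key_id="MINIO_ACCESS_KEY",
    aws_secret_access_key="MINIO_SECRET_KEY",
)
aws = boto3.client("s3")   # defaults to the real AWS S3 endpoint

# Identical calls work against both object stores.
for client in (minio, aws):
    client.put_object(Bucket="podcast-archive", Key="episode-097.txt",
                      Body=b"GreyBeards talk open source S3 object store")
    obj = client.get_object(Bucket="podcast-archive", Key="episode-097.txt")
    print(obj["Body"].read())
```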

AB mentioned that when customers get up to a few PBs of AWS S3 storage, they often find the costs to be too high. It's at this point that they start looking at other object storage solutions. And because MinIO is 100% S3 compatible and open source, many of these customers deploy it in their own data center facilities or in colo environments.

For those customers that want it, MinIO also offers an S3 gateway. With the gateway, on-prem customers can use S3 or standard file services to access S3 object storage located in the cloud. The gateway also works in the public cloud and can support both AWS S3 and Microsoft Azure Blob storage as a backend.

MinIO matches AWS S3 features

AWS S3 has a number of great features and MinIO has matched or exceeded them all, step by step. AWS S3 has cross region replication options where customers can replicate S3 data from one region to another. MinIO supports both asynchronous replication of S3 data and synchronous replication (using RADIO).

But MinIO adds support for erasure coding within a fault domain. The default is Nx2 erasure coding, which duplicates all your data so that as long as half of your servers and storage are available, you continue to have access to all of it. But this can be configured down, e.g. to 12+4, where data is split across 16 servers, any four of which can fail while data remains accessible.
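
The trade-off is easy to see with a little arithmetic. With an N+M layout, data is split into N data shards plus M parity shards spread across N+M servers, any M of which can fail. The sketch below is my own illustration of that arithmetic, not MinIO internals.

```python
# Erasure coding trade-offs (illustrative arithmetic, not MinIO code).
def ec_profile(data_shards: int, parity_shards: int) -> dict:
    total = data_shards + parity_shards
    return {
        "servers": total,
        "tolerates_failures": parity_shards,
        "storage_overhead": total / data_shards,   # raw bytes per usable byte
        "usable_fraction": data_shards / total,
    }

# Default Nx2-style 8+8 layout: every byte effectively stored twice,
# and half the servers can fail.
print(ec_profile(8, 8))    # overhead 2.0x, tolerates 8 of 16 failures

# 12+4 layout: only ~1.33x raw storage, still survives any 4 of 16 failures.
print(ec_profile(12, 4))   # overhead ~1.33x, tolerates 4 of 16 failures
```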

AWS customers can use a Snowball (a standalone storage device) to transfer data to or from S3 storage. AWS Snowball implements a subset of the S3 API and requires a NAS staging area of equivalent size to migrate data out of S3. MinIO supports Snowball's limited S3 API, so Snowballs can be used to migrate data into or out of MinIO. MinIO has a blog post describing their support for AWS Snowball.

AWS also offers S3 Lambda services, or serverless computing, where compute can be invoked when data lands in a bucket and turned off when no longer needed. AWS Lambda depends on AWS messaging and other services to work properly. MinIO supports Lambda-like functionality using other open source services; AB mentioned MQTT and Kafka. MinIO has another blog post discussing their Lambda-like services based on Kafka.
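
A sketch of what that Kafka-based pattern might look like from the consuming side: MinIO is configured (out of band) to publish S3-style bucket-notification events to a Kafka topic, and a small consumer fires a "function" per object as events arrive. The topic name, broker address, and handler below are assumptions for illustration; the event record layout follows the S3 notification format MinIO emits.

```python
import json
from kafka import KafkaConsumer   # pip install kafka-python

# Assumes MinIO has been configured to publish bucket notifications
# to this (hypothetical) Kafka topic.
consumer = KafkaConsumer(
    "minio-events",
    bootstrap_servers="kafka.example.local:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

def handle_new_object(bucket: str, key: str) -> None:
    """Stand-in for the 'serverless' work triggered by an object upload."""
    print(f"processing s3://{bucket}/{key}")

for msg in consumer:
    # MinIO emits S3-style event records; pull out bucket and object names.
    for record in msg.value.get("Records", []):
        s3 = record["s3"]
        handle_new_object(s3["bucket"]["name"], s3["object"]["key"])
```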

We also discussed Snowflake, a SQL data warehouse that runs on AWS and uses S3 storage to hold its data. Ray and Keith almost choked on that statement, as object storage and databases never used to be uttered in the same breath. But what Snowflake has shown is that you can use an object store for database data, as long as you are willing to load tables into memory, process them there, and then unload any modified table data back into the object store. Indexing of the object data seems to be done as the data is loaded, also in a (random IO) cache or in memory, and once built the indexes can likewise be unloaded into the object store.
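
That load/process/unload pattern is easy to sketch. The snippet below shows the idea (not Snowflake's implementation; bucket, key, and column names are made up): pull table data out of the object store, query and modify it entirely in memory, then push the modified table back.

```python
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")
BUCKET, KEY = "warehouse-bucket", "tables/orders.csv"   # hypothetical names

# 1. Load the table from the object store into memory.
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
orders = pd.read_csv(io.BytesIO(body))

# 2. Query and modify it entirely in memory.
orders["total"] = orders["qty"] * orders["unit_price"]
print(orders[orders["total"] > 1000].head())

# 3. Unload the modified table back into the object store.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=orders.to_csv(index=False).encode())
```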

Now, Snowflake uses S3 but isn't available on prem. MinIO has a number of database partners that use its object store as a backend to host Snowflake-like services on prem. AB mentioned Spark and Splunk, but there are others as well.

We ended the discussion with what it means to have 20K stars on GitHub. AB said that for a JavaScript project, getting 20K stars would be easy, but you just don't see this sort of open source popularity for storage systems. He said the number is interesting, but the growth rate is even more interesting.

The podcast runs ~47 minutes. AB was great to talk tech with. Keith and I could have talked all afternoon with him; it was very hard to stop the recording, as we could have gone on for another hour or more. AB said he doesn't like to do podcasts or videos, but he had no problem with us firing away questions. Listen to the podcast to learn more.


Anand Babu Periasamy, CEO MinIO

AB Periasamy is the CEO and co-founder of MinIO. One of the leading thinkers and technologists in the open source software movement, AB was a co-founder and CTO of GlusterFS, which was acquired by Red Hat in 2011. Following the acquisition, he served in the office of the CTO at Red Hat prior to founding MinIO in late 2015. AB is an active angel investor and serves on the boards of H2O.ai and the Free Software Foundation of India.

He earned his BE in Computer Science and Engineering from Annamalai University.

096: GreyBeards YE2019 IT Industry Trends podcast

In this, our year-end industry wrap-up episode, the GreyBeards discuss trends and technologies impacting the IT industry in 2019 and what's ahead for 2020. This year we have Matt and Keith on the podcast along with Ray. Just like last year, we start off with NVMeoF.

NVMeoF unleashed

This year just about every major storage vendor announced new systems that support NVMeoF or added NVMeoF support to their existing storage systems. Most offer FC-based NVMeoF, but a few offer NVMeoF/Ethernet, and fewer still offer both.

All of the NVMeoF/Ethernet implementations seem to be using RoCE or iWARP. It's unclear if one is used more often than the other, so for now both continue to be used in the market. Some storage vendors are offering NVMeoF as an internal fabric to access storage while still using iSCSI or FC/SCSI for host access to the data. This works better than SAS but won't provide all the performance you can get from end-to-end NVMeoF.

NVMeoF is all about increasing IOPS and reducing response times. That, and getting ready for SCM SSDs. In the meantime, the SSD industry has introduced some very attractive NVMe (NAND) SSDs that, in an NVMeoF storage system, can increase IOPS and reduce latencies.

We talked last year about NVMeoF standards finally stabilizing and this year the rollout across enterprise storage systems is testament to that.

SCM hits the enterprise

Most of us attended an Intel Data Center Event earlier this past year, where Optane DC PM was introduced. Optane DC PM is the memory version of Intel's Optane SCM (3D XPoint) technology. Intel offers two distinct modes of accessing Optane DC PM as memory: 1) App Direct mode, where data in Optane DC PM persists across power cycles but requires applications to use a special API; and 2) Memory mode, where Optane DC PM is cleared during a power cycle (see our RayOnStorage post Need memory, Intel's Optane DC PM…).
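
To give a feel for App Direct mode: on Linux, persistent memory is typically surfaced as a DAX-capable filesystem mounted on the Optane DC PM, and an application memory-maps a file on it to get direct load/store access. The sketch below illustrates that idea in Python; the mount path is hypothetical, and production code would use Intel's PMDK (from C) for proper flush/fence semantics rather than plain mmap.

```python
import mmap
import os

# Hypothetical DAX mount point backed by Optane DC PM in App Direct mode.
PMEM_FILE = "/mnt/pmem0/app_state.bin"

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, 4096)                 # reserve one page of persistent memory

buf = mmap.mmap(fd, 4096)              # loads/stores go straight to the media
buf[0:13] = b"survives boot"           # this data persists across power cycles
buf.flush()                            # msync: make sure the writes are durable
buf.close()
os.close(fd)
```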

Vendors seem to be using Optane memory and SCM technology differently. Pure is using Optane SSDs plugged into their FlashArray as a sort of read cache for customer IO. They suggest that for well-behaved applications this can reduce IO response times considerably.

Dell EMC introduced SCM as a storage tier and are using their automated storage tiering to move the hottest data to SCM. Oracle’s latest Exadata appliance uses Optane DC PM as both a read and write caching layer.

It won’t be long before every enterprise vendor offers SCM drives in their storage systems with a few offering Optane DC PM as in memory caching technology.

Of course, the big news for Optane DC PM is its use in in-memory databases, specifically SAP HANA. HANA can take advantage of the (6) TB of memory to handle larger databases. Keith mentioned that even Microsoft SQL Server can take advantage of the additional memory to provide faster responses to queries.

Keith also mentioned that there are some systems out there that can be configured to share Optane memory (or storage). When SAP or other databases use this solution they are able to amortize the cost of the technology over more use cases.

Of course, Optane DC PM is only available on the latest generation of Intel processors. None of us have heard anything from AMD (or Micron) about providing a second source of support for Optane DC PM (or the memory technology itself). Presumably most customers would want a second source for Optane DC PM processor support (as well as for the technology).

Cloud enterprise storage hits mainstream

The other thing we saw more of this year is enterprise vendors offering versions of storage in public cloud environments. NetApp was an early proponent of doing this.

We saw at Pure that they have a new Cloud Block Store, which is a re-architected version of FlashArray//X storage using AWS hardware and networking services. We were very impressed with what they have accomplished, and it was the subject of more than one late night discussion. Listen to the Keith & Ray show at Pure//Accelerate 2019 podcast to learn more.

Matt mentioned Nimble’s cloud volume storage which is cloud adjacent. Most enterprise vendors offer something similar today. They differentiate on how easy it is to configure, use and where (which regions) it’s available in.

NetApp has arguably been at this the longest and has the deepest offerings available from cloud adjacent file and block storage, to offering native enterprise file services for all public cloud environments, to supplying a suite of dedicated data services to surround all of their storage technology operating in public clouds and on premises.

While Dell EMC may have missed the turn to the cloud, they are quickly trying to catch up. Keith mentioned Faction, a Dell partner that offers cloud storage services using VMware with VMC. With Faction and vSAN customers have access to software defined storage that uses cloud hardware to support data services.

What’s driving data growth

There seems to be no end to the need for storage to store data. The GreyBeards point to three trends driving data growth today.

  1. IoT seems to have no bounds. A recent RayOnStorage post Internet of Tires discussed how tire companies were tying their tires to the internet. And that’s just the start, pretty soon every artifact, every device, every manufactured item will have a number of sensors attached all of which will be creating massive amounts of data.
  2. AI ML DL has an insatiable appetite for data. IoT is being used largely to optimize products and services, but it's DL, with a large dollop of data, that is behind much of that optimization.
  3. SaaS is a relatively new application approach that's being rolled out to more arenas, and as it's online and user oriented, it seems to generate lots of data.

Containers storage debate

We closed the podcast with a heavy debate on whether container applications have a need for storage. Keith was adamant that containers by their very nature are stateless and that Kubernetes' ability to stop and start container applications at will almost requires stateless operation.

Ray was a bit more theoretical on the topic and believed that most container applications today take advantage of some sort of database or other services to store state and that state is just another word for storage.

Keith mentioned encoding as a typical container app. Encoding containers can be fired up and taken down at will without hurting anything but throughput. Yes, but those encoder container apps must access some database or other state information to find out what work is left to do, and as they complete their work they update this data as well as store their newly encoded segments. This all involves the use of state information.
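
Ray's point can be sketched in a few lines of Python. The encoder container below is stateless and disposable, yet every work cycle begins and ends with reads/writes against external state; the queue name, buckets, and key layout are invented for illustration.

```python
import boto3
import redis   # pip install redis

# External shared state: a Redis work queue and S3-compatible object buckets.
# (Host, bucket, and key names are hypothetical.)
state = redis.Redis(host="state-store.example.local")
s3 = boto3.client("s3")

def fake_encode(raw: bytes) -> bytes:
    """Stand-in for the real video encoder."""
    return raw[::-1]

while True:
    # 1. Ask the shared state store what work is left to do.
    segment_id = state.lpop("segments:todo")
    if segment_id is None:
        break   # no work left; this container can be killed at any time

    seg = segment_id.decode()
    raw = s3.get_object(Bucket="video-in", Key=seg)["Body"].read()

    # 2. Do the stateless compute.
    encoded = fake_encode(raw)

    # 3. Store the result and update shared state -- storage by another name.
    s3.put_object(Bucket="video-out", Key=seg, Body=encoded)
    state.sadd("segments:done", seg)
```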

In the end, I think we were talking about the same thing but using different terminology. Keith believes that persistent state information is needed and Ray says that this is just another word for (containers) storage. Matt said we probably need Nigel (@NigelPoulton) on the podcast to straighten us both out.

The podcast ran a bit long and could have run longer. Keith and Matt bring a systems-level perspective to what's happening in the storage market, but they come at it from different sides. Ray seems to frame everything from a storage perspective. Diverse perspectives lead to a fuller and more interesting discussion. Listen to the podcast to learn more.



Ray Lucchesi (@RayLucchesi) is the host of GreyBeardsOnStorage and is President/Founder of Silverton Consulting, and a prominent blogger at RayOnStorage.com.

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor and blogs at Virtualized Geek.

Matt Leib (@MBLeib), one of our co-hosts, has been blogging in the storage space for over 10 years, with work experience in both engineering and presales/product marketing. His blog is at Virtually Tied to My Desktop.


93: GreyBeards talk HPC storage with Larry Jones, Dir. Storage Prod. Mngmt. and Mark Wiertalla, Dir. Storage Prod. Mkt., at Cray, an HPE Enterprise Company

Supercomputing Conference 2019 (SC19) is coming to Denver next week, and in anticipation of that show we thought it would be good to talk with an HPC storage group. We contacted HPE and, given their recent acquisition of Cray, they offered up Larry and Mark to talk about their new ClusterStor E1000 storage system.

There are a number of components that go into Cray supercomputers, and besides ClusterStor, Larry and Mark mentioned their new Slingshot cluster interconnect, which is Ethernet based with significant enhancements to congestion handling. But the call focused on ClusterStor.

What is ClusterStor

ClusterStor is a Lustre file system hardware appliance. Lustre has always been popular with the HPC crowd as it offers high bandwidth file services. But Lustre often took a team of (PhD) scientists to configure, deploy, and run properly because of all the parameters that had to be set up for optimum performance.

Cray’s ClusterStor was designed to make configuring, deploying and running Lustre a lot simpler with a GUI and system defaults that provided an optimal running environment. But if customers still want access to all Lustre features and functionality, all the Lustre parameters can still be tweaked to personalize it.

What sort of appliance

The ClusterStor team has created a Lustre storage appliance using two systems: a 2U-24 NVMe SSD system and a 4U-106 disk drive system. Both systems use PCIe Gen 4 buses, which offer 2X the bandwidth of Gen 3, and NVMe Gen 4 SSDs. Each ClusterStor E1000 appliance comes with 2 servers for HA plus the storage behind them.

Larry said the 2U NVMe Gen 4 appliance offers 80GB/sec of read and 60GB/sec of write data bandwidth, and a full rack of these could support ~2.5TB/sec of data bandwidth. One TB/sec seems like an awful lot to the GreyBeards; 2.5TB/sec is out of this world.

We asked if it supported InfiniBand interconnects. Yes, they said, it supports the latest generation of InfiniBand, but it also offers Cray's own (Slingshot) Ethernet interconnect, unusual for HPC environments. And as in any Lustre parallel file system, servers accessing storage use Lustre client software.

ClusterStor Data Services

But on the backend, where normally one would see only LDISKFS for backend storage, ClusterStor also offers ZFS. Larry and Mark said that LDISKFS is faster but ZFS offers more functionality like snapshots and data compression.

Many of the Top 100 & Top 500 supercomputing environments are starting to deploy ML DL (machine learning-deep learning) workloads along with their normal HPC activities. But whereas HPC work has historically depended on bandwidth to read, write and move large files around, ML DL deals with small files and needs high IOPS. ClusterStor was designed to satisfy both high bandwidth and high IOPS workloads.

In previous HPC Lustre flash solutions, customers had to deal with the complexity of where to place data, such as on flash or on disk. But with the new ClusterStor E1000, the system can do all this for you. That is, it will move data from disk to flash when it sees an advantage to doing so and move it back again when that advantage is gone. But, just as with the Lustre configuration parameters above, customers can still pre-stage data to flash.

The other challenge for HPC environments is extreme size. Cray and others are starting to see requirements for exascale (exabyte, 10**18 bytes) storage systems. In fact, Cray already has a couple of ClusterStor E1000 configurations of 400PB or more. As these systems age, they may indeed grow to exceed an exabyte.

With an exabyte of data, systems need to support billions of files/inodes as well as better metadata services and indexing. ClusterStor offers optimized inode indexing and search to enable HPC users to quickly find the data they are looking for. Further, ClusterStor offers data-at-rest encryption and supports virtual file systems for multi-tenancy.

With a ZFS backend, ClusterStor can supply data compression and snapshots. Cray has tested ZFS compression on HPC scientific data (mostly already compressed by the application) and still sees ~30% reduction in storage footprint. At an exabyte of storage, 30% can be a significant cost reduction.

The podcast ran long, ~46 minutes. Larry and Mark had a good knowledge of the HPC storage space and were easy to talk with. Matt's an old ZFS hand, so he wanted to talk even more about ZFS. I had a good time discussing ClusterStor and Lustre features/functionality and how HPC workloads are changing. Listen to the podcast to learn more. [The podcast was recorded on November 6th, not the 5th as mentioned in the lead-in, Ed.]


Larry Jones, Director Storage Product Management

Larry Jones is a director of storage product management for Cray, a Hewlett Packard Enterprise company.

Jones previously held senior product management roles at Seagate, DDN and Panasas.

Mark Wiertalla, Director Storage Product Marketing

Mark Wiertalla is a product marketing director for Cray, a Hewlett Packard Enterprise company.

Prior to Cray, Wiertalla held product manager roles at EMC and SGI.

92: Ray talks AI with Mike McNamara, Sr. Manager, AI Solution Mkt., NetApp

Sponsored By: NetApp

NetApp’s been working in the AI DL (deep learning) space for a long time now and announced their partnership with NVIDIA DGX systems, back in August of 2018. At NetApp Insight, this week they were showing off their new NVIDIA DGX systems reference architectures. These architectures use NetApp AFF A800 storage (for more info on AI DL, checkout Ray’s Learning Machine (deep) Learning posts – part 1, – part 2 and – part3).

Besides the ONTAP AI systems, NetApp also offers:

  • A FlexPod AI solution, based on their partnership with Cisco, using UCS C480 ML M5 rack servers, which include 8 NVIDIA Tesla V100 GPUs and also feature NetApp AFF A800 storage, for use in core AI DL.
  • NetApp HCI, with two configurations of 2 or 3 NVIDIA GPUs that come in 1U or 2U rack servers and run VMware vSphere or Red Hat OpenStack/OpenShift software hypervisors, suitable for edge or core AI DL.
  • An E-Series reference architecture that uses the BeeGFS parallel file system and offers InfiniBand data access, for HPC or core AI DL.

On the conference floor, NetApp showed AI DL demos for automotive, financial services, Public Sector and healthcare verticals. They also had a facial recognition application running that could estimate your age and emotional state (I didn’t try it, but Mike said they were hedging the model so it predicted a lower age).

Mike said one healthcare solution was focused on radiological image scans, to identify pathologies from X-ray, MRI, or CAT scan images. Mike mentioned there has been a lot of radiology technologist burnout due to the volume of work caused by the medical imaging explosion over the last decade or so. Mike said image analysis is something that AI DL can perform very effectively, and doing so would improve accuracy and reduce the volume of work being done by technologists.

He also mentioned another healthcare application that uses an AI DL app to count TB cells in blood samples and estimate the extent of TB infections. Historically, this has been time consuming, error prone and hard to do in the field. The app uses a microscope with a smart phone and can be deployed and run anywhere in the world.

Mike mentioned a genomics AI DL application that examined DNA sequences and tried to determine its functionality. He also mentioned a retail AI DL facial recognition application that would help women “see” what they would look like with different makeup on.

There was a lot of discussion of NetApp Cloud services at the show, such as Cloud Volumes Service and Azure NetApp Files (ANF). Both of these could easily be used to implement an AI DL application or be part of an edge-to-core-to-cloud data flow for an AI DL application deployment using NetApp Data Fabric.

NetApp also announced a new, all flash StorageGRID appliance that was targeted at heavy IO intensive uses of object store like AI DL model training and data analytics.

Finally, Mike mentioned NetApp’s ecosystem of partners working in the AI space to help customers deploy AI DL algorithms in their industries. Some of these include:

  1. Flexential, Try and Buy AI: customers can bring Flexential in to supply AI DL expertise, generate an AI DL application using customer data, and deploy it on customer cloud or on-prem infrastructure.
  2. Core Scientific, AI-as-a-Service: customers can purchase a service to implement an AI DL application using customer data, running on Core Scientific infrastructure.
  3. Scale Matrix, mobile data center AI: customers can create an AI DL application and run it on Scale Matrix infrastructure that is transported to wherever the customer wants it to run.

We recorded the podcast on the show floor, in a glass booth, so there's some background noise (sorry about that, but it couldn't be helped). The podcast is ~27 minutes. Mike is a longtime friend and NetApp product expert, recently working on AI DL solutions at NetApp. When I saw Mike at Insight, I just had to ask him what NetApp's been doing in the AI DL space. Listen to the podcast to learn more.


Mike McNamara, Senior Manager AI Solution Marketing, NetApp

With over 25 years of data management product and solution marketing experience, Mike’s background includes roles of increasing responsibility at NetApp (10+ years), Adaptec, EMC and Digital Equipment Corporation. 

In addition to his past role as marketing chairperson for the Fibre Channel Industry Association, he was a member of the Ethernet Technology Summit Conference Advisory Board and the Ethernet Alliance, a regular contributor to industry journals, and a frequent speaker at events.