101: Greybeards talk with Howard Marks, Technologist Extraordinary & Plenipotentiary at VAST

As most of you know, Howard Marks (@deepstoragenet), Technologist Extraordinary & Plenipotentiary at VAST Data, used to be a GreyBeards co-host and is still on our roster as a co-host emeritus. When I started to schedule this podcast, it was going to be our 100th episode, and we wanted to invite Howard and the rest of the co-hosts on the call to discuss the podcast itself. But alas, the 100th GreyBeards podcast came and went before we could get it done. So we decided to refocus this episode back on VAST Data.

We talked with Howard last year about VAST, and some of this podcast covers the same ground (see last year’s podcast with Howard on VAST Data), but below I highlight different aspects of their product that we also discussed.

For starters, VAST just finalized a recent round of funding which, if I recall correctly, valued them at over $1B USD, making them yet another data storage unicorn.

VAST is a scale out, disaggregated, unstructured data platform that takes advantage of the economics of QLC SSD (from Intel) combined with the speed of 3D XPoint storage class memory (Optane SSD, also from Intel) to support customer data. Intel is an investor in VAST.

VAST uses multiple front end (controller) servers with one or more HA NVMe drive module(s), connected via dual InfiniBand or 100Gbps Ethernet RDMA cluster interconnects. The HA NVMe drive module has two adapter cards (IO modules), one per connection, that take IO and data requests and transfer them across a PCIe bus to the QLC and Optane SSDs. They also have a Mellanox (another investor) switch on their backend, with round robin DNS routing host connections to their storage (front end) servers.

Each backend HA NVMe drive module has 12 1.5TB Optane U.2 SSDs and 44 15.4TB QLC SSDs, for a total of 56 drives. Customer data is first written to Optane and then destaged to QLC SSD.
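
To put those enclosure numbers in perspective, here's a quick back-of-the-envelope calculation (my arithmetic, using only the drive counts and sizes above):

```python
# Back-of-the-envelope capacity for one VAST HA NVMe drive enclosure
optane_drives, optane_tb = 12, 1.5    # Optane U.2 SSDs (write buffer + metadata)
qlc_drives, qlc_tb = 44, 15.4         # QLC capacity SSDs

optane_total = optane_drives * optane_tb   # 18.0 TB of storage class memory
qlc_total = qlc_drives * qlc_tb            # 677.6 TB of QLC flash

print(f"Optane: {optane_total} TB, QLC: {qlc_total:.1f} TB, "
      f"Optane is {optane_total / qlc_total:.1%} of QLC capacity")
```

So each enclosure fronts roughly two-thirds of a petabyte of QLC with a comparatively thin (~2.7%) layer of Optane.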

QLC has the advantage of being 4 bits per cell (for a lower $/GB stored), but its endurance, or drive writes per day (dw/d), is significantly worse than TLC's. So VAST has had to work to increase QLC endurance in their system.

Natively, QLC offers ~0.2 dw/d when doing random 4K writes. However, if your system does 128KB sequential writes, it offers 4.0 dw/d. VAST destages data from Optane SSDs to QLC in 1MB chunks, which both optimizes endurance and reduces garbage collection write amplification within the drive.
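
To see what those endurance numbers mean in absolute terms, here's a rough sketch (my arithmetic from the figures above, not VAST's published specs):

```python
# What drive-writes-per-day (dw/d) means in absolute terms for a 15.4TB QLC SSD
capacity_tb = 15.4

for workload, dwpd in [("random 4K writes", 0.2), ("large sequential writes", 4.0)]:
    budget_tb_per_day = capacity_tb * dwpd
    print(f"{workload}: {dwpd} dw/d = {budget_tb_per_day:.1f} TB written/day")

# random 4K writes: 0.2 dw/d = 3.1 TB written/day
# large sequential writes: 4.0 dw/d = 61.6 TB written/day
# Buffering on Optane and destaging in big chunks moves each drive from the
# first regime toward the second, roughly a 20x endurance improvement.
```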

Howard mentioned their front end servers are stateless, i.e., they maintain no state information about any ongoing IO activity. Any IO state information is maintained by their system in Optane SSDs. Each server maintains a work-log-like structure on Optane that describes what it is doing in support of host IO and other activities. That way, if one front end server goes down, another one can access its log and take over its activity.
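
As a minimal sketch of the takeover idea (hypothetical structures and names; not VAST's actual on-Optane log format):

```python
# Hypothetical sketch: stateless front ends journaling work to shared storage,
# so a survivor can replay a failed peer's log. Not VAST's actual format.
import time

shared_logs = {}  # stands in for per-server work logs on mirrored Optane SSDs

def log_operation(server_id, op):
    """Record an in-flight operation before acting on it (write-ahead style)."""
    shared_logs.setdefault(server_id, []).append(
        {"ts": time.time(), "op": op, "done": False})

def complete_operation(server_id, index):
    shared_logs[server_id][index]["done"] = True

def take_over(failed_server, survivor):
    """Survivor replays the failed server's unfinished operations."""
    pending = [e for e in shared_logs.get(failed_server, []) if not e["done"]]
    for entry in pending:
        log_operation(survivor, entry["op"])  # re-journal, then redo the work
    return len(pending)

log_operation("cnode-1", "write blk 42")
print(take_over("cnode-1", "cnode-2"))  # -> 1 unfinished op picked up
```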

Metadata is also maintained only on Optane SSDs. Howard called their metadata structure a V-tree (a B-tree variant). VAST mirrors all metadata and customer data to two Optane SSDs, so if one Optane SSD goes down, its pair can be used to continue operations.

In last year's podcast we talked at length about VAST's data protection and data reduction capabilities, so we won't discuss these any further here.

However, one thing worth noting is that VAST uses a very large RAID (erasure code protection) stripe. Data is written to the QLC SSDs in a VAST-designed, locally decodable erasure coding format.

One problem with large stripes is rebuild time. VAST's locally decodable parity codes help with this, as does distributing rebuild IO activity across all front end servers in the system.
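
Some illustrative arithmetic on why wide stripes and distributed rebuild both matter (the stripe geometries and rebuild rate below are hypothetical, not VAST's published numbers):

```python
# Illustrative arithmetic: wide erasure code stripes cut capacity overhead,
# and spreading rebuild across servers cuts rebuild time. Geometries below
# are hypothetical, not VAST's published layout.
def parity_overhead(data, parity):
    return parity / (data + parity)

for data, parity in [(4, 2), (12, 4), (100, 4)]:
    print(f"{data}+{parity}: {parity_overhead(data, parity):.1%} overhead")
# 4+2: 33.3%, 12+4: 25.0%, 100+4: 3.8% -- wider stripes waste less capacity

# Rebuild: if one server rebuilds at r GB/s, n servers sharing the work
# finish roughly n times faster (ignoring coordination costs).
drive_tb, rebuild_gbps, servers = 15.4, 1.0, 8
hours = drive_tb * 1000 / rebuild_gbps / 3600
print(f"1 server: {hours:.1f} h, {servers} servers: {hours / servers:.1f} h")
```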

The other problem with large stripe sizes is garbage collection. VAST segregates customer data by "temporariness," based on its best guess of how long the data will live. In this way, all data in one stripe should have similar lifetimes. When it's time for stripe garbage collection, having all-temporary data allows VAST to jettison the whole stripe (or most of it) rather than having to collect and re-write old stripe data to a new stripe.
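
Here's a toy sketch of the idea of segregating writes into stripes by predicted lifetime (the classification heuristic is made up for illustration):

```python
# Sketch of the idea: group writes into stripes by predicted lifetime, so a
# whole stripe expires together and GC can drop it wholesale instead of
# copying live data forward. The classification heuristic here is made up.
from collections import defaultdict

stripes = defaultdict(list)  # lifetime class -> pending writes

def predict_lifetime(path):
    """Hypothetical heuristic; a real system would use richer signals."""
    return "temporary" if path.endswith((".tmp", ".log", ".scratch")) else "long-lived"

def write(path, data):
    stripes[predict_lifetime(path)].append((path, data))

def garbage_collect(lifetime_class):
    """If everything in the stripe is dead, no rewrite is needed at all."""
    return len(stripes.pop(lifetime_class, []))

write("job1.tmp", b"...")
write("results.db", b"...")
print(garbage_collect("temporary"))  # -> 1 entry dropped, zero data copied
```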

VAST came out supporting NFSv3 and S3 object storage protocols. Their next release adds support for SMB 2.2, data-at-rest encryption, and snapshotting to an external S3 store. As you may recall, SMB is a stateful protocol. In VAST's home-grown SMB implementation, front end servers can take over SMB transactions from failed servers without having to fail the whole transaction and start over again.

VAST uses a fail-in-place maintenance policy. That is, failed SSDs are not normally replaced in customer deployments; rather, blocks, pages, or whole SSDs are marked as failed, and the spare capacity available in the drive enclosure provides space for any rebuilt data.

VAST offers a 10 year maintenance option where the customer keeps the same storage for 10 full years. That way customers don’t have to migrate data from one system to another until their 10 years are up.

The podcast runs a little under 44 minutes. Howard and I can talk forever. He is always a pleasure to talk with, as well as extremely knowledgeable about (VAST) storage and other industry solutions. The co-hosts and I had a great time talking with him again. Listen to the podcast to learn more.


Howard Marks, Technologist Extraordinary and Plenipotentiary, VAST Data, Inc.

Howard Marks brings over forty years of experience as a technology architect for hire and industry observer to his role as VAST Data's Technologist Extraordinary and Plenipotentiary. In this role, Howard demystifies VAST's technologies for customers, and customer requirements for VAST's engineers.

Before joining VAST, Howard ran DeepStorage, an industry test lab and analyst firm. An award-winning speaker, he has appeared at events on three continents, including Comdex, Interop, and VMworld.

Howard is the author of several books (all gratefully out of print) and hundreds of articles since Bill Machrone taught him journalism at PC Magazine in the 1980s.

Listeners may also remember that Howard was a founding co-Host of the Greybeards-on-Storage Podcast.

097: GreyBeards talk open source S3 object store with AB Periasamy, CEO MinIO

Ray was at SFD19 a few weeks ago, and the last session of the week (usually a dead slot) was with MinIO, and they just blew us away (see videos of MinIO's session here). Ray thought Anand Babu (AB) Periasamy (@ABPeriasamy), CEO of MinIO and the session's main presenter, would be a great invite for our GreyBeards podcast. Keith and I had a ball talking with AB.

Why object store

There's something afoot in the object storage space over the last year or so. It seems everybody is looking to deploy object stores, whether that be on prem, in colo facilities, or in the cloud. It could be just the mass of data coming online, but that trend has remained the same for years now. No, it's something else.

It all starts with AWS and S3. Over the last couple of years AWS has been rolling out new functionality that only works with S3 and this has been driving even more adoption of S3 as well as other object storage solutions.

S3 compatible object stores are available in just about every cloud service, available from major (and minor) storage vendors and in open source from MinIO.

Why S3 is so popular

Object stores are accessed via RESTful interfaces, and traditionally most implementations used their own APIs. But when AWS created S3 (Simple Storage Service) with its own API/SDK, that API somehow became the de facto standard interface for all other object stores. S3 compatibility became a significant feature that all object stores had to support.

Sometime after that, MinIO came into existence. MinIO provides a 100% open source, fully AWS S3 compatible object store that you can run anywhere: on prem, in colo facilities, and indeed in the cloud. In fact, some customers run MinIO in AWS. AB says these are probably customers using a packaged software solution that happens to include MinIO, but it's nonetheless more expensive than AWS S3, as it uses EC2 instances and EBS storage to create an object store.

Customers can access MinIO object stores with the AWS S3 SDK or the MinIO SDK, and they can access AWS S3 storage with either SDK as well. Occasionally, AWS S3 updates have broken MinIO's SDK, but these have later been fixed by AWS. It seems AWS and MinIO are on good terms.
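
As a concrete illustration of that compatibility, the same AWS SDK (boto3, in Python) code can talk to either store just by changing the endpoint. The endpoint and credentials below are placeholders:

```python
# The same AWS SDK (boto3) code works against AWS S3 or a MinIO deployment;
# only the endpoint and credentials change. Values below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com:9000",  # omit for real AWS S3
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, S3 world")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```

Swap the endpoint for AWS's own and the same calls run against S3 proper, which is what 100% compatibility buys you.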

AB mentioned that as customers get up to a few PBs of AWS S3 storage, they often find the costs to be too high. It's at this point that they start looking at other object storage solutions. And because MinIO is 100% S3 compatible and open source, many of these customers deploy it in their own data center facilities or in colo environments.

For those customers that want it, MinIO also offers an S3 gateway. With the gateway, on prem customers can use S3 or standard file services to access S3 object storage located in the cloud. The gateway also works in the public cloud and can support both AWS S3 and Microsoft Azure Blob storage as backends.

MinIO matches AWS S3 features

AWS S3 has a number of great features, and MinIO has matched or exceeded them all, step by step. AWS S3 has cross-region replication options, where customers can replicate S3 data from one region to another. MinIO supports both asynchronous and synchronous replication of S3 data (the latter using RADIO).

But MinIO adds support for erasure coding within a fault domain. The default is Nx2 erasure coding, which effectively duplicates all your data, so as long as half of your servers and storage are available, you continue to have access to all your data. But this can be configured down to something like 12+4, where data is split across 16 servers, any four of which can fail while you can still access all the data.

AWS customers can use a Snowball (a standalone storage device) to transfer data to or from S3 storage. AWS Snowball implements a subset of the S3 API and requires a NAS staging area of equivalent size to migrate data out of S3. MinIO supports Snowball's limited S3 API, and as such, Snowballs can be used to migrate data into or out of MinIO. MinIO has a blog post which describes their support for AWS Snowball.

AWS also offers S3 Lambda services, i.e., serverless computing, where compute services can be invoked when data is loaded into a bucket and then turned off when no longer needed. AWS Lambda depends on AWS messaging and other services to work properly. But MinIO supports Lambda-like functionality using other open source services; AB mentioned MQTT and Kafka. MinIO has another blog post discussing their Lambda-like services based on Kafka.
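
The building block for this kind of event-driven processing is bucket notifications. A hedged sketch using the MinIO Python SDK (endpoint, credentials, and bucket are placeholders; MinIO can also push these events out to Kafka or MQTT):

```python
# Hedged sketch: reacting to object-created events on a MinIO bucket, the
# basis for its Lambda-like workflows. Endpoint/credentials are placeholders.
from minio import Minio

client = Minio("minio.example.com:9000",
               access_key="ACCESS_KEY", secret_key="SECRET_KEY")

with client.listen_bucket_notification(
        "demo-bucket", events=["s3:ObjectCreated:*"]) as events:
    for event in events:
        for record in event["Records"]:
            key = record["s3"]["object"]["key"]
            print(f"new object landed: {key} -> kick off processing here")
```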

We also discussed Snowflake, a SQL database service for unstructured data that runs on AWS and uses S3 storage to hold its data. Ray and Keith almost choked on that statement, as unstructured data and databases never used to be uttered in the same breath. But what Snowflake has shown is that you can use an object store for database data, as long as you are willing to load a table into memory, process it there, and then unload any modified table data back into the object store. Indexing of the object data seems to be done as the data is being loaded, in a (random IO) cache or in memory, and once done, the index can also be unloaded back into the object store.
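
The load/process/unload pattern looks roughly like the sketch below (illustrative only; not Snowflake's implementation, and the bucket, keys, and CSV "table" format are made up):

```python
# Illustrative sketch of the load/process/unload pattern for running a
# database over object storage. Not Snowflake's implementation; the bucket,
# keys, and CSV "table" format here are made up for the example.
import csv, io
import boto3

s3 = boto3.client("s3")

def load_table(bucket, key):
    """Pull the whole table object into memory as a list of rows."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    return list(csv.DictReader(io.StringIO(body)))

def unload_table(bucket, key, rows):
    """Write the (possibly modified) table back as a fresh object."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=bucket, Key=key, Body=out.getvalue().encode())

rows = load_table("warehouse", "tables/orders.csv")
rows = [r for r in rows if r["status"] != "cancelled"]   # process in memory
unload_table("warehouse", "tables/orders.csv", rows)
```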

Now, Snowflake uses S3, but it's not available on prem. MinIO has a number of database partners that use its object store as a backend to host Snowflake-like services on prem. AB mentioned Spark and Splunk, but there are others as well.

We ended the discussion with what it means to have 20K stars on GitHub. AB said that for a JavaScript project, getting 20K stars would be easy, but you just don't see this sort of open source popularity for storage systems. He said the number is interesting, but the growth rate is even more interesting.

The podcast runs ~47 minutes. AB was great to talk tech with. Keith and I could have talked all afternoon with him; it was very hard to stop the recording, as we could have gone on for another hour or more. AB said he doesn't like to do podcasts or videos, but he had no problem with us firing away questions. Listen to the podcast to learn more.


Anand Babu Periasamy, CEO MinIO

AB Periasamy is the CEO and co-founder of MinIO. One of the leading thinkers and technologists in the open source software movement, AB was a co-founder and CTO of Gluster (GlusterFS), which was acquired by Red Hat in 2011. Following the acquisition, he served in the office of the CTO at Red Hat prior to founding MinIO in late 2015. AB is an active angel investor and serves on the boards of H2O.ai and the Free Software Foundation of India.

He earned his BE in Computer Science and Engineering from Annamalai University.

095: GreyBeards talk file sync&share with S. Azam Ali, VP Customer Success at CentreStack

We haven't talked with a file sync and share vendor in a while now, and Matt was interested in the technology. He had been talking with CentreStack and found that they had been making some inroads in the enterprise. So we contacted S. Azam Ali, VP of Customer Success at CentreStack, and asked if he wanted to talk about their product on our podcast.

File sync and share is part collaboration tool, part productivity tool. With file sync & share, many users share the same files across many different environments and endpoint devices. It's especially popular with road warriors who need access on the road to the same files that reside in corporate data centers. With this technology, files updated anywhere become available to all.

Most file sync&share systems require you to use their storage. But CentreStack just provides sync and share access to NFS and SMB storage that's already in the data center.

CentreStack doesn't use VPNs to access data, as many other vendors do. With CentreStack, one just logs into a website (with AD credentials) and has immediate browser access to files.

CentreStack uses a gateway VM that runs in the corporate data center and is configured to share files, directories, and shares. We asked whether they were in the data path, and Azam said no. However, the gateway does register for file system notifications (e.g., when files are updated outside CentreStack, it gets notified).
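
To illustrate what registering for file system notifications looks like, here's a minimal sketch using the Python watchdog library (an illustrative stand-in, not CentreStack's actual mechanism; the path is a placeholder):

```python
# Sketch of registering for file system change notifications, the way a
# gateway can learn about out-of-band updates. Uses the Python watchdog
# library purely for illustration; this is not CentreStack's code.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class ShareWatcher(FileSystemEventHandler):
    def on_modified(self, event):
        if not event.is_directory:
            print(f"refresh metadata for {event.src_path}")

observer = Observer()
observer.schedule(ShareWatcher(), "/exports/shared", recursive=True)
observer.start()
try:
    time.sleep(60)          # watch for a minute, then shut down
finally:
    observer.stop()
    observer.join()
```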

CentreStack does maintain metadata on the files, directories, and shares under its control. Presumably, once an admin sets it up, it goes out, accesses the file systems with shared files, and populates its metadata for those files.

CentreStack works with any NFS or SMB file system, as well as NAS servers that support those two protocols. It's unclear whether customers can have more than one gateway server in a data center supporting sync and share, but Azam did say that it isn't unusual for customers with multiple data centers to have a gateway in each, to support each data center's sync&share requirements.

They use client software on endpoint devices, which presents the shared files as an external drive (on a Mac), presumably a cloud drive on Windows PCs, and similar services (in an app) on other systems (iOS and Android phones, iPads, etc.). We believe Azam said Linux support was coming soon.

The client software can be configured in cache mode or offline mode:

  • Cache mode – the admin can configure how much space to use on the endpoint device, and the software will cache the most recently used files in that space for faster access.
  • Offline mode – the software moves all files that the endpoint login can access to the device.

In cache mode, when a user opens a file that's not in the cache, there will be some delay as the system retrieves data over the internet and copies it to the endpoint device. It's unclear what the delay might be, but it's probably a function of internet speed and load on the gateway, with possibly some overhead for the NFS/SMB/NAS system supplying the data. If there's not enough space to hold the file, the oldest non-open file is erased from the cache.
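
A minimal sketch of that cache-mode behavior, i.e., a size-bounded, least-recently-used cache of file contents on the endpoint (sizes and fetch logic are illustrative):

```python
# Minimal sketch of the cache-mode behavior described above: a size-bounded
# LRU of file contents on the endpoint. Sizes and fetch logic are illustrative;
# a real client would also skip evicting files that are currently open.
from collections import OrderedDict

class FileCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = OrderedDict()  # path -> size, oldest first

    def open(self, path, fetch):
        """Return cached file, or fetch over the internet, evicting LRU files."""
        if path in self.files:
            self.files.move_to_end(path)       # mark most recently used
            return f"cache hit: {path}"
        size = fetch(path)                     # slow path: pull via the gateway
        while self.used + size > self.capacity and self.files:
            _, old_size = self.files.popitem(last=False)  # evict oldest
            self.used -= old_size
        self.files[path] = size
        self.used += size
        return f"fetched: {path}"

cache = FileCache(capacity_bytes=100)
print(cache.open("/shares/plan.docx", fetch=lambda p: 60))
print(cache.open("/shares/budget.xlsx", fetch=lambda p: 60))  # evicts plan.docx
```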

In both modes, CentreStack supports cross-domain locking. That is, if one client has a file open for update, all other systems/endpoints may only access the file in read-only mode. After the file is closed, it can then be opened for update by another user.
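
The locking semantics can be sketched like this (purely illustrative; not CentreStack's actual protocol):

```python
# Sketch of cross-domain locking semantics: the first opener for update gets
# the write lock; everyone else falls back to read-only until it's released.
# Purely illustrative; not CentreStack's protocol.
write_locks = {}  # file path -> endpoint currently holding the update lock

def open_file(path, endpoint, for_update=False):
    holder = write_locks.get(path)
    if for_update:
        if holder is None:
            write_locks[path] = endpoint
            return "read-write"
        return f"read-only (locked by {holder})"
    return "read-only"

def close_file(path, endpoint):
    if write_locks.get(path) == endpoint:
        del write_locks[path]  # next opener may now get read-write

print(open_file("/shares/plan.docx", "laptop-1", for_update=True))  # read-write
print(open_file("/shares/plan.docx", "phone-2", for_update=True))   # read-only
close_file("/shares/plan.docx", "laptop-1")
print(open_file("/shares/plan.docx", "phone-2", for_update=True))   # read-write
```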

When CentreStack clients update files, the data is stored back in the original file system with versioning. This way, if the data is corrupted, admins can easily return to a known good version.

CentreStack also offers a cloud backup and DR service. Gateway admins can request that sync&share files be backed up to cloud storage (AWS S3, Azure Blob, or Wasabi). When CentreStack backs file data up to the cloud, it also includes metadata about the files, so they can be reconstituted anywhere.

A CentreStack cloud gateway VM can be activated in the cloud to supply access to backed-up files. It's unclear whether the CentreStack cloud backup has to be restored to block or file storage first, or if the gateway accesses the data on cloud storage directly. But customers using CentreStack cloud DR would need to run the client software on the application systems accessing these files.

Wasabi seemed an odd solution to have on their list of supported cloud storage providers, but Azam said that for their market, the economics of Wasabi storage were hard to ignore. See our previous podcast with David Friend, Co-Founder & CEO, Wasabi, to learn more about Wasabi.

CentreStack is licensed on a per-user basis, not storage capacity, bucking industry trends. But since they don't actually own the storage, this makes sense. For CentreStack cloud backup, customers also have to supply the cloud storage.

They also offer a 30 day free trial on their website with unlimited users. We assume this uses CentreStack's cloud gateway, with customers bringing their own cloud storage to support it.

The podcast runs about 35 minutes. Azam was a bit more marketing than we are used to, but he warmed up once we started asking questions. Listen to the podcast to learn more.


S. Azam Ali, VP of Customer Success, CentreStack

S. Azam Ali is VP of Customer Success at CentreStack and an executive with extensive experience managing global teams, including sales, support, and consulting services.

Azam's channel experience includes on-boarding new partners, including the creation of marketing and training collateral for those partners. He is an executive with a passion for customer success and for establishing long-term relationships and partnerships.

Azam is also an advisor to startups as well as established technology companies.

094: GreyBeards talk shedding light on data with Scott Baker, Dir. Content & Data Intelligence at Hitachi Vantara

Sponsored By: Hitachi Vantara

At the Hitachi NEXT 2019 conference last month, there was a lot of talk about new data services from Hitachi. Keith and I thought it would be a good time to sit down with Scott Baker (@Kraken-Scuba), Director of Content and Data Intelligence at Hitachi Vantara, and talk about what's going on with data operations these days and how customers are shedding more light on their data.

Information supply chain

Something Scott said in his opening remarks caught my attention: customer information supply chains. The information supply chain is similar to a manufacturing supply chain, but it's all about data. Just as in manufacturing supply chains, where parts and services come from anywhere and are used to create products/services for customers, information supply chains are about the data used in an organization's operations. Information supply chain data is A) sourced from many places (or applications); B) added to by supply chain processing (or other applications); and C) ultimately used by the organization to supply a product/service to customers.

But after the product/service is supplied, the similarity between manufacturing and information supply chains breaks down. With the information supply chain, data is effectively indestructible, infinitely re-usable, and can live forever. Who throws data away anymore?

The problem most organizations have with information supply chains is that once the product/service is supplied, data is often put away, never to be seen again, or as Scott puts it, it goes dark.

This is where Hitachi Content Intelligence (HCI) comes in. HCI is designed to take (unstructured or structured) data and analyze it (using natural language and other processing tools) to surround it with information and other metadata, so that it becomes more visible and useful to the organization for the life of its existence.

Customers can also use HCI to extract and blend data streams together, automating the creation of an information-rich data repository. The repository can readily be searched to re-discover or uncover attributes about the data not visible before.

Scott also mentioned the Hitachi Pentaho Platform, which can be used to make real time decisions from structured data. Pentaho information can also be fed into HCI to provide more intelligence for your structured data.

But HCI can also be used to analyze other database data. For instance, database blob and text elements can be fed to and analyzed by HCI. HCI analysis can include natural language processing and other functionality to tag the data by adding key:value information, all of which can be supplied back to the database or to Pentaho to add further value to the structured data.

Customers can also use HCI to read and transform database tables into XML files. XML files can be stored in object stores as objects or in file systems, and XML data can easily be textually indexed and searched by various tools to better understand the structured data behind it.
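
As a rough illustration of that table-to-XML transformation (a generic sketch, not HCI's implementation; the table and schema are made up):

```python
# Illustrative sketch of transforming database rows into XML for indexing or
# object storage, in the spirit of the capability described above. The table
# and schema are made up; this is not HCI's implementation.
import xml.etree.ElementTree as ET

rows = [
    {"id": "1", "name": "Ariel", "dept": "finance"},
    {"id": "2", "name": "Bo", "dept": "engineering"},
]

table = ET.Element("table", name="employees")
for row in rows:
    record = ET.SubElement(table, "row")
    for column, value in row.items():
        ET.SubElement(record, column).text = value

# The resulting XML document can be stored as an object or file and
# text-indexed by downstream search tools.
print(ET.tostring(table, encoding="unicode"))
```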

We also talked about Hadoop data that can be offloaded to Hitachi Content Platform (HCP) object storage with a stub left behind. Once data is in HCP, HCI can be triggered to index it and add more metadata, which can later be used to decide when to move the data back to Hadoop for further analysis.

Finally, Keith mentioned that he just got back from KubeCon and there was an increasing cry for data being used with containerized applications. Scott mentioned HCP for Cloud Scale, the newest member of the HCP object store family, focused on scale out capabilities to provide highly consistent, object storage performance for customers that need it. Customers running containerized workloads use scale-out capabilities to respond to user demand and now they have on premises object storage that can scale with them, as needs change.

The podcast ran ~24 minutes. Scott was very knowledgeable about data workflows, pipelines and the need for better discovery tools. We had a great time discussing information supply chains and how Hitachi can help customers optimize their data pipelines. Listen to the podcast to learn more.


Scott Baker, Director of Content and Data Intelligence at Hitachi Vantara

Scott Baker is, and has been, an active member of the information technology, data analytics, data management, and data protection disciplines for longer than he is willing to admit.

In his present role at Hitachi, Scott is the Senior Director of the Content and Data Intelligence organization focused on Hitachi’s Digital Transformation, Data Management, Data Governance, Data Mobility, Data Protection and Data Analytics solutions which includes Hitachi Content Platform (HCP), HCP Anywhere, HCP Gateway, Hitachi Content Intelligence, and Hitachi Data Protection Solutions.

Scott is a VMware Certified Professional, recognized as a subject matter expert, industry speaker, and author. Scott has been a panelist on topics related to storage, cloud, information governance, data security, infrastructure standardization, and social media. His educational background includes an MBA and Master's and Bachelor's degrees in Computer Science.

When he’s not working, Scott is an avid scuba diver, underwater photographer, and PADI Scuba Instructor. He has a passion for public speaking, whiteboarding, teaching, and traveling the world.

92: Ray talks AI with Mike McNamara, Sr. Manager, AI Solution Mkt., NetApp

Sponsored By: NetApp

NetApp's been working in the AI DL (deep learning) space for a long time now and announced their partnership with NVIDIA DGX systems back in August of 2018. At NetApp Insight this week, they were showing off their new NVIDIA DGX systems reference architectures. These architectures use NetApp AFF A800 storage (for more info on AI DL, check out Ray's Learning Machine (deep) Learning posts – part 1, part 2 and part 3).

Besides the ONTAP AI systems, NetApp also offers:

  • FlexPod AI solution, based on their partnership with Cisco, using UCS C480 ML M5 rack servers that include 8 NVIDIA Tesla V100 GPUs and feature NetApp AFF A800 storage, for use in core AI DL.
  • NetApp HCI, which has two configurations with 2 or 3 NVIDIA GPUs in 1U or 2U rack servers, running VMware vSphere or Red Hat OpenStack/OpenShift software, suitable for edge or core AI DL.
  • E-Series reference architecture, which uses the BeeGFS parallel file system and offers InfiniBand data access, for HPC or core AI DL.

On the conference floor, NetApp showed AI DL demos for the automotive, financial services, public sector, and healthcare verticals. They also had a facial recognition application running that could estimate your age and emotional state (I didn't try it, but Mike said they were hedging the model so it predicted a lower age).

Mike said one healthcare solution focused on radiological image scans, identifying pathologies from X-ray, MRI, or CAT scan images. Mike mentioned there has been a lot of radiological technologist burnout due to the volume of work caused by the medical imaging explosion over the last decade or so. He said image analysis is something AI DL can perform very effectively, and doing so would improve accuracy and reduce the volume of work being done by technologists.

He also mentioned another healthcare application that uses an AI DL app to count TB cells in blood samples and estimate the extent of TB infection. Historically, this has been time consuming, error prone, and hard to do in the field. The app uses a microscope with a smart phone and can be deployed and run anywhere in the world.

Mike mentioned a genomics AI DL application that examines DNA sequences and tries to determine their functionality. He also mentioned a retail AI DL facial recognition application that helps women "see" what they would look like with different makeup on.

There was a lot of discussion of NetApp Cloud services at the show, such as Cloud Volumes Service and Azure NetApp Files (ANF). Both of these could easily be used to implement an AI DL application, or be part of an edge-to-core-to-cloud data flow for an AI DL application deployment using NetApp Data Fabric.

NetApp also announced a new, all-flash StorageGRID appliance targeted at IO-intensive uses of object stores, like AI DL model training and data analytics.

Finally, Mike mentioned NetApp’s ecosystem of partners working in the AI space to help customers deploy AI DL algorithms in their industries. Some of these include:

  1. Flexential, Try and Buy AI, where customers can bring Flexential in to supply AI DL expertise, generate an AI DL application using customer data, and deploy it on customer cloud or on prem infrastructure.
  2. Core Scientific, AI-as-a-Service, where customers can purchase a service to implement an AI DL application using customer data, running on Core Scientific infrastructure.
  3. Scale Matrix, Mobile data center AI, where customers can create an AI DL application and run it on Scale Matrix infrastructure that is transported to wherever the customer wants it run.

We recorded the podcast on the show floor, in a glass booth, so there's some background noise (sorry about that, but it couldn't be helped). The podcast is ~27 minutes. Mike is a long time friend and NetApp product expert, recently working on AI DL solutions at NetApp. When I saw Mike at Insight, I just had to ask him what NetApp's been doing in the AI DL space. Listen to the podcast to learn more.


Mike McNamara, Senior Manager AI Solution Marketing, NetApp

With over 25 years of data management product and solution marketing experience, Mike's background includes roles of increasing responsibility at NetApp (10+ years), Adaptec, EMC, and Digital Equipment Corporation.

In addition to his past role as marketing chairperson for the Fibre Channel Industry Association, he was a member of the Ethernet Technology Summit Conference Advisory Board and the Ethernet Alliance, a regular contributor to industry journals, and a frequent speaker at events.