131: GreyBeards talk native K8s data protection using Veritas NetBackup with Reneé Carlisle

The GreyBeards have been discussing K8s storage services a lot over the last year or so, and it was time to understand how container apps and their data could be protected. Recently, we saw an article about a Veritas-funded survey discussing the need for data protection in K8s. As such, it seemed a good time to talk with Reneé Carlisle (@VeritasTechLLC), Staff Product Manager for NetBackup (K8S), Veritas.

It turns out that Veritas NetBackup (NBU) has just released its 2nd version of K8s data protection, and it's gone completely (K8s) native. That is, Veritas has completely re-implemented all 3 tiers of NBU as K8s microservices. Moreover, the new release still supports all other NBU infrastructure implementations, such as bare metal or VM NBU primary server/media server services. It's almost like all the data protection NBU has offered the enterprise over the years is now also available for K8s container apps. Listen to the podcast to learn more.

To make use of NBU K8s, backup admins establish named gold, silver, and bronze backup policies, selecting backup frequency, retention periods, backup storage, etc. DevOps then tags a namespace, pod, container, or PV with one of those data protection policy names. Once this is done, NBU K8s starts protecting that namespace, pod, container, or PV.
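
To give a feel for the DevOps side of this workflow, here is a minimal sketch of tagging a namespace with a policy name using the Kubernetes Python client. The label key/value below are our own illustration, not NBU's documented label or custom resource, so treat them as placeholders.

```python
# Hypothetical example: tag a namespace so a "gold" backup policy picks it up.
# The label key/value are illustrative -- check the NBU K8s docs for the actual
# label or custom resource it expects.
from kubernetes import client, config

config.load_kube_config()                  # or load_incluster_config() inside a pod
core_v1 = client.CoreV1Api()

patch = {"metadata": {"labels": {"backup-policy": "gold"}}}   # hypothetical label key
core_v1.patch_namespace(name="payments-prod", body=patch)

# PVCs (and other resources) could be tagged the same way, e.g.:
# core_v1.patch_namespaced_persistent_volume_claim(
#     name="orders-db-pvc", namespace="payments-prod", body=patch)
```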

In addition, backup admins can include or exclude specific K8s namespace(s), pod(s), container(s), labels (tags), or PVs to be backed up with a specific policy. When that policy is triggered, it goes out into the cluster to see whether those K8s elements are active and starts protecting them, or excluding them from protection, as requested.

NBU K8s has an Operator service, Data Mover services, and other microservices that execute in the cluster. At least one Operator service must be deployed in the cluster (recommended to be in a separate namespace, but this is optional). The Operator service is the control plane for NBU K8s services. It spins up Data Movers when needed and spins them down when done.

The Operator service supports a CLI but, more importantly to DevOps, a completely implemented RESTful API service. It turns out the CLI is implemented on top of the NBU (Operator) API. With the NBU API, DevOps CI/CD tools or other automation can perform all the data protection services needed to protect K8s.
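
As a sketch of what that automation could look like from a CI/CD pipeline, here's a hedged Python example of calling a backup endpoint over REST. The base URL, path, payload, and response field are placeholders of our own making; the real NetBackup API endpoints and authentication flow are documented by Veritas.

```python
# Sketch of driving backups from a CI/CD pipeline via a REST API.
# The host, path, and payload below are placeholders -- consult the NetBackup
# API documentation for the real endpoints and authentication flow.
import requests

NBU_API = "https://nbu-primary.example.com/netbackup"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer <api-key>", "Content-Type": "application/json"}

def trigger_backup(namespace: str, policy: str) -> str:
    """Kick off an on-demand backup of a K8s namespace and return a job id."""
    resp = requests.post(
        f"{NBU_API}/backups",                            # illustrative path
        json={"namespace": namespace, "policy": policy},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["jobId"]                          # illustrative response field

# e.g., call trigger_backup("payments-prod", "gold") from a pipeline step
```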

One historical issue with backup processing is that it can consume every ounce of network/storage, and sometimes compute, power in an environment. The enterprise-class Data Movers (or maybe the Operator control plane) have various mechanisms to constrain or limit NBU K8s resource consumption so that this doesn't become a problem.

But as the Operator and its Data Movers are just microservices, if there's a need for more throughput, more can be spun up, or if there's a need to reduce bandwidth, some of them can be spun down, all with no manual intervention whatsoever.

Furthermore, NBU K8s can be used to restore/recover PVs, containers, applications, or namespaces to other, CNCF-compliant K8s infrastructure. So, if you wanted to, say, move your K8s namespace from AKS to GKE, or from on-prem to Red Hat OpenShift, it becomes a simple matter of moving the last NBU backup to the target environment, deploying NBU K8s in that environment, and restoring the namespace.

NBU K8s can also operate in the cloud just as well as on-prem and works in any CNCF-compatible K8s environment, which includes AKS, EKS, GKE, VMware Tanzu, and OpenShift.

In the latest NBU K8s, they implemented new, enterprise-class Data Movers as microservices in order to more efficiently protect and recover K8s resources. Enterprise-class Data Movers can perform virus scanning/ransomware detection, encryption, data compression, and other services that enterprise customers have come to expect from NBU data protection.

NBU K8s accesses PV, container, pod, and namespace data and metadata using standard CSI storage providers and normal K8s API services.

As mentioned earlier, in the latest iteration of NBU K8s, they have completely implemented their NBU infrastructure natively as containers. That adds K8s auto-scaling and full CI/CD automation via APIs to all the rest of the NBU infrastructure operating completely in the K8s cluster.

So, backup admins can now run NBU completely in K8s, or run just the Operator and its Data Mover services connecting to other NBU infrastructure (primary server and media servers) executing elsewhere in the data center.

NBU K8s supports all the various disk, dedicated backup appliance, object/cloud storage, and other backup media options that NBU uses. That means you can store your K8s backup data in the cloud, on secondary storage appliances, or anyplace else that's supported by NBU.

Licensing for NBU K8s follows currently available Veritas licensing; front-end TB protected, subscription, and term licensing options are available.

Reneé Carlisle, Staff Product Manager, Veritas NetBackup (K8S)

Reneé (LinkedIn) has been with Veritas Technologies for eleven years in various focus areas within the NetBackup Product Management Team. In her current role she is the Product Manager responsible for the NetBackup strategic direction of Modern Platforms, including Kubernetes and OpenStack. She has a significant technical background in many of the NetBackup features, including Kubernetes, virtualization, Accelerator, and cloud.

Prior to working for Veritas, she was a customer running a large-scale NetBackup operation as well as a partner implementing, designing, and integrating NetBackup in many different companies.

125: GreyBeards talk K8s storage with Tad Lebeck, US CTO for ionir

We had some technical difficulties with Matt getting on the podcast, so Ray had to fly solo. This month we continue our investigations into K8s storage with a discussion with Tad Lebeck (@TadLebeck), US CTO of ionir, a software-defined storage system that only runs under K8s. The ionir Kubernetes Data Services platform is an outgrowth of Reduxio, a "tin-wrapped" software-defined storage system, which pivoted to K8s as the environment to target and left the tin behind.

ionir offers a deduplicating, continuous data protection storage system for PVs (persistent volumes) under K8s that uses 3-way mirroring across data nodes for data protection. Their solution offers a number of unique services that we haven't seen in other K8s storage systems. Listen to the podcast to learn more.

Tad opened with a long spiel on what ionir is, and we spent the next 40 minutes unpacking it to understand exactly what they were doing.

Let's start with why stateful containers are all the rage these days. Tad had a slightly different rationale than we've heard before. From his perspective, it all comes from current enterprise applications that used database servers/machines. As these apps are re-factored to run as K8s containerized microservices, developers need and want their data to be containerized right along with the application.

ionir constructs a block storage system across K8s data nodes or K8s worker nodes with direct-attached storage. In the cloud, this storage can be ephemeral (storage that only exists as long as the compute instance operates) or normal block storage (e.g., EBS in AWS). It's unclear how ephemeral storage works on-prem. But in any case, they cluster a set of data nodes together into one massive block storage pool and map PVs onto it. K8s data nodes can be added to the ionir cluster while it's operating.

As mentioned earlier, they use 3-way mirroring for data protection, and ionir ensures the 3 copies are stored on different data nodes. As such, when one data node goes down, copies of PV data are available from the other 2 nodes, and the data can then be rewritten elsewhere to ensure 3-way mirroring continues. We suppose this means a minimum configuration requires at least 3 data nodes.
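
To make the mechanism concrete, here is a toy placement sketch: every block gets three distinct data nodes, and when a node fails, blocks that lost a replica get re-copied to a surviving node. This is purely our illustration of 3-way mirroring, not ionir's actual placement algorithm.

```python
# Toy sketch of 3-way mirror placement across data nodes (not ionir's actual
# algorithm): each block gets three distinct nodes; on node failure, any block
# that lost a replica is re-copied to a surviving node.
import hashlib

def replica_nodes(block_id: str, nodes: list[str], copies: int = 3) -> list[str]:
    """Pick `copies` distinct nodes for a block, deterministically."""
    start = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]

def heal(block_id: str, nodes: list[str], failed: str) -> list[str]:
    """Replace a failed node in a block's replica set with a surviving node."""
    current = replica_nodes(block_id, nodes)
    if failed not in current:
        return current
    spares = [n for n in nodes if n != failed and n not in current]
    return [n for n in current if n != failed] + spares[:1]

nodes = ["node-a", "node-b", "node-c", "node-d"]
print(replica_nodes("blk-0001", nodes))      # three distinct nodes
print(heal("blk-0001", nodes, nodes[1]))     # still three copies after a failure
```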

ionir also provides deduplicating block storage, which should theoretically reduce the physical storage footprint for any PV. Data blocks are deduplicated across the cluster. ionir also has a metadata service (also 3-way replicated, to different data space) that records the manifest for all blocks associated with a PV, their hashes, and their (logical/physical) locations.
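
A conceptual sketch of that manifest-plus-hash approach is below: each PV keeps an ordered list of block hashes, and each unique block is stored once. This only illustrates content-addressed dedup in general, it is not ionir's code.

```python
# Conceptual sketch of deduplicating block storage with a per-PV manifest
# (hashes and locations), in the spirit described above -- not ionir's code.
import hashlib

BLOCK_SIZE = 4096
block_store: dict[str, bytes] = {}        # hash -> unique block data ("physical")
manifests: dict[str, list[str]] = {}      # PV name -> ordered list of block hashes

def write_pv(pv: str, data: bytes) -> None:
    hashes = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        block_store.setdefault(h, block)  # store each unique block only once
        hashes.append(h)
    manifests[pv] = hashes

def read_pv(pv: str) -> bytes:
    return b"".join(block_store[h] for h in manifests[pv])

write_pv("pv-a", b"A" * 8192 + b"B" * 4096)
write_pv("pv-b", b"A" * 8192)             # shares blocks with pv-a
print(len(block_store))                   # 2 unique blocks stored, not 4
```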

There was no mention of data compression or encryption, so those are probably not present. We find deduplication very effective for backup storage but less effective for primary storage. Any deduplication ratio for ionir primary storage is likely specific to the data being stored, i.e., columnar database, row database, text, office files, etc. Each of these would likely have a different dedupe ratio for primary storage.

Furthermore, ionir supplies continuous data protection (CDP) for PV data. PV data written to ionir is immutable, i.e., never modified, AND they keep previous versions of PV blocks in storage until they age out. This allows ionir to provide any prior version (well, the most recent ones) of a PV. ionir uses a timestamp to distinguish different PV versions. So, if ransomware attacked your site, users could ask for a PV version just prior to the time of the attack, and you'd have that version of the PV to restart operations. Customers can limit how far back ionir saves prior versions of blocks for PVs.
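
Here is a minimal sketch of the idea: block writes are never overwritten, and a read "as of" a timestamp picks the newest version at or before that time. Again, this only illustrates timestamp-based CDP, not ionir's implementation.

```python
# Minimal sketch of CDP-style versioning: block writes are never overwritten,
# and a read "as of" a timestamp picks the newest version at or before that
# time. Illustrative only, not ionir's implementation.
import bisect

# (pv, block_index) -> list of (timestamp, block_hash), appended in time order
versions: dict[tuple[str, int], list[tuple[float, str]]] = {}

def write_block(pv: str, idx: int, block_hash: str, ts: float) -> None:
    versions.setdefault((pv, idx), []).append((ts, block_hash))

def block_as_of(pv: str, idx: int, ts: float):
    history = versions.get((pv, idx), [])
    pos = bisect.bisect_right(history, (ts, chr(0x10FFFF)))  # last write <= ts
    return history[pos - 1][1] if pos else None

write_block("pv-a", 0, "hash-v1", ts=100.0)
write_block("pv-a", 0, "hash-v2", ts=200.0)   # e.g., the ransomware overwrite
print(block_as_of("pv-a", 0, 150.0))          # -> "hash-v1", the pre-attack data
```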

Having CDP for PVs makes DevOps qualification and testing significantly faster. Normally, DevOps would need to copy production data to test environments in order to validate new app code. But ionir can easily instantiate a separate copy of any PV (at any time in its saved set) in a matter of seconds. This can take DevOps deployment testing down from days to minutes or less.

In addition, ionir can teleport PV data to other, remote K8s clusters running ionir. Essentially, this copies PV metadata and its "hot" blocks over to any remote ionir cluster. During teleportation, the remote cluster can access PV data as soon as all PV metadata has been copied. The remote site accesses this PV data from the originating cluster (albeit much slower than accesses within the cluster) while "hot" blocks are being copied. Any writes to PV data at the remote site would be considered new data, deduplicated at the remote site, and only available at the remote site. Somewhat surprisingly, all of the PV's data is never copied to the remote system, leaving the PV in a permanent teleported access mode.
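
A rough sketch of that access pattern is below: metadata and hot blocks live locally, and a cold block is fetched from the originating cluster on demand and then cached. Class and method names are ours, purely to illustrate the read path described above.

```python
# Rough sketch of a "teleported" PV read path: metadata and hot blocks are
# local, cold blocks are fetched from the originating cluster on demand and
# cached. Purely illustrative of the access pattern described above.
class OriginCluster:
    def __init__(self, blocks: dict[str, bytes]):
        self.blocks = blocks
    def fetch(self, block_hash: str) -> bytes:
        return self.blocks[block_hash]               # imagine WAN latency here

class TeleportedPV:
    def __init__(self, manifest: list[str], hot_blocks: dict[str, bytes], origin: OriginCluster):
        self.manifest = manifest          # ordered block hashes (copied metadata)
        self.local = dict(hot_blocks)     # pre-copied "hot" blocks
        self.origin = origin              # handle back to the source cluster

    def read_block(self, idx: int) -> bytes:
        h = self.manifest[idx]
        if h not in self.local:                      # cold block: slow remote fetch
            self.local[h] = self.origin.fetch(h)     # then cache it locally
        return self.local[h]

origin = OriginCluster({"h1": b"hot", "h2": b"cold"})
pv = TeleportedPV(["h1", "h2"], {"h1": b"hot"}, origin)
print(pv.read_block(0), pv.read_block(1))            # local hit, then remote fetch
```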

We're not sure we like the implications of teleporting PVs from a data integrity perspective. While it does make for near-instant access to PV data from other clusters and offers a solution to data gravity (it takes forever to move TBs of data across the web), it's incomplete, as the data is never fully copied to the remote site. Once hot blocks have been copied, remote cluster PV access should run faster. But if 20% of the requested blocks are not in the heat map, those IOs will take 100s of msec longer to perform, depending on the wire distance between the sites. And the writes at the remote site cause the two copies of the PV (one at the source site and one at the remote site) to diverge.

Their storage system is priced on a per-data-node basis, which makes it easy to price out their various deployment options. And it works in any standard K8s environment, although Tad admits they haven't tested VMware Tanzu yet, but they have tested it on GCP, Microsoft Azure, AWS, and Red Hat OpenShift.

They offer a fully functional free trial of ionir storage, capped only by the number of data nodes in use. So, if you only need a small amount of storage (ok, 3 data nodes with 24 14TB SSDs each make for a large amount of storage) for your K8s environment, you can probably run forever on the free version.

Tad Lebeck, US CTO, ionir

Tad Lebeck is a global technology executive with over two decades of experience in startups and large vendors. Prior to ionir, he founded and led Nuvoloso, an innovator in Kubernetes data services. Earlier, Lebeck served as CTO at Huawei Symantec Technologies, Vice President at Symantec/Veritas, co-founder/CTO at Invio, and CTO at Legato Systems, where he helped create the modern enterprise data-protection market.

Tad was a founding member of the SNIA Technical Council. He earned an MS/CS from the University of Wisconsin, and a combined MBA from the Columbia, London, and HKU Schools of Business.

106: GreyBeards talk Intel's new HPC file system with Kelsey Prantis, Senior Software Eng. Manager, Intel

We had talked with Intel at Storage Field Day 20 (SFD20), about a month ago. At the virtual event, Intel's focus was on their Optane PMEM (persistent memory) technology. Kelsey Prantis (@kelseyprantis), Senior Software Engineering Manager at Intel, was on the show and gave an introduction to Intel's DAOS (Distributed Asynchronous Object Storage, DAOS.io), a new HPC (high-performance computing, supercomputers) file system they developed from scratch to use leading-edge Intel technologies, and Optane PMEM was one of them.

Kelsey has worked on Lustre and other HPC file systems for a long time now and came into the company with the acquisition of Whamcloud. Currently, she manages the development team working on DAOS. DAOS is a new HPC object storage file system which is completely open source (available on GitHub).

DAOS was designed from the start to take advantage of NVMe SSDs and Optane PMEM. With PMEM, current servers can support up to 20TB of memory. Besides the large memory sizes, Optane PMEM also offers non-volatile memory and byte addressability (just like DRAM). These two characteristics open up new functionality that allows DAOS to move beyond the legacy, block-oriented storage architectures that have been the only storage solution for HPC (and the enterprise) for decades now.

What’s different about DAOS

DAOS uses PMEM for all metadata and for storing small files. HPC IO has always been heavily bandwidth oriented (IO using large blocks), but lately newer applications have emerged, such as AI/ML/DL, data analytics, and others, that use smaller files/blocks. Indeed, most new HPC clusters and supercomputers are deploying almost as many GPUs as CPUs in their configurations to support AI activities.

The problem is that these newer applications typically consume much smaller files. Matt mentioned one HPC client he worked with was processing small batches of seismic data, to predict, in real time, earthquakes that were happening around the world.

By using PMEM for metadata and small files, DAOS can be much more responsive to file requests (open, close, delete, status) as well as provide higher performing IO for small files. All this leads to a much better performing system for the new HPC workloads as well as great sustainable performance for the more traditional large file workloads.

DAOS storage

DAOS provides a cluster storage system that can be configured with from 1 node (no data protection), or more normally a minimum of 3 nodes (with data protection), up to 512 nodes (lab tested). Data protection in DAOS is currently based on mirroring data and can use from 0 up to the number of nodes in a cluster as data mirrors.

DAOS system nodes are homogeneous. That is, they all come with the same amount of PMEM and NVMe SSDs. Note, DAOS doesn't support disk drives. Kelsey mentioned DAOS node hardware can be tailored to suit any particular application environment, but they typically require an average of 6% of overall DAOS system capacity in PMEM for metadata and small file activity.

DAOS currently supports its own API, POSIX, HDF5, MPI-IO, and Apache Spark storage protocols. Kelsey mentioned that standard POSIX uses a pessimistic conflict resolution mode, which leads to performance bottlenecks during parallel access. In contrast, DAOS's version of POSIX uses optimistic conflict resolution, which means DAOS starts writes assuming there's no conflict, but if one occurs it handles the conflict in real time. Of course, with all the metadata byte addressable and in PMEM, this doesn't take up a lot of (IO) time.
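
To illustrate the contrast (and only the contrast; this is not DAOS code), here is a conceptual Python sketch: the pessimistic style locks before every write, while the optimistic style writes assuming no conflict, detects a version mismatch after the fact, and retries only when needed.

```python
# Conceptual contrast of the two conflict-resolution styles (not DAOS code):
# pessimistic = take a lock before every write; optimistic = write assuming no
# conflict, detect a version mismatch afterwards, and retry only when needed.
import threading

class OptimisticRecord:
    def __init__(self, value):
        self.value, self.version = value, 0
        self._guard = threading.Lock()    # only protects the tiny commit step

    def read(self):
        return self.value, self.version

    def try_commit(self, new_value, expected_version) -> bool:
        with self._guard:                 # compare-and-swap style commit
            if self.version != expected_version:
                return False              # someone else wrote first -> caller retries
            self.value, self.version = new_value, self.version + 1
            return True

rec = OptimisticRecord(0)
while True:                               # typical optimistic update loop
    val, ver = rec.read()
    if rec.try_commit(val + 1, ver):      # no lock held while computing val + 1
        break
print(rec.read())                         # (1, 1)
```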

As mentioned earlier, DAOS data protection uses mirror replicas. However, unlike most other major file systems, DAOS mirroring can be done at the object level. DAOS internally is an object store. Data organization on DAOS starts at the pool level, underneath that are data containers, and under those are objects. Any object in DAOS can have its own mirroring configuration. DAOS is working towards supporting erasure coding as another form of data protection in a future release.

DAOS performance

There’s a new storage benchmark that was developed specifically for HPC, called the IO500. The IO500 benchmark simulates a number of different HPC workloads, measures performance for each of them, and computes an (aggregate) performance score to rank HPC storage systems.
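
To our understanding, the published IO500 methodology aggregates results with geometric means (a bandwidth score in GiB/s, a metadata score in kIOP/s, and their geometric mean as the final score). The sketch below shows that style of aggregation with made-up numbers; see io500.org for the authoritative rules.

```python
# Illustration of IO500-style score aggregation with made-up workload results.
from math import prod

def geo_mean(xs: list[float]) -> float:
    return prod(xs) ** (1.0 / len(xs))

bw_results_gib_s = [12.0, 30.0, 8.0, 20.0]       # bandwidth phases (made-up numbers)
md_results_kiops = [150.0, 40.0, 90.0, 60.0]     # metadata phases (made-up numbers)

bw_score = geo_mean(bw_results_gib_s)
md_score = geo_mean(md_results_kiops)
print(round(geo_mean([bw_score, md_score]), 2))  # final aggregate score
```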

IO500 ranks system performance using two lists: one is for any sized configuration, which typically ranges from 50 to 1000s of nodes, and the other limits the configuration to 10 nodes. The first ranking can sometimes be gamed by throwing more hardware into a cluster. The 10-node rankings are much harder to game this way and, from our perspective, show a fairer comparison of system performance.

As presented (virtually) at ISC 2020, DAOS took the top spot on the IO500 any-size configuration list and performed more than 2X better than the next best solution. And on the IO500 10-node list, Intel's DAOS configuration, the Texas Advanced Computing Center (TACC) DAOS configuration, and the Argonne National Labs DAOS configuration took the top 3 spots and had 3X better performance than the next best, non-DAOS storage system.

Argonne National Labs has already stated that they will be using DAOS in their new HPC system, to be deployed in the near future. Early specifications for storage at the new Argonne system required support for 230PB of data and 25TB/sec of bandwidth.

The podcast ran ~43 minutes. Kelsey was great to talk with and very knowledgeable about HPC systems and HPC IO in particular. Matt has worked at Argonne in the past, so he understood these systems better than I did. Sadly, we lost Matt's end of the conversation about halfway into the recording. Both Matt and I thought that DAOS represents the birth of a new generation of HPC storage. Listen to the podcast to learn more.


Kelsey Prantis, Senior Software Engineering Manager, Intel

 Kelsey Prantis heads the Extreme Storage Architecture and Development division at Intel Corporation. She leads the development of Distributed Asynchronous Object Storage (DAOS), an open-source, low-latency and high IOPS object store designed from the ground up for massively distributed Non-Volatile Memory (NVM).

She joined Intel in 2012 with the acquisition of Whamcloud, where she led the development of the Intel Manager for Lustre* product.

Prior to Whamcloud, she was a software developer at personal genomics and biotechnology company 23andMe.

Prantis holds a Bachelor's degree in Computer Science from the Rochester Institute of Technology.

098: GreyBeards talk data protection & visualization for massive unstructured data repositories with Christian Smith, VP Product at Igneous

Sponsored By:

Even before COVID-19 there was a lot of file data being created and mined, but with the advent of the pandemic, this has accelerated considerably. As such, it seemed an appropriate time to talk with Christian Smith, VP of Product at Igneous (@IgneousIO), a company that targets the protection and visibility of massive quantities of unstructured data, on premises, in the cloud, or just about anywhere else it may live.

Let me state at the outset that my belief had always been that you don't backup 10PB of data; rather, you bite the (big expense) bullet to replicate it and hope for the best. After talking with Christian and Igneous, I am going to have to modify that belief by a couple more orders of magnitude.

All this data is coming from LIDAR, RADAR, audio, video, pictures, medical film, MRI/CAT scans, etc., and as noted above, it's exploding. Christian talked about one customer of theirs that supplies aerial photography/LIDAR/RADAR scans of areas on request. This can be used to better understand crop, forest, wildlife, and land health and use. One surprise Igneous found with this customer is that the data is typically archived after first use, but within a month or so it's moved back online for some other purpose.

Igneous heritage

Many of the people who started up and currently work at Igneous have been around file storage for some time, having come primarily from (Dell EMC) Isilon, NetApp, Qumulo, and other industry heavyweights. When they started Igneous, they realized the world didn't need another NAS box or file system. Rather, with the advent of 10-100PB unstructured data farms, what was needed was an effective way to protect and understand that data.

When they considered how to protect and visualize 100PB of unstructured data, the only way they found to do this was to build a scale-out solution that used on-premises and cloud infrastructure and was offered as a service.

Igneous DataProtect solution

With 10PB or 100PB of files located across a gaggle of heterogeneous file servers, with billions of files across ~100s of servers, each of which has ~1K or more file shares, just scanning all the file servers would take weeks, if not longer, and then you need to move the data someplace to protect it. Seems like an impossible task.

Igneous immediately figured out that the first thing they needed was a radically new, scale-out architecture to rapidly scan the file servers. Thus was born ActiveScan. Christian said it was designed to scan a trillion files, and they have customers with a billion files using their service today. ActiveScan doesn't use NFS/SMB/Object (S3) access protocols to talk with file servers; rather, it uses internal APIs to access file metadata. DataProtect currently supports APIs for NetApp, Dell EMC Isilon, Pure FlashBlade, Qumulo, Gluster, Lustre, & GPFS (IBM Spectrum Scale) file systems. They use ActiveScan to build a file index database.
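
To show the shape of the problem, here is a conceptual sketch of a scale-out metadata scan that walks many shares in parallel and builds a central file index. ActiveScan uses each filer's internal APIs rather than filesystem walks, so treat the generic os.walk below and the share paths as stand-ins.

```python
# Conceptual sketch of a scale-out metadata scan: walk many file shares in
# parallel and build a central file index. ActiveScan uses the filers'
# internal APIs rather than NFS/SMB walks; this only illustrates the idea.
import os
from concurrent.futures import ThreadPoolExecutor

def scan_share(share_path: str) -> list[dict]:
    index = []
    for root, _dirs, files in os.walk(share_path):
        for name in files:
            full = os.path.join(root, name)
            try:
                st = os.stat(full)
            except OSError:
                continue                    # file vanished or is inaccessible
            index.append({"path": full, "size": st.st_size, "mtime": st.st_mtime})
    return index

shares = ["/mnt/filer1/share_a", "/mnt/filer2/share_b"]   # placeholder share paths
with ThreadPoolExecutor(max_workers=8) as pool:
    file_index = [entry for result in pool.map(scan_share, shares) for entry in result]
print(len(file_index), "files indexed")
```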

Their other major concern was how to move PBs of data rapidly across to the cloud and other locations. Again they created a scale-out, multi-threaded service to do this and also made use of internal APIs rather than standard file or object protocols. This became IntelliMove. That same customer above with billions of files has 6PB of file data to protect.

Normal data movement is fine for largish files but bogs down with lots of small files or extremely large files to back up. DataProtect gathers small files together into large chunks and splits up extremely large files into smaller chunks, then moves these chunks to secondary storage.
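
Here is a toy sketch of that chunking idea: many small files get packed into one large chunk, while an extremely large file is split across several chunks, so the data mover always ships similarly sized objects. The chunk size and planning logic are our own illustration, not Igneous's code.

```python
# Toy sketch of the chunking idea: pack many small files into one large chunk
# and split an extremely large file into several chunks. Not Igneous code.
CHUNK_TARGET = 64 * 1024 * 1024          # e.g., 64MB chunks (made-up size)

def plan_chunks(files: list[tuple[str, int]]) -> list[list[tuple[str, int, int]]]:
    """files = [(name, size)] -> chunks, each a list of (name, offset, length)."""
    chunks, current, used = [], [], 0
    for name, size in files:
        offset = 0
        while size - offset > 0:
            take = min(size - offset, CHUNK_TARGET - used)
            current.append((name, offset, take))
            offset += take
            used += take
            if used == CHUNK_TARGET:     # chunk is full: start a new one
                chunks.append(current)
                current, used = [], 0
    if current:
        chunks.append(current)
    return chunks

plan = plan_chunks([("tiny.txt", 1_000)] * 5000 + [("huge.mov", 500 * 1024 * 1024)])
print(len(plan), "chunks to move")
```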

Data expiration is another problem, especially when you chunk files together. Here they came up with an intelligent garbage collection algorithm which only reclaims free space when it makes the most sense, but removes access to the data at the time of expiration.

DataProtect uses a cloud-based, SaaS control plane that manages and coordinates its activities across data centers, sites, and cloud instances. It also has a client VM (OVA, with 8-core CPU, 32GB DRAM, ~100MB) that runs in the customer's infrastructure, on site, in CoLos, or in the cloud, and is used to scan, move, and protect customer unstructured data. If more scan and data movement performance is needed, the VM can spawn additional threads automatically, and more VMs can be added to provide even more throughput.

DataDiscover solution

The other service that Igneous offers is DataDiscover, a data visualization tool. DataDiscover uses ActiveScan and its database to provide customers a way to understand the file data that resides in their massive unstructured data farms across the data center, cloud, or wherever else it resides.

We didn't discuss this solution as much, but having a way to better understand the files in a 10-100PB unstructured data farm could be very useful and a great way to keep that 100PB from growing to 1EB faster than it has to.

As part of their outreach to the world, Igneous is giving away free DataProtect services to organizations that are focused on COVID-19 research. Check out their offer here.

The podcast ran ~24 minutes. Christian was extremely knowledgeable about the problems that happen with very large unstructured data farms and how Igneous solutions can provide a better way to protect and visualize that data. Matt and I had a fun time discussing Igneous’s approach with Christian. Listen to the podcast to learn more.

Christian Smith, VP Product at Igneous

Christian is VP of Product, responsible for product management, solutions, and customer success. Prior to Igneous, Christian spent 15 years running field engineering organizations at EMC, Isilon Systems, NetApp and Silicon Graphics.

Christian has been working with organizations that work with file data since working at Silicon Graphics. Before that Christian was co-founder of a small management consulting company associated with Y2K and deregulation.

Christian received dual bachelor’s degrees in Chemistry and Computer Science from the University of Missouri-Columbia. Christian is an avid camper, skier and traveler and has long since traveled through all of the continental 48 states.

91: Keith and Ray show at CommvaultGO 2019

There was a lot of news at CommvaultGO this year, and it was our first chance to talk with their new CEO, Sanjay Mirchandani. Just prior to the show, Commvault introduced a new SaaS backup offering for the mid-market, Metallic™, and about a month or so prior to the show, Commvault had acquired Hedvig, a software-defined storage solution. Keith and I also participated in a TechFieldDay Exclusive (TFDx) for Commvault the day before the show began.

First up is Metallic, a Commvault Venture. When Sanjay arrived, he took a worldwide tour of Commvault offices and customers and came back saying they needed a Software-as-a-Service backup offering to go after the mid-market. That was about 6 months ago, and since then they have spun up a development and marketing team and today delivered their first product.

Metallic has three offerings all based on Commvault technology but re-implemented to be simpler to use and operate in the cloud.

  1. Metallic Core Backup & Recovery, which is targeted at virtualized server environments, whether on premises or in the cloud. It covers backup and recovery for VMware vSphere, Microsoft Hyper-V & KVM VMs, SQL Server, and file servers running on Windows or Linux.
  2. Metallic Office 365 Backup & Recovery, which is targeted at Office 365 solutions and provides backup and recovery solutions for these customer environments.
  3. Metallic Endpoint Backup & Recovery, which is focused on desktop and laptop users and provides backup and recovery for those end-user environments.

Metallic operates in its own cloud environment (believed to be Microsoft Azure), and it's a bring-your-own-cloud secondary storage solution, with an option to use Metallic cloud storage as secondary storage.

At the moment, Metallic is only offered to US-based organizations and purchased through Commvault channel partners. However, the free (we believe 45-day) trial can be downloaded and purchased without the channel.

Pricing for Core Backup & Recovery is based on TB/month, and pricing for the other two Metallic offerings is based on user seats/month. There doesn't seem to be any retention limit for the Office 365 and Endpoint products. The Core Backup product's data retention is only limited by the TBs that are licensed.

Next up is Commvault Activate™. This product was announced at last year's GO conference, but neither Keith nor I took note. Activate is a data management solution using Commvault backup storage that provides three capabilities: File Storage Optimization, which identifies files that are suitable for archive; Sensitive Data Governance, which profiles and IDs sensitive data in files and provides governance; and Compliance Search & eDiscovery, which can be used to place legal holds and create review sets for legal and other compliance activities.

And then there's Hedvig, a Commvault Venture. At the show there was much talk about the Data Brain as having two sides: one for the management of data protection and the other for the management of storage. What Commvault plans to do over the next few years is to deliver a unified storage and protection Data Brain that supports both of these sides. During the TFD sessions there was quite a lot of chatter, on Twitter and otherwise, about whether customers would ever be willing to have both primary and secondary storage on the same system, or have both controlled by the same data plane. Commvault isn't the only vendor to have gone down this path. We will need to wait and see how customers react.

The podcast is ~23 minutes. As mentioned previously, Keith is a long time friend and co-host of our GreyBeards On Storage podcast. He always has an interesting perspective on how new technology can benefit the data center today. Listen to the podcast to learn more.

Keith Townsend, The CTO Advisor

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations.

Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.