90: GreyBeards talk K8s containers storage with Michael Ferranti, VP Product Marketing, Portworx

At VMworld 2019 USA there was a lot of talk about integrating Kubernetes (K8s) into vSphere’s execution stack and operational model. We had heard that Portworx was a leader in K8s storage services and persistent volume support, and thought it might be instructive to hear from Michael Ferranti (@ferrantiM), VP of Product Marketing at Portworx, about just what they do for K8s container apps and their need for state information.

Early on, Michael worked for Rackspace on their SaaS team and over time saw how developers and system engineers just loved container apps. But they had great difficulty using them for mission critical applications, and containers at the time completely lacked storage support. Michael joined Portworx to help address these and other limitations in using containers for mission critical workloads.

Portworx is essentially a SAN designed specifically for containers. It’s a software defined storage system that creates a cluster of storage nodes across K8s clusters and provides standard storage services at container-level granularity.

As a software defined storage system, Portworx sits right in the middle of the data path, so it must provide high availability, RAID protection and other standard storage system capabilities. But we talked only a little about basic storage functionality on the podcast.

Portworx was designed from the start for containers, so it can easily handle provisioning and de-provisioning 100s to 1000s of volumes without breaking a sweat. Not many storage systems, software defined or not, can handle this level of operations without impacting storage services.

Portworx supports both synchronous and asynchronous (snapshot based) replication. As with all synchronous replication, write performance depends on how far apart the storage nodes are, but it can provide RPO=0 (recovery point objective) for mission critical container applications.
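As a rough illustration of why distance matters (our own back-of-the-envelope numbers, not Portworx figures), here's a quick calculation: light in fiber travels roughly 0.2 km/µs, so every kilometer between nodes adds about 10 µs of round-trip time to each synchronous write, before any switching or queueing overhead.

```python
# Back-of-the-envelope latency cost of synchronous replication.
# Assumes ~200,000 km/s signal speed in fiber; real networks add
# switching/queueing overhead on top of this lower bound.

def sync_write_penalty_us(distance_km: float) -> float:
    """Minimum added round-trip latency (µs) per replicated write."""
    speed_km_per_us = 0.2  # ~200,000 km/s = 0.2 km/µs in fiber
    return 2 * distance_km / speed_km_per_us  # out and back

for km in (1, 10, 100):
    print(f"{km:>4} km apart -> at least {sync_write_penalty_us(km):.0f} µs per write")
```

At 100 km of separation, that's a millisecond per write just from physics, which is why RPO=0 replication is usually restricted to metro distances.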

Portworx takes this a step beyond just data replication. They also replicate container configuration (YAML) files. We’re no experts, but YAML files encapsulate everything needed to run containers and container apps in a K8s cluster. When one combines replicated container YAML files, replicated persistent volume data AND an appropriate external registry, one can start running your mission critical container apps at a disaster site in minutes.
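For readers unfamiliar with these files, here's a minimal, illustrative example (names are made up, and this is generic K8s rather than anything Portworx-specific) of the kind of YAML that declares a persistent volume claim for a container app. Replicating files like this alongside the volume data itself is what lets a DR site rebuild the app:

```yaml
# Illustrative K8s persistent volume claim; names are hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: px-replicated   # storage class served by the storage provider
  resources:
    requests:
      storage: 50Gi
```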

Their asynchronous replication for container data and configuration files uses Portworx snapshots, which are sent to an alternate site. But they also support async replication to any S3 compatible storage via CloudSnap.

Portworx also supports KubeMotion, which replicates/copies name spaces, container app volume data and container configuration YAML files from one K8s cluster to another. This way customers can move their K8s namespaces and container apps to any other Portworx K8s cluster site. This works across on prem K8s clusters, cloud K8s clusters, between public cloud provider K8s clusters, or between on prem and cloud K8s clusters.

Michael also mentioned that data at rest encryption, for Portworx, is merely a tick box on a storage class specification in the container’s YAML file. They make use of KMIP services to provide customer generated keys for encryption.
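If we understood Michael correctly, that "tick box" looks something like the storage class parameter below. This is our best-guess sketch, so check the Portworx docs for the exact parameter names and provisioner string:

```yaml
# Illustrative storage class with encryption enabled; verify the
# exact parameter names against current Portworx documentation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-secure
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"          # three-way replication
  secure: "true"     # the "tick box" that turns on data-at-rest encryption
```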

This is all offered as part of their Data Security/Disaster Recovery (DSDR) service, which supports any K8s cluster service, whether AWS, Azure, GCP, OpenShift, bare metal, or VMware vSphere running K8s VMs.

Like any software defined storage system, customers needing more performance can add nodes to the Portworx (and K8s) cluster, or add more/faster storage to speed up IO.

It appears they have most if not all the standard storage system capabilities covered but their main differentiator, besides container app DR, is that they support volumes on a container by container basis. Unlike other storage systems that tend to use a VM or higher level of granularity to contain container state information, with Portworx, each persistent volume in use by a container is mapped to a provisioned volume.

Michael said their focus from the start was to provide high performing, resilient and secure storage for container apps. They ended up with a K8s native storage and backup/DR solution to support mission critical container apps running at scale. Licensing for Portworx is on a per host (K8s node basis).

The podcast ran long, ~48 minutes. Michael was easy to talk with, knew K8s and their technology/market very well. Matt and I had a good time discussing K8s and Portworx’s unique features made for K8s container apps. Listen to the podcast to learn more.


Michael Ferranti, VP of Product Marketing, Portworx

Michael (@ferrantiM) is VP of Product Marketing at Portworx, where he is responsible for communicating the value of containerization and digital transformation to global architects and CIOs.

Prior to joining Portworx, Michael was VP of Marketing at ClusterHQ, an early leader in the container storage market, and spent five years at Rackspace in a variety of product and marketing roles.

84: GreyBeards talk ultra-secure NAS with Eric Bednash, CEO & Co-founder, RackTop Systems

We were at a recent vendor conference where Steve Foskett (@SFoskett) introduced us to Eric Bednash (@ericbednash), CEO & Co-Founder, RackTop Systems. They have taken ZFS and made it run as an ultra-secure NAS system. Matt Leib, my co-host for this episode, has on-the-job experience with ZFS and was a great co-host for this episode.

It turns out that Eric and his CTO (and perhaps other RackTop employees) have extensive experience with intelligence and other government agencies that depend on data security. These agencies deal with cyber security threats an order of magnitude larger than what corporations see.

All that time in intelligence gave Eric a unique perspective on what it takes to build secure, bullet proof NAS systems. Nine years or so ago, he and his CTO, took OpenZFS (and OpenSolaris) and used it as the foundation for their new highly available and ultra-secure NAS system.

Most storage systems support user access data protection based on authorization. If a user is authorized to see/write data, they have unrestricted access to the data. Perhaps if an organization is paranoid, they might also use data at rest encryption. But RackTop takes all this to a whole other level.

Data security to the Nth degree

RackTop offers dual encryption for data at rest. Most organizations would say single encryption’s enough: the data’s encrypted, so how would another level of encryption make it more secure?

It all depends on how one secures keys (and, just my thoughts here, maybe on how easily quantum computing could decrypt singly encrypted data). So RackTop Systems uses self encrypting drives (1st level of encryption) as well as software encryption (2nd level of encryption). Each has its own unique keys, which RackTop can maintain either in their own system or in a KMIP service provided by the data center.
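To make the layering idea concrete, here's a toy sketch (ours, not RackTop code, and deliberately using a simplistic stdlib-only cipher rather than production crypto) of why two independently held keys matter: stripping one layer still leaves ciphertext, so leaking the drive key or the KMIP-held software key alone isn't enough.

```python
# Toy two-layer encryption with independently held keys (stdlib only).
# NOT production crypto -- just illustrates the dual-encryption concept.
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Simple SHA-256 counter-mode keystream (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

drive_key = os.urandom(32)     # e.g. held by the self-encrypting drive
software_key = os.urandom(32)  # e.g. held in a KMIP service
nonce = os.urandom(16)

plaintext = b"mission critical record"
layer2 = xor_crypt(software_key, nonce, plaintext)  # software encryption
layer1 = xor_crypt(drive_key, nonce, layer2)        # drive-level encryption

# Both keys are required to recover the plaintext.
recovered = xor_crypt(software_key, nonce, xor_crypt(drive_key, nonce, layer1))
assert recovered == plaintext
```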

They also supply user profiling. User data access can be profiled with a dataset heat map and other statistical/logging information. When users go outside their usual access profiles, it may signal a security breach. At the moment, when this happens RackTop notifies security administrators, but Eric mentioned a future release will have the option to automatically shut that user down.
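A hedged sketch of the profiling idea (our simplification, not RackTop's algorithm, which presumably uses richer heat-map statistics): build a per-user baseline of dataset accesses, then flag activity that falls outside it.

```python
# Simplified sketch of access-profile anomaly flagging; RackTop's
# actual heat-map/statistical approach is surely more sophisticated.
from collections import Counter

class AccessProfile:
    def __init__(self):
        self.baseline = Counter()  # dataset path -> historical access count

    def record(self, dataset: str) -> None:
        """Log a normal access into the user's baseline profile."""
        self.baseline[dataset] += 1

    def is_anomalous(self, dataset: str, min_history: int = 10) -> bool:
        """Flag access to a dataset this user has never touched, once
        enough history exists to make the profile meaningful."""
        total = sum(self.baseline.values())
        return total >= min_history and self.baseline[dataset] == 0

profile = AccessProfile()
for _ in range(20):
    profile.record("/projects/alpha")        # the user's usual working set

print(profile.is_anomalous("/projects/alpha"))  # within profile
print(profile.is_anomalous("/hr/payroll"))      # outside profile -> notify admins
```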

And with all the focus on GDPR and similar regulations coming to a state near you, having user access profiles and access logs can easily satisfy any regulatory auditing requirements.

Eric said that any effective security has to be multi-layered. With RackTop, their multi-layer approach goes way beyond just data-at-rest encryption and user access authentication. RackTop also offers appliance hardware sourced from secure supply chains and manufactured inside secured facilities. They have also modified and hardened their OpenSolaris-based OS against cyber threats.

RackTop even supports cloud tiering with an internally developed secure data mover. Their data mover can securely migrate data (retaining meta-data on their system) to any S3 compatible object storage.

As proof of the security available from a RackTop NAS system, an unnamed US government agency had a “red-team” attack their storage. Although Eric shared only a few details on what the red-team attempted, he did say the RackTop NAS survived the assault without a security breach.

He also mentioned that they are trying to create a Zero Trust storage environment. Zero Trust implies constant verification and authentication: rather than relying on one-time login credentials, users re-authenticate every time they access data. Eric didn’t say when, if ever, they’d reach this level of security, but it’s a clear indication of the direction for their products.

ZFS based NAS system

A RackTop NAS supplies a ZFS-based file system. As such, it inherits all the features and advanced functionality of OpenZFS, but within a more secure, hardened and highly available storage system.

ZFS has historically had issues with usability and its multiplicity of tuning knobs. RackTop has worked hard to make ZFS easier to operate and removed much of the manual tuning required to make it perform well.

The podcast ran long, ~44 minutes. We spent most of our time talking about security and less on the storage functionality of the RackTop NAS. The security of RackTop systems takes some getting used to, but the need exists today and not many storage systems implement security quite to their level. Much of what RackTop does to improve data security blew Matt and me away. Eric is a very smart security expert in addition to being a storage vendor CEO. Listen to the podcast to learn more.

Eric Bednash, CEO & Co-founder, RackTop Systems

Eric Bednash is the co-founder and CEO of RackTop Systems, the pioneer of CyberConverged™ data security, a new market that fuses data storage with advanced security and compliance into a single platform.

A serial entrepreneur and innovator, Bednash has more than 20 years of experience in solving the most complex and challenging data problems through designing products and solutions for the U.S. Intelligence Community and commercial enterprises.

Bednash co-founded RackTop in 2010 with partner and current CTO Jonathan Halstuch. Prior to co-founding RackTop, he served as co-founder and CTO of a mid-sized consulting firm, focused on developing mission data systems within the Department of Defense and U.S. intelligence communities.

Bednash started his professional career in data center systems at Time-Warner, and spent the better part of the dot-com boom in the Washington, D.C. area connecting businesses to the internet. His career path began while still in high school, where Bednash contracted with small businesses and individuals to write software and build computers.

Bednash attended Rochester Institute of Technology and Penn State University, and completed both undergrad and graduate coursework in Business and Technology Management at Stevenson University. A Forbes Technology Council member, he regularly hosts thought leadership & technology video blogs, and is a technology writer and speaker. He is a multi-instrument musician, recreational athlete and a die-hard Pittsburgh Steelers fan. He currently resides in Fulton, Md. with his wife Laura and two children.

80: Greybeards talk composable infrastructure with Tom Lyon, Co-Founder/Chief Scientist and Brian Pawlowski, CTO, DriveScale

We haven’t talked with Tom Lyon (@aka_pugs) or Brian Pawlowski before on our show but both Howard and I know Brian from his prior employers. Tom and Brian work for DriveScale, a composable infrastructure software supplier.

There’s been a lot of press lately on NVMeoF, and the GreyBeards thought it would be a good time to hear about another way to supply DAS-like performance and functionality. Tom and Brian have been around long enough to qualify as greybeards in their own right.

The GreyBeards have heard of composable infrastructure before, but that was based on PCIe switching hardware and limited to a rack or less of hardware. DriveScale is working with large enterprises and their data centers full of hardware.

Composable infrastructure has many definitions but the one DriveScale probably prefers is that it manages resource pools of servers and storage, that can be combined, per request, to create any mix of servers and DAS storage needed by an application running in a data center. DriveScale is targeting organizations that have from 1K to 10K servers with from 10K to 100K disk drives/SSDs.

Composable infrastructure for large enterprises

DriveScale provides large data centers the flexibility to better support workloads and applications that change over time. That is, these customers may, at one moment, be doing big data analytics on PBs of data using Hadoop, and the next, MongoDB or other advanced solution to further process the data generated by Hadoop.

In these environments, having standard servers with embedded DAS infrastructure may be overkill and cost too much. For example, because one has no way to reconfigure (1000) servers’ storage for each application that comes along without exerting lots of person-power, enterprises typically over provision storage for those servers, which leads to higher expense.

But if one had software that could configure 1 logical server or 10,000 logical servers, with the computational resources, DAS disks/SSDs, or NVMe SSDs needed to support a specific application, then enterprises could reduce their server and storage expense while at the same time providing applications with all the necessary hardware resources.

When that application completes, all those hardware resources could be returned back to their respective pools and used to support the next application to be run. It’s probably not that useful when an enterprise only runs one application at a time, but when you have 3 or more running at any instant, then composable infrastructure can reduce hardware expenses considerably.

DriveScale composable infrastructure

DriveScale is a software solution that manages three types of resources: servers, disk drives, and SSDs over high speed Ethernet networking. SAS disk drives and SAS SSDs are managed in an EBoD/EBoF (Ethernet (iSCSI to SAS) bridge box) and NVMe SSDs are managed using JBoFs and NVMeoF/RoCE.

DriveScale uses standard (RDMA enabled) Ethernet networking to compose servers and storage to provide DAS like/NVMe like levels of response times.

DriveScale’s composer orchestrator self-discovers all the hardware resources in a data center that it can manage. It uses an API to compose logical servers from the server, disk and SSD resources under its control, throughout the data center.

Using Ethernet switching any storage resource (SAS disk, SAS SSD or NVMe SSD) can be connected to any server operating in the data center and be used to run any application.
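The compose/return-to-pool lifecycle described above can be sketched as follows. This is a hypothetical toy model of the idea, not DriveScale's real API or resource model:

```python
# Hypothetical sketch of composable infrastructure pooling;
# DriveScale's actual API and resource model will differ.
class Pools:
    def __init__(self, servers, drives):
        self.servers, self.drives = list(servers), list(drives)  # free pools

    def compose(self, n_servers: int, drives_per_server: int) -> dict:
        """Carve a logical cluster out of the free resource pools."""
        need = n_servers * drives_per_server
        if n_servers > len(self.servers) or need > len(self.drives):
            raise RuntimeError("insufficient free resources")
        return {
            "servers": [self.servers.pop() for _ in range(n_servers)],
            "drives": [self.drives.pop() for _ in range(need)],
        }

    def release(self, logical: dict) -> None:
        """Return resources to the pools when the application finishes."""
        self.servers.extend(logical["servers"])
        self.drives.extend(logical["drives"])

pools = Pools(servers=range(100), drives=range(1000))
cluster = pools.compose(n_servers=10, drives_per_server=8)  # 10 servers, 80 drives
print(len(pools.servers), len(pools.drives))  # shrunk free pools
pools.release(cluster)                        # everything back for the next app
```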

There’s a lot more to DriveScale software. They don’t sell hardware, but have a number of system integrators (like Dell) that sell their own hardware and supply DriveScale software to run a data center.

The podcast runs ~44 minutes. The GreyBeards could have talked with Tom and Brian for hours, and Brian’s very funny. They were extremely knowledgeable and have been around the IT industry almost since the beginning of time. They certainly changed the definition of composable infrastructure for both of us, which is hard to do. Listen to the podcast to learn more.

Tom Lyon, Co-Founder and Chief Scientist

Tom Lyon is a computing systems architect, a serial entrepreneur and a kernel hacker.

Prior to founding DriveScale, Tom was founder and Chief Scientist of Nuova Systems, a start-up that led a new architectural approach to systems and networking. Nuova was acquired in 2008 by Cisco, whose highly successful UCS servers and Nexus switches are based on Nuova’s technology.

He was also founder and CTO of two other technology companies. Netillion, Inc. was an early promoter of memory-over-network technology. At Ipsilon Networks, Tom invented IP Switching. Ipsilon was acquired by Nokia and provided the IP routing technology for many mobile network backbones.

As employee #8 at Sun Microsystems, Tom was there from the beginning, where he contributed to the UNIX kernel, created the SunLink product family, and was one of the NFS and SPARC architects. He started his Silicon Valley career at Amdahl Corp., where he was a software architect responsible for creating Amdahl’s UNIX for mainframes technology.

Brian Pawlowski, CTO

Brian Pawlowski is a distinguished technologist, with more than 35 years of experience in building technologies and leading teams in high-growth environments at global technology companies such as Sun Microsystems, NetApp and Pure Storage.

Before joining DriveScale as CTO, Brian served as vice president and chief architect at Pure Storage, where he focused on improving the user experience for the all-flash storage platform provider’s rapidly growing customer base. He also was CTO at storage pioneer NetApp, which he joined as employee #18.

Brian began his career as a software engineer for a number of well-known technology companies. Early in his days as a technologist, he worked at Sun, where he drove the technical analysis and discussion on alternate file systems technologies. Brian has also served on the board of trustees for the Anita Borg Institute for Women and Technology as well as a member of the board at the Linux Foundation.

Brian studied computer science at Arizona State University, physics at the University of Texas at Austin, as well as physics at MIT.

75: GreyBeards talk persistent memory IO with Andy Grimes, Principal Technologist, NetApp

Sponsored By:  NetApp
In this episode we talk new persistent memory IO technology with Andy Grimes, Principal Technologist, NetApp. Andy presented at the NetApp Insight 2018 TechFieldDay Extra (TFDx) event (video available here). If you get a chance, we encourage you to watch the videos, as Andy did a great job describing their new MAX Data persistent memory IO solution.

The technology for MAX Data came from NetApp’s Plexistor acquisition. Prior to the acquisition, Plexistor had also presented at SFD9 and TFD11.

Unlike NVMeoF storage systems, MAX Data is not sharing NVMe SSDs across servers. What MAX Data does is supply an application-neutral way to use persistent memory as a new, ultra fast, storage tier together with a backing store.

MAX Data performs a write or an “active” (Persistent Memory Tier) read in single digit µseconds for a single core/single thread server. Their software runs in user space, and as such, for multi-core servers, it can take up to 40 µseconds. Access times for backend storage reads are the same as NetApp AFF, but once read, data is automatically promoted to persistent memory, and while there, reads are ultra fast.

One of the secrets of MAX Data is that they have completely replaced the Linux POSIX file IO stack with their own software. Their software is streamlined and bypasses much of the overhead present in today’s Linux file stack. For example, MAX Data doesn’t support metadata journaling.

MAX Data works with many different types of (persistent) memory, including DRAM (non-persistent memory), NVDIMMs (DRAM+NAND persistent memory) and Optane DIMMs (Intel 3D XPoint memory, slated to be GA by the end of this year). We suspect it would work with anyone else’s persistent memory as soon as it comes on the market.

Even though the (Optane and NVDIMM) memory is persistent, server issues can still lead to access loss. In order to provide data availability for server outages, MAX Data also supports MAX Snap and MAX Recovery. 

With MAX Snap, MAX Data will upload all persistent memory data to ONTAP backing storage and ONTAP snapshot it. This way you have a complete version of MAX Data storage that can then be backed up or SnapMirrored to other ONTAP storage.

With MAX Recovery, MAX Data will synchronously replicate persistent memory writes to a secondary MAX Data system. This way, if the primary MAX Data system goes down, you still have an RPO-0 copy of the data on another MAX Data system that can be used to restore the original data, if needed. Synchronous mirroring will add 3-4  µseconds to the access time for writes, quoted above.

Given the extreme performance of MAX Data, it’s opening up whole new set of customers to talking with NetApp. Specifically, high frequency traders (HFT) and high performance computing (HPC). HFT companies are attempting to reduce their stock transactions access time to as fast as humanly possible. HPC vendors have lots of data and processing all of it in a timely manner is almost impossible. Anything that can be done to improve throughput/access times should be very appealing to them.

To configure MAX Data, one uses a 1:25 ratio of persistent memory capacity to backing store. MAX Data also supports multiple LUNs.

MAX Data only operates on Linux and supports (IBM) Red Hat and CentOS, but Andy said it’s not that difficult to add support for other Linux distros, and customers will dictate which others are supported over time.

As discussed above, MAX Data works with NetApp ONTAP storage, but it also works with SSD/NVMe SSDs as backend storage. In addition, MAX Data has been tested with NetApp HCI (with SolidFire storage, see our prior podcasts on NetApp HCI with Gabriel Chapman and Adam Carter) as well as E-Series storage. The Plexistor application has been already available on AWS Marketplace for use with EC2 DRAM and EBS backing store. It’s not much of a stretch to replace this with MAX Data.

MAX Data is expected to be GA released before the end of the year.

A key ability of the MAX Data solution is that it requires no application changes to use persistent memory for ultra-fast IO. This should help accelerate persistent memory adoption in data centers when the hardware becomes more available. Speaking to that, at Insight2018, Lenovo, Cisco and Intel were all on stage when NetApp announced MAX Data.

The podcast runs ~25 minutes. Andy’s an old storage hand (although no grey beard) and talks the talk, walks the walk of storage religion. Andy is new to TFD but we doubt it will be the last time we see him there. Andy was very conversant on the MAX Data technology and the market that it apparently is opening up for this new technology.  Listen to our podcast to learn more.

Andy Grimes, Principal Technologist, NetApp

Andy has been in the IT industry for 17 years, working in roles spanning development, technology architecture, strategic outsourcing and healthcare.

For the past 4 years Andy has worked with NetApp on taking the NetApp Flash business from #5 to #1 in the industry (according to IDC). During this period NetApp also became the fastest growing Flash and SAN vendor in the market and regained leadership in the Gartner quadrant.

Andy also works on NetApp’s product vision, competitive analysis and future technology direction, and works with the team bringing the MAX Data PMEM product to market.

Andy has a BS degree in psychology, a BPA in management information systems, and an MBA. He currently works as a Principal Technologist for the NetApp Cloud Infrastructure Business Unit with a focus on PMEM, HCI and Cloud Strategy. Andy lives in Apex, NC with his beautiful wife and has 2 children, a 4 year old and a 22 year old (yes, don’t let this happen to you). For fun Andy likes to mountain bike, rock climb, hike and scuba dive.

73: GreyBeards talk HCI with Gabriel Chapman, Sr. Mgr. Cloud Infrastructure NetApp

Sponsored by: NetApp

In this episode we talk HCI with Gabriel Chapman (@Bacon_Is_King), Senior Manager, Cloud Infrastructure, NetApp. Gabriel presented at the NetApp Insight 2018 TechFieldDay Extra (TFDx) event (video available here). Gabriel also presented last year at the VMworld 2017 TFDx event (video available here). If you get a chance, we encourage you to watch the videos, as Gabriel did a great job providing some design intent and descriptions of NetApp HCI capabilities. Our podcast was recorded after the TFDx event.

NetApp HCI consists of NetApp SolidFire storage re-configured as a small enterprise class AFA storage node occupying one blade of a four blade system, where the other three blades are dedicated compute servers. NetApp HCI runs VMware vSphere but uses enterprise class iSCSI storage supplied by the NetApp SolidFire AFA.

On our podcast, we talked a bit about SolidFire storage. It’s not well known, but the 1st few releases of SolidFire (before the NetApp acquisition) didn’t have a GUI and were entirely dependent on its API/CLI for operations. That heritage continues today, as the NetApp HCI management console is basically a front end GUI for NetApp HCI API calls.

Another advantage of SolidFire storage was its extensive QoS support, which included state of the art service credits as well as service limits. All that QoS sophistication is also available in NetApp HCI, so that customers can more effectively limit noisy neighbor interference on HCI storage.
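To give a feel for how service credits and limits interact, here's a simplified sketch (ours, not the SolidFire implementation, which is certainly more elaborate): a volume running below its max IOPS limit banks credits it can later spend to burst above that limit, up to a burst cap.

```python
# Simplified sketch of max/burst QoS with service credits;
# the SolidFire/NetApp HCI implementation is more elaborate.
class VolumeQoS:
    def __init__(self, max_iops: int, burst_iops: int, credit_cap: int):
        self.max_iops = max_iops        # steady-state limit
        self.burst_iops = burst_iops    # hard ceiling when bursting
        self.credits = 0                # banked unused headroom
        self.credit_cap = credit_cap

    def allowed_iops(self, requested: int) -> int:
        """Per-second admission: headroom below max accrues credits,
        which can later be spent to burst above max (up to burst_iops)."""
        if requested <= self.max_iops:
            self.credits = min(self.credit_cap,
                               self.credits + (self.max_iops - requested))
            return requested
        spend = min(requested, self.burst_iops) - self.max_iops
        spend = min(spend, self.credits)
        self.credits -= spend
        return self.max_iops + spend

qos = VolumeQoS(max_iops=1000, burst_iops=2000, credit_cap=5000)
print(qos.allowed_iops(400))   # quiet second: banks 600 credits
print(qos.allowed_iops(1800))  # bursts above max by spending credits
print(qos.allowed_iops(1800))  # credits exhausted: capped at max
```

The service-limit side of this (capping a noisy neighbor at `max_iops`) is what keeps one tenant's burst from starving everyone else on the same HCI storage.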

Although NetApp HCI runs VMware vSphere as its preferred hypervisor, it’s also possible to run other hypervisors in bare metal clusters with NetApp HCI storage and compute servers. In contrast to other HCI solutions, with NetApp HCI, customers can run different hypervisors, all at the same time, sharing access to NetApp HCI storage.

On our podcast and the Insight TFDx talk, Gabriel mentioned some future deliveries and roadmap items such as:

  • Extending NetApp HCI hardware with a new low-end, 2U configuration designed specifically for RoBo and SMB customers;
  • Adding NetApp Cloud Volume support so that customers can extend their data fabric out to NetApp HCI; and
  • Adding (NFS) file services support so that customers using NFS data stores /VVols could take advantage of NetApp HCI storage.

Another thing we discussed was the new HCI development cadence. In the past they typically delivered new functionality about once a year. With the new development cycle, they’re able to deliver functionality much faster, and have settled on a 2 releases/year cycle, which seems about as quickly as their customer base can adopt new functionality.

The podcast runs ~22 minutes. We apologize for any quality issues with the audio. It was recorded at the show and we were novices with the onsite recording technology. We promise to do better in the future. Gabriel has almost become a TFDx regular these days and provides a lot of insight on both NetApp HCI and SolidFire storage.  Listen to our podcast to learn more.

Gabriel Chapman, Senior Manager, Cloud Infrastructure, NetApp

Gabriel is the Senior Manager for NetApp HCI Go to Market. Today he is mainly engaged with NetApp’s top tier customers and partners with a primary focus on Hyper Converged Infrastructure for the Next Generation Data Center.

As a 7 time vExpert that transitioned into the vendor side after spending 15 years working in the end user Information Technology arena, Gabriel specializes in storage and virtualization technologies. Today his primary area of expertise revolves around storage, data center virtualization, hyper-converged infrastructure, rack scale/hyper scale computing, cloud, DevOps, and enterprise infrastructure design.

Gabriel is a Prime Mover, Technologist, Unapologetic Randian, Social Media Junky, Writer, Bacon Lover, and Deep Thinker, whose goal is to speak truth on technology and make complex ideas sound simple. In his free time, Gabriel is the host of the In Tech We Trust podcast and enjoys blogging as well as public speaking.

Prior to joining SolidFire, Gabriel was a storage technologies specialist covering the United States with Cisco, focused on the Global Service Provider customer base. Before Cisco, he was part of the go-to-market team at SimpliVity, where he concentrated on crafting the customer facing messaging, pre-sales engagement, and evangelism efforts for the early adopters of Hyper Converged Infrastructure.