90: GreyBeards talk K8s containers storage with Michael Ferranti, VP Product Marketing, Portworx

At VMworld 2019 USA there was a lot of talk about integrating Kubernetes (K8s) into vSphere’s execution stack and operational model. We had heard that Portworx was a leader in K8s storage services and persistent volume support, and thought it might be instructive to hear from Michael Ferranti (@ferrantiM), VP of Product Marketing at Portworx, about just what they do for K8s container apps and their need for state information.

Early on, Michael worked at Rackspace on their SaaS team and over time saw how developers and systems engineers just loved container apps. But they had great difficulty using them for mission-critical applications, as containers at the time completely lacked storage support. Michael joined Portworx to help address these and other limitations in using containers for mission-critical workloads.

Portworx is essentially a SAN, specifically designed for containers. It’s a software defined storage system that creates a cluster of storage nodes across K8s clusters and provides standard storage services at container-level granularity.

As a software defined storage system, Portworx sits right in the middle of the data path, so it must provide high availability, RAID protection and other standard storage system capabilities. But we talked only a little about basic storage functionality on the podcast.

Portworx was designed from the start to work for containers, so it can easily handle provisioning and de-provisioning 100s to 1000s of volumes without breaking a sweat. Not many storage systems, software defined or not, can handle this level of operations without impacting storage services.

Portworx supports both synchronous and asynchronous (snapshot-based) replication. As with all synchronous replication, write performance depends on how far apart the storage nodes are, but it can provide an RPO of zero (recovery point objective) for mission-critical container applications.
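
To make that concrete, here’s a minimal sketch of what a Portworx storage class with a replication factor might look like, using the in-tree kubernetes.io/portworx-volume provisioner. The class name is ours and the parameter values are illustrative, so verify them against your Portworx release:

```yaml
# Illustrative Portworx StorageClass with 2-way synchronous replication.
# The name "px-replicated" is hypothetical; repl and priority_io follow
# Portworx's documented parameter conventions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-replicated
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"            # keep 2 synchronous replicas of every volume
  priority_io: "high"  # prefer high-performance storage pools
```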

Portworx takes this another step beyond just data replication. They also replicate container configuration (YAML) files. We’re no experts, but YAML files encapsulate everything needed to run containers and container apps in a K8s cluster. When one combines replicated container YAML files, replicated persistent volume data AND an appropriate external registry, one can start running mission-critical container apps at a disaster site in minutes.
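
For instance, a container app’s persistent state is tied to a K8s persistent volume claim (PVC) like the hedged sketch below; this PVC YAML, the app’s deployment YAML, and the volume data behind it are exactly the kinds of artifacts Portworx replicates to the DR site (names here are hypothetical):

```yaml
# Hypothetical PVC requesting a Portworx-backed volume via the
# storage class sketched above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: px-replicated
  resources:
    requests:
      storage: 10Gi
```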

Their asynchronous replication for container data and configuration files uses Portworx snapshots, which are sent to an alternate site. But they also support async replication to any S3-compatible storage via CloudSnap.
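
From our reading of the Portworx docs of this era, a CloudSnap is requested through the K8s snapshot machinery with a Portworx-specific annotation, roughly as below; the annotation and API group may differ by release, so treat this as a sketch:

```yaml
# Illustrative CloudSnap request: the annotation asks Portworx to ship the
# snapshot to its configured S3-compatible objectstore instead of keeping
# it local. The PVC name is hypothetical.
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-data-cloudsnap
  annotations:
    portworx/snapshot-type: cloud
spec:
  persistentVolumeClaimName: mysql-data
```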

Portworx also supports KubeMotion, which replicates/copies namespaces, container app volume data and container configuration YAML files from one K8s cluster to another. This way customers can move their K8s namespaces and container apps to any other Portworx K8s cluster site. This works across on prem K8s clusters, cloud K8s clusters, between public cloud provider K8s clusters, or between on prem and cloud K8s clusters.
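
Under the covers, KubeMotion migrations are driven by Portworx’s Stork scheduler extension. A hedged sketch of a migration spec follows (field names per the Stork docs; the cluster pair and namespace are hypothetical):

```yaml
# Illustrative KubeMotion migration: copies the demo namespace's K8s
# resources and Portworx volume data to the cluster named by clusterPair.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: demo-migration
  namespace: demo
spec:
  clusterPair: remote-cluster   # pre-created pairing to the target cluster
  namespaces:
    - demo
  includeResources: true        # copy the YAML-defined K8s objects
  includeVolumes: true          # copy the persistent volume data too
  startApplications: false      # keep apps scaled down at the target (DR standby)
```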

Michael also mentioned that data-at-rest encryption, for Portworx, is merely a tick box on a storage class specification in the container’s YAML file. They make use of KMIP services to provide customer-generated keys for encryption.
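
In storage class terms, that tick box is just another parameter. Here’s a sketch of an encrypted class, assuming Portworx’s documented secure flag (the key-provider/KMIP wiring is configured cluster-side and omitted here):

```yaml
# Illustrative encrypted StorageClass: secure: "true" asks Portworx to
# encrypt volume data at rest using keys from the configured secrets
# provider (e.g., a KMIP service). The class name is hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-secure
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  secure: "true"
```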

This is all offered as part of their Data Security/Disaster Recovery (DSDR) service, which supports any K8s cluster service, whether it be AWS, Azure, GCP, OpenShift, bare metal, or VMware vSphere running K8s VMs.

Like any software defined storage system, customers needing more performance can add nodes to the Portworx (and K8s) cluster or add more/faster storage to speed up IO.

It appears they have most, if not all, of the standard storage system capabilities covered, but their main differentiator, besides container app DR, is that they support volumes on a container-by-container basis. Unlike other storage systems, which tend to use a VM or higher level of granularity to contain container state information, with Portworx each persistent volume in use by a container is mapped to a provisioned volume.

Michael said their focus from the start was to provide high-performing, resilient and secure storage for container apps. They ended up with a K8s-native storage and backup/DR solution to support mission-critical container apps running at scale. Licensing for Portworx is on a per-host (K8s node) basis.

The podcast ran long, ~48 minutes. Michael was easy to talk with, knew K8s and their technology/market very well. Matt and I had a good time discussing K8s and Portworx’s unique features made for K8s container apps. Listen to the podcast to learn more.


Michael Ferranti, VP of Product Marketing, Portworx

Michael (@ferrantiM) is VP of Product Marketing at Portworx, where he is responsible for communicating the value of containerization and digital transformation to global architects and CIOs.

Prior to joining Portworx, Michael was VP of Marketing at ClusterHQ, an early leader in the container storage market, and spent five years at Rackspace in a variety of product and marketing roles.

87: Matt & Ray show at VMworld 2019

Matt and Ray were both at VMworld 2019 in San Francisco this past week, and we did an impromptu podcast on recent news at the show.

VMware announced a number of new projects, and just prior to the show they announced their intent to acquire Pivotal and Carbon Black. Pat’s keynote the first day covered a number of new products and features, but he also spent time discussing how they were going to incorporate these acquisitions.

One thing that caught a lot of attention was “The Tanzu Portfolio”, which was all about how VMware is adopting Kubernetes as an integral and native part of vSphere moving forward. Project Pacific was their working name for integrating Kubernetes as a native feature of vSphere, and Tanzu Mission Control was a new multi-cloud/hybrid-cloud management solution for Kubernetes clusters wherever they run.

VMware has had a rather lengthy history with container support, from Project Photon, to VIC, to running PKS on top of vSphere. But with Project Pacific, Kubernetes is now being brought under the covers of vSphere and any ESXi cluster becomes a Kubernetes cluster.

We also talked a little bit about Carbon Black and its endpoint security. Neither of us are security experts, but Matt mentioned another company he talked with at the show that based their product on workload profiling to determine when something has gone amiss.

It’s Ray’s belief that Carbon Black does much the same profiling, only for endpoint devices: desktops, laptops, and mobile devices (maybe not thin clients).

Pat also talked a bit about IoT and edge processing at the show, and VMware has a push to support more forms of edge computing.

Ray mentioned he talked with HiveCell at the show, who had a standalone Arm server about the size of a big book that can be stood up just about anywhere there’s power and Ethernet.

Unfortunately there’s some background noise on the podcast, and it happens to be a short one, just over 16.5 minutes. This podcast represents a departure for us, as the GreyBeards have never done a live recording at a conference before. We plan to do more of these, so we hope you enjoy it. Please let us know what you think about it and if there’s anything we could do to improve our live recording shows. There’s more on the recording, so listen to the podcast to learn more.

Matt Leib

Matt Leib (@MBLeib), one of our co-hosts, has been blogging in the storage space for over 10 years, with work experience in both engineering and presales/product marketing. His blog is at Virtually Tied to My Desktop and he’s on LinkedIn.

84: GreyBeards talk ultra-secure NAS with Eric Bednash, CEO & Co-founder, RackTop Systems

We were at a recent vendor conference where Steve Foskett (@SFoskett) introduced us to Eric Bednash (@ericbednash), CEO & Co-Founder, RackTop Systems. They have taken ZFS and made it run as an ultra-secure NAS system. Matt Leib, my co-host for this episode, has on-the-job experience with ZFS and was a great co-host for this episode.

It turns out that Eric and his CTO (and perhaps other RackTop employees) have extensive experience with intelligence and other government agencies that depend on data security. These agencies deal with cybersecurity threats an order of magnitude larger than what corporations see.

All that time in intelligence gave Eric a unique perspective on what it takes to build secure, bulletproof NAS systems. Nine years or so ago, he and his CTO took OpenZFS (and OpenSolaris) and used it as the foundation for their new highly available and ultra-secure NAS system.

Most storage systems support user access data protection based on authorization. If a user is authorized to see/write data, they have unrestricted access to the data. Perhaps if an organization is paranoid, they might also use data at rest encryption. But RackTop takes all this to a whole other level.

Data security to the Nth degree

RackTop offers dual encryption for data at rest. Most organizations would say single encryption’s enough. The data’s encrypted; how would another level of encryption make it more secure?

It all depends on how one secures keys (and, just my thoughts here, maybe how easily quantum computing can decrypt singly encrypted data). So RackTop systems use self-encrypting drives (1st level of encryption) as well as software encryption (2nd level of encryption). Each has its own unique keys, which RackTop can maintain either in their own system or in a KMIP service provided by the data center.

They also supply user profiling. User data access can be profiled with a dataset heat map and other statistical/logging information. When users go outside their usual access profiles, it may signal a security breach. At the moment, when this happens RackTop notifies security administrators, but Eric mentioned a future release will have the option to automatically shut that user down.

And with all the focus on GDPR and similar regulations coming to a state near you, having user access profiles and access logs can easily satisfy any regulatory auditing requirements.

Eric said that any effective security has to be multi-layered. With RackTop, their multi-layer approach goes way beyond just data-at-rest encryption and user access authentication. RackTop also offers their appliance hardware sourced from secure supply chains and manufactured inside secured facilities. They have also modified OpenSolaris to be more secure and hardened it and its OS against cyber threats.

RackTop even supports cloud tiering with an internally developed secure data mover. Their data mover can securely migrate data (retaining metadata on their system) to any S3-compatible object storage.

As proof of the security available from a RackTop NAS system, an unnamed US government agency had a “red team” attack their storage. Although Eric shared only a few details on what the red team attempted, he did say the RackTop NAS survived the assault without a security breach.

He also mentioned that they are trying to create a Zero Trust storage environment. Zero Trust implies constant verification and authentication, rather like going beyond one-time login credentials and making users re-authenticate every time they access data. Eric didn’t say when, if ever, they’d reach this level of security, but it’s a clear indication of a direction for their products.

ZFS based NAS system

A RackTop NAS supplies a ZFS-based file system. As such, it inherits all the features and advanced functionality of OpenZFS, but within a more secure, hardened and highly available storage system.

ZFS has historically had issues with usability and its multiplicity of tuning knobs. RackTop has worked hard to make ZFS easier to operate and removed much of the manual tuning required to make it perform well.

The podcast is a long one and runs ~44 minutes. We spent most of our time talking about security and less on the storage functionality of RackTop NAS. The security of RackTop systems takes some getting used to, but the need exists today and not many storage systems implement security quite to their level. Much of what RackTop does to improve data security blew Matt and me away. Eric is a very smart security expert in addition to being a storage vendor CEO. Listen to the podcast to learn more.

Eric Bednash, CEO & Co-founder, RackTop Systems

Eric Bednash is the co-founder and CEO of RackTop Systems, the pioneer of CyberConverged™ data security, a new market that fuses data storage with advanced security and compliance into a single platform.

A serial entrepreneur and innovator, Bednash has more than 20 years of experience in solving the most complex and challenging data problems through designing products and solutions for the U.S. Intelligence Community and commercial enterprises.

Bednash co-founded RackTop in 2010 with partner and current CTO Jonathan Halstuch. Prior to co-founding RackTop, he served as co-founder and CTO of a mid-sized consulting firm, focused on developing mission data systems within the Department of Defense and U.S. intelligence communities.

Bednash started his professional career in data center systems at Time-Warner, and spent the better part of the dot-com boom in the Washington, D.C. area connecting businesses to the internet. His career path began while still in high school, where Bednash contracted with small businesses and individuals to write software and build computers.

Bednash attended Rochester Institute of Technology and Penn State University, and completed both undergrad and graduate coursework in Business and Technology Management at Stevenson University. A Forbes Technology Council member, he regularly hosts thought leadership & technology video blogs, and is a technology writer and speaker. He is a multi-instrument musician, recreational athlete and a die-hard Pittsburgh Steelers fan. He currently resides in Fulton, Md. with his wife Laura and two children.

69: GreyBeards talk HCI with Lee Caswell, VP Products, Storage & Availability, VMware

Sponsored by: VMware

For this episode we preview VMworld by talking with Lee Caswell (@LeeCaswell), Vice President of Product, Storage and Availability, VMware.

This is the third time Lee’s been on our show, the previous one was back in August of last year. Lee’s been at VMware for a couple of years now and, among other things, is leading the HCI journey at VMware.

The first topic we discussed was VMware’s expanded HCI software defined data center (SDDC) solution, which now includes compute, storage, networking and enhanced operations with alerts/monitoring/automation that ties it all together.

We asked Lee to explain VMware’s SDDC:

  • HCI operates at the edge – with ROBO 2-server environments, VMware’s HCI can be deployed in a closet and remotely operated by a VI admin from the central site.
  • HCI operates in the data center – with vSphere-vSAN-NSX-vRealize and other software, VMware modernizes data centers for the pace of digital business.
  • HCI operates in the public cloud – with VMware Cloud (VMC) on AWS, IBM Cloud and over 400 service providers, VMware HCI also operates in the public cloud.
  • HCI operates for containers and cloud native apps – with support for containers under vSphere, vSAN and NSX, developers are finding VMware HCI an easy option to run container apps in the data center, at the edge, and in the public cloud.

The importance of the edge will become inescapable, as 50B edge-connected devices power IoT by 2020. Lee heard Pat saying compute processing is moving to the edge because of 3 laws:

  1. the law of physics, light/information only travels so fast;
  2. the law of economics, doing all processing at central sites would take too much bandwidth and cost; and
  3. the law(s) of the land, data sovereignty and control is ever more critical in today’s world.

VMware SDDC is a full-stack option that executes just about anywhere the data center wants to go. Howard mentioned one customer he talked with at FMS18 who just wanted to take their 16-node VMware HCI rack and clone it forever, to supply infinite infrastructure.

Next, we turned our discussion to Virtual Volumes (VVols). Recently VMware added replication support for VVols. Lee said VMware intends to provide an SRM SRA for VVols. But the real question is why there hasn’t been higher VVol adoption in the field. We concluded it takes time.

VVols wasn’t available in vSphere 5.5, and nowadays three or more years have to go by before a significant portion of the field moves to a new release. Howard also said early storage systems didn’t implement VVols right. Moreover, VMware vSphere 5.5 is just now (9/16/18) going EoGS (end of general support).

Lee said 70% of all current vSAN deployments are AFA. With AFA, hand tuning storage performance is no longer something admins need to worry about. It used to be we all spent time defragging/compressing data to squeeze more effective capacity out of storage, but hand capacity optimization like this has become a lost art. Just like capacity, hand tuning AFA performance doesn’t make sense anymore.

We then talked about the coming flash SSD supply glut. Howard sees flash pricing ($/GB) dropping by 40-50%, regardless of interface. This should drive AFA shipments above 70%, as long as the glut continues.

The podcast runs ~21 minutes. Lee’s always great to talk with and is very knowledgeable about the IT industry, HCI in general, and of course, VMware HCI in particular. Listen to the podcast to learn more.

Lee Caswell, V.P. of Product, Storage & Availability, VMware

Lee Caswell leads the VMware storage marketing team driving vSAN products, partnerships, and integrations. Lee joined VMware in 2016 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Prior to VMware, Lee was vice president of Marketing at NetApp and vice president of Solution Marketing at Fusion-IO. Lee was a founding member of Pivot3, a company widely considered to be the founder of hyper-converged systems, where he served as the CEO and CMO. Earlier in his career, Lee held marketing leadership positions at Adaptec, and SEEQ Technology, a pioneer in non-volatile memory. He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.

67: GreyBeards talk infrastructure monitoring with James Holden, Sr. Prod. Mgr. NetApp

Sponsored by: NetApp

Howard and I first talked with James Holden, NetApp Senior Product Manager for OnCommand Insight and Cloud Insights, last month at Storage Field Day 16 (SFD16) in Waltham, MA. At the time, we thought it would be great to also have him on the show.

James has been with the NetApp OnCommand Insight (OCI) team for quite a while now and is very knowledgeable about the product and its technology. NetApp Cloud Insights is a new SaaS offering that provides some of the same services as OCI without the footprint, focused on newer, non-traditional applications and available on a pay-as-you-go model.

NetApp OnCommand Insight (OCI)

NetApp OCI is sort of a stripped down, souped up enterprise SRM tool, without storage and server configuration/provisioning (see James’s introduction video from SFD15 for more info). It supports NetApp and just about anyone’s storage, including Dell EMC, IBM, Hitachi Vantara (HDS), HPE, Infinidat, and Pure Storage, as well as most major OSs such as VMware vSphere, Microsoft Hyper-V, RHEL, etc. Other storage can easily be added to OCI through a patch/minor update and is typically done by customer request.

NetApp OCI currently runs in some of the biggest enterprises in the world today, including top F500 companies and one of the world’s largest banks. OCI is agentless but does use a data collector server/VM, on prem or in cloud, that takes advantage of storage and system APIs to gather data.

OCI provides extensive end-to-end infrastructure monitoring and troubleshooting (see James’s SFD16 OCI monitoring & troubleshooting session). OCI monitors application workloads from VMs to the storage supporting them.

OCI also supplies extensive chargeback capabilities (see his SFD16 OCI cost control/chargebacks session). In times like these, when IT competes with public cloud offerings every day, chargebacks can be very illuminating.

Also, OCI has extensive integration with ServiceNow and similar offerings (see the SFD16 OCI ecosystem session). With this level of integration, OCI can provide seamless tracking of service requests from initiation to completion through verification.

In addition, OCI can monitor public cloud infrastructure as well as on prem. For example, with Amazon Web Services (AWS), customers can use OCI to monitor EC2 instances’ EBS IO activity. OCI reports on AWS IOPS rates by EC2-EBS connection. Customers paying for EBS IOPS can use OCI to monitor and tailor their EBS costs. OCI also supports Microsoft Azure environments.

NetApp Cloud Insights

NetApp Cloud Insights is a new SaaS offering, currently in public preview status, that is expected to release in October 2018 (check out his SFD16 Cloud Insights session video).

Customers can currently register to use the preview version at Cloud.netapp.com/Cloud Insights. There’s a registration wall, but that’s all it takes to get started.

The minimum Cloud Insights instance is a single server and 5TB of storage. Unlike OCI, Cloud Insights is tailored to support smaller shops without significant infrastructure. However, Cloud Insights also offers standard on prem enterprise infrastructure monitoring as well.

Cloud Insights is also focused on modern, cloud-native applications, whether they operate on prem or in the cloud. The problem with cloud-native, container apps is that they come and go in seconds, and there are thousands of them. Cloud Insights was designed specifically for container and other cloud-native applications and, as such, should provide more accurate monitoring of operations for these systems.

We talked about Cloud Insight’s development cadence. James said that because it’s a SaaS offering new Cloud Insights functionality can be released daily, if not more frequently. Contrast that with OCI, where they schedule 3-4 releases a year.

Cloud Insights currently supports the Kubernetes container ecosystem today, but more are on the way. Again, customers determine which container or other cloud-native ecosystems will be supported next.

The podcast runs ~22 minutes. James was very knowledgeable about OCI, Cloud Insights and infrastructure monitoring in general, and he was easy to talk with. Howard and I had a great time at SFD16 and enjoyed our time talking with him again on the podcast. Listen to the podcast to learn more.

James Holden, Senior Product Manager NetApp OCI and Cloud Insights 

 

James Holden is a Senior Manager of Product Management at NetApp, and for the last 5 years has been building the infrastructure monitoring and reporting tool OnCommand Insight.

Today he is working across NetApp’s Cloud Analytics portfolio, including Cloud Insights, a new SaaS offering currently in preview.

Prior to NetApp, James worked for 14 years at CSC in both the US and the UK on their storage, compute and automation solutions.