36: GreyBeards discuss VMworld2016 with Andy Banta, Storage Janitor, NetApp Solidfire

Thanks Andy Warfield, Coho Data

In this episode, we talk with Andy Banta (@andybanta), Storage Janitor (Principal Virt. Architect), NetApp SolidFire. Andy's been involved in Virtual Volumes (VVOLs) and other VMware API implementations at SolidFire, and worked at VMware and other storage/system vendor companies before that.

Howard and I were at VMworld2016 late last month and we thought Andy would be a good person to discuss what went on there this year.

No VVOLs & VSAN news at the show

Although we all thought there'd be another release of VVOLs and VSAN announced at the show, VMware instead announced Cloud Foundation and Cross-Cloud Services. If anything, the show was a bit mum about VMware Virtual Volumes (VVOLs) and Virtual SAN™ (VSAN) this year as compared to last.

On the other hand, Andy's and other VVOL technical sessions were busy at the conference. One of them ended up standing room only and was repeated at the show due to the demand. Customer interest in VVOLs seems to be peaking.

Our discussion begins with why VVOLs were sidelined this year. One reason was the focus from VMware and their ecosystem on Hyper Converged Infrastructure (HCI), and HCI doesn't use storage arrays or VVOLs.

Howard and I suspected that with VMware's ecosystem growing ever larger, validation and regression testing is starting to consume more resources. But Andy suggested that's not the issue, as VMware uses self-certification, where vendors run tests that VMware supplies to show they meet API requirements. VMware does bring in a handful of vendor solutions (5 for VVOLs) for reference architectures and to ensure the APIs meet (major) vendor requirements, but after that, it's all self-certification.

Another possibility was that the DELL-EMC acquisition (closed 9/6) could be a distraction. But Andy said VMware has been and will continue on as an independent company, and the fact that EMC owned ~84% of the stock never impacted VMware's development before. So DELL's acquisition shouldn't either.

Finally, we suggested that executive churn at VMware could be the problem. But Andy debunked that, saying the pace of executive transitions hasn't really accelerated over the years.

After all that, we concluded that maybe the schedule had just slipped, and perhaps we will see something new for VVOLs and VMware APIs for Storage Awareness (VASA) in Barcelona, at VMworld2016 Europe.

Cloud Foundation and Cross-Cloud Services

What VMware did announce was VMware Cloud Foundation and Cross-Cloud Services. This seems to signal a shift in philosophy to be more accommodating to the public cloud rather than just competing with it.

VMware Cloud Foundation is a repackaging of VMware Software Defined Data Center (SDDC), NSX®, VSAN and vSphere® into a single bundle that customers can use to spin up a private cloud with ease.

VMware Cross-Cloud Services is a set of targeted software for public cloud deployment to ease management and migration of services. They showed how NSX could be deployed over your cloud instances to control IP addresses and provide micro-segmentation services, and how other software allows data to be easily migrated between the public cloud and VMware private cloud implementations. Cross-Cloud Services was tech previewed at the show, and Ray wrote a post describing them in more detail (please see the VMworld2016 Day 1 Cloud Foundation & Cross-Cloud Services post).

Cloud services

Howard talked about how difficult it can be to move workloads to the cloud and back again. Most enterprise application data is just too large to transfer quickly and too complex to be a simple file transfer. And then there are legal matters, data governance, compliance and regulatory regimens that have to be adhered to, which can make it almost impossible to use public cloud services.

On the other hand, Andy talked about work they had done at SolidFire to use the cloud in development. They moved some testing to the cloud to spin up 1000s of instances (SolidFire simulations) to try to catch an infrequent bug (occurring once every 10K runs). They just couldn't do this in their lab. In the end, they were able to catch and debug the problem much more effectively using public cloud services.

Howard mentioned that they were also using AWS as an IO trace repository for benchmark development work he is doing. AWS S3 as a data repository has been a great solution for his team, as anyone can upload their data that way. By the way, he is looking for a data scientist to help analyze this data, if anyone's interested.

In general, workloads are becoming more transient these days. Public cloud services are encouraging this movement, but Docker and microservices are also having an impact.

VVOLs

One can even see this sort of trend in VMware VVOLs, which can be another way to enable more transient workloads. VVOLs can be created and destroyed a lot quicker than VMDKs in the past. In fact, some storage vendors are starting to look at VVOLs as transient storage and are improving their storage and metadata garbage collection accordingly.

Earlier this year Howard, Andy and I were all at a NetApp SolidFire analyst event in Boulder. At that time, SolidFire said that they had implemented VVOLs so well they considered it "VVOLs done right". I asked Andy what was different about SolidFire's VVOL implementation. One thing they did was completely separate the Protocol Endpoints from the storage side. Another was to provide QoS at the VM level that could be applied to a single VM or 1000s of VMs.

Andy also said that SolidFire had implemented a bunch of scripts to automate VVOL policy changes across 1000s of objects. SolidFire wanted to use these scripts for their own VVOL implementation, but since they could apply to any vendor's implementation of VVOLs, they decided to open source them.
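The scripts themselves aren't detailed in the podcast, but the batching pattern is a familiar one. Here's a hypothetical Python sketch of applying one QoS policy change across thousands of VVOL objects; the Vvol class, its fields and the apply_policy helper are all illustrative assumptions, not SolidFire's actual API:

```python
# Hypothetical sketch: batch-applying a QoS policy across many VVOLs.
# The Vvol class and its min/max IOPS fields are illustrative only.

from dataclasses import dataclass

@dataclass
class Vvol:
    vvol_id: str
    min_iops: int = 50
    max_iops: int = 15000

def apply_policy(vvols, min_iops, max_iops):
    """Apply one QoS policy to every VVOL in the batch."""
    for v in vvols:
        v.min_iops = min_iops
        v.max_iops = max_iops
    return len(vvols)

vvols = [Vvol(f"vvol-{i}") for i in range(1000)]
changed = apply_policy(vvols, min_iops=100, max_iops=5000)
print(changed)  # 1000 objects updated in one pass
```

The point of scripting this is that the same loop works whether the policy targets one VM's VVOLs or thousands, which is why such scripts are useful beyond any single vendor's implementation.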

The podcast runs over 42 minutes and covers a broad discussion of the VMware ecosystem, the goings on at VMworld and SolidFire’s VVOL implementation. Listen to the podcast to learn more.

Andy Banta, Storage Janitor, NetApp SolidFire


Andy is currently a Storage Janitor acting as a Principal Virtualization Architect at NetApp SolidFire, focusing on VMware integration and Virtual Volumes. Andy was a part of the Virtual Volumes development team at SolidFire.

Prior to SolidFire, he was the iSCSI Tech Lead at VMware, as well as being on the engineering teams at DataGravity and Sun Microsystems.

Andy has presented at numerous VMworlds, as well as several VMUGs and other industry conferences. Outside of work, he enjoys racing cars, hiking and wine. Find him on Twitter at @andybanta.

GreyBeards deconstruct storage with Brian Biles and Hugo Patterson, CEO and CTO, Datrium

In this, our 32nd episode, we talk with Brian Biles (@BrianBiles), CEO & Co-founder, and Hugo Patterson, CTO & Co-founder of Datrium, a new storage startup. We like to call it storage deconstructed, a new view of what storage could be, based on today's and future storage technologies. If I had to describe it succinctly, I would say it's a hybrid between software defined storage, server side flash and external disk storage. We have discussed server side flash before, but this takes it to a whole other level.

Their product, the DVX, consists of Hyperdriver host software and a NetShelf external disk storage unit. The DVX was designed from the ground up with host/server side flash or non-volatile memory as a given, and everything else was built around that. I hesitate to say this, but the DVX NetShelf backend storage is pretty unintelligent, just dual-controller disk storage with a multi-task coordinator. In contrast, the DVX Hyperdriver host software used to access their storage system is pretty smart and is installed as a VIB in vSphere. Customers can assign up to 8TB of host-based, server side flash/non-volatile memory to the storage system per server. The Datrium DVX does the rest.

The Hyperdriver leverages host flash, DRAM and compute cores to act as a caching layer for read and write IO and as a data management engine. Write data is written through straight from the server side flash to the NetShelf storage system, which has non-volatile DRAM (NVRAM) caching. Once write data is in NetShelf cache, it's in two places: one on the host server side flash and the other in storage NVRAM. Reads are easier to handle, just being cached from the NetShelf storage in the server side flash. There's no unique data residing in the hosts.
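That write-through flow can be sketched in a few lines; this is a conceptual model under simplified assumptions (the class and method names are mine, not Datrium's):

```python
# Simplified model of a write-through host cache in front of backend storage.
# Writes land in host flash AND go straight through to the backend (where
# NVRAM protects them), so the host never holds unique dirty data. Reads
# are served from host flash when present, else fetched and cached.

class WriteThroughCache:
    def __init__(self, backend):
        self.flash = {}         # host-side flash cache (block -> data)
        self.backend = backend  # backend store (block -> data)

    def write(self, block, data):
        self.flash[block] = data    # copy 1: host flash
        self.backend[block] = data  # copy 2: backend NVRAM/disk

    def read(self, block):
        if block not in self.flash:               # cache miss
            self.flash[block] = self.backend[block]
        return self.flash[block]

backend = {}
cache = WriteThroughCache(backend)
cache.write("blk0", b"hello")
assert backend["blk0"] == b"hello"  # data is already safe on the backend
print(cache.read("blk0"))           # served from host flash
```

The key property the model shows is the last line of the paragraph above: because writes go through immediately, losing a host loses only cached copies, never unique data.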

The Hyperdriver looks like a NFS mount to vSphere and the DVX uses a proprietary protocol to talk with the backend DVX NetShelf. Datrium supports up to 32 hosts and you can define the amount of Flash, DRAM and host compute allocated to the DVX Hyperdriver activity.

But the other interesting part about DVX is that much of the storage management functionality and storage control logic is partitioned between the host Hyperdriver and the NetShelf, with both participating to do what they do best.

For example, disk rebuilds are done in combination with the host Hyperdriver. A DVX RAID rebuild brings data from the backend into host cache, computes rebuild data and writes the reconstructed data back out to the NetShelf backend. This way, rebuild performance can scale up with the number of hosts active in a cluster.
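To see why this scales with host count, here's a toy illustration: an XOR-parity rebuild whose stripes are partitioned across participating hosts. This is purely illustrative; the episode doesn't describe Datrium's actual RAID scheme:

```python
# Toy illustration: reconstructing a lost RAID-5-style member by XOR,
# with stripes partitioned round-robin across hosts. Each host rebuilds
# only its share, so more hosts means more parallel rebuild bandwidth.

from functools import reduce

def xor_rebuild(stripe):
    """Recover the missing member of one stripe from survivors + parity."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), stripe)

def rebuild(stripes, num_hosts):
    rebuilt = {}
    for host_id in range(num_hosts):           # each host, in parallel...
        for idx in range(host_id, len(stripes), num_hosts):
            rebuilt[idx] = xor_rebuild(stripes[idx])  # ...rebuilds its share
    return [rebuilt[i] for i in range(len(stripes))]

# Two stripes; in each, the surviving data block + parity recover the
# lost data block (parity = d0 XOR d1).
d0, d1 = b"\x0f", b"\xf0"
parity = bytes(a ^ b for a, b in zip(d0, d1))
stripes = [[d0, parity], [d1, parity]]
print(rebuild(stripes, num_hosts=2))  # [b'\xf0', b'\x0f']
```

Doubling `num_hosts` halves each host's share of the XOR work, which is the scaling behavior the paragraph above describes.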

DVX data are compressed and deduplicated at the host before being sent to the NetShelf. The NetShelf backend also does a global deduplication on the host data. Hashing computations and data compression activities are all done on the host and passed on to the NetShelf.  Brian and Hugo were formerly with EMC Data Domain, and know all about data deduplication.
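The host-side pipeline described above, hash then compress then send, can be sketched with stdlib tools. Fixed-size 4KB chunks and SHA-256 fingerprints here are my simplifying assumptions, not Datrium's actual chunking or hash choice:

```python
# Sketch of host-side dedupe + compression: chunk the data, fingerprint
# each chunk, store each unique chunk once (compressed), and keep an
# ordered recipe of fingerprints. The backend can dedupe globally using
# only the fingerprints.

import hashlib
import zlib

def host_side_prep(data, chunk_size=4096):
    store = {}   # fingerprint -> compressed unique chunk (what gets sent)
    recipe = []  # ordered fingerprints to reassemble the original data
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:            # duplicate chunks stored only once
            store[fp] = zlib.compress(chunk)
        recipe.append(fp)
    return store, recipe

data = b"A" * 8192 + b"B" * 4096       # first two 4KB chunks are identical
store, recipe = host_side_prep(data)
print(len(recipe), len(store))         # 3 chunks referenced, 2 stored
```

Doing the hashing and compression on the host, as described above, offloads the CPU-heavy part of the pipeline from the backend and shrinks the data before it ever crosses the wire.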

At the moment, the DVX is missing some storage functionality, but they have an extensive roadmap with engineering resources to match and are plugging away at all of it. On the other hand, very few disk storage devices offer deduped/compressed data storage and warm server side caches during vMotion. They also support QoS functionality to limit the amount of host resources consumed by the DVX Hyperdriver software.

The podcast runs ~41 minutes and the episode covers a lot of ground about how the new DVX product came about, how they separated storage functionality between host and backend, and other aspects of DVX storage. Listen to the podcast to learn more.

Brian Biles, Datrium CEO & Co-founder

Prior to Datrium, Brian was Founder and VP of Product Mgmt. at EMC Backup Recovery Systems Division. Prior to that he was Founder, VP of Product Mgmt. and Business Development for Data Domain (acquired by EMC in 2009).

Hugo Patterson, Datrium CTO & Co-founder

Prior to Datrium, Hugo was an EMC Fellow serving as CTO of the EMC Backup Recovery Systems Division, and the Chief Architect and CTO of Data Domain (acquired by EMC in 2009), where he built the first deduplication storage system. Prior to that he was the engineering lead at NetApp, developing SnapVault, the first snap-and-replicate disk-based backup product. Hugo has a Ph.D. from Carnegie Mellon.

 

GreyBeards talk VVOLs with “Father of VVOLs”, Satyam Vaghani, CTO PernixData

In this podcast we discuss VMware VVOLs with Satyam Vaghani, Co-Founder and CTO of PernixData. In Satyam's previous job, he was VMware's Principal Engineer and Storage CTO and worked extensively on VVOLs and other VMware storage enhancements. He also happens to be the GreyBeards' second repeat guest.

With vSphere 6 coming out by the end of this quarter, it's a good time to talk about VVOLs and VASA 2.0.

In the podcast, Ray and Howard got a bit wild on the terminology we used to describe how VMware VVOLs work. Satyam wanted to be sure that we at least provided a decoder ring to get us back to proper VMware terminology.

  • So in the podcast, when we discuss the magic LUN, control LUN or the container LUN, VMware calls this the Protocol Endpoint (PE). VMware uses the PE as a message passing interface to inform a storage system what IO to perform. Although technically in block storage the PE is a LUN, it has no data storage behind it; rather, it's only used as a message box to perform IO on other storage objects.
  • In the podcast, when we talk about micro-LUNs, sub-LUNs or VM data objects, VMware calls these items a Virtual Volume (VVOL). VVOLs represent a new version of the VMDK. But because VVOLs no longer have to reside with other VVOLs (VMDKs) on the same LUN, they can be replicated, snapshotted, cloned, etc., all by themselves, without having to impact other VVOLs in the storage system.
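The decoder ring above can be modeled in a few lines: a PE that stores no data of its own and just routes each IO message to the VVOL it names. This is a conceptual sketch of the bullet points, not VMware's actual VASA interfaces:

```python
# Conceptual model of a Protocol Endpoint (PE): a LUN with no data
# behind it that acts as a message box, routing IO to individual VVOLs.

class Vvol:
    def __init__(self, vvol_id):
        self.vvol_id = vvol_id
        self.blocks = {}        # the VVOL object holds the actual VM data

class ProtocolEndpoint:
    def __init__(self):
        self.vvols = {}         # VVOLs reachable through this PE; no data here

    def bind(self, vvol):
        self.vvols[vvol.vvol_id] = vvol

    def io(self, vvol_id, op, block, data=None):
        """The PE only interprets the message and forwards the IO."""
        target = self.vvols[vvol_id]
        if op == "write":
            target.blocks[block] = data
        elif op == "read":
            return target.blocks[block]

pe = ProtocolEndpoint()
vm_disk = Vvol("vm1-vmdk0")
pe.bind(vm_disk)
pe.io("vm1-vmdk0", "write", 0, b"boot sector")
print(pe.io("vm1-vmdk0", "read", 0))  # b'boot sector'
```

Because each VVOL is its own object behind the PE, per-object operations like snapshots or clones can target one VVOL without touching its neighbors, which is the point of the second bullet.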

VMware is also releasing VASA 2.0 to provide an easier, more standardized approach to provisioning VVOLs. Together, VVOLs and VASA 2.0 should theoretically greatly reduce the burden on VMware storage administration.

We go into more detail how block storage VVOLs work, the benefits of VVOLs-VASA 2.0, and many other items in our discussions with Satyam.  Listen to the podcast to learn more…

This month's episode runs about 45 minutes.

Satyam Vaghani Bio

Satyam Vaghani, Co-founder and CTO, PernixData
Satyam Vaghani is Co-Founder and CTO at PernixData, a company that leverages server flash to enable scale-out storage performance that is independent of capacity. Earlier, he was VMware’s Principal Engineer and Storage CTO where he spent 10 years delivering fundamental storage innovations in vSphere. He is most known for creating the Virtual Machine File System (VMFS) that set a storage standard for server virtualization. He has authored 50+ patents, led industry-wide changes in storage systems and standards via VAAI, and has been a regular speaker at VMworld and other industry and academic forums. Satyam received his Masters in CS from Stanford and Bachelors in CS from BITS Pilani.