VMworld2017’s forecast, cloudy with a high chance of containers

Attended VMworld2017 this past week in Vegas, and aside from all the parties, there was a lot of news, mostly for public cloud users.

In talking with analysts and others at the show, it seems VMware has recently discovered that it can’t fight the cloud, so it had better join it. Early this year VMware divested its vCloud Air business to OVH, which removed its own competing cloud offering. Now VMware is on a different tack: figuring out how best to work with today’s public cloud providers, and implementing just that.

Last year VMware announced an agreement with IBM to supply vCloud Air services on IBM’s SoftLayer public cloud. This year, VMware ramped up other public cloud offerings with VMware Cloud on AWS and PKS (Pivotal Container Service) on vSphere.

First up, VMware on the (AWS) cloud

You may recall that earlier this year VMware showed a tech preview of vSphere running in AWS. At VMworld2017 they took the wraps off this service and made it real. At first it’s only available in the AWS US West region, but they plan to roll it out to the rest of the US soon and the rest of the world after that.

VMware Cloud on AWS is vSphere, vCenter, NSX, and vSAN running on top of AWS Elastic Compute Cloud (EC2) services. Essentially, any VM that you run onprem can be run on AWS, using VMware Cloud on AWS.

The AWS EC2 machines you run VMware on are BIG – 2 CPUs, 36 cores (72 hyperthreads), 512GiB of memory and a local (SSD) cache of 3.6TB/10.7TB raw capacity. VMware Cloud on AWS requires four EC2 instances to run. No information was given about the networking capabilities, but I assume HIGH SPEED.

The cost for the service is high, but you are paying for 24x7x365 AWS EC2 services. A 3-year “reservation” will cost $109.4K/host, which comes out to about $3K/host/month over 36 months. VMware claims that on a 3-year TCO basis this would be cheaper than running an equivalent configuration onprem.

You can also contract for VMware Cloud on AWS on an hourly basis. You do have to have a VMware login and VMware credits (?) to do so; it’s certainly not as simple as just having a credit card and an AWS login. The cost for this is $8.361/hour/host. That seems awfully high, but there’s no direct comparison to other EC2 machine configurations. The closest is an EC2 X1.16 with 64 vCPUs (hyperthread equivalents), 976GiB DRAM and a 1×1,920 (GiB) SSD, which lists for $6.669/hour – close, but not a complete match.
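
To put these numbers side by side, here’s a quick back-of-the-envelope calculation in Python, using only the list prices quoted above and the four-host configuration mentioned earlier:

    # Back-of-the-envelope VMware Cloud on AWS pricing, using the list
    # prices quoted in this post (all figures per host unless noted).
    reserved_3yr = 109_400          # $/host for a 3-year reservation
    months = 36
    print(f"Reserved: ${reserved_3yr / months:,.0f}/host/month")      # ~$3,039

    on_demand = 8.361               # $/host/hour
    hours_3yr = 24 * 365 * 3
    print(f"On-demand, 3 years: ${on_demand * hours_3yr:,.0f}/host")  # ~$219,727

    # The four-host configuration at the 3-year reserved rate:
    print(f"4 hosts, 3-year reservation: ${4 * reserved_3yr:,}")      # $437,600

So the 3-year reservation runs at roughly half the on-demand rate over the same period, which is about the discount you’d expect for committing to a reservation.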

Since you are running a VMware service on AWS, the billing is done through VMware. And any data you move into or out of the cloud will be billed (through VMware) at whatever AWS would charge for the data egress/import.

It seems that if you “connect” your VMware Cloud on AWS to your onprem vSphere cluster (through stretched layer-2 NSX networking and perhaps other means), you can vMotion VMs from onprem to AWS and back again. There is a behind-the-scenes Storage vMotion that also happens, to get the data to AWS so that the VMs can operate properly.

VMware vCenter offers a dashboard of sorts to tell admins whether a particular VM is a good candidate to move to AWS or not. This is based on the VM’s connections to other VMs and perhaps the amount of data that would need to be moved.


Next, (PKS) containers and more (GCP) cloud

VMware, together with Pivotal and Google Cloud, announced a tech preview of the Pivotal Container Service (PKS) on vSphere. The new service implements Pivotal Kubo (Kubernetes container orchestration with BOSH HA infrastructure management) on top of vSphere. PKS also comes with Harbor, a secure, enterprise-class container registry from VMware.

This would allow a development team to develop a container micro-services application completely within a VMware environment and run it under vSphere. It seems tailor-made for cloud developers.

Kubernetes has master and worker nodes, each of which would run as a VM on vSphere. Inside worker nodes, Kubernetes runs Pods, each holding one or more tightly coupled containers that enclose an application and share context.
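
To make the Pod concept concrete, here’s a minimal sketch using the official Kubernetes Python client to define a Pod with two tightly coupled containers. The names and images are hypothetical, and it assumes a kubeconfig already pointed at a (PKS-provisioned) cluster:

    # Minimal Pod sketch: two tightly coupled containers that share the
    # Pod's network namespace and lifecycle. Names/images are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # assumes kubectl/kubeconfig is already set up

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="web", image="nginx:1.13"),
            client.V1Container(name="sidecar", image="busybox:1.27",
                               command=["sh", "-c", "sleep 3600"]),
        ]),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)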

In talking with the vSphere team, I learned they have been spending a lot of time making vSphere native services available to PKS. This means that you can use NSX networking and vSAN, VVOLs or VMDK storage for your container (persistent) storage.
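
On the storage side, a containerized app would typically request a PersistentVolumeClaim and let vSphere back it with vSAN, VVOL or VMDK storage. Here’s a minimal sketch of what that could look like from the Kubernetes side; the StorageClass name “vsphere-standard” is a hypothetical stand-in for whatever the PKS/vSphere integration actually supplies:

    # PersistentVolumeClaim sketch. The StorageClass "vsphere-standard"
    # is hypothetical; the real name would come from the PKS/vSphere setup.
    from kubernetes import client, config

    config.load_kube_config()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="vsphere-standard",   # hypothetical
            resources=client.V1ResourceRequirements(
                requests={"storage": "10Gi"}),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc)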

Not exactly sure where DevOps fits into PKS on vSphere, but my assumption is that you could run Puppet or Chef or, if you’re up to the challenge, vRA to automate application rollout.

There was specific talk of having PKS run on AWS, probably within VMware Cloud on AWS in the future.

Of course, PKS containers that run on vSphere are completely compatible with GKE (Google Container Engine), which runs on Google Cloud Platform.

No information on VMware PKS pricing as of yet.

Where do Photon and VIC (VMware Integrated Containers) fit?

You may recall that VMware announced Photon last year, an open source container framework, along with Photon OS, an OS for Photon containers. These still exist as open source projects and are still being developed, but there was nary a word about Photon this year.

VIC still exists. VIC can support running a container as a VM, but it is not a real container orchestration engine. Yes, you could potentially run Docker Swarm as a VM, or a number of containers as separate VMs under VIC, but this is not the same as having a fully integrated container orchestration and management service layer in vSphere. That’s where PKS fits in.

~~~~

Although timelines weren’t discussed, a number of conversations led me to believe that VMware Cloud will be rolled out on other public cloud providers (read: Azure and GCP). How long it will take to reach other AWS regions around the world was also not discussed. VMware Cloud would really make sense on GCP, but Azure might be a bit of a stretch.

Similarly, PKS seems already headed for VMware Cloud on AWS and is already available in native form as GKE on GCP. But Azure already has a native Kubernetes container service, and there was no discussion as to whether PKS would be made available on IBM SoftLayer or OVH vCloud Air.

Stay tuned, more to come as VMware finds its true path to the cloud.

NetApp updates their StorageGRID Webscale solution

NetApp announced a new version of their object storage solution, StorageGRID Webscale 10.3.

At a former employer, I first talked with StorageGRID (Bycast at the time) a decade or so ago. At that time they were focused on the medical and healthcare verticals and had a RAIN (redundant array of independent nodes) storage solution. It has come a long way.

StorageGRID Business is booming

On the call, NetApp announced they sold 50PB of StorageGRID capacity in FY’16, with 20PB of that in the last quarter, and also reported 270% Y/Y revenue growth, which means they are starting to gain some traction in the marketplace. Are we seeing an acceleration of object storage adoption?

As you may recall, StorageGRID comes as a software-only solution that runs on just about any white box server with DAS, or as two hardware appliances: the SG5612 (12-drive) and the SG5660 (60-drive) nodes. You can mix and match appliances with white box, software-only nodes; they don’t have to have the same capacity or performance. But all nodes need network and controller/admin node(s) access.

StorageGRID past

Somewhere during Bycast’s journey they developed support for tape archives and information lifecycle management (ILM) for objects. The previous generation, StorageGRID 10.2, had a number of features, including:

  • S3 cloud archive support that allowed objects to be migrated to AWS S3 once they were no longer actively accessed;
  • NAS bridge support that allowed CIFS/SMB or NFS access to StorageGRID objects, which could also be read as S3 objects for easier migration to/from object storage;
  • Hierarchical erasure coding option that was optimized for efficiently storing large objects;
  • Node level erasure coding support that can be used to rebuild data after node drive failures, without having to go outside the node for data retrieval;
  • Object byte-granular range read support that allowed users to read an object at any byte offset without requiring a rebuild (see the boto3 sketch after this list);
  • Support for the OpenStack Swift API that made StorageGRID objects natively available to any OpenStack service; and
  • Software support for running as Docker containers or as VMs under VMware ESX or OpenStack KVM, which allowed StorageGRID software to run just about anywhere.
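
Since StorageGRID speaks the S3 protocol, the byte-granular range reads listed above can be exercised with any S3 client. Here’s a minimal boto3 sketch; the endpoint URL, bucket and key are hypothetical placeholders for a StorageGRID gateway:

    # Byte-range GET against an S3-compatible endpoint. The endpoint,
    # bucket and key below are hypothetical placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://storagegrid.example.com:8082",  # hypothetical
    )

    # Read bytes 1,000,000-1,999,999 of a large object without
    # fetching the whole thing.
    resp = s3.get_object(
        Bucket="media-archive",
        Key="footage/cam01.mov",
        Range="bytes=1000000-1999999",
    )
    chunk = resp["Body"].read()
    print(len(chunk))  # 1000000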

StorageGRID present and future

But customers complained StorageGRID was too complex to install and update, requiring too much hand-holding by NetApp professional services. StorageGRID Webscale 10.3 was targeted at addressing these deficiencies. Some of the features in StorageGRID 10.3 include:

  • A radically simplified, more modern UI, with a new dashboard and policy wizard/editor, so that it’s a lot easier to manage StorageGRID. All features of the UI are also available via RESTful API access, and the UI is the same for white box, software-only implementations as well as appliance configurations;
  • Simplified, automated installation scripts, so that installations that used to take multiple steps, separate software installs and professional services support now use a full-solution software stack install, take only minutes, and can be done by customers alone;
  • S3 object versioning support, so that objects can have multiple versions (limited via the UI, if needed), providing a snapshot-like capability for S3 data that protects against accidental object deletion (see the boto3 sketch after this list);
  • ILM policy change predictions/modeling, so that admins can now see how changes to ILM policies will impact StorageGRID; and
  • Even more flexibility in DAS storage, so that future StorageGRID configurations can support 10TB drives and 6TB FIPS-140 encrypted drives, which adds to the current drive capacity and data security options already available in StorageGRID.
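
S3 object versioning is likewise plain S3 protocol, so standard tooling can drive it. The sketch below uses boto3 to enable versioning and shows an accidental delete surviving as recoverable versions (endpoint, bucket and key names are again hypothetical):

    # Enable bucket versioning and recover from an accidental delete.
    # Endpoint/bucket/key names are hypothetical.
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://storagegrid.example.com:8082")

    s3.put_bucket_versioning(
        Bucket="media-archive",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # A DELETE on a versioned bucket only inserts a delete marker...
    s3.delete_object(Bucket="media-archive", Key="footage/cam01.mov")

    # ...so older versions remain listable and restorable.
    versions = s3.list_object_versions(Bucket="media-archive",
                                       Prefix="footage/cam01.mov")
    for v in versions.get("Versions", []):
        print(v["VersionId"], v["LastModified"])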

To top it all off, StorageGRID 10.3 improves performance for both small (30KB) and large (300MB) object GETs and PUTs:

  • Small S3 Load Data Router (LDR, 1-thread) object performance has improved ~4X for both PUTs and GETs; and
  • Large S3 LDR (1-thread) object performance has improved ~2X for PUTs and ~4X for GETs.

Object storage market heating up

Apparently, service providers are adopting object storage to provide competition to AWS, Azure and Google cloud storage for backup and storage archives, as well as for DR as a service. Also, many media and other customers managing massive data repositories are turning to object storage to support their multi-site, very large file libraries. And as more solution vendors support S3 object protocols for data access and archive, something like StorageGRID can become their onsite/offsite storage alternative.

And Amazon, Azure and Google are starting to realize that most enterprise customers are not going to leap to the cloud for everything they do, so some sort of hybrid solution is needed for the long term. Having an on premises and off premises object storage solution that can also archive/migrate data to the cloud is a great hybrid alternative that takes enterprises one step closer to the cloud.

Comments?

#VMworld day 1, Cloud Foundation and Cross-Cloud Services

The main keynote topic today at VMworld was how to address the coming cloud tsunami. Pat Gelsinger, citing VMware’s own researchers, believes that 50% of all workloads (OS instances) will be running in public and private clouds by 2021, and that by 2030, 50% of all workloads will be running in the public cloud alone. So today VMware announced two new offerings: VMware Cloud Foundation and VMware Cross-Cloud Services.

Cloud Foundation

Cloud Foundation appears to be a bundling of VMware’s SDDC, NSX®, Virtual SAN™ (VSAN) and vSphere® solutions into a single, integrated stack/package that can be sold and licensed together. No pricing was provided at the show, but essentially VMware wants to give customers a simple way to deploy a VMware private cloud.

VMware states that Cloud Foundation offers customers up to 6-8X faster cloud deployment at a TCO savings of >40%.

VMware also announced a joint partnership with IBM to sell Cloud Foundation services, residing on the IBM Cloud, to their customer base. This broadens the availability of VMware cloud service offerings beyond vCloud and on premises Cloud Foundation environments.

Cross-Cloud Services

Everyone wants to minimize cloud vendor lock-in, but that’s not possible today except in a few special cases (NetApp Private Storage and similar capabilities from other vendors, cloud storage gateway services, cloud archive services, etc.).

VMware Cross-Cloud Services is the next step down this path, attempting to provide easier workload/data migration, consolidated cost and workload management, and security deployment across public and private cloud boundaries.

Cross-Cloud Services was in tech preview at the show, but it’s intended to use standard public cloud APIs to provide specialized, targeted services that allow better cross-cloud migration and management.

The tech preview showed VMware Cross-Cloud Services deploying an NSX gateway in AWS, which allowed NSX to control public cloud IP addresses; once that was done, one could apply security templates to deploy network encryption between apps and their services. VMware used a sniffer to show the plain-text traffic before and the encrypted traffic after, all done in a matter of minutes. They also showed cost trending information for workloads running across private and public clouds.

Next they showed a demo (movie) of VMware migrating/cloning a simple app to other public and private cloud environments. They had a public cloud Unicycle IoT app running in Ireland/AWS (I think) with a three-tier (web, app, database) app structure/instances, and then migrated/cloned that single-site, 3-tier app to be deployed across multiple cloud sites (web and app tiers) with a single database instance running in a private cloud.

I started thinking this was getting us down the path towards cloud virtualization, but in the end it’s much more about targeted services, which run in instances/gateways in the public and private cloud to do very specific migration or management activities. Nonetheless, it’s a great first step towards more flexible cross-cloud deployment and management.

VMworld Day 2 looks to be more on current products and enhancements, stay tuned.

Comments?

Primary Data’s path to better data storage presented at SFD8

A couple of weeks ago we met with Primary Data – Lance Smith, CEO, David Flynn, CTO and Kaycee Lai, SVP Product & Sales – who were presenting at Storage Field Day 8 (SFD8, videos of their sessions available here). Primary Data emerged from stealth late last year and has ~$60M in funding. They also have Steve Wozniak (of Apple fame) as Chief Scientist, but he wasn’t at the SFD8 session 🙁

Primary Data seems out to change the world. At first I thought this was just another form of storage virtualization, but they are laser focused on data virtualization, or what they call data mobility. It differs from pure storage virtualization by being outside the data path. (I have written about data virtualization before, as well as the data hypervisor, a long time ago.) Nowadays they seem to be using the tag line of data in motion.

Why move data?

David has a theory about the proliferation of storage startups. The spectrum between capacity and performance has gotten immense over time, which has provided an opening for a number of companies to address these widening needs.

David believes that caching, at the storage system or in the servers, is an attempt to address this issue by “loaning” data from the storage silo to the cache, trying to supply a lower $/IOP for the data. Similar considerations are apparent at the other end of the spectrum, where customers use archive or backup services to take advantage of much cheaper $/GB storage.

However, given the difficulty of moving data around in present day storage environments, customer data has become essentially immobile. Primary Data is trying to bring about a data mobility revolution and allow data to move over this spectrum of storage performance and capacity with ease. Doing so easily will provide significant benefits, as customers can more fully take advantage of the various levels of performance and capacity in their data center storage environments.

Primary Data architecture

Primary Data provides data mobility by using their metadata service, called the DataSphere appliance, and their client software running on host servers, called the Data Portal. Their offering can best be explained in three layers:

  • Data virtualization layer – provides continuity of identity and continuity of access across multiple physical storage systems. That is, the same data (identity continuity) can be accessed wherever it resides (access continuity) by server applications. Such access and identity must transcend access protocols and interfaces. The Data Portal client software intercepts the server data activity, does control plane activity using the DataSphere appliance, and performs IO directly against the physical storage.
  • Objective based data management – supplies a data affinity service. That is, data can have a temporary location relationship with physical storage depending on the current performance (R:W, IOPS, bandwidth, latency) and protection (durability, availability, disaster recoverability, security, copy-ability, version-ability) characteristics of the data. These data objectives are matched to the capabilities, or service catalog, of the storage infrastructure, and data objectives can change over time (a toy sketch of this matching follows the list).
  • Analytics in the loop – detects the performance and other characteristics of the storage and data in real time. That is, by monitoring storage IO activity, Primary Data can determine the actual performance attributes of the storage. Similarly, by monitoring an application’s IO characteristics over time, the system can determine the performance objectives of its data. The system also takes advantage of SMI-S to define some of the other characteristics of the storage systems.
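
Primary Data didn’t disclose how this objective matching is implemented, so purely as a toy illustration of the idea, here’s a sketch that matches a data objective against a storage service catalog. All tier names and numbers are invented:

    # Toy illustration only: match a data objective against a storage
    # service catalog. Nothing here reflects Primary Data's actual code.
    from dataclasses import dataclass

    @dataclass
    class StorageTier:
        name: str
        max_latency_ms: float   # as measured by analytics-in-the-loop
        durability_nines: int
        cost_per_gb: float

    CATALOG = [
        StorageTier("all-flash", 0.5, 5, 0.90),
        StorageTier("hybrid",    5.0, 6, 0.25),
        StorageTier("archive",  50.0, 9, 0.03),
    ]

    def place(latency_ms: float, durability_nines: int) -> StorageTier:
        """Cheapest tier that still meets the data's current objectives."""
        candidates = [t for t in CATALOG
                      if t.max_latency_ms <= latency_ms
                      and t.durability_nines >= durability_nines]
        return min(candidates, key=lambda t: t.cost_per_gb)

    # A hot database volume vs. a cold compliance archive:
    print(place(latency_ms=1.0, durability_nines=4).name)    # all-flash
    print(place(latency_ms=100.0, durability_nines=8).name)  # archive

As objectives change over time (say the database cools off), re-running the match yields a new placement, which is where the data mobility layer comes in.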

How does Primary Data work?

Primary Data has taken advantage of the parallel NFS extensions (pNFS) in NFSv4.1 to externalize and separate the storage control plane from the IO data plane. This works well for native Linux, where the main developer of the Linux file system stack is on their payroll.

In Windows, they put a filter driver in front of SMB to split the control plane off from the data IO plane. Something similar is done for VMware ESX servers, but in this case there is a software-defined Data Portal that goes along with the DataSphere client and can do it all within the same ESX server. Another alternative is to use the Data Portal appliance as a storage virtualization service, but then the IO data path goes through the portal.
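
To see the control/data plane split conceptually, here’s a toy sketch (not Primary Data’s actual protocol): the client makes a small control plane call to a metadata service to learn where the bytes live, then performs the IO directly against that location:

    # Toy sketch of a pNFS-style control/data plane split. The metadata
    # service only answers "where does this data live?"; IO goes direct.
    LAYOUTS = {  # control plane state: file -> (node, path); hypothetical
        "/vol/db/data01": ("nas-03.example.com", "/export/db/data01"),
    }

    def get_layout(path: str):
        """Control plane call: a tiny message to the metadata service."""
        return LAYOUTS[path]

    def read(path: str, offset: int, length: int) -> bytes:
        node, real_path = get_layout(path)     # control plane
        # Data plane: a real client would now issue NFS/SMB/block reads
        # directly against `node`, never routing data through the
        # metadata service. A local open() stands in for that here.
        with open(real_path, "rb") as f:
            f.seek(offset)
            return f.read(length)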

According to their datasheet, they currently support data virtualization services for NetApp cDOT and 7-mode, EMC Isilon OneFS 7.2, and Nexenta 4.x & 5.0, but plan on more.

They are not quite GA yet, but are close.

Comments?


The data is the hybrid cloud

I have been at the NetApp Insight 2015 conference the past two days and have been struck by one common theme. They have been talking, since the get-go, about the Data Fabric and how Clustered Data ONTAP (cDOT) is the foundation of the NetApp Data Fabric, which spans on premises, private cloud, off premises public cloud and everything in between.

But the truth of the matter is that it’s the data that really needs to span all these domains. Hybrid cloud really needs to have data movement everywhere, and NetApp cDOT is just the enabler that makes moving the data around much easier.

NetApp cDOT data services

From a cDOT perspective, NetApp has available today:

  • Cloud ONTAP – a software defined ONTAP storage service executing in the cloud, operating on cloud service provider hardware using DAS storage and providing ONTAP data services for your cloud-resident data.
  • ONTAP Edge – similar to Cloud ONTAP, but operating on premises with customer commodity server & DAS hardware and providing ONTAP data services.
  • NetApp Private Storage (NPS) – NetApp storage systems operating in a “near cloud” environment, directly connected to cloud service providers, that provide low-latency/high-IOPS NetApp storage to cloud compute applications.
  • NetApp cDOT on premises storage hardware – NetApp storage hardware, with All Flash FAS as well as normal disk-only and hybrid FAS systems, supplying ONTAP data services to on premises applications.

NetApp Data Fabric

NetApp’s Data Fabric is built on top of ONTAP data services and allows a customer to use any of the above storage instances to host their private data. Which is great in and of itself, but when you realize that a customer can also move their data from any one of these ONTAP storage instances to any other, that’s when you see the power of the Data Fabric.

The Data Fabric depends mostly on storage-efficient ONTAP SnapMirror data replication and ONTAP data cloning capabilities. These services can be used to replicate ONTAP data (LUNs/volumes) from one cDOT storage instance to another, and then to create accessible copies of this data at the new location via ONTAP data cloning. This could be on premises to near cloud, to public cloud, or back again, all within the confines of ONTAP data services.
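
As a hedged pseudo-workflow, the Data Fabric pattern described above boils down to two steps. The function names below are illustrative stand-ins, not NetApp’s actual APIs:

    # Hypothetical pseudo-workflow of the replicate-then-clone pattern
    # described above; function names are stand-ins, not NetApp APIs.

    def replicate(src_cluster, src_vol, dst_cluster, dst_vol):
        """SnapMirror-style, space-efficient block replication (stand-in)."""
        ...

    def clone(cluster, parent_vol, clone_name):
        """FlexClone-style writable, space-efficient copy (stand-in)."""
        ...

    # On premises -> near cloud (NPS) -> usable copy for cloud compute:
    replicate("onprem-cdot", "sqldb", "nps-cdot", "sqldb_mirror")
    clone("nps-cdot", "sqldb_mirror", "sqldb_clone")
    # Azure or AWS compute can now mount "sqldb_clone" from NPS, while
    # the mirror relationship keeps it refreshable from the source.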

Data Fabric in action

Now, I like the concept, but they also showed an impressive demo of using cDOT and AltaVault (the backup appliance NetApp acquired last year from Riverbed, formerly their SteelStore product) to perform an application-consistent backup of a SQL database. And once they had this, it went a little crazy.

They SnapMirrored this data from the on premises storage to a near cloud NPS storage instance, then cloned the data from the mirrors, and after that fired up applications running in Azure to process the data. Then they shut down the Azure application and fired up a similar application in AWS using the exact same NPS-hosted data. Of course, they then SnapMirrored the same backup data (I think from the original on premises storage) to Cloud ONTAP, just to show it could be done there as well.

OK, I get it: you can replicate (mirror) data from any cDOT storage instance (whether on premises, at a remote site, at near cloud NPS, or in the cloud using Cloud ONTAP). Once there, you can clone this data and use it with applications running in any environment with access to this data instance (such as AWS, Azure and cloud service providers).

And I like the fact that all this can be accomplished through NetApp’s SnapCenter software. I especially like the fact that the clones don’t take up any extra space and the replication mirroring is done in a quick, space-efficient (read: deduped) manner.

But having to set up a replication or mirror association between cDOT on premises and cDOT at NPS or Cloud ONTAP, and then having to clone the volumes at the target side, seems superfluous. What I really want is to just copy or move the data and have it be at the target site without the mirror association in the middle. It’s almost like what I want is a CLONE that operates across cDOT storage instances wherever they reside.

Well, I’m an analyst and don’t have to implement any of this (thank god). But what NetApp seems to have done is use their current tools and ONTAP data service capabilities to let customer data move anywhere it needs to be, in a customer-controlled, space-efficient, private and secure manner. Once hosted at the new site, applications have access to this data, and customers still have all the ONTAP data services they had on premises, but in cloud and near cloud locations.

Seems pretty impressive to me for all of a customer’s ONTAP data. And when you combine the Data Fabric with Foreign LUN Import (importing non-NetApp data into ONTAP storage) and FlexArray (storage virtualization under ONTAP), you can see how the Data Fabric can apply to non-NetApp storage instances as well, and then it becomes really interesting.

~~~~

There was a company that once said “The Network is the Computer”, but today I think a better tag line is “The Data is the Hybrid Cloud”.

Comments?