When 64 nodes are not enough

Why would VMware, with years of ESX development behind them, want to develop a whole new virtualization system for Docker and other container frameworks? Especially since they already have compatible Docker support in their current product line.

The main reason I can think of is that a 64-node cluster may be limiting to some container services, and VMware ESX/vSphere supporting 1000s of nodes in a single cluster seems pretty unlikely. So given that more and more cloud services are being deployed across 1000s of nodes using container frameworks, VMware had to do something or say goodbye to a potentially lucrative use case for virtualization.

Yes, over time VMware may indeed extend vSphere clusters to 128 or even 256 nodes, but by then the world will have moved beyond VMware for these services, and where will VMware be then? Left behind.

Photon to the rescue

With the new Photon system, VMware has an answer for anyone that needs 1,000- to 10,000-server cluster environments. Now these customers can easily deploy their services on the VMware Photon Platform, which was developed off of ESX but doesn't have ESX's cluster limitations.

Thus, the need for Photon is now. Customers can easily deploy container frameworks that span 1000s of nodes. Of course it won't be as easy to manage as a 64-node vSphere cluster, but it will be easily automated, easier to deploy, and easier to scale when necessary, especially beyond 64 nodes.

The claim is that the new Photon will be able to support multiple container frameworks without modification.

So what's stopping you from taking on the Amazons, Googles, and Apples of the world's data centers?

  • Maybe storage, but then there's ScaleIO and the other software-defined storage solutions that can support local DAS clusters of almost incredible size.
  • Maybe networking. I am not sure just where NSX is in the scheme of things; maybe it's capable of handling 1000s of nodes and maybe not, but networking could be a clear limitation on how many nodes can be deployed in this sort of environment.

Where does this leave vSphere? Probably a continuation of the current trajectory: making VMware clusters easier and more efficient to run and, over time, extending any current limitations. So for the moment there are two development streams based off of ESX, each being enhanced for its own market.

How much of ESX survived is an open question, but it's likely that Photon will never see the familiar VMware services and operations that are readily available to vSphere clusters.

Comments?

Photo Credit(s): A first look into Dockerfile system

#VMworld2015 day 1 announcements


It seemed like today was all about the cloud and cloud-native apps. Among the many announcements, VMware introduced two key new capabilities: vSphere Integrated Containers and the Photon Platform.

Containers running on VMware

  • VMware vSphere Integrated Containers is an implementation of containers that runs natively under vSphere. The advantage of this solution is that when developers fire up a multi-container app, each container exists as a separate VM under vSphere and can be managed, monitored, and secured just like any other VM in the environment (a rough sketch of that developer workflow follows this list). Previously, a multi-container app would be one VM per container engine, with potentially many containers running under that single VM. But with vSphere Integrated Containers, the container engine and the lightweight Linux kernel (Photon OS) are integrated into the ESX hypervisor, so each container runs as a native VM. Integrated Containers is a follow-on to a combination of Project Bonneville, Project Photon (OS), and Instant Clones. Recall that with Instant Clones one can spin up a clone of a VM in less than a second with a memory footprint of 0MB.
  • Photon Platform takes container execution to a whole new level, with a new hypervisor tailor-made to run containers (not VMs). With the Photon Platform one natively runs container frameworks underneath the platform. Photon Platform consists of Photon Machine, which is Photon OS (a lightweight Linux kernel distro) plus the new Microvisor (a new lightweight hypervisor for container hardware calls), and Photon Controller, which is a distributed control plane and management API. With Photon Platform one can manage 100K to millions of containers, running under 1000s of container frameworks.
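To make the first bullet concrete, here is a minimal sketch of the kind of multi-container app a developer might fire up with the Docker SDK for Python. Nothing in it is VMware-specific, and the image names, container names, and network are placeholder assumptions; the point of vSphere Integrated Containers is that this same Docker-style workflow is supposed to carry over unchanged, with each container surfacing as its own VM on the vSphere side.

```python
# Minimal sketch: a developer spins up a two-container app with the Docker SDK
# for Python. Under vSphere Integrated Containers the same calls would go to a
# Docker-compatible endpoint, but each container would be instantiated as its
# own lightweight VM (Photon OS + Instant Clone) rather than sharing one
# container-host VM. Image names and the network name are placeholders.
import docker

client = docker.from_env()  # talks to whatever Docker API endpoint is configured

# Private network so the web tier can reach the backing store by name
client.networks.create("demo-net", driver="bridge")

db = client.containers.run(
    "redis:latest",            # placeholder backing store
    name="demo-db",
    network="demo-net",
    detach=True,
)

web = client.containers.run(
    "nginx:latest",            # placeholder front end
    name="demo-web",
    network="demo-net",
    ports={"80/tcp": 8080},    # expose the app on the host
    detach=True,
)

# Both containers should show up as running
print([c.name for c in client.containers.list()])
```

Run against a plain Docker engine this just launches two containers; the claim from the announcements is that the management, monitoring, and security differences show up on the vSphere (or Photon) side, not in the developer's code.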

Over time, Photon Platform is intended to be open sourced. VMware also announced a bundling of Pivotal Cloud Foundry with the Photon Platform so as to better run cloud-native apps implemented in Cloud Foundry. But the ultimate intent is to provide support for Google Kubernetes, Apache Mesos, and any other container framework that comes out.

So now you can run your Docker container apps, or any other container app solution, in two different ways. One depends on the standard vSphere management platform and runs container apps as standard VMs. The other takes a completely greenfield approach and runs container frameworks natively on a ground-up new hypervisor with an altogether new management solution that scales.

The advantage of Photon is that it scales to extreme, cloud-level application environments. Photon is intended to run cloud-native apps.

vCloud Air extensions

One of the other major things VMware demoed today was moving a VM from on premises to vCloud Air and back again, a real crowd pleaser. One VMware exec said that MIT had convinced them they needed to be able to move dev-test apps from on premises to the cloud. MIT then turned around and decided they wanted to move dev-test activity back to their on-prem environment and instead move their production to vCloud Air.

They demoed both capabilities, using vMotion to move a VM to vCloud Air and using it again to move it back. The nice thing about all this is that all the security and other attributes of the VM move to the cloud and back again along with the VM. All the while the VM continued to operate, with no disruption to execution. They mentioned that it could potentially take hours to move the data for the VM.
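For a flavor of what a programmatic vMotion looks like, here is a minimal pyVmomi sketch of the basic relocate call that live-migrates a running VM between hosts. This is only a generic vSphere API example, not the actual cross-cloud move to vCloud Air that was demoed (that relies on additional hybrid-cloud plumbing not shown here), and the vCenter address, credentials, VM name, and host name are all placeholders.

```python
# Minimal sketch (not the vCloud Air demo itself): live-migrate a running VM
# to another host via the vSphere API with pyVmomi. All names and credentials
# are placeholders; the hop out to vCloud Air needs extra plumbing not shown.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE          # lab-only: skip cert validation

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory and return the first managed object with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "demo-vm")
dest_host = find_by_name(vim.HostSystem, "esx-02.example.com")

spec = vim.vm.RelocateSpec()
spec.host = dest_host
spec.pool = dest_host.parent.resourcePool  # destination compute resource's pool

task = vm.RelocateVM_Task(spec=spec)       # the VM keeps running during the move
print("vMotion started:", task.info.state)

Disconnect(si)
```

The interesting part of the demo was that this same live-migration behavior, security attributes intact, now stretches from an on-premises vCenter out to vCloud Air and back.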

There were a number of other capabilities announced today, including EVO SDDC (EVO: RACK reborn), which includes a new datacenter management solution. Customers can now roll in a rack of servers and have EVO SDDC manage them and deploy a software-defined data center on them in a matter of hours. Within EVO SDDC you can have application domains that span racks of servers but provide isolation and management multi-tenancy.

NSX 6.2 was also discussed and is essentially key to extending your networking from on premises to vCloud Air. With NSX 6.2, local routing, micro-segmentation security, and app firewalls can be configured locally and then be “extended” to the vCloud Air environment.

Lots of moving parts here; I probably missed some key components of these solutions and didn't cover any of them well enough other than to give a feel for what they are.

But one thing is clear: VMware's long-term strategy is to take your native, on-premises VMs to vCloud Air and back again, and if your DevOps group or any other BU wants to use containers to implement cloud apps, VMware has you covered coming and going.

Comments?