Platform9, a whole new way to run OpenStack

At Tech Field Day 10 (TFD10) in Austin this past week, we had a presentation from Platform9's Shirish Raghuram, Co-founder and CEO, and Bich Le, Co-founder and Chief Architect. Both Shirish and Bich worked for a long time at VMware and, before that, at other tech giants.

Platform9 provides a user-friendly approach to running OpenStack in your data center. Their solution is a SaaS-based management portal, or control plane, for running compute, storage, and networking infrastructure under OpenStack, the open source cloud software.

Importing running virtualization environments

If you have a current, running VMware vSphere environment, you can onboard or import some or all of your VMs, datastores, and NSX nodes, along with the rest of the vSphere cluster, and have them come up as OpenStack core compute instances and Cinder storage volumes, with NSX acting as a replacement for Neutron networking nodes.

Once your vSphere environment is imported, users can fire up more compute instances, terminate ones they have, allocate more Cinder volumes, etc., all from an AWS-like management portal. It's as close to an AWS console as I have seen.
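
Since the control plane is standard OpenStack, the same operations should also be scriptable through the usual OpenStack APIs. Below is a minimal sketch using the openstacksdk Python client; the endpoint URL, credentials, and image/flavor/network names are hypothetical placeholders, not anything Platform9-specific.

```python
# A minimal sketch using the openstacksdk Python client. The endpoint,
# credentials, image, flavor, and network names are hypothetical --
# substitute whatever your own portal reports for its OpenStack APIs.
import openstack

conn = openstack.connect(
    auth_url="https://example.platform9.net/keystone/v3",  # hypothetical URL
    project_name="demo",
    username="admin",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Fire up a compute instance (Nova), just as you would from the portal.
image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# Allocate a Cinder volume and attach it to the instance.
volume = conn.block_storage.create_volume(name="demo-vol", size=10)  # size in GB
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```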

Platform9 also works for KVM environments; that is, you can import currently running KVM environments into OpenStack and run them from the portal.

Makes OpenStack almost easy to run/use/operate

Historically, the problem with OpenStack has been its user interface. Platform9 solves this problem and makes it easy to import, use, and deploy VMware and KVM environments into an OpenStack framework. Once there, users and administrators have the same level of control that AWS and Microsoft Azure users have: fire up compute instances, allocate storage volumes and attach the two together, terminate the compute activities, detach the volumes, and repeat, all in your very own private cloud.

Bare metal OpenStack support too

If you don’t have a current KVM or VMware environment, Platform9 will deploy a KVM virtualization environment on bare metal servers and storage and use that for your OpenStack cloud.

Security comes from tenant attributes: certain tenants have access to, and control over, certain compute/storage/networking instances.
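
To illustrate that tenant model, here is a minimal sketch of project (tenant) scoping using Keystone, OpenStack's identity service, via the openstacksdk Python client. The project, user, role, and clouds.yaml entry names are all hypothetical; the point is simply that a user's reach into compute/storage/networking resources is bounded by the projects and roles assigned.

```python
# A minimal sketch of tenant scoping with the Keystone v3 API via
# openstacksdk. All names here (cloud entry, project, user, role) are
# hypothetical -- access to compute/storage/network resources is bounded
# by the project (tenant) a user is assigned to.
import openstack

conn = openstack.connect(cloud="platform9-admin")  # hypothetical clouds.yaml entry

project = conn.identity.create_project(name="team-a", description="Team A tenant")
user = conn.identity.create_user(name="alice", password="secret",
                                 default_project_id=project.id)
role = conn.identity.find_role("member")  # role name varies by deployment

# Alice can now see and control only resources owned by team-a.
conn.identity.assign_project_role_to_user(project, user, role)
```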

Customers can also use Platform9 as a replacement for vCenter, and once deployed under OpenStack, tenants/users have control over their segments of the private cloud deployment.

It handles multiple vSphere and KVM clusters and can also handle mixed virtualization environments within the same OpenStack cloud.

A few things missing

The only things I found missing from the Platform9 solution were Swift object storage support and support for Hyper-V environments.

The Platform9 team mentioned that multi-region support was scheduled to come out this week, so your users could then fire up compute and storage instances across your worldwide data centers, all from a single Platform9 management portal.

Pricing for the Platform9 service is on a per-socket basis, with volume pricing available for larger organizations.

If you are interested in a private cloud and are considering OpenStack in order to avoid vendor lock-in, I would find it hard not to give Platform9 a try.

While at Dell


Later in the week at TFD10 we talked with Dell, and they showed off their new VRTX server product. Dell's VRTX is a very quiet 4-server, 48TB tower or rackmount enclosure, which would make a very nice 8- or 16-socket private cloud for my home office environment (the picture doesn't do it justice). And with a Platform9 control plane, I could offer OpenStack cloud services out of my home office to all my neighbors around the world, for a fair but high price…

Comments?


VMware VVOLs potential performance problems

We discussed vSphere 6 VVOLs (Virtual Volumes) on this month's Greybeards on Storage (GBoS) podcast with Howard Marks (@DeepStorageNet) and Satyam Vaghani (@SatyamVaghani, "Father of VVOLs", Co-founder & CTO of PernixData).

VVOLs queue depth problem?

One performance problem, from my perspective, is that all VVOL FC IO is now funneled through a single Protocol Endpoint (PE) LUN per storage system. There may be some potential queue depth issues, but Satyam and Howard both said that queue depths have been greatly increased over the last decade or so, and this shouldn't be a problem as long as you're configured properly.
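
To see why queue depth matters here, a little Little's law arithmetic helps: the number of IOs that must be kept outstanding at the PE equals IOPS times average latency. The numbers in this sketch are illustrative assumptions, not measurements from any real array.

```python
# Back-of-envelope check on a single PE LUN's queue depth using
# Little's law: outstanding IOs = IOPS x average latency.
# All numbers are illustrative assumptions.

target_iops = 100_000      # aggregate VVOL IOPS funneled through one PE
avg_latency_s = 0.002      # 2 ms average IO latency

outstanding = target_iops * avg_latency_s
print(f"Outstanding IOs needed: {outstanding:.0f}")   # 200

# An old-school per-LUN queue depth of 32 would throttle this badly,
# but a modern queue depth of 1024+ leaves plenty of headroom, which
# matches Satyam's and Howard's point about proper configuration.
for qd in (32, 256, 1024):
    print(f"queue depth {qd:5d}: {'OK' if qd >= outstanding else 'bottleneck'}")
```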

What about VVOL PEs on ALUA storage?

In an ALUA (Asymmetric Logical Unit Access) Active/Passive, dual-controller storage system, a set of LUNs is assigned to one controller, the "active" side. Many ALUA vendors now support "Active/Active" configurations in which half the LUNs are assigned to one controller and the other half to the other, i.e., an Active/Passive & Passive/Active pair that together form an Active/Active configuration.

So ALUA storage systems have a LUN "allegiance" to a controller. If this continues to be the case under VVOLs, then a PE would only be processed by one side of an ALUA dual-controller system, effectively cutting the horsepower available to process VVOL IO in half.

Now, just because there is a LUN allegiance in ALUA storage doesn't necessarily mean that all internal IO processing for a LUN is done on only one controller. But historically that has been the case. For instance, during a non-disruptive code update on an ALUA system, the "active" side must "fail over" its LUNs to the other side to provide continuous IO activity while the formerly active side is taken down and updated with new code.

Potential solutions to ALUA PE performance?

One way to get around the VVOL ALUA performance problem is to have multiple PEs in a single storage system for the same vSphere cluster's VVOLs. I don't know of anything that would inhibit a storage system from supporting multiple PEs today; they already need to support multiple PEs for multiple vSphere clusters. Likewise, a VMware vSphere cluster must already support multiple PEs for multiple storage systems.

I am also not aware of any VASA 2.0 requirement that restricts the number of PEs a storage system can present to a single vSphere cluster, though I could be mistaken here. So there should be nothing to prevent multiple PEs going from the same ALUA storage system to the same vSphere cluster.

Of course, this means an ALUA storage system's VVOLs would need to be divided across its PEs.
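
As a purely illustrative sketch of that division (nothing in VASA mandates any particular mechanism), one could deterministically hash each VVOL onto one of several PEs, with the PEs split across the two ALUA controllers:

```python
# Illustrative sketch only: spread VVOLs across multiple PEs so both
# ALUA controllers share the IO processing load. Nothing here is a
# real VASA interface; it just shows the balancing idea.
import hashlib

# Hypothetically, two PEs per controller: A-side and B-side.
PES = ["pe-a0", "pe-a1", "pe-b0", "pe-b1"]

def pe_for_vvol(vvol_id: str) -> str:
    """Deterministically map a VVOL to a PE (and hence to a controller)."""
    h = int(hashlib.md5(vvol_id.encode()).hexdigest(), 16)
    return PES[h % len(PES)]

for vvol in ("vm1-data", "vm1-swap", "vm2-data", "vm3-data"):
    print(vvol, "->", pe_for_vvol(vvol))
```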

Another solution is to eliminate LUN allegiance in ALUA controllers altogether. This requires shared memory between the controllers to hold IO state, which is what non-ALUA (symmetric) storage systems do already.

~~~~

It's just like Howard said on the GBoS podcast: "there's going to be good and bad implementations of VVOLs", and customers will need to tell the difference between the two.

Comments?


Photo Credit(s): Passport Please by Oren Levine

VMworld 2014 projects Marvin, Mystic, and more

[This post was updated after being published to delete NDA material – sorry, RL] I attended VMworld 2014 in San Francisco this past week. Lots of news, mostly about vSphere 6 beta functionality and how the new AirWatch acquisition will be rolled into VMware's End-User Computing framework.

vSphere 6.0 beta

Virtual Volumes (VVOLs) is in beta and extends VMware's software-defined storage model to external NAS and SAN storage. VVOLs transforms SAN/NAS storage into VM-centric devices by making the virtual disk a native representation of the VM at the array level, and enables app-centric, policy-based automation of SAN- and NAS-based storage services, somewhat similar to the capabilities used in a more limited fashion by Virtual SAN today.

Storage system features have proliferated and differentiated over time, and being able to specify and register all of these functional nuances with VMware's storage policy based management (SPBM) service is a significant undertaking in and of itself. I guess we will have to wait until it comes out of beta to see more. NetApp had a functioning VVOL storage implementation on the show floor.

Virtual SAN 1.0/5.5 currently has 300+ customers, with 30+ Ready Node configurations from all major vendors. There are reference architecture documents and system bundles available.

Current enhancements outside of vSphere 6 beta

vRealize Suite extends automation and monitoring support for a broad mix of VMware and non VMware infrastructure and services including OpenStack, Amazon Web Services, Azure, Hyper-V, KVM, NSX, VSAN and vCloud Air (formerly vCloud Hybrid Services), as well as vSphere.

New VMware functionality being released:

  • vCenter Site Recovery Manager (SRM) 5.8 – provides self-service DR through vCloud Automation Center (vRealize Automation) integration, with up to 5,000 protected VMs per vCenter and up to 2,000 concurrent VM recoveries. The SRM UI will move to vSphere's Web Client.
  • vSphere Data Protection Advanced 5.8 – provides configurable parallel backups (up to 64 streams) to shorten backup windows, access to and restore of backups from anywhere, and support for Microsoft Exchange DAGs and SQL Server clusters, as well as Linux LVMs and EXT4 file systems.

VMware NSX 6.1 (in beta) has 150+ customers and provides micro-segmentation, which essentially supports fine-grained firewall definitions almost down to the VM level.

vCloud Hybrid Service is being rebranded as vCloud Air and is currently available globally through data centers in the US, UK, and Japan. vCloud Air is part of the vCloud Air Network, an ecosystem of over 3,800 service providers with a presence in 100+ countries, all based on common VMware technology. VMware also announced a number of new partnerships to support development of mobile applications on vCloud Air. Additional vCloud Air functionality announced at VMworld includes:

  • The vCloud Air Virtual Private Cloud OnDemand beta program supports an instant, on-demand consumption model for vCloud services based on pay-as-you-go pricing.
  • VMware vCloud Air Object Storage, based on EMC ViPR, is in beta and will be coming out shortly.
  • DevOps/continuous integration as a service, vRealize Air automation as a service, and DB as a service (MySQL/SQL Server) will also be coming out soon.

End-User Computing: VMware is integrating AirWatch's (another acquisition) enterprise mobility management solutions for mobile device management, mobile security, and content collaboration (Secure Content Locker) with its current Horizon suite for virtual desktop/laptop support. VMware End-User Computing now supports desktop/laptop virtualization, mobile device management and security, and content security and file collaboration. Also, VMware's recent CloudVolumes acquisition supports a lightweight desktop/laptop app deployment solution for Horizon environments; AirWatch already has a similar solution for mobile.

OpenStack, Containers and other collaborations

VMware is starting to expand their footprint into other arenas, with new support, collaboration and joint ventures.

A new VMware OpenStack distribution is in beta now and will be available shortly; it supports VMware as the underlying infrastructure for OpenStack applications that use OpenStack APIs. VMware has become a contributor to the OpenStack open source project. Other OpenStack distributions that support VMware infrastructure are available from HP, Canonical, Mirantis, and one other company I neglected to write down.

VMware has started a joint initiative with Docker and Pivotal to broaden support for Linux containers. Containers are lightweight packaging for applications that strips out the OS, hypervisor, frameworks, etc., allowing an application to run on mobile devices, desktops, servers, and anything else that runs Linux (kernel 3.8 or better for Docker). Rumor has it that Google launches over 15M containers a day.
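
For a feel of how lightweight this is in practice, here's a minimal sketch using the Docker SDK for Python (the docker package); the image and command are arbitrary examples.

```python
# Minimal sketch with the Docker SDK for Python (pip install docker).
# Starts a container from a stock image, runs one command, and exits --
# no guest OS or hypervisor involved, just the host's Linux kernel.
import docker

client = docker.from_env()
output = client.containers.run("alpine", "echo hello from a container",
                               remove=True)
print(output.decode())
```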

VMware container support expands from Pivotal's Warden containers to now also include Docker containers. VMware is also working with Google and others on the Kubernetes project, which supports container pod management (logical groups of containers). In addition, Project Fargo, VMware's own lightweight packaging solution for VMs, is in development. Now customers can run VMs, Docker containers, or Pivotal (Warden) containers on the same VMware infrastructure.

AT&T and VMware have a joint initiative to bring enterprise-grade network security, speed, and reliability to vCloud Air customers, which essentially allows customers to use AT&T VPNs with vCloud Air. There's more to this, but that's all I noted.

VMware EVO, the next evolution in hyper-convergence, has emerged.

  • EVO RAIL (formerly known as Project Marvin) is an appliance package from VMware hardware partners that runs the vSphere Suite, Virtual SAN, and vCenter Log Insight. The hardware supports 4 compute/storage nodes in a 2U rack-mounted appliance, and 4 of these appliances can be clustered together. Each compute/storage node supports ~100 VMs or ~150 virtual desktops. VMware states that the goal is for an EVO RAIL implementation to take at most 15 minutes from power-on to running VMs. Current hardware partners include Dell, EMC (whose version was formerly named Project Mystic), Inspur (China), Net One (Japan), and Supermicro.
  • EVO RACK is a data center-level hardware appliance with the vCloud Suite installed, including Virtual SAN and NSX. The goal is for EVO RACK hardware to support a 2-hour window from power-on to a deployed private cloud environment/datacenter running VMs. VMware expects a range of hardware partners to support EVO RACK but named none, though they did specifically mention that EVO RACK is intended to support hardware from the Open Compute Project (OCP). VMware is contributing to OCP to facilitate EVO RACK deployment.

~~~~

Sorry about the stream-of-consciousness approach to this. We got a deep dive on what's in vSphere 6, but it was all under NDA, so this post just represents what was discussed openly in keynotes and other public sessions.

Comments?


VMworld first thoughts kickoff session

[Edited for readability. RLL] The drummer band was great at the start, but we couldn't tell if it was live or lip-synced. It turned out that each of the big VMWORLD letters had a digital drum pad on it, which meant it was live, in real time.

Paul got a standing ovation as he left the stage after introducing Pat, the new CEO. With Paul on stage, there was much discussion of how far VMware has come in the last four years. But IDC stats probably say it better than most: in 2008 about 25% of Intel x86 apps were virtualized, in 2012 it's about 60%, and Gartner says that VMware has about 80% of that activity.

Pat got up on stage and it was like nothing had changed. VMware is still going down the path they believe is best for the world: a virtual data center that spans private, on-premises equipment and external cloud service providers' equipment.

There was much ink spilled on the software-defined data center, which takes the vSphere world view and incorporates networking, more storage, and more infrastructure into the already present virtualized management paradigm.

It's a bit murky as to what's changed, what's acquired functionality, and what's new development, but suffice it to say that VMware has been busy once again this year.

A single "monster VM" (it has its own Facebook page) now supports up to 64 vCPUs and 1TB of RAM, and can sustain more than a million IOPS. That should be enough for most mission-critical apps out there today. There was no statement on the latency of those IOPS, but with a million IOs a second and 64 vCPUs, we are probably talking flash somewhere in the storage hierarchy.
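
A quick back-of-envelope calculation (my illustrative numbers, not VMware's) shows why flash is the likely suspect:

```python
# Back-of-envelope: what it takes to sustain 1M IOPS.
# All figures below are illustrative assumptions, not VMware's.

iops_target = 1_000_000
disk_iops = 200            # ~15K RPM disk, random IO
flash_iops = 50_000        # a single decent flash device of the era

print("disks needed:", iops_target // disk_iops)     # 5000 spindles
print("flash devices:", iops_target // flash_iops)   # ~20 devices

# Little's law: at 1M IOPS with, say, 256 IOs outstanding, average
# service time must be ~256 microseconds -- disk can't get there.
print("required avg latency (us):", 256 / iops_target * 1e6)
```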

Pat mentioned that the vRAM concept is now officially dead. The pricing model is now based on physical CPUs and sockets; it no longer has a VM or vRAM component. This seemed to get lots of applause.

There are now so many components to the vCloud Suite that it's almost hard to keep track of them all: vCloud Director, vCloud Orchestrator, vFabric Application Director, vCenter Operations Manager, and of course vSphere, and that's not counting relatively recent acquisitions like DynamicOps (a cloud dashboard) and Nicira (SDN services), and I am probably missing some of them.

In addition to all that, VMware has been working on Serengeti, a layer added to vSphere to virtualize Hadoop clusters. In the demo they spun a Hadoop cluster up and down, with MapReduce running to process log files. (I want one of these for my home office environment.)

They showed another demo of the vCloud Suite in action, spinning up a cloud data center and deploying applications to it in real time. Literally, it took ~5 minutes from startup until they were deploying applications to it. It was a bit hard to follow, as it went deep into WAN-like networking configuration (load balancing, firewalls, and other edge security) and workload characteristics, but it all seemed pretty straightforward: an actual cloud configured in minutes.

I missed the last part about Socialcast, but apparently it builds a social network around VMs? [Need to listen better next time.]

More to follow…


Why virtualize now?

Photo Credit(s): HP servers at the School of Electrical Engineering, University of Belgrade, by lilit
I suppose it's obvious to most analysts why server virtualization is such a hot topic these days. Most IT shops today purchase servers that are way overpowered for a single application and could easily execute multiple applications. These overpowered servers are wasted running single applications and would happily run several, if only an operating system could run them together without interference.

Enter virtualization: hypervisors can run multiple applications concurrently, and sometimes simultaneously, on the same hardware server without compromising application execution integrity. Multiple virtual machine applications execute on a single server under a hypervisor that isolates them from one another. Thus, they all execute together on the same hardware without impacting each other.

But why doesn’t the O/S do this?

Most computer purists would ask: why not just run the multiple applications under the same operating system? But the operating systems that run servers nowadays weren't designed to run multiple applications together and, as such, weren't designed to isolate them properly.

Virtualization hypervisors have had a clean slate on which to execute and isolate multiple applications. Thus, virtualization is taking over the data center floor. As new servers come in, old servers are retired, and the applications that used to run on them are consolidated onto fewer and fewer physical servers.

Why now?

Current hardware trends dictate that each new generation of server has more processing power and, oftentimes, more processing elements than the previous generation. Today's applications are getting more sophisticated, but even with that added sophistication, they do not come close to taking advantage of all the processing power now available. Hence, virtualization wins.

What seems to be happening nowadays is that while data centers started out consolidating tier 3 applications through virtualization, they are now starting to consolidate tier 2 applications, and tier 1 apps are not far down this path. But tier 2 and tier 1 applications require more dedicated services, more processing power, and more deterministic execution times, and thus require more sophisticated virtualization hypervisors.

As such, VMware and others are responding by providing more hypervisor sophistication, e.g., more ways to dedicate and split up the processing, networking, and storage available in the physical server for virtual machine or application-dedicated use. They are thus preparing themselves for a point in the not too distant future when tier 1 applications run with all the comforts of a dedicated server environment but actually execute alongside other VMs in a single physical server.

VMware vSphere

We can see the start of this trend with the latest offering from VMware, vSphere. This product now supports more processing hardware, more networking options, and stronger storage support. vSphere can also dedicate more processing elements to virtual machines. Such new features make it easier to support tier 2 applications today and tier 1 applications sometime in the future.