Western Digital at SFD15: ActiveScale object storage

Phil Bullinger and his staff from Western Digital presented at Storage Field Day 15 (SFD15) on a number of their enterprise products, including Tegile and IntelliFlash, but the one that caught my interest was their ActiveScale object store, which came with the Amplidata acquisition back in 2015.

ActiveScale is an on-premises object storage system that provides cloud-like economics for customer data.

ActiveScale Hardware

ActiveScale systems can both scale up and scale out within a single site. ActiveScale systems have both storage and system nodes: storage nodes perform erasure coding, and system nodes are control points and metadata managers for the object store.

ActiveScale comes in two appliance configurations, each containing the system nodes, storage nodes and storage required. The two appliances are:

  • ActiveScale P100 is a 7U, 720TB pod system. A full rack of P100s can read 8GB/sec and can offer 17-nines data availability. The P100 can scale up to 2.1PB in a single rack and up to 18PB in the same namespace. The P100 is the higher-performing solution, with better-performing storage and system nodes.
  • ActiveScale X100 is a 42U rack scale solution that holds up to 588 12TB drives or 5.8PB per rack. The X100 can scale up to 9 racks or 52PB in the same namespace. The X100 is a denser configuration with only 6 storage nodes and as such, has a better $/GB than the P100 above.

As WDC is both the supplier of the ActiveScale appliance and a supplier of disk storage, they can be fairly aggressive with pricing on appliance systems.

Data integrity in ActiveScale

They make a point of saying that ActiveScale object metadata and data are stored separately. By separating data and metadata, they claim to be more resilient to system failures. Object metadata is 3-way replicated in a database residing on the system nodes. Other object systems often store metadata and object data together.

Object data can be erasure coded. That is, object data is chunked, erasure coding protected and then spread across multiple disk drives for data protection. ActiveScale erasure coding is called BitSpread. With BitSpread customers identify the number of disk drives to spread object data across and the number of drive failures the system should recover from without data loss.

A typical BitSpread configuration splits object data into 18 chunks and spreads these chunks across storage columns. A storage column is from 6-18 storage nodes. There’s no pre-allocated space in BitSpread. Object data chunks are allocated to disk storage based on current capacity and performance of the system, within redundancy constraints.
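
To make the spread-width/failure-tolerance trade-off concrete, here’s a minimal sketch of how such a layout might be parameterized. The 18-chunk spread comes from the paragraph above, but the 5-failure tolerance, the drive names and the one-chunk-per-drive placement are my illustrative assumptions, not WDC’s BitSpread code.

```python
# Illustrative sketch of a BitSpread-style layout (not WDC's implementation).
# Assumptions: an object is split into `spread_width` chunks, `failure_tolerance`
# of which are redundant, and chunks are placed one per drive so that losing
# `failure_tolerance` drives never loses more chunks than the code can absorb.

def bitspread_layout(object_size_bytes, spread_width=18, failure_tolerance=5,
                     drives=None):
    """Return (chunk_size_bytes, chunk-to-drive placement) for one object."""
    if drives is None:
        drives = [f"drive-{i:02d}" for i in range(spread_width)]   # hypothetical drive IDs
    if len(drives) < spread_width:
        raise ValueError("need at least one drive per chunk")

    data_chunks = spread_width - failure_tolerance       # chunks carrying object data
    chunk_size = -(-object_size_bytes // data_chunks)    # ceiling division
    placement = {f"chunk-{c:02d}": drives[c % len(drives)]
                 for c in range(spread_width)}           # one chunk per drive
    return chunk_size, placement

chunk_size, placement = bitspread_layout(1_000_000_000)  # a 1GB object
print(f"~{chunk_size / 1e6:.0f}MB per chunk; survives any 5 drive failures")
```

With 18 chunks of which 5 are redundant, any 13 surviving chunks are enough to rebuild the object, for a raw-capacity overhead of roughly 18/13 ≈ 1.4×.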

In addition, ActiveScale has a background task called BitDynamics that scans erasure coded chunks and does a mathematical health check on the object data. If a chunk is bad, the object data chunk can be recovered and re-erasure coded back to proper health.

WDC performance testing shows that BitDynamics causes no performance degradation when performing re-erasure coding. Indeed, they took out 98 drives in an ActiveScale cluster and BitDynamics re-coded all that data onto other disk drives with no detectable performance impact. There’s no indication of how long re-encoding 98 disk drives of data took, nor of the object store’s capacity utilization at the time of the test, but presumably there’s a report someplace to back this up.

Unlike many public cloud based object storage systems, ActiveScale is strongly consistent. That is, object puts (writes) are not acknowledged to the entity doing the put until the object metadata and object data are properly and safely recorded in the object store.

ActiveScale also supports 3-site erasure coding. GeoSpread is their approach to erasure coding across sites. In this case, object metadata is replicated across 3 system nodes spread across the sites. Object data and erasure coded information are split into 20 chunks, which are then spread across the three sites. This way, if any one site goes down, the other two sites have sufficient metadata, object data chunks and erasure coded information to reconstruct the data.
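
A quick way to convince yourself the single-site-failure claim holds: spread the 20 chunks as evenly as possible across the 3 sites and check that any two sites retain enough chunks to reconstruct an object. The even round-robin spread and the 13-chunk reconstruction threshold below are my assumptions for illustration; WDC didn’t give the exact GeoSpread coding parameters.

```python
# Sketch of GeoSpread-style placement: 20 chunks spread across 3 sites.
# The 13-chunk reconstruction threshold is an assumed value, not a published figure.

from itertools import cycle

def geospread(chunks=20, sites=("site-A", "site-B", "site-C")):
    placement = {site: [] for site in sites}
    for chunk, site in zip(range(chunks), cycle(sites)):
        placement[site].append(chunk)
    return placement

def survives_site_loss(placement, needed=13):
    """True if losing any single site still leaves at least `needed` chunks."""
    total = sum(len(chunks) for chunks in placement.values())
    return all(total - len(chunks) >= needed for chunks in placement.values())

layout = geospread()
print({site: len(chunks) for site, chunks in layout.items()})  # {'site-A': 7, 'site-B': 7, 'site-C': 6}
print(survives_site_loss(layout))                              # True: any two sites hold 13+ chunks
```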

ActiveScale 5.2 now supports asynch replication. That is, any one ActiveScale cluster can replicate to any other ActiveScale cluster located continental distances away.

Unclear how GeoSpread and asynch replication would interact together, but my guess is that each of the 3 GeoSpread sites could be asynchronously replicated to 3 other sites for maximum redundancy.

Both GeoSpread and ActiveScale replication impact performance, depending on how far the sites are from one another and the speed and bandwidth of the links between sites.

ActiveScale markets

ActiveScale’s biggest market is media and entertainment (M&E), mostly used for media archive or tape replacement/augmentation. WDC showed one customer case study for the Montreux Jazz Festival, which migrated 49 years of performance videos up to ActiveScale and can now stream any performance, on request, without delay. Montreux media is GeoSpread across 3 sites in France. Another option is to transcode the object media in real time and stream the transcoded media.

Another large market is Bio/Life Sciences. Medical & biological scanners are transitioning to higher-resolution scans, which take more data space. And this sort of medical information needs to be kept for a long time.

Data analytics on ActiveScale

One other emerging market is data analytics. With the new S3A (S3 adapter), Hadoop clusters can now support object storage as a 2nd tier. One problem with data analytics is that it involves lots of data, and storing it in triplicate costs an awful lot.

In the big data world, datasets can get very large very quickly; indeed, PB-sized data sets aren’t that unusual, and with native HDFS triple replication they consume 3X the raw capacity. When HDFS runs out of space you have to delete data. Before S3A, the only way to increase storage was to scale out the cluster (compute, storage and networking together) in order to add capacity.
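
To put the triplicate-storage cost in numbers, here’s the raw capacity needed for a 1PB dataset under HDFS 3X replication versus an erasure-coded object tier. The 18-chunk/13-data-chunk layout is the same illustrative assumption used earlier in this post, not a published ActiveScale figure.

```python
# Rough raw-capacity comparison: HDFS triple replication vs. an erasure-coded
# object tier. The 18/13 erasure-coding layout is an illustrative assumption.

dataset_pb = 1.0

hdfs_raw = dataset_pb * 3          # three full copies of every block
ec_raw = dataset_pb * 18 / 13      # 13 data chunks plus 5 redundant chunks

print(f"HDFS 3x replication: {hdfs_raw:.2f}PB raw")             # 3.00PB
print(f"18/13 erasure coding: {ec_raw:.2f}PB raw")              # ~1.38PB
print(f"raw capacity saved by tiering: {hdfs_raw - ec_raw:.2f}PB")
```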

Using Hadoop’s S3A, ActiveScale can provide a cold archive tier for data analytics. From a Hadoop user/application perspective, S3A ActiveScale storage looks like just another directory under HDFS (the Hadoop Distributed File System). You can run MapReduce or other Hadoop applications directly against object buckets. But a more realistic approach is to move inactive or cold data from a disk-resident HDFS directory to an S3A directory.

HDFS and MapReduce are tightly coupled and were designed to have data close to where computation happens. So, as long as the active data or working set data is on HDFS disk storage or directly in memory, the rest of the (inactive) data can be placed on S3A object storage. Inactive data is normally historical data no longer being actively analyzed, while newer data would be actively analyzed. Older, inactive data can be manually or automatically archived off to S3A. With Hive, you can partition your database to have active data in HDFS disk storage and inactive data in S3A.
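
In practice, the tiering can be as simple as writing a table’s older partitions to an s3a:// path. The PySpark sketch below uses made-up paths and a made-up date cutoff, and assumes the cluster already has the hadoop-aws (S3A) connector configured with the object store’s endpoint and credentials.

```python
# Minimal PySpark sketch of tiering cold data from HDFS to an S3A bucket.
# Paths, bucket name and the date cutoff are hypothetical; the cluster must
# already have the hadoop-aws (S3A) connector configured for the object store.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cold-data-tiering").getOrCreate()

events = spark.read.parquet("hdfs:///warehouse/events")            # hot, disk-resident HDFS data

cold = events.where("event_date < '2017-01-01'")                   # pick the inactive partitions
cold.write.mode("append").parquet("s3a://archive-bucket/events/")  # land them on object storage
# (the now-archived HDFS partitions would then be deleted out of band)

# Later jobs can read both tiers and union them; schemas must match.
warm = spark.read.parquet("hdfs:///warehouse/events")
archived = spark.read.parquet("s3a://archive-bucket/events/")
all_events = warm.union(archived)
```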

Another approach: if the active, working set data can all fit directly in memory, then all the data can reside on S3A object storage. This way the data is read from S3A storage into memory, analyzed there, and the output written back to the object store or HDFS disk. Because the data is only read (loaded) once, there’s only a minimal performance penalty to using S3A storage.

Western Digital is an active contributor to Hadoop S3A and has recently added performance improvements to S3A, such as better caching, partial object reading, and core XML performance tuning options.

~~~~
If you’re interested in learning more about Western Digital ActiveScale, check out the videos referenced earlier and their website.

Also you may be interested in these other posts on the WD sessions at SFD15:

The A is for Active, The S is for Scale by Dan Firth (@PenguinPunk)

Comments?

Random access, DNA object storage system

I read a couple of articles this week: Inching closer to a DNA-based file system in Ars Technica and DNA storage gets random access in IEEE Spectrum. Both of these seem to be citing an article in Nature, Random access in large-scale DNA storage (paywall).

We’ve known for some time now that we can encode data into DNA strings (see my DNA as storage … and Genomic informatics takes off posts).

However, accessing DNA data has been sequential, and reading and writing DNA data has been glacial. Researchers have started to attack the sequentiality of DNA data access. The prize: DNA can store 215PB of data in one gram, and DNA data can conceivably last millions of years.

Researchers at Microsoft and the University of Washington have come up with a solution to the sequential access limitation. They used polymerase chain reaction (PCR) primers as unique identifiers for files. They can construct a complementary PCR primer that is used to extract just the DNA segments that match this primer and to amplify (replicate) all DNA sequences matching this primer tag that exist in the cell.

DNA data format

The researchers used a Reed-Solomon (R-S) erasure coding mechanism for data protection and encoded the DNA data into many DNA strings, each with multiple (metadata) tags on them. One of the tags is the PCR primer tag header, another tag indicates the position of the DNA data segment in the file, and an end-of-data tag is the same PCR primer tag.

The PCR primer tag was used as sort of a file address. They could configure a complementary PCR tag to match the primer tag of the file they wanted to access and then use the PCR process to replicate (amplify) only those DNA segments that matched the searched for primer tag.

Apparently the researchers chunk file data into blocks of 150 base pairs. As there are 2 complementary base pairs, I assume a one-bit-to-one-base-pair mapping. As such, 150 base pairs, or bits of data, per segment means ~18 bytes of data per segment. Presumably this is to allow for more efficient/effective encoding of data into DNA strings.

DNA strings don’t work well with repeated sequences of base pairs, such as all zeros. So the researchers created a random sequence of 150 base pairs and XORed the file DNA data with this random sequence to determine the actual DNA sequence used to encode the data. Reading the DNA data back, they XOR the data segment with the random string again to reconstruct the actual file data segment.
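
Under the post’s one-bit-per-base-pair assumption, the randomization step looks something like the sketch below: XOR each 150-bit segment with a fixed random mask before mapping bits to bases, and XOR again with the same mask on read. The two-letter A/T alphabet and the fixed mask are my illustrative assumptions; the actual paper encodes into all four bases with more care.

```python
# Sketch of the XOR "whitening" step under the post's 1-bit-per-base-pair
# assumption. The A/T alphabet for 0/1 and the fixed mask are illustrative;
# the Nature paper's actual encoding uses all four bases and other details.

import random

SEGMENT_BITS = 150
random.seed(42)                                    # fixed seed -> reproducible mask
MASK = [random.randint(0, 1) for _ in range(SEGMENT_BITS)]

def encode_segment(bits):
    """XOR a 150-bit segment with the mask, then map bits to bases."""
    assert len(bits) == SEGMENT_BITS
    whitened = [b ^ m for b, m in zip(bits, MASK)]
    return "".join("A" if b == 0 else "T" for b in whitened)

def decode_segment(bases):
    """Map bases back to bits and XOR with the same mask to recover data."""
    whitened = [0 if base == "A" else 1 for base in bases]
    return [b ^ m for b, m in zip(whitened, MASK)]

data = [0] * SEGMENT_BITS                          # worst case: all zeros
dna = encode_segment(data)
assert decode_segment(dna) == data                 # round-trips correctly
print(dna[:30])                                    # no long repeated runs, thanks to the mask
```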

It’s not clear how PCR-replicated DNA segments are isolated and where they are originally decoded (with a read head). But presumably once you have thousands to millions of copies of a DNA segment, it’s pretty straightforward to decode them.

Once decoded and XORed, they use the R-S erasure coding scheme to ensure that all the DNA data segments represent the actual data that was encoded in them. They can then use the position tag of each DNA data segment to indicate how to put the file data back together again.

What’s missing?

I am assuming the cellular data storage system has multiple distinct cells of data, which are clustered together into some sort of organism.

Each cell in the cellular data storage system would hold unique file data; a cell could be extracted, a file read out of it individually, and the cell then placed back in the organism. Cells of data could be replicated within an organism or to other organisms.

To be a true storage system, I would think we need to add:

  • DNA data parity – inside each DNA data segment, every eighth base pair would be a parity for the eight preceding base pairs, used to indicate when a particular base pair of the eight has mutated (a minimal sketch of this follows the list).
  • DNA data segment (block) and file checksums – standard data checksums, used to verify and correct for double and triple base pair (bit) corruption in DNA data segments and in the whole file.
  • Cell directory – used to indicate the unique cell ID of the cell, a file [name] to PCR primer tag mapping table, a version of the DNA file metadata tags, a version of the DNA file XOR string, a DNA file data R-S version/level, the DNA file length or number of DNA data segments, the DNA data creation date-time stamp, the DNA last access date-time stamp, and the DNA data modification date-time stamp (these last two could be omitted).
  • Organism directory – used to indicate the unique organism ID, organism metadata version number, organism unique cell count, unique cell ID to file list mapping, cell ID creation date-time stamp and cell ID replication count.
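
Here’s a minimal sketch of the first bullet’s parity idea, again under the one-bit-per-base-pair assumption: one even-parity bit is appended after every 8 data bits, so a single mutated base pair in a group can be flagged. A real scheme would have to work over the four-base alphabet, so treat this purely as an illustration.

```python
# Sketch of the proposed intra-segment parity: under the post's
# 1-bit-per-base-pair assumption, append one even-parity bit after every
# 8 data bits so a single mutated base pair in a group is detectable.

def add_parity(bits):
    out = []
    for i in range(0, len(bits), 8):
        group = bits[i:i + 8]
        out.extend(group)
        out.append(sum(group) % 2)           # even parity over the group
    return out

def check_parity(coded):
    """Return indices of 9-bit groups whose parity no longer matches."""
    bad = []
    for g, i in enumerate(range(0, len(coded), 9)):
        group, parity = coded[i:i + 8], coded[i + 8]
        if sum(group) % 2 != parity:
            bad.append(g)
    return bad

data = [1, 0, 1, 1, 0, 0, 1, 0] * 4           # 32 data bits
coded = add_parity(data)
coded[3] ^= 1                                 # simulate one mutated base pair
print(check_parity(coded))                    # [0] -> first group is suspect
```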

The problem with an organism cell-ID-to-file list is that it could be quite long. It might be better to somehow indicate a range, or list of ranges, of PCR primer tags that are in each cell ID. I can see other alternatives, such as a segmented organism directory or an indirect organism cell-to-file-list B-tree, which could hold file-name-list-to-cell-ID mappings.

It’s unclear whether DNA data storage should support a multi-level hierarchy, like file system directory structures, or a flat hierarchy like object storage, which just has buckets of object data. Considering the cellular structure of DNA data, it appears to me more like buckets, and the glacial access seems more suited to archive systems. So I would lean to a flat hierarchy and an object storage structure.

Is DNA data WORM or modifiable? Given the effort required to encode and create DNA data segment storage, it would seem it’s more WORM-like than modifiable storage.

How will the DNA data storage system persist or be kept alive, if that’s the right word for it? There must be some standard internal cell mechanisms to maintain its existence. Perhaps the researchers have just inserted file data DNA into a standard cell as a sort of junk DNA.

If this were the case, you’d almost want to create a separate data nucleus inside a cell that would just hold file data and wouldn’t interfere with normal cellular operations.

But doesn’t the PCR primer tag approach lend itself better to a key-value store database?

Photo Credit(s): Cell structure National Cancer Institute

Prentice Hall textbook

Guide to Open VMS file applications

Unix Inodes CSE410 Washington.edu

Key Value Databases, Wikipedia, by Clescop – Own work, CC BY-SA 4.0

Hedvig storage system, Docker support & data protection that spans data centers

We talked with Hedvig (@HedvigInc) at Storage Field Day 10 (SFD10), a month or so ago, and had a detailed deep dive into their technology. (Check out the videos of their sessions here.)

Hedvig implements a software defined storage solution that runs on X86 or ARM processors and depends on a storage proxy operating in a hypervisor host (as a VM) and storage service nodes. Their proxy and the storage services can execute as separate VMs on the same host in a hyper-converged fashion or on different nodes as a separate storage cluster with hosts doing IO to the storage cluster.

Hedvig’s management team comes from hyper-scale environments (Amazon Dynamo/Facebook Cassandra) so they have lots of experience implementing distributed software defined storage at (hyper-)scale.
Continue reading “Hedvig storage system, Docker support & data protection that spans data centers”

Has triple parity Raid time come?

Data center with hard drives

Back at SFD10, a couple of weeks ago now, while visiting with Nimble Storage, they mentioned that their latest all-flash storage array was going to support triple-parity RAID.

And last week at a NetApp-SolidFire analyst event, someone mentioned that the new ONTAP 9 includes triple-parity RAID-TEC™ for larger SSDs. Also heard at the meeting was that a 15.3TB SSD would take on the order of 12 hours to rebuild.

Need for better protection

When Nimble discussed the need for triple parity RAID they mentioned the report from Google I talked about recently (see my Surprises from 4 years of SSD experience at Google post). In that post, the main surprise was the amount of read errors they had seen from the SSDs they deployed throughout their data center.

I think the need for triple-parity RAID and larger (15TB+) SSDs will become more common over time. There’s no reason to think that the SSD vendors will stop at 15TB. And if it takes 12 hours to rebuild a 15TB SSD, I think it’s probably something like ~30 hours to rebuild a 30TB one, which is just a generation or two away.
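
For reference, here’s the linear scaling from the quoted 15.3TB/12-hour figure; rebuilds that compete with production IO generally run longer than this lower bound, which is consistent with the ~30-hour guess.

```python
# Linear scaling of SSD rebuild time with capacity, from the quoted
# 15.3TB / 12-hour figure. Real rebuilds competing with host IO will
# generally take longer than this lower bound.

baseline_tb, baseline_hours = 15.3, 12.0

def linear_rebuild_hours(capacity_tb):
    return baseline_hours * capacity_tb / baseline_tb

for size_tb in (15.3, 30, 60):
    print(f"{size_tb:>5} TB SSD -> ~{linear_rebuild_hours(size_tb):.0f} hr rebuild (lower bound)")
# 15.3 TB -> ~12 hr, 30 TB -> ~24 hr, 60 TB -> ~47 hr
```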

A read error on one SSD in a RAID group during an SSD rebuild can be masked by having dual parity. A read error on two SSDs can only be masked by having triple parity RAID.
Continue reading “Has triple parity Raid time come?”

A tale of two AFAs: EMC DSSD D5 & Pure Storage FlashBlade

There’s been an ongoing debate in the analyst community about the advantages of software-only innovation vs. hardware-software innovation (see the Commodity hardware loses again and Commodity hardware always loses posts). Here is another example where two separate companies have turned to hardware innovation to take storage innovation to the next level.

DSSD D5 and FlashBlade

Within the last couple of weeks, two radically different AFAs were introduced: one by perennial heavyweight EMC with their new DSSD D5 rack-scale flash system, and the other by relative newcomer Pure Storage with their new FlashBlade storage system.

These two arrays seem to be going after opposite ends of the storage market: the 5U DSSD D5 is going after both structured and unstructured data that needs ultra-high-speed IO access times (<100µsec), and the 4U FlashBlade is going after more general purpose unstructured data. And yet the two have many similarities, at least superficially.
Continue reading “A tale of two AFAs: EMC DSSD D5 & Pure Storage FlashBlade”

Data virtualization surfaces

There’s a new storage startup out of stealth, called Primary Data and it’s implementing data (note, not storage) virtualization.

They already have $60M in funding and some pretty high-powered talent from Fusion-io, namely David Flynn, Rick White and Steve Wozniak (the ‘Woz’, also of Apple fame).

There have been a number of attempts at creating a virtualization layer for data, namely ViPR (see my post ViPR virtues, vexations but no storage virtualization), but Primary Data is taking a different tack to the problem.

Data virtualization explained

Data hypervisor, software defined storage, data plane, control plane
(c) 2012 Silverton Consulting, Inc. All rights reserved

Essentially they want to separate the data plane from the control plane (See my Data Hypervisor post and comments for another view on this).

  • The data plane consists of those storage system activities that actually perform IO or read and writes.
  • The control plane is those storage system activities that do everything else that has to be done by a storage system, including provisioning, monitoring, and managing the storage.

Separating the data plane from the control plane offers a number of advantages. EMC ViPR does this, but its data plane is either standard storage systems like VMAX, VNX, Isilon etc., or software defined storage solutions. Primary Data wants to do it all.

Their metadata or control plane engine is called a Data Director, which holds information about the data objects stored in the Primary Data system, runs a data policy management engine and handles data migration.

Primary Data relies on purpose-built, Data Hypervisor (client) software that talks to Data Directors to understand where data objects reside and how to go about accessing them. But once the metadata information is transferred to the client SW, then IO activity can go directly between the host and the storage system in a protocol independent fashion.
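
The pattern here is the classic out-of-band metadata lookup: ask the control plane where an object lives once, cache the answer, then do IO directly against that location. The classes and names below are a generic illustration of that flow, not Primary Data’s actual client API.

```python
# Generic sketch of out-of-band data access: the client asks the metadata
# service (the Data Director role) where an object lives, caches the answer,
# then performs IO directly against the storage target. Names and calls are
# illustrative only, not Primary Data's actual API.

class MetadataService:                       # control plane (Data Director role)
    def __init__(self, placements):
        self._placements = placements        # object name -> (target, protocol, path)

    def locate(self, name):
        return self._placements[name]

class Client:                                # host-side "data hypervisor" role
    def __init__(self, metadata_service):
        self._md = metadata_service
        self._cache = {}

    def read(self, name):
        if name not in self._cache:          # one control-plane round trip
            self._cache[name] = self._md.locate(name)
        target, protocol, path = self._cache[name]
        # Data-plane IO would go directly to `target` over `protocol` here;
        # we just return the placement to show the flow.
        return f"reading {path} from {target} via {protocol}"

md = MetadataService({"vmdk-17": ("nas-array-3", "NFSv4", "/vol/vm/vmdk-17")})
client = Client(md)
print(client.read("vmdk-17"))                # direct host <-> storage IO after the lookup
```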

[The graphic above is from my prior post and I assumed the data hypervisor (DH) would be co-located with the data but Primary Data has rightly implemented this as a separate layer in host software.]

Data Hypervisor protocol independence?

As I understand it this means that customers could use file storage, object storage or block storage to support any application requirement. This also means that file data (objects) could be migrated to block storage and still be accessed as file data. But the converse is also true, i.e., block data (objects) could be migrated to file storage and still be accessed as block data. You need to add object, DAS, PCIe flash and cloud storage to the mix to see where they are headed.

All data in Primary Data’s system are object encapsulated, and all data objects are catalogued within a single, global namespace that spans file, block, object and cloud storage repositories.

Data objects can reside on Primary storage systems, external non-Primary data aware file or block storage systems, DAS, PCIe Flash, and even cloud storage.

How does Data Virtualization compare to Storage Virtualization?

There are a number of differences:

  1. Most storage virtualization solutions are in the middle of the data path and because of this have to be fairly significant, highly fault-tolerant solutions.
  2. Most storage virtualization solutions don’t have a separate and distinct meta-data engine.
  3. Most storage virtualization systems don’t require any special (data hypervisor) software running on hosts or clients.
  4. Most storage virtualization systems don’t support protocol independent access to data storage.
  5. Most storage virtualization systems don’t support DAS or server based, PCIe flash for permanent storage. (Yes this is not supported in the first release but the intent is to support this soon.)
  6. Most storage virtualization systems support internal storage that resides directly inside the storage virtualization system hardware.
  7. Most storage virtualization systems support an internal DRAM cache layer which is used to speed up IO to internal and external storage and is in addition to any caching done at the external storage system level.
  8. Most storage virtualization systems only support external block storage.

There are a few similarities as well:

  1. They both manage data migration in a non-disruptive fashion.
  2. They both support automated policy management over data placement, data protection, data performance, and other QoS attributes.
  3. They both support multiple vendors of external storage.
  4. They both can support different host access protocols.

Data Virtualization Policy Management

A policy engine runs in the Data Directors and provides SLAs for data objects. This would include performance attributes, protection attributes, security requirements and cost requirements. Presumably, policy specifications for data protection would include RAID level, erasure coding level and geographic dispersion.
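
A data-object SLA of this sort could be expressed as a simple policy document. The attribute names and values below are hypothetical, meant only to show the kind of knobs such a policy engine would expose, along with a toy placement decision driven by them.

```python
# Hypothetical policy document for one class of data objects; attribute
# names and values are illustrative, not Primary Data's actual schema.

gold_policy = {
    "name": "gold-oltp",
    "performance": {"max_latency_ms": 2, "min_iops": 50_000},
    "protection": {"raid_level": "RAID6", "snapshots_per_day": 24,
                   "offsite_copy": True, "geo_dispersion_sites": 2},
    "security": {"encryption_at_rest": True},
    "cost": {"max_usd_per_gb_month": 0.12},
}

def placement_for(policy):
    """Toy placement decision: latency-sensitive data goes to flash."""
    tier = "all-flash" if policy["performance"]["max_latency_ms"] <= 5 else "hybrid"
    return {"tier": tier, "offsite_copy": policy["protection"]["offsite_copy"]}

print(placement_for(gold_policy))            # {'tier': 'all-flash', 'offsite_copy': True}
```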

In Primary Data, backup becomes nothing more than object snapshots with different protection characteristics, like offsite full copy. Moreover, data object migration can be handled completely outboard and without causing data access disruption and on an automated policy basis.

Primary Data first release

Primary Data will be initially deployed as an integrated data virtualization solution which includes an all flash NAS storage system and a standard NAS system. Over time, Primary Data will add non-Primary Data external storage and internal storage (DAS, SSD, PCIe Flash).

The Data Policy Engine and Data Migrator functionality will be sold as separately charged software options. Data Directors are sold in pairs (active-passive) and can be non-disruptively upgraded. Storage (directors?) is also sold separately.

Data Hypervisor (client) software is available for most flavors of Linux and for OpenStack, with ESX support coming. Windows SMB support is not split yet (control plane/data plane), but Primary Data does support SMB. I believe the Data Hypervisor software will also be released in an upcoming version of the Linux kernel.

They are currently in testing. No official date for GA but they did say they would announce pricing in 2015.

~~~~

Comments?

Disclosure: We have done work for Primary Data over the past year.

Photo Credits:

  1. Screen shot of beta test system supplied by Primary Data
  2. Graphic created by SCI for prior Data Hypervisor post

EMC acquisitions & other announcements at #EMCSummit last week

EMC recently announced the acquisition of Maginatics and Spanning, which are mainly about pushing data protection out to cloud storage and protecting data born in the cloud, respectively. The other major item to come out of the EMC Global Analyst Summit last week was the announcement of EMC’s Hybrid Cloud Solution.

Maginatics for cloud onramp

Maginatics is intended to be yet another tier in the deep archive for Avamar, Data Domain and NetWorker, but this one resides in the cloud. MagFS, Maginatics’ file system, manages the file-to-object transition, provides its own deduplication and can replicate data to one or more cloud providers, even supporting different cloud services such as AWS, Google Compute, Azure, Cleversafe, etc. I think ultimately this may be broadened beyond just data protection to be another tier for unified storage to move to the cloud as well, but that’s a subject for another post.

How does Maginatics differ from another recent EMC acquisition, TwinStrata? TwinStrata is mainly targeted at primary storage, moving a LUN to the cloud while maintaining data availability and something like reasonable responsiveness to the data that’s moved to the cloud. So where Maginatics is for data protection storage, TwinStrata is for primary storage. Unclear where this leaves other file storage…

Spanning for protection of cloud data

Spanning is intended to be a data protection solution for data that’s born in the cloud. In this case, you can use Spanning to back up your cloud applications to different cloud service providers or even to the same cloud service provider. Even if you don’t want to use Spanning to back up your cloud app’s data, with a cloud version of Data Protection Advisor (based somewhat on Spanning), you should ultimately be able to monitor your current cloud provider’s replication/protection activities to ensure they are copying and backing up your data properly across data center domains, etc. In this way you can better monitor your cloud provider’s internal data replication/protection services.

EMC seems to have a vision of where it intends to go: the cloud represents a significant new potential data stream, and they want to be there to help protect it and to use it to help protect other data.

EMC’s Hybrid Cloud announcement

The main announcement at the summit was EMC’s Hybrid Cloud Offering, which was pre-announced at EMC World last spring. With their Hybrid Cloud Offering making it easier for data centers to take advantage of the cloud to burst applications back and forth, EMC is trying to cover any way to use the cloud that makes sense.

EMC announced that their Hybrid Cloud solution will support a “federation” hybrid cloud solution based on VCE/VBLOCK or VSPEX, and a software defined version based on the ViPR storage controller. They also made a statement of direction to have their Hybrid Cloud solution support Microsoft Azure as well as OpenStack at some point in the future…

Well I think that about covers it for EMC cloud announcements from the EMC Global Summit last week.

Comments?

VMworld 2014 projects Marvin, Mystic, and more

[This post was updated after being published to delete NDA material – sorry, RL] I attended VMworld 2014 in San Francisco this past week. Lots of news, mostly about vSphere 6 beta functionality and how the new AirWatch acquisition will be rolled into VMware’s End-User Computing framework.

vSphere 6.0 beta

Virtual Volumes (VVOLs) is in beta and extends VMware’s software-defined storage model to external NAS and SAN storage. VVOLs transforms SAN/NAS storage into VM-centric devices by making the virtual disk a native representation of the VM at the array level, and enables app-centric, policy-based automation of SAN and NAS based storage services, somewhat similar to the capabilities used in a more limited fashion by Virtual SAN today.

Storage system features have proliferated and differentiated over time, and being able to specify and register any and all of these functional nuances with VMware’s storage policy based management (SPBM) service is a significant undertaking in and of itself. I guess we will have to wait until it comes out of beta to see more. NetApp had a functioning VVOL storage implementation on the show floor.

Virtual SAN 1.0/5.5 currently has 300+ customers, with 30+ ready storage node configurations from all major vendors. There are reference architecture documents and system bundles available.

Current enhancements outside of vSphere 6 beta

vRealize Suite extends automation and monitoring support for a broad mix of VMware and non VMware infrastructure and services including OpenStack, Amazon Web Services, Azure, Hyper-V, KVM, NSX, VSAN and vCloud Air (formerly vCloud Hybrid Services), as well as vSphere.

New VMware functionality being released:

  • vCenter Site Recovery Manager (SRM) 5.8 – provides self service DR through vCloud Automation Center (vRealize Automation) integration, with up to 5000 protected VMs per vCenter and up to 2000 VM concurrent recoveries. SRM UI will move to be supported under vSphere’s Web Client.
  • vSphere Data Protection Advanced 5.8 – provides configurable parallel backups (up to 64 streams) to reduce backup duration/shorten backup windows, access and restore backups from anywhere, and provides support for Microsoft Exchange DAGs, and SQL Clusters, as well as Linux LVMs and EXT4 file systems.

VMware NSX 6.1 (in beta) has 150+ customers and provides micro-segmentation security, which essentially supports fine-grained firewall definitions almost at the VM level.

vCloud Hybrid Cloud Services is being rebranded as vCloud Air, and is currently available globally through data centers in the US, UK, and Japan. vCloud Air is part of the vCloud Air Network, an ecosystem of over 3,800 service providers with presence in 100+ countries that are based on common VMware technology. VMware also announced a number of new partnerships to support development of mobile applications on vCloud Air. Some additional functionality for vCloud Air that was announced at VMworld includes:

  • vCloud Air Virtual Private Cloud On Demand beta program supports instant, on demand consumption model for vCloud services based on a pay as you go model.
  • VMware vCloud Air Object Storage based on EMC ViPR is in beta and will be coming out shortly.
  • DevOps/continuous integration as a service, vRealize Air automation as a service, and DB as a service (MySQL/SQL Server) will also be coming out soon.

End-User Computing: VMware is integrating AirWatch‘s (another acquisition) enterprise mobility management solutions for mobile device management/mobile security/content collaboration (Secure Content Locker) with their current Horizon suite for virtual desktop/laptop support. VMware End User Computing now supports desktop/laptop virtualization, mobile device management and security, and content security and file collaboration. Also, VMware’s recent CloudVolumes acquisition supports a lightweight desktop/laptop app deployment solution for Horizon environments. AirWatch already has a similar solution for mobile.

OpenStack, Containers and other collaborations

VMware is starting to expand their footprint into other arenas, with new support, collaboration and joint ventures.

A new VMware OpenStack Distribution is in beta now and will be available shortly; it supports VMware as the underlying infrastructure for OpenStack applications that use OpenStack APIs. VMware has become a contributor to OpenStack open source. There are other OpenStack distributions that support VMware infrastructure available from HP, Canonical, Mirantis and one other company I neglected to write down.

VMware has started a joint initiative with Docker and Pivotal to broaden support for Linux containers. Containers are lightweight packaging for applications that strips out the OS, hypervisor, frameworks, etc., and allows an application to be run on mobile, desktops, servers and anything else that runs a Linux OS (for Docker, a Linux 3.8 kernel level or better). Rumor has it that Google launches over 15M Docker containers a day.

VMware container support expands from Pivotal Warden containers, to now also include Docker containers. VMware is also working with Google and others on the Kubernetes project which supports container POD management (logical groups of containers). In addition Project Fargo is in development which is VMware’s own lightweight packaging solution for VMs. Now customers can run VMs, Docker containers, or Pivotal (Warden) containers on the same VMware infrastructure.

AT&T and VMware have a joint initiative to bring enterprise grade network security, speed and reliablity to vCloud Air customers which essentially allows customers to use AT&T VPNs with vCloud Air. There’s more to this but that’s all I noted.

VMware EVO, the next evolution in hyper-convergence has emerged.

  • EVO RAIL (formerly known as project Marvin) is an appliance package from VMware hardware partners that runs the vSphere Suite, Virtual SAN and vCenter Log Insight. The hardware supports 4 compute/storage nodes in a 2U rack-mounted appliance, and 4 of these appliances can be connected together into a cluster. Each compute/storage node supports ~100 VMs or ~150 virtual desktops. VMware states that the goal is to have an EVO RAIL implementation take at most 15 minutes from power-on to running VMs. Current hardware partners include Dell, EMC (formerly named project Mystic), Inspur (China), Net One (Japan), and SuperMicro.
  • EVO RACK is a data center level hardware appliance with vCloud Suite installed and includes Virtual SAN and NSX. The goal is for EVO RACK hardware to support a 2hr window from power on to a private cloud environment/datacenter deployed and running VMs. VMware expects a range of hardware partners to support EVO RACK but none were named. They did specifically mention that EVO RACK is intended to support hardware from the Open Compute Project (OCP). VMware is providing contributions to OCP to facilitate EVO RACK deployment.

~~~~

Sorry about the stream of consciousness approach to this. We got a deep dive on what’s in vSphere 6 but it was all under NDA. So this just represents what was discussed openly in keynotes and other public sessions.

Comments?