CommVault’s Simpana 9 release

CommVault announced a new release of their data protection product today – Simpana® 9.  The new software provides significantly enhanced support for VM backup, new source-level deduplication capabilities and other enhancements.

Simpana 9 starts by defining three tiers of data protection based on their Snapshot Protection Client (SPC), sketched in toy form after the list below:

  • Recovery tier – using SPC, application-consistent hardware snapshots can be taken through the storage array’s own interfaces, enabling content-aware, granular-level recovery.  Simpana 9 SPC now supports EMC, NetApp, HDS, Dell, HP, and IBM (including LSI) storage snapshot capabilities.  Automation supplied with Simpana 9 allows the user to schedule hardware snapshots at various intervals throughout the day such that they can be used to recover data without delay.
  • Protection tier – using the mounted snapshot(s) provided by SPC above, Simpana 9 can create an extract or physical backup set copy on any disk type (DAS, SAN, NAS), providing a daily backup for retention purposes. This data can be deduplicated and encrypted for increased storage utilization and data security.
  • Compliance tier – selective backup jobs can then be sent to archive appliances such as HDS HCP or Dell DX for long-term retention and compliance, preserving CommVault’s deduplication and encryption.  Alternatively, compliance data can be sent to cloud storage.  CommVault’s previous cloud storage support included Amazon S3, Microsoft Azure, Rackspace, Iron Mountain and Nirvanix; with Simpana 9, they add EMC Atmos-based providers and Mezeo to the mix.
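To make the tiering concrete, here’s a minimal sketch of how such a three-tier policy might be expressed. To be clear, the class and field names below are my own invention for illustration, not CommVault’s actual API or configuration syntax, and the frequencies/retentions are made-up examples.

```python
from dataclasses import dataclass

@dataclass
class TierPolicy:
    name: str            # recovery, protection or compliance
    source: str          # where this tier's copy comes from
    target: str          # where the copy lands
    frequency_hrs: int   # how often a copy is made
    retention_days: int  # how long the copy is kept

# Hypothetical encoding of the three Simpana 9 tiers described above
tiers = [
    TierPolicy("recovery", "production volume", "hardware snapshot", 1, 2),
    TierPolicy("protection", "mounted snapshot", "dedup/encrypted disk", 24, 30),
    TierPolicy("compliance", "selective backup job", "cloud/archive appliance", 168, 2555),
]

for t in tiers:
    print(f"{t.name}: {t.source} -> {t.target}, "
          f"every {t.frequency_hrs}h, keep {t.retention_days}d")
```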

Simpana 9 VM backup support

Simpana 9 also introduces a SnapProtect-enabled Virtual Server Agent (VSA) to speed up virtual machine datastore backups.  With VSA’s support for storage hardware snapshot backups and VMware facilities to provide application-consistent backups, virtual server environments can now scale to 1000s of VMs without concern for backup processing and I/O impacting ongoing activity.  VSA snapshots can afterwards be mounted on a proxy server where, using VMware services, file-level content can be extracted, which CommVault can then deduplicate, encrypt and offload to other media, allowing for granular content recovery.

In addition, Simpana 9 supports auto-discovery of virtual machines with auto-assignment of data protection policies.  As such, VM guests can be automatically placed into an appropriate, pre-defined data protection regimen without the need for operator intervention after VM creation; a toy sketch of the idea follows.
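Here’s a toy illustration of auto-discovery with rule-based policy auto-assignment. The rule format and policy names are hypothetical stand-ins, not Simpana 9’s actual configuration syntax:

```python
# First matching rule wins; the catch-all guarantees every discovered VM
# lands in some pre-defined data protection regimen.
POLICY_RULES = [
    (lambda vm: vm["name"].startswith("sql"), "tier1-hourly-snaps"),
    (lambda vm: vm["datastore"] == "dev-ds",  "dev-weekly-backup"),
    (lambda vm: True,                         "default-daily-backup"),  # catch-all
]

def assign_policy(vm):
    """Return the first matching data protection policy for a discovered VM."""
    for matches, policy in POLICY_RULES:
        if matches(vm):
            return policy

discovered = [
    {"name": "sql-prod-01", "datastore": "prod-ds"},
    {"name": "web-dev-07",  "datastore": "dev-ds"},
]
for vm in discovered:
    print(vm["name"], "->", assign_policy(vm))
```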

Also, with all the metadata content cataloguing, Simpana 9 now supplies a lightweight, file-oriented Storage Resource Manager capability via the CommVault management interface.  Such services can provide detailed file-level analytics for VM data without the need for VM guest agents.

Simpana 9 new deduplication support

CommVault’s first-generation deduplication, in Simpana 7, was at the object level.  With Simpana 8, deduplication occurred at the block level, providing content-aware variable block sizes and adding software data encryption support for disk or tape backup sets.  With today’s release, Simpana 9 shifts some deduplication processing out to the source (the client), increasing backup data throughput by reducing data transfer. All this sounds similar to EMC’s Data Domain Boost capability introduced earlier this year.

Such a change takes advantage of CommVault’s intelligent Data Agent (iDA) running in the clients to provide pre-deduplication hashing and list creation, rather than doing all of this at CommVault’s Media Agent node, reducing the data to be transferred (a toy sketch below shows why).  Further, CommVault’s data deduplication can be applied across a number of clients for a global deduplication service that spans remote clients as well as central data center repositories.
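To see why moving the hashing out to the client cuts data transfer, here’s a minimal sketch of source-side deduplication. I’ve simplified the mechanics (fixed-size blocks and a plain set standing in for the Media Agent’s deduplication database); CommVault actually uses content-aware variable block sizes:

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # fixed-size blocks here; Simpana uses variable sizes

def client_side_backup(data, known_hashes):
    """Hash blocks at the client and ship only blocks the repository lacks.
    known_hashes stands in for the Media Agent's deduplication database."""
    to_send, skipped = [], 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest in known_hashes:
            skipped += 1                     # only a reference travels, not the block
        else:
            known_hashes.add(digest)
            to_send.append((digest, block))  # new block: hash + data travel
    return to_send, skipped

repo = set()
data = b"".join(bytes([i]) * BLOCK_SIZE for i in range(4))  # four distinct blocks
sent, _ = client_side_backup(data, repo)
resent, skipped = client_side_backup(data, repo)
print(f"first backup sent {len(sent)} blocks; second sent {len(resent)}, skipped {skipped}")
```

On the second backup of unchanged data, nothing but hashes crosses the wire, which is where the throughput gain comes from.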

Simpana 9 new non-CommVault backup reporting and migration capabilities

Simpana 9 provides a new data collector for NetBackup versions 6.0, 6.5, and 7.0 and TSM 6.1, which allows CommVault to discover other backup services in the environment, extract backup policies, client configurations, job histories, etc., and report on these foreign backup processes.  In addition, once this data collector is in place, Simpana 9 also supports automated procedures that can roll out and convert all these other backup services to CommVault data protection over a weekend, vastly simplifying migration from non-CommVault to Simpana 9 data protection.

Simpana 9 new software licensing

CommVault is also changing their software licensing approach to include more options for capacity-based licensing. Previously, CommVault supported limited capacity-based licensing but mostly used CommVault architectural component-level licensing.  Now they have expanded the capacity licensing offerings, and both licensing modes are available so the customer can select whichever approach proves best for them.  With CommVault’s capacity-based licensing, usage can be tracked on the fly to show when customers may need to purchase a larger capacity license, something like the sketch below.
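Here’s a minimal sketch of that kind of on-the-fly usage tracking; the capacities and warning threshold are made up for illustration, not CommVault’s actual mechanism:

```python
LICENSED_TB = 50
WARN_AT = 0.80  # warn when 80% of licensed capacity is consumed

def check_license(used_tb):
    """Compare current usage against the capacity license and advise."""
    ratio = used_tb / LICENSED_TB
    if ratio >= 1.0:
        return f"over capacity ({used_tb}/{LICENSED_TB} TB): purchase a larger license"
    if ratio >= WARN_AT:
        return f"{ratio:.0%} of licensed capacity used: consider a capacity upgrade"
    return f"{ratio:.0%} of licensed capacity used"

for usage in (30, 42, 55):
    print(check_license(usage))
```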

There are probably other enhancements I missed here, as Simpana 9 is a significant changeover from Simpana 8. Nonetheless, this version’s best feature is the enhanced approach to VM backups, allowing more VMs to run on a single server without concern for backup overhead.  The fact that they do source-level pre-deduplication processing just adds icing to the cake.

What do you think?

VMworld 2010 review

The start of VMworld 2010's first keynote session

Got back from VMworld last week and had a great time. Met a number of new and old friends and talked a lot about the new VMware technology coming online. Some highlights from the keynote sessions I attended:

vCloud Director

Previously known as Redwood, VMware is rolling out their support for cloud services and tying it into their data center services. vCloud Director supports the definition of Virtual Data Centers with varying SLA characteristics. It is expected that virtual data centers would each support different service levels, something like “Gold”, “Silver” and “Bronze”. A virtual data center now represents a class of VM service and aggregates all VMware data center resources together into massive resource pools, which can be better managed and allocated to the VMs that need them.

For example, by using vCloud Director, one need only select a Virtual Data Center to specify the SLAs for a VM; new VMs will be allocated to the virtual data center that provides the requested service (see the toy sketch below). This takes DRS, HA and FT to a whole new level.
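Here’s a toy model of what tier-based provisioning could look like; the structure and names are illustrative guesses on my part, not vCloud Director’s actual API:

```python
# Each virtual data center bundles an SLA tier's resources and capabilities.
VIRTUAL_DATA_CENTERS = {
    "Gold":   {"ha": True,  "ft": True,  "location": "onsite"},
    "Silver": {"ha": True,  "ft": False, "location": "onsite"},
    "Bronze": {"ha": False, "ft": False, "location": "service partner"},
}

def provision_vm(name, tier):
    vdc = VIRTUAL_DATA_CENTERS[tier]   # the tier choice *is* the SLA choice
    return (f"{name} placed in {tier} VDC ({vdc['location']}), "
            f"HA={vdc['ha']}, FT={vdc['ft']}")

print(provision_vm("billing-db", "Gold"))
print(provision_vm("test-web", "Bronze"))
```

The point is that selecting the tier settles placement, HA/FT settings and even location in one step.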

Even more, it now allows vCloud Data Center Service partners to enter the picture and provide a virtual data center class of service to the customer. In this way, a customer’s onsite data center could supply Gold and Silver virtual data center services while Bronze services could be provided by a service partner.

vShield

With VM cloud capabilities coming online, the need for VM security is becoming much more pressing. To address these concerns, VMware rolled out their vShield services, which come in two levels today: vShield Endpoint and vShield Edge.

  • Endpoint – offloads anti-virus scans from running in the VM and interfaces with standard anti-virus vendors to run the scan at the higher (ESX) level.
  • Edge – provides VPN and firewall services surrounding the virtual data center and interfaces with Cisco, Intel-McAfee, Symantec, and RSA to ensure tight integration with these data center security providers.

The combination of vShield and vCloud Director allows approved vCloud Data Center Service providers to supply end-to-end data center security surrounding VMs and virtual data centers. There are currently five approved vShield/vCloud Data Center Services partners today – Terremark, Verizon, SingTel, Colt, and Bluelock – with more coming online shortly. Using vShield services, VMs could have secured access to onsite data center services even though they were executing offsite in the cloud.

VMware View

A new version of VMware’s VDI interface was released which now includes an offline mode for those users who are occasionally outside normal network access and need a standalone desktop environment. With the latest VMware View offline mode, one can check out (download) a desktop virtual machine to a laptop and then run all of one’s desktop applications without network access.


vStorage API for Array Integration (VAAI)

VAAI supports advanced storage capabilities such as cloning, snapshots and thin provisioning, and improves the efficiency of VM I/O. These changes should make thin provisioning much more efficient to use and should enable VMware to take advantage of storage hardware services such as snapshots and clones to offload VMware software services (a toy contrast below shows the payoff).
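To see the payoff of offload, compare a host-mediated clone (every block crosses the SAN twice, once read and once written) with handing the whole operation to the array. This is a conceptual toy of the offload idea, not VAAI’s actual primitives or SCSI commands:

```python
N_BLOCKS = 100_000

class Array:
    def __init__(self):
        self.luns = {"vm-template": list(range(N_BLOCKS))}
        self.host_ios = 0          # I/Os crossing the SAN to/from the host

    def read(self, lun, i):
        self.host_ios += 1
        return self.luns[lun][i]

    def write(self, lun, i, block):
        self.host_ios += 1
        self.luns.setdefault(lun, [None] * N_BLOCKS)[i] = block

    def offload_clone(self, src, dst):
        # One command from the host; the copy happens entirely inside the array.
        self.luns[dst] = list(self.luns[src])

array = Array()
for i in range(N_BLOCKS):                      # host-mediated clone: read + write per block
    array.write("clone-a", i, array.read("vm-template", i))
print("host I/Os without offload:", array.host_ios)   # 200,000

array.host_ios = 0
array.offload_clone("vm-template", "clone-b")  # offloaded clone
print("host I/Os with offload:", array.host_ios)      # 0
```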

vSphere Essentials

Essentials is an SMB-targeted VMware solution licensable for ~$18 per VM on an 8-core server, lowering the entry costs for VMware to very reasonable levels. The SMB data center’s number one problem is lack of resources, and this should enable more SMB shops to adopt VMware services at an entry level and grow up with VMware solutions in their environment.

VMforce

VMforce allows applications developed under SpringSource, the enterprise Java application development framework of the future, to run in the cloud via Salesforce.com’s cloud infrastructure. VMware is also working with Google and other cloud computing providers to provide similar services on their cloud infrastructure.

Other News

In addition to these feature/functionality announcements, VMware discussed their two most recent acquisitions of Integrien and TriCipher.

  • Integrien – is both a visualization and resource analytics application. It lets administrators see at a glance how their VMware environment is operating via a dashboard, and then allows one to drill down to see what is wrong with any items indicated by red or yellow lights. Integrien integrates with vCenter and other services to provide the analytics needed to determine resource status and the details needed to understand how to resolve any flagged situation.
  • TriCipher – is a security service that will ultimately provide a single sign-on/login for all VMware services. As discussed above, security is becoming ever more important in VMware environments, and separate sign-ons to all VMware services would be cumbersome at best. With TriCipher, one need only sign on once to have access to any and all VMware services in a securely authenticated fashion.

VMworld lowlights

Most of these are nits and not worth dwelling on, but the non-top-tier sponsors and exhibitors all seemed to complain about the lack of conference rooms, as they were not allowed in the press & analyst rooms. Finding seating to talk with these vendors was difficult at best around the conference sessions, on the exhibit floor, or in the restaurants/cafes surrounding Moscone Center, although once you got offsite, facilities were much more accommodating.

I would have to say another lowlight was all the late-night parties that occurred – not that I didn’t partake in my fair share of partying. There were rumors of one incident where a conference-goer was running around a hotel hall with only undergarments on, blowing kisses to any female within sight. Some people shouldn’t be allowed to leave home.

The only other real negative in a pretty flawless show was the lines of people waiting to get into the technical sessions. They were pretty orderly, but I have not seen anything like this amount of interest in technical presentations before. Perhaps I have just been going to the wrong conferences. In any event, I suspect VMworld will need to change venues soon, as their technical sessions seem to be outgrowing their session rooms, although the exhibit floor could have used a few more exhibitors. Too bad – I loved San Francisco, and Moscone Center was so easy to get to…

——

But all in all a great conference; learned lots of new stuff, talked with many old friends, and met many new ones. I look forward to next year.

Anything I missed?

Enterprise data storage defined and why 3PAR?

More SNW hall servers and storage

Recent press reports about a bidding war for 3PAR bring into focus the expanding need for enterprise class data storage subsystems.  What exactly is enterprise storage?

Defining enterprise storage is fraught with problems, but I will take a shot.  Enterprise class data storage has:

  • Enhanced reliability, high availability and serviceability – meaning it hardly ever fails, it keeps operating (on redundant components) when it does fail, and repairing the storage when the rare failure occurs can be accomplished without disrupting ongoing storage services
  • Extreme data integrity – goes beyond just RAID storage, meaning that these systems lose data very infrequently, provide the latest data written to a location when read and will tell you when data cannot be accessed.
  • Automated I/O performance – meaning sophisticated caching algorithms that try to keep ahead of sequential I/O streams, buffer actively read data, and buffer write data in non-volatile cache before destaging to disk or other media.
  • Multiple types of storage – meaning the system supports SATA, SAS and/or FC disk drives and SSDs or Flash storage
  • PBs of storage – meaning behind one enterprise class storage (sub-)system one can support over 1PB of storage
  • Sophisticated functionality – meaning the system supports multiple forms of offsite replication, thin provisioning, storage tiering, point-in-time copies, data cloning, administration GUIs/CLIs, etc.
  • Compatibility with all enterprise O/Ss – meaning the storage has been tested and is on hardware compatibility lists for every major operating system in use by the enterprise today.

As for storage protocol, it seems best to leave this off the list.  I wanted to just add block storage, but enterprises today probably have as much if not more external file storage (CIFS or NFS) as they have block storage (FC or iSCSI).  And the proportion in file systems seems to be growing (see IDC report referenced below).

In addition, while I don’t like the non-determinism of iSCSI or file access protocols, this doesn’t seem to stop such storage from putting up pretty impressive performance numbers (see our performance dispatches).  Anything that can crack 100K I/O or file operations per second probably deserves to be called enterprise storage, as long as it meets the other requirements.  So, maybe I should add high-performance storage to the list above.

Why the sudden interest in enterprise storage?

Enterprise storage has been around arguably since the 2nd half of last century (for mainframe systems) but lately has become even more interesting as applications deploy to the cloud and server virtualization (from VMware, Microsoft Hyper-V and others) takes over the data center.

Cloud storage and cloud computing services are lowering the entry points for storage and processing, enabling application deployments which were heretofore unaffordable.  These new cloud applications consume storage at increasing rates and don’t seem to be slowing down any time soon.  Arguably, some cloud storage is not enterprise storage but as service levels go up for these applications, providers must ultimately turn to enterprise storage.

In addition, server virtualization transforms the enterprise data center from a single application per server to easily 5 or more applications per physical server.  This trend is raising server utilization, driving more I/O, and requiring higher capacity.  Such “multi-application” storage almost always requires high availability, reliability and performance to work well, generating even more demand for enterprise data storage systems.

Despite all the demand, worldwide external storage revenues dropped 12% last year according to IDC.  Now the economy had a lot to do with this decline, but another factor reducing external storage revenue is the ongoing drop in the price of storage on a $/GB basis.  To this point, that same IDC report stated that external storage capacity increased 33% last year.

Why do Dell & HP want 3PAR storage?

Margins on enterprise storage are good, some would say very good.  While raw disk storage can be had at under $0.50/GB, enterprise class storage is often 10 or more times that price.  Now that has to cover redundant hardware, software/firmware engineering and other characteristics, but this still leaves pretty good margins.

In my mind, Dell would see enterprise storage as a natural extension of their current enterprise server business.  They already sell to and support these customers; including enterprise-class storage just adds another product to the mix.  Developing enterprise storage from scratch is probably a 4-7 year journey with the right people; buying 3PAR puts them in the market today with a competitive product.

HP is already in the enterprise storage market today, with their XP and EVA storage subsystems.  However, having their own 3PAR enterprise-class storage may get them better margins than their current XP storage, OEMed from HDS.  But I think Chuck Hollis’s post on HP’s counter bid for 3PAR may have revealed another side to this discussion – sometimes M&A is as much about constraining your competition as it is about adding new capabilities to a company.

——

What do you think?

PC-as-a-Service (PCaaS) using VDI

IBM PC Computer by Mess of Pottage (cc) (from Flickr)

Last year at VMworld, VMware was saying that 2010 was the year for VDI (virtual desktop infrastructure); last week NetApp said that most large NY banks they talked with were looking at implementing VDI; and prior to that, HP StorageWorks announced a new VDI reference platform that could support ~1600 VDI images.  It seems that VDI is gaining some serious interest.

While VDI works well for large organizations, there doesn’t seem to be any similar solution for consumers. The typical consumer today usually runs down-level OSs, anti-virus, office applications, etc., and has neither the time nor the inclination to update such software.  These consumers would be considerably better served by something like PCaaS, if such a thing existed.

PCaaS

Essentially, PCaaS would be a VDI-like service offering, using standard VDI tools or something similar: a lightweight local kernel with use of locally attached resources (printers, USB sticks, scanners, etc.), but running applications hosted elsewhere.  PCaaS could provide all the latest O/S and application versions and deliver enterprise-class reliability, support and backup/restore services.

Broadband

One potential problem with PCaaS is the need for reliable broadband to the home. Just like other cloud services, without broadband, none of this will work.

Possibly this could be circumvented if a PCaaS viewer browser application were available (like VMware’s View client). With this in place, PCaaS could be supplied from any internet-enabled location supporting browser access.  Such a browser-based service may not support the same rich menu of local resources as a normal PCaaS client, but it would probably suffice when needed. The other nice thing about a viewer is that smart phones, iPads and other always-on, web-enabled devices supporting standard browsers could provide PCaaS services from anywhere mobile data or Wi-Fi were available.

PCaaS business model

As for businesses that could bring PC-as-a-Service to life, I see many potential providers:

  • Any current PC hardware vendor/supplier may want to supply PCaaS, as it may defer/reduce consumer hardware purchases, or rather move such purchases from consumers to PCaaS companies.
  • Many SMB hosting providers could easily offer such a service.
  • Many local IT support services could deliver better and potentially less expensive services to their customers by offering PCaaS.
  • Any web hosting company would have the networking, server infrastructure and technical know-how to easily provide PCaaS.

This list ignores any new entrants that would see this as a significant opportunity.

Google, Microsoft and others seem to be taking small steps to do this in a piecemeal fashion, with cloud-enabled office/email applications. However, in my view what the consumer really wants is a complete PC, not just some select group of office applications.

As described above, PCaaS would bring enterprise-level IT desktop services to the consumer marketplace. Any substantive business in PCaaS would free up untold numbers of technically astute individuals providing unpaid, on-call support to millions, perhaps billions, of technically challenged consumers.

Now if someone would just come out with Mac-as-a-Service, I could retire from supporting my family’s Apple desktops & laptops…

VMworld and long distance Vmotion

Moving a VM from one data center to another

In all the blog posts/tweets about VMworld this week I didn’t see much about long-distance Vmotion. At Cisco’s booth there was a presentation on how they partnered with VMware to perform Vmotion over 200 (simulated) miles.

I can’t recall when I first heard about this capability, but many of us have heard about it before. However, what was new was that Cisco wasn’t the only one talking about it. I met with a company called NetEx, whose HyperIP product was being used to perform long-distance Vmotion at over 2000 miles apart, with at least three sites actually running their systems doing this. Now I am sure you won’t find NetEx on VMware’s long HCL, but what they have managed to do is impressive.

As I understand it, they have an optimized appliance (also available as a virtual [VM] appliance) that terminates the TCP session (used by Vmotion) at the primary site and then transfers the data payload using their own UDP protocol over to the target appliance, which re-constitutes (?) the TCP session and sends it back up the stack as if everything were local. According to NetEx CEO Craig Gust, their product typically offers a data payload of around ~90% compared to standard TCP/IP’s of around 30%, which automatically gives them a 3X advantage (although he claimed a 6X speed or distance advantage, I can’t seem to follow the logic); the quick arithmetic below shows the 3X.
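Here’s the quick arithmetic on that 3X claim. Effective throughput scales with the fraction of the link carrying payload, so the link speed chosen below is arbitrary; only the efficiency ratio matters:

```python
# Payload-efficiency figures as quoted by NetEx above; link speed is an example.
LINK_MBPS = 800
tcp_effective = LINK_MBPS * 0.30      # standard TCP/IP over distance
hyperip_effective = LINK_MBPS * 0.90  # HyperIP's claimed payload efficiency

print(f"TCP/IP:  {tcp_effective:.0f} Mb/s effective")
print(f"HyperIP: {hyperip_effective:.0f} Mb/s effective")
print(f"advantage: {hyperip_effective / tcp_effective:.1f}x")  # -> 3.0x
```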

How all this works with vCenter, DRS and HA I can only fathom, but my guess is that this long-distance Vmotion actually appears to VMware as a local Vmotion. This way DRS and/or HA can control it all. How the networking is set up to support this is beyond me.

Nevertheless, all of this proves that it’s not just one high-end networking company coming away with a proof of concept anymore; at least two companies exist, one of which has customers doing it today.

The Storage problem

In any event, accessing the storage at the remote site is another problem. It’s one thing to transfer server memory and state information over 10-1000 miles; it’s quite another to transfer TBs of data storage over the same distance. The Cisco team suggested some alternatives to handle the storage side of long-distance Vmotion:

  • Let the storage stay in the original location. This would be supported by having the VM in the remote site access the storage across the network.
  • Move the storage via long-distance Storage Vmotion. The problem with this is that transferring a TB of data (even at a 90% data payload over 800 Mb/s) would take hours (see the worked numbers after this list). And 800 Mb/s networking isn’t cheap.
  • Replicate the storage via active-passive replication. Here the storage subsystem(s) concurrently replicate the data from the primary site to the secondary site.
  • Replicate the storage via active-active replication, where both the primary and secondary sites replicate data to one another and any write to either location is replicated to the other.
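And here are the worked numbers behind the “would take hours” comment in the Storage Vmotion bullet, assuming HyperIP-style 90% payload efficiency on the 800 Mb/s link mentioned above:

```python
TB_BYTES = 1e12     # 1 TB of VM storage to move
LINK_MBPS = 800     # the 800 Mb/s link from the list above
EFFICIENCY = 0.90   # HyperIP-style 90% data payload

effective_bps = LINK_MBPS * 1e6 * EFFICIENCY  # 720 Mb/s effective
seconds = TB_BYTES * 8 / effective_bps        # bits to move / bits per second
print(f"{seconds / 3600:.1f} hours per TB")   # ~3.1 hours per TB
```

So call it three-plus hours per TB, and proportionally longer for multi-TB datastores.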

Now I have to admit the active-active replication, where the same LUN or file system can be replicated in both directions and updated at both locations simultaneously, seems to me unobtainium, but I can be convinced otherwise. Nevertheless, the other approaches exist today and effectively deal with the issue, albeit with commensurate increases in expense.

The Networking problem

So now that we have the storage problem solved, what about the networking problem?  When a VM is Vmotioned to another ESX server it retains its IP addressing so as to retain all its current network connections. Cisco has some techniques here whereby they can extend the VLAN (or subnet) from the primary site to the secondary site and leave the VM with the same network IP address as at the primary site. Cisco has a couple of different ways to extend the VLAN, optimized for HA, load balancing, scalability, or protocol isolation and broadcast avoidance (all of which is described further in their white paper on the subject). Cisco did mention that their VLAN extension technology currently would not support sites more than 500 miles apart.

Presumably NetEx’s product solves all this by leaving the IP addresses/TCP ports at the primary site and just transferring the data to the secondary site. In any event, multiple solutions to the networking problem exist as well.

Now that long-distance Vmotion can be accomplished, is it a DR tool, a mobility tool, a load-balancing tool, or all of the above?  That will need to wait for another post.

Why virtualize now?

HP servers at the School of Electrical Engineering, University of Belgrade, by lilit
I suppose it’s obvious to most analysts why server virtualization is such a hot topic these days. Most IT shops purchase servers today that are way overpowered and could easily execute multiple applications. Such overpowered servers are wasted running single applications and would easily run multiple applications if only an operating system could run them together without interference.

Enter virtualization: with virtualization, hypervisors can run multiple applications concurrently, and sometimes simultaneously, on the same hardware server without compromising application execution integrity. Multiple virtual machine applications execute on a single server under a hypervisor that isolates the applications from one another. Thus, they all execute together on the same hardware without impacting each other.

But why doesn’t the O/S do this?

Most computer purists would ask why not just run the multiple applications under the same operating system. But the operating systems that run servers nowadays weren’t designed to run multiple applications together and, as such, also weren’t designed to isolate them properly.

Virtualization hypervisors have had a clean slate on which to execute and isolate multiple applications. Thus, virtualization is taking over the data center floor. As new servers come in, old servers are retired and the applications that used to run on them are consolidated onto fewer and fewer physical servers.

Why now?

Current hardware trends dictate that each new generation of server has more processing power and oftentimes, more processing elements than previous generations. Today’s applications are getting more sophisticated but even with added sophistication, they do not come close to taking advantage of all the processing power now available. Hence, virtualization wins.

What seems to be happening nowadays is that while data centers started out consolidating tier 3 applications through virtualization, now they are starting to consolidate tier 2 applications and tier 1 apps are not far down this path. But, tier 2 and 1 applications require more dedicated services, more processing power, more deterministic execution times and thus, require more sophisticated virtualization hypervisors.

As such, VMware and others are responding by providing more hypervisor sophistication, e.g., more ways to dedicate and split up the processing, networking and storage available to the physical server for virtual machine or application-dedicated use. Thus they are preparing themselves for a point in the not-too-distant future when tier 1 applications run with all the comforts of a dedicated server environment but actually execute with other VMs on a single physical server.

VMware vSphere

We can see the start of this trend with the latest offering from VMware, vSphere. This product now supports more processing hardware, more networking options and stronger storage support. vSphere can also dedicate more processing elements to virtual machines. Such new features make it easier to support tier 2 applications today and tier 1 applications sometime in the future.

Quantum OEMs esXpress VM Backup SW

Quantum announced today that they are OEMing esXpress software (from PHD Virtual) to better support VMware VM backups (see press release). This software schedules VMware snapshots of VMs and can then transfer the VM snapshot (backup) data directly to a Quantum DXi storage device.

One free “Professional” esXpress license will ship with each DXi appliance, which allows for up to four esXpress virtual backup appliance (VBA) virtual machines to run in a single VMware physical server. An “Enterprise” license can be purchased for $1850, which allows for up to 16 esXpress VBA virtual machines to run on a single VMware physical server. Additional Professional licenses can be purchased for $950 each. The free Professional license also comes with free installation services from Quantum.

Additional esXpress VBAs can be used to support more backup data throughput from a single physical server. VBA backup activity is a scheduled process and, as such, when completed the VBA can be “powered” down to save VMware server resources. Also, as VBAs are just VMs, they fully support VMware’s Vmotion, DRS, and HA capabilities. However, using any of these facilities to move a VBA to another physical server may require additional licensing.

The esXpress software eliminates the need for a separate VCB (VMware Consolidated Backup) proxy server and provides a direct interface to Quantum DXi deduplicated storage for VM backups. This should simplify backup processing for VMware VMs using DXi archive storage.

Quantum also announced today a new key manager, the Scalar Key Manager, for Quantum LTO tape encryption, which has a GUI integrated with Quantum’s tape automation products. This gives the tape automation manager a single user interface for both tape automation and tape security/encryption. A single point of management should simplify the use of Quantum LTO tape encryption.

EMCWorld News

At EMC World this past week, all the news was about VMware, Cisco and EMC and how they are hooking up to address the needs of the new data center user.

VMware introduced vSphere, the latest release of their software, which contains significant tweaks to improve storage performance.

Cisco was not announcing much at the show, other than to say that they support vSphere with their Nexus 1000V software switch.

EMC discussed their latest V-Max hardware, PowerPath for VMware, an upcoming release of NaviSphere for VM, and some other plugins (EMC Viewer) that allow the VMware admin to see EMC storage views.

On another note, it seems that EMC is selling all the SSDs they can get, and they recently introduced their next-generation SSD drive with 400GB of storage.