Veeam’s upcoming V8 virtues

[Not] Vamoosing VMworld

We were at Storage Field Day 5 (SFD5, see the videos here) last month and had a briefing on Veeam’s upcoming V8 release.

They also told us (news to me) that they are leaving VMworld [I sit corrected: I have been informed after this went to press that Veeam is not leaving VMworld 2014 and never said anything about it at the session. My mistake, I take full responsibility, and I am sorry for any confusion] (sigh, now who’s going to throw THE after-conference KILLER PARTY at VMworld?) and moving to [but they did say they are definitely starting up] their own VeeamON conference at The Cosmopolitan in Las Vegas on October 6-8 this year. If their VMworld parties are any indication, the conference at the Cosmo should be a fun and rewarding time for all. Pre-registration is open and they have a call out for papers.

Doug Hazelman (@VMDoug), Rick Vanover (@RickVanover) and Luca Dell’Oca (@dellock6) all presented, although Luca’s session was under strict NDA, to be revealed later, I think sometime this summer.

Doug mentioned that after 6 years, Veeam now has over 100,000 customers worldwide. One of their more popular early innovations was the ability to run a VM directly from a backup, and sometime over the past couple of years they have moved from a VMware-only backup & replication solution to also supporting Microsoft Hyper-V (more news to me).

V8’s virtues

Veeam V8 will add some interesting capabilities to the Veeam product line:

  • (VMware only) Built-in backups from storage snapshots – (Enterprise Plus edition only) Backup from VMware snapshots can sometimes impact application performance, especially when it comes time to commit changes. But with V7, Veeam now offers backup that utilizes VMware’s Change Block Tracking (CBT) and takes backups directly from storage snapshots for HP 3PAR StoreServ, HP (Lefthand) StoreVirtual/StoreVirtual VSA and, in the soon-to-be-available V8, NetApp FAS (Data ONTAP 8.1 or above, 7- or cluster-mode, clones too) storage systems. The sequence works like this: first Veeam does its application-level processing (under Windows Server, VSS operations); after that completes, it tells VMware to take a (VMware) snapshot; when that completes, it tells the storage to take a (storage) snapshot; and when that completes, it releases the VMware snapshot (see the sketch after this list). All of this lets them utilize VMware CBT as well as storage snapshots, which makes backups up to 20 times faster than normal VMware snapshot backups. This way they can back up directly from the storage snapshot using the Veeam proxy. And because the VMware snapshot is so short lived, there is little overhead for committing any changes. There is also no need to use a proxy ESX server to do this, i.e., promote the VMware snapshot to a LUN, add it to an ESX host, resignature, add the VM, and do all the backups, which, of course, destroys CBT. This works for FC, iSCSI and NFS data stores. With NetApp storage you can also take the (VSS) application-consistent snapshot and copy it to SnapVault.
  • Veeam Explorer (recovery) for storage snapshots – (Free backup edition) Recovery from (HP in V7 & NetApp in V8) storage snapshots is yet another feature, providing item-level (e.g., emails, contacts, email folders for Exchange), granular (VM-level or file-level) or full (volume) recovery from storage-based snapshots, regardless of how those storage snapshots were created.
  • Veeam Explorer for SQL Server (V8 only) – (unsure what license is required) Similar to the Explorer for snapshots discussed above, this allows a Veeam admin to do item-level recovery for an SQL database, from Veeam backup repositories as well as from storage snapshots. This means you could restore a ROW of an SQL table, a whole SQL TABLE, or an entire SQL database. DBAs have always had these sorts of abilities, but they required using log services. Allowing a Veeam admin to do these sorts of activities seems like putting a gun in the hands of a child (or maybe a bazooka in the hands of an untrained civilian).
  • Veeam Explorer for Active Directory (V8 only) – (unsure what license is required) You’ve seen what’s available above; now consider those same capabilities applied to Active Directory. This means you can restore a password hash, user, group or organizational unit (OU). I don’t know about you, but this seems more akin to a howitzer in the hands of a civilian.
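
To make the backup-from-storage-snapshots sequence in the first bullet concrete, here’s a minimal sketch of the orchestration order. None of these functions are real Veeam, VMware or NetApp APIs; they are made-up stubs that exist only to make the ordering, and the very short life of the VMware snapshot, explicit:

```python
# Illustrative sketch only -- every function here is a hypothetical stub,
# not a real Veeam/VMware/NetApp API.

def quiesce_app_with_vss(vm):          # 1. application-level processing (VSS on Windows)
    print(f"VSS quiesce on {vm}")

def take_vmware_snapshot(vm):          # 2. short-lived VMware snapshot
    print(f"VMware snapshot of {vm}")
    return f"{vm}-vmsnap"

def take_storage_snapshot(volume):     # 3. array-based (storage) snapshot
    print(f"Storage snapshot of {volume}")
    return f"{volume}-storsnap"

def release_vmware_snapshot(snap):     # 4. release quickly, so commits are cheap
    print(f"Release {snap}")

def backup_vm(vm, volume):
    quiesce_app_with_vss(vm)
    vm_snap = take_vmware_snapshot(vm)
    try:
        storage_snap = take_storage_snapshot(volume)
    finally:
        release_vmware_snapshot(vm_snap)   # VMware snapshot lives only this long
    # 5. The Veeam proxy then reads only CBT-changed blocks directly from
    #    the storage snapshot -- no proxy ESX host, no LUN promotion or
    #    resignature, so CBT stays intact.
    print(f"Back up changed blocks of {vm} from {storage_snap}")

backup_vm("sql-vm01", "datastore-vol1")
```

The try/finally is the point: the VMware snapshot is held just long enough to take the storage snapshot, which is why the commit overhead stays small.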

They showed an example of a competitive situation running V8 (in beta?) with NetApp snapshot-based backups versus some unnamed competitor. They were able to complete a full backup in 1/4 the time of the competition (2 hrs. vs. 8 hrs.) and completed incremental backups in 35 min. vs. 2 hrs. for the competition.

“Thar be dragons there …”

Ok, maybe I am a little more paranoid than the average IT guy/gal. But in my (old world, greybeards) view, SQL databases belong in the realm of DBAs and Active Directory databases belong to domain controller admins. Messing around with production versions of SQL DBs or AD DBs seems hazardous to a data center’s health. We’re not just talking files anymore here, guys.

In Veeam’s defense, these new Explorer recovery tools are probably only going to be used when something needs to be done right away, to get things back operating again, and would not be used unless there’s a real need/emergency to do so. Otherwise, let the DBAs and security admins do it with their log recovery tools. And another thing: they have had similar capabilities for Exchange emails, folders, contacts, etc. and no one has shot their foot off yet, so why the concern?

Nonetheless, I feel strongly that these tools ought to be placed under lock and key and the key put in a safe with the combination under a glass case labeled IN CASE OF EMERGENCY, BREAK GLASS.

Comments?

Big data – part 3

Linkedin maps data visualization by luc legay (cc) (from Flickr)

I have renamed this series to “Big data” because it’s no longer just about Hadoop (see Hadoop – part 1 & Hadoop – part 2 posts).

To try to partition this space just a bit, there is unstructured data analysis and structured data analysis. Hadoop is used to analyze unstructured data (although, in the process, Hadoop is often used to parse and impose structure on that data).
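
As a toy illustration of that parsing-and-structuring step, here’s what a Hadoop Streaming-style mapper might look like in Python, turning raw web-server log lines into structured, tab-separated records. The log layout and field choices are my own assumptions for illustration, not from any particular deployment:

```python
#!/usr/bin/env python
# Toy Hadoop Streaming-style mapper: reads unstructured web-server log
# lines on stdin and emits structured, tab-separated (key, value) records
# on stdout. The log layout below is an assumption for illustration.
import re
import sys

# e.g.: 10.0.0.1 - - [12/May/2011:06:25:14] "GET /index.html HTTP/1.1" 200 5120
LOG = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d{3}) (\d+)')

for line in sys.stdin:
    m = LOG.match(line)
    if not m:
        continue                      # skip lines that don't parse
    ip, ts, method, url, status, nbytes = m.groups()
    # key = URL, value = bytes served
    print(f"{url}\t{nbytes}")
```

A reducer downstream could then sum bytes per URL, at which point the formerly unstructured log is effectively structured data.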

On the other hand, for structured data there are a number of other options currently available. Namely:

  • EMC Greenplum – a relational database that is available as software-only as well as, now, a hardware appliance. Greenplum supports both row- and column-oriented data structuring (see the sketch after this list) and has support for policy-based data placement across multiple storage tiers. There is also a packaged solution that consists of Greenplum software and a Hadoop distribution running on a Greenplum appliance.
  • HP Vertica – a column-oriented, relational database that is currently available as a software-only distribution. Vertica supports aggressive data compression and provides high-throughput query performance. They were early supporters of Hadoop integration, providing Hadoop MapReduce and Pig API connectors that give Hadoop access to data in Vertica databases, along with job scheduling integration.
  • IBM Netezza – a relational database system based on a proprietary hardware analysis engine configured in a blade system. Netezza is the second oldest solution on this list (see Teradata for the oldest). Since the acquisition by IBM, Netezza provides their highest performing solution on IBM blade hardware, but all of their systems depend on purpose-built FPGA chips designed to perform high-speed queries across relational data. Netezza has a number of partner and/or homegrown solutions that provide specialized analysis for specific verticals such as retail, telecom, finserv and others. Also, Netezza provides tight integration with various Oracle functionality, but there doesn’t appear to be much direct integration with Hadoop on their website.
  • ParAccel – a column-based, relational database that is available as a software-only solution. ParAccel offers a number of storage deployment options including an all in-memory database, a DAS database or an SSD database. In addition, ParAccel offers a Blended Scan approach providing a two-tier database structure with DAS and SAN storage. There appears to be some integration with Hadoop, indicating that data stored in HDFS and structured by MapReduce can be loaded and analyzed by ParAccel.
  • Teradata – a relational database based on proprietary, purpose-built appliance hardware. Teradata recently came out with an all-SSD solution which provides very high performance for database queries. The company was started in 1979, has been very successful in the retail, telecom and finserv verticals, and offers a number of special purpose applications supporting data analysis for these and other verticals. There appears to be some integration with Hadoop but it’s not prominent on their website.
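
Since several of the products above are column oriented, a quick sketch of why that layout suits analytics may help. This is a minimal illustration with made-up data, not any vendor’s actual storage format:

```python
# Minimal sketch of row- vs column-oriented layout. Data is made up
# purely for illustration.

rows = [  # row-oriented: each record stored together (good for OLTP lookups)
    {"cust": "a", "region": "east", "sales": 100.0},
    {"cust": "b", "region": "west", "sales": 250.0},
    {"cust": "c", "region": "east", "sales": 75.0},
]

cols = {  # column-oriented: each column stored contiguously
    "cust":   ["a", "b", "c"],
    "region": ["east", "west", "east"],
    "sales":  [100.0, 250.0, 75.0],
}

# SELECT SUM(sales): the row store must walk every whole record ...
row_total = sum(r["sales"] for r in rows)
# ... while the column store reads just the one (compressible) column.
col_total = sum(cols["sales"])
assert row_total == col_total == 425.0
```

An aggregate like SUM(sales) touches one contiguous, highly compressible column rather than every field of every row, which is where much of the columnar query-speed advantage comes from.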

I’m probably missing a few other solutions, but these appear to be the main ones at the moment.

In any case, both Hadoop and most of its software-only, structured competition are based on a massively parallelized, shared-nothing set of Linux servers. The two hardware-based solutions listed above (Teradata and Netezza) also operate in a massively parallel processing mode to load and analyze data. Such solutions provide scale-out performance at a reasonable cost to support very large databases (PB of data).
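
Here’s a toy sketch of that shared-nothing pattern, with OS processes standing in for independent nodes. The simplifications are assumed for illustration: the data is already partitioned and the query is a simple sum:

```python
# Toy sketch of the shared-nothing / MPP pattern: data is pre-partitioned
# across nodes, each "node" (a process here) scans only its own partition,
# and a coordinator merges the partial results.
from multiprocessing import Pool

def scan_partition(partition):
    # Each node aggregates locally; no shared storage or memory needed.
    return sum(partition)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_nodes = 4
    partitions = [data[i::n_nodes] for i in range(n_nodes)]  # crude partitioning
    with Pool(n_nodes) as pool:
        partials = pool.map(scan_partition, partitions)      # scatter
    total = sum(partials)                                    # gather/merge
    assert total == sum(data)
```

Because each node scans only its own partition, adding nodes cuts scan time roughly linearly, which is exactly the scale-out property these systems rely on.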

Now that EMC owns Greenplum and HP owns Vertica, we are likely to see more appliance-based packaging options for both of these offerings. EMC has taken the lead here and has already announced Greenplum-specific appliance packages.

----

One lingering question about these solutions is why customers don’t use traditional database systems (Oracle, DB2, Postgres, MySQL) to do this analysis. The answer seems to lie in the fact that these traditional solutions are not massively parallelized. Thus, doing this analysis on TB or PB of data would take too long. Moreover, the cost to support data analysis over PB of data with traditional database solutions would be prohibitive. For these reasons, and because compute power has become so cheap nowadays, structured data analytics for large databases has migrated to these special purpose, massively parallelized solutions.
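
Some back-of-envelope arithmetic shows why the single-server route fails at petabyte scale. The 500 MB/s per-node scan rate below is an assumed round number for illustration:

```python
# Back-of-envelope numbers behind the "too long on one server" argument.
# The 500 MB/s scan rate is an assumed, round illustrative figure.
PB = 10**15                       # bytes in a petabyte (decimal)
scan_rate = 500 * 10**6           # assumed per-node scan rate, bytes/sec

one_node = PB / scan_rate         # ~2,000,000 seconds
print(one_node / 86400, "days on 1 node")          # ~23 days
print(one_node / 1000 / 60, "minutes on 1000 nodes")  # ~33 minutes
```

Roughly three weeks on one node versus half an hour on a thousand: the difference isn’t subtle, which is why this analysis migrated to massively parallel architectures.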

Comments?