EMC's Data Domain ROI

I am trying to put EMC’s price for Data Domain (DDup) into perspective but am having difficulty. According to an InfoWorld article on EMC acquisitions from ’03 to ’06, plus some other research, this $2.2-2.4B is more money (not inflation adjusted) than anything in EMC’s previous acquisition history. The only thing that comes close was the RSA acquisition for $2.1B in ’06.

VMware cost EMC only $625M and has been, by all accounts, very successful: it was spun out of EMC in an IPO and currently shows a market cap of ~$10.2B. Documentum cost $1.7B and Legato $1.3B, both of which remain within EMC.

Something has happened here; in a recession, valuations are supposed to become more realistic, not less. At Data Domain’s TTM revenue ($300.5M), this acquisition will take over 7 years to break even on a straight-line view. Factoring in WACC (weighted average cost of capital) makes it look much worse, and looking at DDup’s earnings makes it worse still.
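
The arithmetic behind those claims is easy to sketch. The price and TTM revenue figures are from above; the 10% WACC is purely an assumed value for illustration:

```python
# Back-of-the-envelope payback for the Data Domain purchase.
# Price and TTM revenue come from the post; the 10% WACC is an
# assumed discount rate, chosen only to illustrate the effect.
price = 2.2e9          # acquisition price, $
ttm_revenue = 300.5e6  # Data Domain trailing-twelve-month revenue, $

# Straight-line payback: years of revenue needed to recover the price.
straight_line_years = price / ttm_revenue
print(f"straight-line payback: {straight_line_years:.1f} years")

# Discounted payback at the assumed WACC: each future year's revenue
# is worth less in today's dollars, so recovery takes longer.
wacc = 0.10
recovered, years = 0.0, 0
while recovered < price:
    years += 1
    recovered += ttm_revenue / (1 + wacc) ** years
print(f"discounted payback at 10% WACC: {years} years")
```

Even treating every revenue dollar as return (which overstates the case badly, since earnings are a fraction of revenue), discounting roughly doubles the payback period.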

Other than firing up EMC’s marketing and sales engine to sell more DDup products, what else can EMC do to gain a better return on its DDup acquisition? (In no particular order:)

  • Move EMC’s current Disk Libraries to DDup technology and let go of the Quantum and FalconStor OEM agreements, and/or abandon the current DL product line and substitute DDup
  • Incorporate DDup technology into Legato Networker for target deduplication applications
  • Incorporate DDup technology into Mozy and Atmos
  • Incorporate DDup technology into Documentum
  • Incorporate DDup technology into Centera and Celerra

Can EMC, by selling DDup products and doing all of the above to better its technology, double the revenue, earnings, and savings derived from DDup products and technology? Maybe. But incorporating DDup into Centera and Celerra could just as easily decrease EMC’s revenues and profits from the storage capacity lost, depending on the relative price differences.
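
For readers unfamiliar with why target dedupe is worth fighting over, here is a toy illustration of the core idea: identical chunks of a backup stream are stored once and referenced by hash. This is a generic content-hash scheme, not DDup’s actual algorithm (real products use variable-size chunking and far larger chunks):

```python
import hashlib

# Toy target-side deduplication: split a backup stream into fixed
# 4-byte chunks and store each unique chunk once, keyed by its hash.
# A sketch of the concept only, not any vendor's implementation.
def dedupe(stream: bytes, chunk_size: int = 4):
    store = {}    # hash -> chunk bytes (each unique chunk kept once)
    recipe = []   # ordered list of hashes to rebuild the stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)
        recipe.append(h)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    return b"".join(store[h] for h in recipe)

# Nightly fulls of mostly-unchanged data dedupe extremely well:
# 1001 chunks in the stream, but only 2 unique chunks on disk.
backup = b"ABCD" * 1000 + b"WXYZ"
store, recipe = dedupe(backup)
print(len(recipe), len(store))
assert rebuild(store, recipe) == backup
```

The same mechanism explains why dedupe fits backup targets (highly repetitive data) better than primary storage, where the Centera/Celerra capacity-cannibalization worry above comes in.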

I figure the Disk Library, Legato, and Mozy integrations would be first on anyone’s list. Atmos next, and Celerra-Centera last.

As for what to add to DDup’s product line, possible additions are at the top end and the bottom end. DDup has been moving up market of late, and integration with EMC DL might just help take it there. Down market, there is a potential market of small businesses that might want to use DDup technology at the right price point.

Not sure the money paid for DDup makes sense yet, but at least it begins to look better…

BlueArc introduces Mercury

Tomorrow, BlueArc will open up a new front in their battle with the rest of the NAS vendors by introducing the Mercury 50 NAS head. This product is slated to address the mid-range enterprise market that has historically shunned the relatively higher-priced Titan series.

Mercury 50 is only the first product in this series; others to be released in the future will fill out its top end. Priced similarly to the NetApp 3140, this product has all the support of the standard BlueArc file system while limiting maximum storage capacity to 1PB. Its NFS throughput is a little better than half that of the current Titan 3100.

Mercury 50 will eventually be offered by BlueArc’s OEM partner HDS. Immediately, however, it will be sold by BlueArc’s direct sales force as well as the many new channel partners BlueArc has acquired over this past year.

This marks a departure for BlueArc into the more mainstream enterprise storage space. Historically, BlueArc has been successful in the high-performance market, but the real volumes and commensurate revenue are in the standard enterprise space. The problem in the past has been the high price of the BlueArc Titan systems; with Mercury, this should no longer be an issue.

That said, the competition is much more intense as you move down market. EMC and NetApp will not stand still while their market share is eroded, and both of these companies have the wherewithal to compete on performance, pricing, and features.

Exciting times ahead for the NAS users out there.

DataDirect Networks WOS cloud storage

DataDirect Networks (DDN) announced a new private cloud storage offering this week. The new Web Object Scaler (WOS) is a storage appliance that can be clustered across multiple sites and offers a single global file name space spanning all of them. The WOS cloud also supports policy-based file replication and distribution across sites for redundancy and/or load balancing purposes.

DDN’s press release said a WOS cloud can service up to 1 million random file reads per second. They did not indicate the number of nodes required to sustain this level of performance, nor did they identify the protocol used. The press release implied low-latency file access but didn’t define what that meant; 1M file reads/sec doesn’t necessarily mean each file is read quickly. Also, a file write appears to involve more work than a file read, and no file ingest rate was provided.

Many systems out there tout a global name space, but not many claim their global name space spans multiple sites. I suppose cloud storage would need such a facility to keep file names straight across sites. Nonetheless, such name space services would imply more overhead during file creation/deletion, plus metadata duplication/replication/redundancy, to keep everything straight.
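
One simple way a multi-site name space could avoid that central-lookup overhead (this is purely my speculation, not DDN’s actual WOS design) is to hash each object name onto a fixed ring of sites, so every site computes the same placement independently:

```python
import hashlib

# Speculative sketch of multi-site object placement: hash the object
# name to pick a primary site, then replicate to the next copies-1
# sites around a fixed ring. The site names are hypothetical and this
# is NOT how DDN's WOS actually works -- just one plausible scheme.
SITES = ["denver", "chicago", "boston", "london"]

def replica_sites(object_name: str, copies: int = 2):
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    start = h % len(SITES)
    return [SITES[(start + i) % len(SITES)] for i in range(copies)]

# Any site, given only the name, derives the same placement -- which
# is what makes the name space "global" without a central directory.
print(replica_sites("backups/2009/05/fileA"))
```

The trade-off the post alludes to still applies: creates and deletes must update metadata at every replica site, which is where the extra overhead comes from.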

Many questions remain about how this all works with NFS or CIFS, but it’s entirely possible that WOS supports neither file access protocol and just depends on HTTP GET and POST, or similar web services, to access files. Moreover, assuming WOS does support NFS or CIFS, I often wonder why these sorts of announcements aren’t paired with a SPECsfs(r) 2008 benchmark report, which could validate any performance claim at least at the NFS or CIFS protocol level.

I talked with one media person a couple of weeks ago who said cloud storage is getting boring. There are a lot of projects (e.g., Atmos from EMC) out there targeting future cloud storage; I hope for their sake boring doesn’t mean no market exists for cloud storage.

HDS Dynamic Provisioning for AMS

HDS announced today that their thin provisioning feature (called Dynamic Provisioning) is now available on their mid-range storage subsystem family, the AMS. Expanding the set of subsystems that support thin provisioning can only help customers in the long run.

It’s not clear whether you can add Dynamic Provisioning to an already-installed AMS subsystem or only to a fresh installation. Also, no pricing was announced for this feature. In the past, HDS charged double the price per GB for storage in a thinly provisioned pool.

As you may recall, thin provisioning is a little like a room full of inflatable castles. Each castle starts at its initial inflation amount. As demand dictates, each castle can independently inflate to whatever level is needed to support the current workload, up to that castle’s limit and the overall limit imposed by the room. In this analogy, the castles are LUN storage volumes, the room they inhabit is the physical storage pool for the thinly provisioned volumes, and the air inside the castles is the physical disk space those volumes consume.

In contrast, hard provisioning is like building permanent castles (LUNs) in stone: any change to the size of a structure requires major renovation and/or possibly destroying the original castle (deleting the LUN).
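
The castle analogy maps to code fairly directly. Here is a minimal model of allocate-on-write thin LUNs drawing from one shared pool (my own sketch, not HDS’s Dynamic Provisioning implementation):

```python
# Minimal model of thin provisioning: each LUN advertises a large
# virtual size but consumes pool blocks only when a block is first
# written. My own illustrative model, not HDS's implementation.
class ThinPool:
    def __init__(self, physical_blocks: int):
        self.free = physical_blocks   # air left in the room
        self.luns = {}

    def create_lun(self, name: str, virtual_blocks: int):
        # No physical space is reserved at creation time.
        self.luns[name] = {"virtual": virtual_blocks, "allocated": set()}

    def write(self, name: str, block: int):
        lun = self.luns[name]
        if block >= lun["virtual"]:
            raise IndexError("write beyond LUN's virtual size")
        if block not in lun["allocated"]:       # first touch: allocate
            if self.free == 0:
                # The real operational risk of oversubscription.
                raise RuntimeError("pool exhausted")
            self.free -= 1
            lun["allocated"].add(block)

pool = ThinPool(physical_blocks=100)
pool.create_lun("db", virtual_blocks=1000)    # 10x oversubscribed
pool.create_lun("logs", virtual_blocks=500)
for b in range(30):
    pool.write("db", b)
print(pool.free)   # only written blocks consume pool space
```

Rewriting an already-allocated block consumes nothing further; the pool only shrinks on first touch, which is exactly why monitoring pool free space matters with thin provisioning.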

When HDS first came out with Dynamic Provisioning it was only available for USP-V internal storage; later they released the functionality for USP-V external storage. This announcement seems to complete the rollout across all their SAN storage subsystems.

HDS also announced today a new service, the Storage Reclamation Service, which helps:
1) Assess whether thin provisioning will work well in your environment
2) Provide tools and support to identify candidate LUNs for thin provisioning, and
3) Configure new thinly provisioned LUNs and migrate your data over to the thinly provisioned storage.

Other products that support SAN storage thin provisioning include 3PAR, Compellent, EMC DMX, IBM SVC, NetApp and PillarData.

Norton Online Backup ships with HP computers

Symantec announced today that Norton Online Backup software will ship with HP PCs and laptops. Norton Online Backup is a cloud storage service used to back up the data on your PC.

Norton Online currently holds about 32PB of consumer data, is growing by about 5PB/quarter, and is number one in the online backup market. Norton Online also has about 8M users today, growing 100% each year. With today’s HP announcement, all of these metrics will increase even faster.

Consumers create over 70% of the world’s digital data, with a 60% CAGR. Roughly 2% of consumers use online backup services, and ~25% never back up at all. Norton Online Backup, EMC’s Mozy, Carbonite, and others are attempting to entice these backup-shy users to back up their data online and forgo the onsite headaches of doing it themselves.

Apparently, with Norton Online one can back up up to 5 machines, and they can be located anywhere. So if you wanted to back up your kid’s PC at college and your parents’ PC at their retirement village, you could do so with one Norton Online license (as long as the total machine count <= 5). Once backed up, the data can be restored to any machine in just a few clicks. Backing up your PC is easy to set up and, once done, can be forgotten. Then, whenever you are on the internet and the machine is not busy, the data just trickles out to the Norton Online Backup service. The service is renewed yearly, and the cost is based on the quantity of storage backed up.

How Symantec stores and records 32PB of user backup data is non-trivial, but I am told it is all done using commodity hardware and commodity disk drives with nary a SAN in sight. They have multiple, professionally managed data centers supporting Symantec-developed/acquired cloud storage services. Apparently, Norton Online Backup is an outgrowth of Symantec’s SwapDrive acquisition from last year.

Symantec appears to be the leader in cloud storage applications, and this would seem to be just the start of the services Symantec will deploy via the cloud. Now if they only had something for the Mac…
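
That “trickle out when idle” behavior implies the client tracks which files changed since the last pass. A simple way to do that (a generic sketch of incremental backup, not Norton’s actual client) is to compare content hashes against a manifest saved from the previous pass:

```python
import hashlib
import os
import tempfile

# Generic incremental-backup change detection: hash every file under
# a root and diff against the manifest from the previous pass. Only
# new or modified files need to trickle out to the service. This is
# a sketch of the idea, not Norton Online Backup's implementation.
def scan(root: str) -> dict:
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                manifest[path] = hashlib.sha256(fh.read()).hexdigest()
    return manifest

def changed_files(old: dict, new: dict) -> list:
    # Files absent from the old manifest, or whose content hash moved.
    return [p for p, h in new.items() if old.get(p) != h]

# Demo on a throwaway directory: modify one file between passes.
root = tempfile.mkdtemp()
with open(os.path.join(root, "a.txt"), "w") as f:
    f.write("hello")
before = scan(root)
with open(os.path.join(root, "a.txt"), "w") as f:
    f.write("hello world")
after = scan(root)
print(changed_files(before, after))   # only the modified file
```

Real clients also watch file-system change notifications and throttle uploads to idle periods, but the manifest diff is the core of “set it and forget it” backup.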

EMC Better At Acquisitions?

I was talking with an EMCer the other day about the Data Domain deal, and he said that EMC does very well with acquisitions. Just about every EMC product line other than Symmetrix (and possibly Celerra, Invista, PowerPath, and a few others) came from an acquisition in EMC’s past.

The list goes something like this: Clariion from Data General, Centera from FilePool, Control Center from BMC, Networker from Legato, and RainFinity, Avamar, Documentum, and RSA from companies of the same names. There are other examples as well, but these should suffice for now. One almost starts to forget that all these companies existed separately prior to EMC’s acquisitions. Over time, EMC manages to advance and integrate the various technologies and products into its portfolio.

At the other extreme is Sun. They have an almost perfect record of acquiring companies and burying the technology. Often the technology does emerge after a gestation period in another Sun product somewhere else, but just as often it simply fades away, never to be seen again.

Today’s companies have to do acquisitions well. They can no longer afford the luxury of acquiring companies and then watching their investment die away. Those days are long gone.

What makes EMC so successful while others do so poorly? One thing I have learned is that EMC leaves a new acquisition pretty much alone for 12 months or so. During that time, presumably, they are assessing the current management team for EMC cultural fit and determining the best way to sell, advance, and integrate the acquired technology into the rest of EMC’s product and services portfolio.

The other thing I have noticed is that EMC’s most recent acquisitions have retained at least portions of their original brand names. Networker, RainFinity, Documentum, and RSA are examples here.

I don’t know what it is about retaining a brand name, but 1) it makes the brand harder to let fade away because it’s so visible, 2) employees with a personal interest in the brand fight to keep it alive and advancing, and 3) the customer base and its loyalty are better retained.

Just pieces of the puzzle but no doubt there is more to this than is visible externally.

How well NetApp would do as an acquirer is another question. They have acquired Spinnaker, Alacritus, Decru, Topio, and Onaro over the past five years, and most of these products are still being sold. Rumors point to Spinnaker technology being merged into NetApp’s mainline product soon. All in all, although NetApp has retained the product names for most of these acquisitions (Onaro’s SANScreen, Decru’s DataFort, and others), they haven’t necessarily done a good job keeping the brand names alive.

What NetApp would do with Data Domain, however, is another matter entirely. First, the price being paid is much higher than for any previous acquisition. Second, Data Domain’s current market share is much larger than that of any previous acquisition. Finally, it’s crucial to NetApp’s future revenue growth to do this one right. Given all that, I truly believe they would do a much better job of retaining Data Domain’s brand and product names, keeping the product alive and well for the foreseeable future.

Rgds,
Ray

Data Domain bidding war

It’s unclear to me what EMC would want with Data Domain (DD) other than to lock up deduplication technology across the enterprise. EMC has Avamar for source dedupe, DL for target dedupe, and Celerra dedupe; the only ones missing are V-Max, Symm, and Clariion dedupe.

My guess is that EMC sees Data Domain’s market share as the primary target. It doesn’t take a lot of imagination to figure that once Data Domain is part of EMC, EMC’s Disk Library (DL) offerings will move over to DD technology, which would probably leave the FalconStor/Quantum technology used in DL today on the outside.

EMC’s $100M loan to Quantum last month was probably just insurance to keep a business partner afloat until something better came along or they could make it on their own. The DD deal would leave the Quantum partnership supporting EMC with just Quantum’s tape offerings.

Quantum’s deduplication technology doesn’t have nearly the enterprise market share that DD has, but they have won a number of OEM deals, not the least of which is EMC’s, and they were looking to expand. If EMC buys DD, however, this OEM agreement will likely end soon.

I wonder: if DD is worth $1.8B in cash, what could Sepaton be worth? They seem to be the only pure-play dedupe appliance vendor left standing.

Not sure whether NetApp will up their bid, but they always seem to enjoy competing with EMC. It’s also unclear how much of this bid is EMC wanting DD versus EMC just wanting to hurt NetApp. Either way, DD stockholders win in the end.

EMCWorld News

At EMC World this past week, all the news was about VMware, Cisco and EMC and how they are hooking up to address the needs of the new data center user.

VMware introduced vSphere, the latest release of their software, which contains significant tweaks to improve storage performance.

Cisco did not announce much at the show other than to say they support vSphere with their Nexus 1000v software switch.

EMC discussed their latest V-Max hardware, PowerPath/VMware, an upcoming release of NaviSphere for VM, and some other plugins that allow the VMware admin to see EMC storage views (EMC Viewer).

On another note, it seems EMC is selling all the SSDs they can make, and they recently introduced their next-generation SSD drive with 400GB of storage.