Primary Data’s path to better data storage presented at SFD8

A couple of weeks ago we met with Primary Data’s Lance Smith (CEO), David Flynn (CTO) and Kaycee Lai (SVP Product & Sales), who were presenting at Storage Field Day 8 (SFD8, videos of their sessions available here). Primary Data emerged from stealth late last year and has ~$60M in funding. They also have Steve Wozniak (of Apple fame) as Chief Scientist, but he wasn’t at the SFD8 session 🙁

Primary Data seems out to change the world. At first I thought this was just another form of storage virtualization, but they are laser-focused on data virtualization, or what they call data mobility. It differs from pure storage virtualization by being outside the data path. (I have written about data virtualization before, as well as the data hypervisor a long time ago.) Nowadays they seem to be using the tagline of data in motion.

Why move data?

David has a theory behind the proliferation of startup storage companies. The spectrum between capacity and performance has grown immense over time, which has provided an opening for a number of companies to address these widening needs.

David believes that caching, whether at the storage system or in the servers, is an attempt to address this issue by “loaning” data from the storage silo to the cache, supplying a lower $/IOPS for that data. Similar considerations are apparent at the other end of the spectrum, where customers use archive or backup services to take advantage of much cheaper $/GB storage.
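To make the $/IOPS versus $/GB trade-off concrete, here is a rough back-of-envelope sketch in Python. The prices and performance figures are purely hypothetical, picked only to illustrate how far apart the two ends of the spectrum sit, not to quote any real product.

```python
# Rough, hypothetical figures only -- meant to illustrate the spread
# between performance-optimized and capacity-optimized storage tiers,
# not to quote real product pricing.
tiers = {
    "server flash cache": {"cost_usd": 5_000,  "iops": 100_000, "capacity_gb": 400},
    "hybrid disk array":  {"cost_usd": 50_000, "iops": 20_000,  "capacity_gb": 20_000},
    "archive/backup":     {"cost_usd": 10_000, "iops": 500,     "capacity_gb": 100_000},
}

for name, t in tiers.items():
    usd_per_iops = t["cost_usd"] / t["iops"]       # what performance costs on this tier
    usd_per_gb = t["cost_usd"] / t["capacity_gb"]  # what capacity costs on this tier
    print(f"{name:20s} ${usd_per_iops:7.3f}/IOPS   ${usd_per_gb:6.3f}/GB")
```

Even with made-up numbers, the point is visible: the tier that is cheapest per IOPS is the most expensive per GB, and vice versa.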

However, given the difficulty of moving data around in present-day storage environments, customer data has become essentially immobile. Primary Data is trying to bring about a data mobility revolution and allow data to move over this spectrum of storage performance and capacity with ease. Doing so easily will provide significant benefits, as customers can more fully take advantage of the various levels of performance and capacity in their data center storage environments.

Primary Data architecture

Primary Data provides data mobility by using their metadata service, called the DataSphere appliance, and their client software running on host servers, called the Data Portal. Their offering is best explained in three layers:

  • Data virtualization layer – provides continuity of identity and continuity of access across multiple physical storage systems. That is, the same data (identity continuity) can be accessed wherever it resides (access continuity) by server applications. Such access and identity must transcend access protocols and interfaces. The Data Portal client software intercepts server data activity, handles control-plane activity through the DataSphere appliance and performs IO directly against the physical storage.
  • Objective-based data management – supplies a data affinity service. That is, data can have a temporary location relationship with physical storage depending on the current performance (R:W, IOPS, bandwidth, latency) and protection (durability, availability, disaster recoverability, security, copy-ability, version-ability) characteristics of the data. These data objectives are matched to the capabilities or service catalog of the storage infrastructure, and data objectives can change over time (a rough sketch of this matching follows the list).
  • Analytics in the loop – detects the performance and other characteristics of the storage and data in real time. That is, by monitoring storage IO activity, Primary Data can determine the actual performance attributes of the storage. Similarly, by monitoring an application’s IO characteristics over time, the system can determine the performance objectives of its data. The system also takes advantage of SMI-S to define some of the other characteristics of the storage systems.
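To make the objective-matching idea more concrete, here is a minimal sketch in Python. The tier names, objective fields and matching rule are all hypothetical, this is not Primary Data’s actual algorithm or API, just an illustration of matching data objectives against a storage service catalog.

```python
# Hypothetical sketch of objective-based placement: match each dataset's
# current objectives against a catalog of storage capabilities and pick
# the cheapest tier that satisfies them. Not Primary Data's actual code.
catalog = [
    {"name": "all-flash array", "iops": 200_000, "latency_ms": 0.5,  "cost": 3},
    {"name": "hybrid array",    "iops": 30_000,  "latency_ms": 5.0,  "cost": 2},
    {"name": "object archive",  "iops": 1_000,   "latency_ms": 50.0, "cost": 1},
]

def place(objective):
    """Return the cheapest storage tier meeting the data's current objectives."""
    candidates = [
        tier for tier in catalog
        if tier["iops"] >= objective["min_iops"]
        and tier["latency_ms"] <= objective["max_latency_ms"]
    ]
    return min(candidates, key=lambda t: t["cost"]) if candidates else None

# Objectives can change over time: hot database data vs. aging log data.
print(place({"min_iops": 50_000, "max_latency_ms": 1.0})["name"])    # all-flash array
print(place({"min_iops": 100,    "max_latency_ms": 100.0})["name"])  # object archive
```

The interesting part is that the same dataset can be re-placed simply by changing its objectives, which is what makes the analytics-in-the-loop layer above useful.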

How does Primary Data work?

Primary Data has taken advantage of the parallel NFS extensions (pNFS) in NFSv4 to externalize and separate the storage control plane from the IO data plane. This works well for native Linux, where the main developer of the Linux file system stack is on their payroll.

On Windows they put a filter driver in front of SMB to split the control plane off from the data IO plane. Something similar is done for VMware ESX servers to supply the control/data plane split, but in this case a software-defined Data Portal goes along with the DataSphere service client and can do it all within the same ESX server. Another alternative is to use the Data Portal appliance as a storage virtualization service, but then the IO data path goes through the portal.
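Here is a toy illustration, in Python, of the out-of-band control/data plane split being described. It is purely conceptual, not the pNFS protocol or Primary Data’s client code: the client asks a metadata service where the data lives, then performs IO directly against that storage, keeping the metadata service out of the data path.

```python
# Toy model of an out-of-band control plane: the metadata service only
# answers "where does this file live?"; reads and writes go straight to
# the backing store. Purely illustrative, not pNFS or DataSphere code.
class MetadataService:
    def __init__(self):
        self.layouts = {}          # file name -> backing store holding it

    def get_layout(self, name):
        return self.layouts[name]  # control-plane call only

    def set_layout(self, name, store):
        self.layouts[name] = store # data can be moved without touching clients


class BackingStore:
    def __init__(self, label):
        self.label, self.blocks = label, {}

    def write(self, name, data):
        self.blocks[name] = data   # data-plane IO, metadata service not involved

    def read(self, name):
        return self.blocks[name]


fast, cheap = BackingStore("all-flash"), BackingStore("capacity")
meta = MetadataService()
meta.set_layout("db.log", fast)

# Client: one control-plane round trip, then direct IO to the storage.
meta.get_layout("db.log").write("db.log", b"hot data")
print(meta.get_layout("db.log").read("db.log"))
```

Because the metadata service only hands out locations, it can later move "db.log" to the capacity store and update the layout without sitting in the IO path.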

According to their datasheet they currently support data virtualization services for NetApp cDOT and 7-mode, EMC Isilon OneFS 7.2, and Nexenta 4.x & 5.0, but plan on more.

They are not quite GA yet, but are close.

Comments?


Why EMC is doing Project Lightning and Thunder

rayo 3 by El Garza (cc) (from Flickr)

Although technically Project Lightning and Thunder represent some interesting offshoots of EMC software, hardware and system prowess, I wonder why they would decide to go after this particular market space.

There are plenty of alternative offerings in the PCIe NAND memory card space. Moreover, the PCIe card caching functionality, while interesting, is not that hard to replicate, and such software capability is not a serious barrier to entry for HP, IBM, NetApp and many, many others. And the margins cannot be that great.

So why get into this low margin business?

I can see a couple of reasons why EMC might want to do this.

  • Believing in the commoditization of storage performance. I have had this debate with a number of analysts over the years, but there remain many out there who firmly believe that storage performance will become a commodity sooner rather than later. By entering the PCIe NAND card IO buffer space, EMC can create a beachhead in this movement that helps them build market awareness, higher manufacturing volumes, and support expertise. As such, when the inevitable happens and high margins for enterprise storage start to deteriorate, EMC will be able to capitalize on this hard-won operational effectiveness.
  • Moving up the IO stack. From an application’s IO request to the disk device that actually services it is a long journey with multiple places to make money. Currently, EMC has a significant share of everything that happens after the fabric switch, whether it is FC, iSCSI, NFS or CIFS. What they don’t have is a significant share in the switch infrastructure or anywhere on the other (host) side of that interface stack. Yes, they have Avamar, NetWorker, Documentum, and other software that helps manage, secure and protect IO activity, together with other significant investments in RSA and VMware. But these represent adjacent market spaces rather than primary IO stack endeavors. Lightning represents a hybrid software/hardware solution that moves EMC up the IO stack to inside the server. As such, it represents yet another opportunity to profit from all the IO going on in the data center.
  • Making big data more effective. The fact that Hadoop doesn’t really need or use high-end storage has not been lost on most storage vendors. With Lightning, EMC has a storage enhancement offering that can readily improve Hadoop cluster processing. Something like Lightning’s caching software could easily be tailored to enhance HDFS file access and thus speed up cluster processing. If Hadoop and big data are to be the next big consumer of storage, then speeding cluster processing will certainly help, and profiting by doing so only makes sense.
  • Believing that SSDs will transform storage. To many of us the age of disks is waning. SSDs, in some form or another, will be the underlying technology for the next age of storage. The density, performance and energy efficiency of current NAND-based SSD technology are commendable, but they will only get better over time. The capabilities brought about by such technology will certainly transform the storage industry as we know it, if they haven’t already. But where SSD technology actually emerges is still being played out in the marketplace. Many believe that when industry transitions like this happen, it’s best to be engaged everywhere change is likely to happen, hoping that at least some of those bets will succeed. Perhaps PCIe SSD cards won’t take over all server IO activity, but if they do, not being there or being late will certainly hurt a company’s chances to profit from it.

There may be more reasons I missed here, but these seem to be the main ones. Of the above, I think the last one, that SSDs rule the next transition, is most important to EMC.

They have been successful in the past during other industry transitions. If anything, their acquisitions show a pattern of buying into transitions they don’t own, witness Data Domain, RSA, and VMware. So I suspect the view in EMC is that doubling down on SSDs will enable them to ride out the next storm and be in a profitable place for the next change, whatever that might be.

And following Lightning, Project Thunder

Similarly, Project Thunder seems to represent EMC doubling down yet again on SSDs. Just about every month I talk to another storage startup coming to market with another new take on storage using every form of SSD imaginable.

However, Project Thunder as envisioned today is not storage, but rather some form of external shared memory.  I have heard this before, in the IBM mainframe space about 15-20 years ago.  At that time shared external memory was going to handle all mainframe IO processing and the only storage left was going to be bulk archive or migration storage – a big threat to the non-IBM mainframe storage vendors at the time.

One problem then was that the shared DRAM memory of the time was way more expensive than sophisticated disk storage, and the price wasn’t coming down fast enough to counteract increased demand. The other problem was that making shared memory work with all the existing mainframe applications was not easy. IBM at least had control over the OS, HW and most of the larger applications at the time. Yet they still struggled to make it usable and effective; there’s probably a lesson here for EMC.

Fast forward 20 years and NAND-based SSDs are the right hardware technology to make inexpensive shared memory happen. In addition, the road map for NAND and other SSD technologies looks poised to continue the capacity increases and price reductions necessary to compete effectively with disk in the long run.

However, the challenges then and now seem to have as much to do with the software that makes shared external memory universally effective as with the hardware technology to implement it. Providing a new storage tier in Linux, Windows and/or VMware is easier said than done. Most recent successes have usually been offshoots of SCSI (iSCSI, FCoE, etc.). Nevertheless, if it was good for mainframes then, it’s certainly good for Linux, Windows and VMware today.

And that seems to be where Thunder is heading, I think.

Comments?

 


Why Open-FCoE is important

FCoE Frame Format (from Wikipedia, http://en.wikipedia.org/wiki/File:Ff.jpg)

I don’t know much about O/S drivers but I do know lots about storage interfaces. One thing that’s apparent from yesterday’s announcement from Intel is that Fibre Channel over Ethernet (FCoE) has taken another big leap forward.

Chad Sakac’s chart of FC vs. Ethernet target unit shipments (meaning storage interface types, I think) clearly indicates that a transition to Ethernet is taking place in the storage industry today. Of course, Ethernet targets can be used for NFS, CIFS, object storage, iSCSI and FCoE, so this doesn’t necessarily mean that FCoE is winning the game just yet.

WikiBon did a great post on FCoE market dynamics as well.

The advantage of FC, and iSCSI for that matter, is that every server, every OS, and just about every storage vendor in the world supports them. Also, there is a plethora of economical fabric switches available from multiple vendors that can support multi-port switching with high bandwidth. And there are many support matrices identifying server HBAs, O/S drivers for those HBAs and compatible storage products to ensure compatibility. So there is no real problem (other than wading through the support matrices) in implementing either one of these storage protocols.

Enter Open-FCoE, the upstart

What’s missing from 10GbE FCoE is perhaps a really cheap solution, one that is universally available, uses commodity parts and can be had for next to nothing. The new Open-FCoE drivers together with Intel’s X520 10GbE NIC have the potential to answer that need.

But what is it? Essentially, Intel’s Open-FCoE is an O/S driver for Windows and Linux plus 10GbE NIC hardware from Intel. It’s unclear whether Intel’s Open-FCoE driver is a derivative of Open-FCoE.org’s Linux driver or not, but either way the driver performs some of the specialized FCoE functions in software rather than in hardware, as done by CNA cards available from other vendors. Using server processing MIPS rather than ASIC processing capabilities should, in the long run, make FCoE adoption even cheaper.

What about performance?

The proof of this will be in benchmark results, but it could well be a non-issue, especially if there is not a lot of extra processing involved in an FCoE transaction. For example, if Open-FCoE only takes, say, 2-5% of server MIPS and bandwidth to perform the added FCoE frame processing, then this might be in the noise for most standalone servers and would show up only minimally in storage benchmarks (which always use big, standalone servers).

Yes, but what about virtualization?

However, real-world virtualized servers are another matter. I believe that virtualized servers generally demand more intensive I/O activity anyway, and as one creates 5-10 VMs per ESX server, it’s almost guaranteed to have 5-10X the I/O happening. If each VM requires 2-5% of a standalone processor to perform Open-FCoE processing, then it could easily represent 5-7 × 2-5% on a 10-VM ESX server (assuming some optimization for virtualization; if virtualization degrades driver processing, it could be much worse), which would represent a serious burden.

Now these numbers are just guesses on my part, but there is some price to pay for using host server MIPS for every FCoE frame, and it does multiply for virtualized servers, that much I can guarantee you.
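For what it’s worth, here is the same back-of-envelope in Python, using the guessed numbers above. The per-server overhead, VM count and consolidation factor are all my assumptions from this post, not measurements.

```python
# Back-of-envelope only: scale a guessed per-server software FCoE
# overhead (2-5% of CPU) by VM count, with a crude consolidation factor
# reflecting some driver optimization under virtualization.
per_server_overhead = (0.02, 0.05)   # guessed Open-FCoE CPU cost, standalone server
vms = 10
consolidation_factor = 0.6           # assumes the 5-7x effective multiplier above, not a full 10x

low, high = (o * vms * consolidation_factor for o in per_server_overhead)
print(f"Estimated host CPU spent on FCoE processing: {low:.0%} to {high:.0%}")
# -> roughly 12% to 30% of a 10-VM ESX host, which would indeed be a serious burden
```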

But the (storage) world is better now

Nonetheless, I must applaud Intel’s Open-FCoE thrust, as it can open up a whole new potential market space that today’s CNAs maybe couldn’t touch. If it does that, it introduces low-end systems to the advantages of FCoE; then, as they grow, moving their environments to real CNAs should be a relatively painless transition. And this is where the real advantage lies: getting smaller data centers on the right path early in life will make any subsequent adoption of hardware-accelerated capabilities much easier.

But is it really open?

One problem I have with the Intel announcement is the lack of other NIC vendors jumping in. In my mind, it can’t really be “open” until any 10GbE NIC can support it.

Which brings us back to Open-FCoE.org. I checked their website and could see no listing for a Windows driver, and there was no NIC compatibility list. So I am guessing their work has nothing to do with Intel’s driver, at least as presently defined – too bad.

However, when Open-FCoE is really supported by any 10GbE NIC, then the economies of scale can take off and it could really represent a low-end cost point for storage infrastructure.

It’s unclear to me what Intel has that’s special in their X520 NIC to support Open-FCoE (maybe some TOE H/W with other special sauce), but anything special needs to be defined and standardized to allow broader adoption by other vendors. Then and only then will Open-FCoE reach its full potential.

—-

So great for Intel, but it could be even better if a standardized definition of an “Open-FCoE NIC” were available, so other NIC manufacturers could readily adopt it.

Comments?