NetApp’s new NVMeoF/FC AFF & Cloud Data Volumes for every cloud

We attended a NetApp analyst event at their CA HQ last week and they had some interesting announcements as well as other information to share. First up, new, faster ONTAP storage.

NVMeoF AFF

NetApp announced this week that their latest generation AFF (All Flash FAS) systems will support FC NVMeoF. We asked if this was just for NVMe SSDs or whether it applied to all AFF media. The answer was that it's just another host interface which the customer can license for NVMe SSDs (available only on the AFF A800) or SAS SSDs (A700S, A700, and A300). The only AFF not supporting the new host interface is their low-end AFF A220.

As for which NVMeoF, they only support FC at the moment. It's our belief that the FC NVMeoF spec is the most well defined these days and that the FC switch hardware (Brocade-Broadcom since Gen 5, now shipping Gen 6; not sure about Cisco) already has NVMeoF support.

NetApp also mentioned support for 100GbE (A800 & A700S only) and 32Gbps FC hardware (all AFF systems but the A220). So, presumably they offer NVMeoF for both 32Gbps and 16Gbps FC.

No word on when this will be available for Ethernet FCoE or iSCSI (iNVMe?), but with all the major storage vendors, bar one, moving to NVMe SSDs, it's only a matter of time before they also support Ethernet NVMeoF.

As for AFF NVMeoF performance, the answer wasn’t entirely satisfactory. The indication was that the interface reduced response time by 10 usecs or so for NVMe SSDs over SAS SSDs. But I didn’t see any other performance information to substantiate that.

We did see on their AFF datasheet that with NVMe SSDs and NVMeoF FC, the AFF A800 response time was sub 200usec with throughput of 300GB/s (in a 24 node cluster, 12 HA pairs). This means they add only about 100usec for ONTAP data services, a decent trade off from our perspective. Later in their datasheet they say the A800 is capable of 1.3M IOPS and sub-500usec latencies. Unsure why they quoted both numbers.
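
A quick back-of-the-envelope check on that claim: if you assume a raw NVMe SSD read latency of roughly 100usec (our assumption, not a number from NetApp's datasheet), the sub-200usec end-to-end figure implies about 100usec for the NVMeoF FC transport plus ONTAP data services.

```python
# rough sanity check of the "~100usec for ONTAP data services" claim
# raw_nvme_read_us is an assumed device-level read latency, not a NetApp figure
raw_nvme_read_us = 100            # assumed NVMe SSD read latency
aff_a800_end_to_end_us = 200      # "sub 200usec" from the AFF datasheet

overhead_us = aff_a800_end_to_end_us - raw_nvme_read_us
print(f"implied NVMeoF FC + ONTAP overhead: ~{overhead_us}usec")
```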

Cloud Data Volumes

NetApp is taking storage to the cloud. They just announced that NetApp Cloud Data Volumes will be available as a native service on Google Cloud Platform (GCP). NetApp Cloud Data Volumes is a storage-as-a-service offering that provides on-demand ONTAP file services in the cloud.

For GCP, both Google and NetApp will be offering the service. Diane Greene, who heads GCP, said Cloud Data Volumes are a bit like Kubernetes, disruption without disrupting. Customers can easily migrate their on-prem, file-based applications to the cloud without having to worry about the performance of their data, or data protection for that matter.

Getting the data there is another matter, but NetApp has other services like CloudSync and someday (maybe for Cloud Data Volumes), SnapMirror, which can help customers move data to and from the cloud.

Currently Cloud Data Volumes is in public preview as a Microsoft Azure Enterprise NFS (and SMB) service. It's also in beta (I think) in the AWS Marketplace. And availability on GCP is still restricted. There's a lot of emphasis at NetApp events on Cloud Data Volumes given its limited availability on the public cloud providers, but we think they are trying to gain some experience before they roll it out to the rest of the world.

However, Jean English, NetApp's CMO, mentioned that NetApp's Cloud Data Services business unit has over 1800 customers and currently supports a multi-PB storage footprint in various clouds. Note, this is not just Cloud Data Volumes but comprises all NetApp Cloud Data Services, which includes ONTAP Cloud, NPS, CloudSync, AltaVault, etc. Nonetheless, it's an impressive indicator of just how far they have come in applying their storage magic to the public cloud in a short time. The hyperscalers (read public cloud providers) say NetApp is 2 or more years ahead of the competition, and from what we can see, it's true.

One of the key differentiators between NetApp Cloud Data Volumes and ONTAP Cloud is performance SLAs. Cloud Data Volumes customers can select and purchase a specified performance SLA. We believe it comes in three levels and is normally purchased as a pay-as-you-go, consumption-based service. However, it can also be billed periodically, and other purchase options may be available as well.

When asked what storage was behind the service, the only thing NetApp would confirm was that it was ONTAP storage, present in public cloud data centers in various regions. So Cloud Data Volumes is available in only specific regions but I would expect that to expand over time.

Data Visualization Center

They also christened their new Data Visualization Center (DVC), and we had a multi-course meal at the Bistro at the center. The DVC has a wrap-around, 1.5-floor-tall screen which showed some NetApp customer success stories. Inside the screen was a more immersive setting, and there was plenty of VR equipment in work spaces alongside customer conference rooms.

Full Disclosure: NetApp paid for all our travel, hotel and food during the analyst event and gave us all Google Home Minis as going away presents and NetApp is a long time customer of my firm.

AWS vs. Azure security setup for Linux

Strange Clouds by michaelroper (cc) (from Flickr)

I have been doing some testing with both Azure and Amazon Web Services (AWS) these last few weeks and have observed a significant difference in the security setups for both of these cloud services, at least when it comes to Linux compute instances and cloud storage.

First, let me state at the outset that all of my security setups for both AWS and Azure were done using the AWS console or the Azure (classic) portal. I believe anything that can be done with the portal/console for both AWS and Azure can also be done in the CLI or the REST interface. I only used the portal/console for these services, so can't speak to the ease of using AWS's or Azure's CLI or REST services.

For AWS

EC2 instance security is pretty easy to setup and use, at least for Linux users:

  • When you set up a (Linux) EC2 instance you are asked to set up a key pair and download its private key (.pem) file, to be used for SSH/SFTP/SCP connections. You just need to copy this file to your desktop/laptop/? client system. When you invoke SSH/SFTP/SCP, you use the "-i" (identity file) option and specify the path to the .pem file. The server is already authorized for this identity. If you lose it, AWS offers to create another one for you as an option when connecting to the machine.
  • When you configure the AWS instance, one (optional) step is to configure its security settings. And one option for this is to allow connections only from 'my IP address', how nice. You don't even have to know your IP address; AWS just figures it out for itself and configures it. (A scripted sketch of these two steps appears right after this list.)
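
If you'd rather script those two steps than click through the console, here's a minimal boto3 sketch of the same thing. The region, AMI ID, key pair name and security group name are all placeholders I made up, not anything specific to my setup.

```python
# sketch: restrict SSH to "my IP" and launch an instance with a named key pair
import boto3
import requests

ec2 = boto3.client("ec2", region_name="us-east-1")
my_ip = requests.get("https://checkip.amazonaws.com").text.strip()

sg = ec2.create_security_group(
    GroupName="ssh-from-my-ip", Description="SSH from my IP only"
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": f"{my_ip}/32"}],   # only my address may connect
    }],
)

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # hypothetical Linux AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",               # the key pair whose .pem file you downloaded
    SecurityGroupIds=[sg["GroupId"]],
    MinCount=1, MaxCount=1,
)
# then, from the client:  ssh -i my-keypair.pem ec2-user@<public-ip>
```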

That's about it. Unclear to me how well this secures your EC2 instance, but it seems pretty secure to me. As I understand it, a cyber criminal would need to know and spoof your IP address to connect to or remotely control the EC2 instance. And if they wanted to use SSH/SFTP/SCP they would also need access to the identity file. I don't believe I ever set up a password for the EC2 instance.

As for EBS storage, there's no specific security associated with EBS volumes. Their security is associated with the EC2 instance they're attached to. A volume is either assigned/attached to an EC2 instance and secured there, or it's unassigned/unattached. For unattached volumes, you may be able to snapshot them (to an S3 bucket within your administrative control) or delete them, but for either of these you have to be an admin for the EC2 domain.

As for S3 bucket security, I didn't see any S3 security setup that mimicked the EC2 instance steps outlined above. But in order to use AWS's automated billing report services for S3, you have to allow the service to have write access to your S3 buckets. You do this by supplying a JSON security policy and applying it to all S3 buckets you wish to report on (or maybe it's store reports in). AWS provides a link to the security policy page, which just so happens to have the JSON policy text you will need to do this. All I did was copy this text and insert it into a window that opened when I said I wanted to apply a security policy to the bucket.
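
For reference, the same policy-pasting step can also be scripted. The sketch below is hypothetical: the principal, actions and bucket name are stand-ins, and the real policy text should come from the AWS billing page linked in the console, as described above.

```python
# sketch of applying a bucket policy with boto3 instead of pasting it into the
# console; Principal, Action list and bucket name below are placeholders
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "billingreports.amazonaws.com"},  # hypothetical principal
        "Action": ["s3:GetBucketAcl", "s3:GetBucketPolicy", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::my-billing-bucket",
                     "arn:aws:s3:::my-billing-bucket/*"],
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="my-billing-bucket",
    Policy=json.dumps(policy),
)
```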

I did find that S3 bucket security made me allow public access (I think, can't really remember exactly) to the S3 bucket in order to list and download objects from the bucket over the Internet. I didn't like this, but it was pretty easy to turn on, so I left it on. But this PM I tried to find it again (to disable it) and couldn't seem to locate where it was.

From my perspective, all the AWS security setup for EC2 instances, storage, and S3 was straightforward to set up and use. It seemed pretty secure and allowed me to get running with only minimal delay.

For Azure

First, I didn’t find the more modern, new Azure portal that useful but then I am a Mac user, and it’s probably more suitable for Windows Server admins. The classic portal was as close to the AWS console as I could find and once I discovered it, I never went back.

Setting up a Linux compute instance under Azure was pretty easy, but I would say the choices are a bit overwhelming and trying to find which Linux distro to use was a bit of a challenge. I settled on SUSE Enterprise, but may have made a mistake (EXT4 support was limited to RO – sigh). But configuring SUSE Enterprise Linux without any security was even easier than AWS.

However, Azure compute instance security was not nearly as straightforward as in AWS. In fact, I could find nothing similar to securing your compute instance to “My IP” address like I did in AWS. So, from my perspective my Azure instances are not as secure.

I wanted to be able to SSH/SFTP/SCP into my Linux compute instances on Azure just like I did on AWS. But there was no easy setup of the identity file (.pem) like AWS supported. So I spent some time researching how to create a cert file on the Mac (it didn't seem able to create a .pem file). Then more time researching how to create a cert file on my Linux machine. This works, but you have to install OpenSSL and then issue the proper "create certificate" command with the proper parameters. The cert file creation process asks you a lot of questions: one for a passphrase, and then for a challenge password (I think). Of course, it asks for name, company, and other identification information, and at the end of all this you have created a set of cert files on your Linux machine.
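
For what it's worth, the OpenSSL step boils down to one command. Here's a hedged sketch of it, wrapped in Python for consistency with the other examples; the file names and subject are made up, and the non-interactive -subj flag simply skips the name/company questions mentioned above.

```python
# generate a private key plus a self-signed X.509 certificate (.pem), roughly what
# the classic Azure portal expected for a Linux VM; file names are placeholders
import subprocess

subprocess.run([
    "openssl", "req", "-x509", "-nodes", "-days", "365",
    "-newkey", "rsa:2048",
    "-keyout", "azure_vm.key",   # private key, stays on the client
    "-out", "azure_vm.pem",      # self-signed certificate, uploaded to Azure
    "-subj", "/CN=my-azure-vm",  # skips the interactive identification questions
], check=True)
```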

But there's a counterpart to the .pem file that needs to be on the server to authorize access. This counterpart needs to be placed in a special file (~/.ssh/authorized_keys) and, I believed at the time, needed to be signed by the client needing to be authorized. But I didn't know if the .cert, .csr, .key or .pem file needed to be placed there and I had no idea how to "sign it". After spending about a day and a half on all this, I decided to abandon the use of an identity file and just use a password. I believe this provides less security than an identity file.
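
As it turns out, what belongs on the server is the OpenSSH public key derived from the private key, appended to ~/.ssh/authorized_keys; no signing is involved. A rough sketch, reusing the hypothetical key file from the example above:

```python
# derive the OpenSSH public key from the private key and append it to
# ~/.ssh/authorized_keys (shown locally here; on Azure it goes on the VM itself)
import subprocess
from pathlib import Path

pub = subprocess.run(
    ["ssh-keygen", "-y", "-f", "azure_vm.key"],  # print the matching public key
    capture_output=True, text=True, check=True,
).stdout

auth = Path.home() / ".ssh" / "authorized_keys"
auth.parent.mkdir(mode=0o700, exist_ok=True)
with auth.open("a") as f:
    f.write(pub)
```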

As for BLOB storage, it was pretty easy to configure a PageBlob for use by my compute instances. Its security seemed to be tied to the compute instance it was attached to.

As for my PageBlob containers, there's a button on the classic portal to manage access keys for them. But it said that once the keys are regenerated, you will need to update all VMs that access these storage containers with the new keys. Not knowing how to do that, I abandoned all security for my container storage on Azure.
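
To show why that warning matters, here's a hedged sketch (using today's azure-storage-blob SDK, which postdates the classic portal described here) of how a client authenticates with a storage account key. The account name, key and container are placeholders; the point is that every VM holds a copy of the key, so regenerating it breaks them all until they're updated.

```python
# each VM or app holds the storage account key as a shared secret; rotate the key
# in the portal and every client using the old value stops working
from azure.storage.blob import BlobServiceClient

ACCOUNT = "mystorageaccount"                 # placeholder account name
KEY = "<primary-or-secondary-access-key>"    # copied from "manage access keys"

svc = BlobServiceClient(
    account_url=f"https://{ACCOUNT}.blob.core.windows.net",
    credential=KEY,
)
for blob in svc.get_container_client("vhds").list_blobs():
    print(blob.name)
```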

So, all in all, I found Azure's security setup for Linux systems much more manual than AWS's, and in the end decided not to even have the same level of security for my Linux SSH/SFTP/SCP services that I did on AWS. As for container security, I'm not sure if there are any controls on the containers at this point. But I will do some more research to find out more.

In all fairness, this was trying to set up a Linux machine on Azure, which appears more tailored for Windows Server environments. Had I been in an Active Directory group, I am sure much of this would have been much easier. And had I been configuring Windows compute instances instead of Linux, all of this would have also been much easier, I believe.

~~~~

All in all, I had fun using AWS and Azure services these last few weeks, and I will be doing more over the next couple of months. I will let you know what other significant differences I find between AWS and Azure. So stay tuned.

Comments?

The data is the hybrid cloud

I have been at the NetApp Insight 2015 conference the past two days and have been struck by one common theme. They have been talking since the get-go about the Data Fabric and how Clustered Data ONTAP (cDOT) is the foundation of the NetApp Data Fabric, which spans on premises, private cloud, off premises public cloud and everything in between.

But the truth of the matter is that it's data that really needs to span all these domains. Hybrid cloud really needs to have data movement everywhere. NetApp cDOT is just the enabler that makes moving the data around much easier.

NetApp cDOT data services

From a cDOT perspective, NetApp has available today:

  • Cloud ONTAP – a software defined ONTAP storage service executing in the cloud, operating on cloud service provider hardware using DAS storage and providing ONTAP data services for your private, cloud-resident data.
  • ONTAP Edge – similar to Cloud ONTAP, but operating on premises with customer commodity server & DAS hardware and providing ONTAP data services.
  • NetApp Private Storage (NPS) – NetApp storage systems operating in a "near cloud" environment that is directly connected to cloud service providers, supplying low latency/high IOPS NetApp storage services to cloud compute applications.
  • NetApp cDOT on premises storage hardware – NetApp storage hardware with All Flash FAS as well as normal disk-only and hybrid FAS storage hardware supplying ONTAP data services to on premises applications.

NetApp Data Fabric

NetApp's Data Fabric is built on top of ONTAP data services and allows a customer to use any of the above storage instances to host their private data. Which is great in and of itself, but when you realize that a customer can also move their data from any one of these ONTAP storage instances to any other storage instance, that's when you see the power of the Data Fabric.

The Data Fabric depends mostly on storage efficient ONTAP SnapMirror data replication and ONTAP data cloning capabilities. These services can be used to replicate ONTAP data (LUNs/volumes) from one cDOT storage instance to another and then use ONTAP data cloning services to create accessible copies of this data at the new location. This could be on premises to near cloud, to public cloud or back again, all within the confines of ONTAP data services.

Data Fabric in action

Now, I like the concept, but they also showed an impressive demo of using cDOT and AltaVault (the SteelStore backup appliance NetApp acquired last year from Riverbed) to perform an application-consistent backup of a SQL database. But once they had this, it went a little crazy.

They SnapMirrored this data from the on premises storage to a near cloud NPS storage instance, then cloned the data from the mirrors, and after that fired up applications running in Azure to process the data. Then they shut down the Azure application and fired up a similar application in AWS using the exact same NPS-hosted data. Of course, they then SnapMirrored the same backup data (I think from the original on premises storage) to Cloud ONTAP, just to show it could be done there as well.

Ok, I get it: you can replicate (mirror) data from any cDOT storage instance (whether on premises, at a remote site, in near cloud NPS or in the cloud using Cloud ONTAP or …). Once there, you can clone this data and use it with applications running in any environment with access to this data instance (such as AWS, Azure and cloud service providers).

And I like the fact that all this can be accomplished with NetApp's SnapCenter software. And I especially like the fact that the clones don't take up any extra space and that the replication mirroring is done in a quick, space-efficient (read deduped) manner.

But having to set up a replication or mirror association between cDOT on premises and cDOT at NPS or Cloud ONTAP, and then having to clone the volumes at the target side, seems superfluous. What I really want to do is just copy or move the data and have it be at the target site without the mirror association in the middle. It's almost like what I want is a CLONE that operates across cDOT storage instances wherever they reside.

Well, I'm an analyst and don't have to implement any of this (thank god). But what NetApp seems to have done is to use their current tools and ONTAP data service capabilities to allow customer data to move anywhere it needs to be, in a customer-controlled, space-efficient, private and secure manner. Once hosted at the new site, applications have access to this data and customers still have all the ONTAP data services they had on premises, but in cloud and near cloud locations.

Seems pretty impressive to me for all of a customer's ONTAP data. But when you combine the Data Fabric with Foreign LUN Import (importing non-NetApp data into ONTAP storage) and FlexArray (storage virtualization under ONTAP), you can see how the Data Fabric can apply to non-NetApp storage instances as well, and then it becomes really interesting.

~~~~

There was a company that once said that “The Network is the Computer” but today, I think a better tag line is “The Data is the Hybrid Cloud”.

Comments?

 

Google cloud offers SSD storage

Read an article the other day on Google Cloud tests out fast, high I/O SSD drives. I suppose it was only a matter of time before cloud services included SSDs in their I/O mix.

Yet, it doesn't seem to me to be as simple as adding SSDs to the storage catalog. Enterprise storage vendors have had SSDs arguably since January of 2008 (see my EMC introduced SSDs to DMX dispatch). And although there is certainly a class of applications that can take advantage of SSD low latency/high IOPS, the vast majority of applications don't seem to require these services.

Storage systems use of SSDs today

That's why most enterprise storage system vendors support some form of automated storage tiering or flash caching of normal I/O for their high-end storage systems, together with offering just plain old SSDs as data storage. In this more sophisticated solution, customers have the option to assign application data to SSD-only, hybrid SSD-disk, or disk-only storage. In this way the customer gets to decide whether they want some sort of mix, or just pure SSD or disk IO, to satisfy their application IO requirements.

Storage startups have emerged that take on both the hybrid SSD-disk and all-flash models and add quality of service (QoS) to the picture. An example of all-flash storage that supplies QoS is SolidFire (learn more about SolidFire in our GreyBeardsOnStorage podcast with Dave Wright). An example that does the same sort of thing for hybrid storage is Fusion ioControl (formerly NexGen) storage.

Storage system QoS

In the case of SolidFire, one can limit volumes or volume groups with an IOPS max, a throughput max, and a burst max. The burst is sort of a credit that accrues over time when the application doesn't ask for its maximum IOPS/throughput, which it can then consume above its maximums, up to the burst max, for a limited timeframe.
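
To make the burst mechanism concrete, here's an illustrative sketch of how such a credit scheme could work. It's not SolidFire's actual algorithm, just a simple model: headroom left unused below the IOPS max accrues as credit, which can then be spent above the max, up to the burst max.

```python
def allowed_iops(requested, max_iops, burst_max, credit):
    """Return (granted_iops, new_credit) for a one-second interval."""
    if requested <= max_iops:
        # under the cap: grant it all and bank the unused headroom as credit
        credit = min(credit + (max_iops - requested), burst_max)
        return requested, credit
    # over the cap: spend accrued credit to burst, but never beyond burst_max
    granted = min(requested, burst_max, max_iops + credit)
    credit -= granted - max_iops
    return granted, credit

# e.g. a volume capped at 5,000 IOPS with a 15,000 IOPS burst max
credit = 0
for demand in [1000, 1000, 12000, 12000]:
    granted, credit = allowed_iops(demand, 5000, 15000, credit)
    print(f"asked {demand}, got {granted}, credit now {credit}")
```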

QoS capabilities are slowly making their way into enterprise storage systems as well, but it will take some time for the instrumentation and capabilities to be put in place. One can see limited QoS in IBM DS8000 priority IO, NetApp Storage QoS, EMC Unisphere QoS manager for VNX & SMC QoS for VMAX, and HDS SVOS QoS via partitioning. Most of these capabilities control access to, or partition, cache, backend and frontend resources for host volumes. As such, they are not nearly as sophisticated or as easy to use as what SolidFire and other startups are offering, but they are getting there.

Cloud SSD pricing

Back to the cloud offering. According to the GigaOm article, Google SSD volumes can sustain up to 15K IOPS, and they are charging a premium price for this storage ($0.325/GB-month). Apparently Amazon AWS offers high IO EC2 storage as well, with a maximum of 4K IOPS, but charges a premium both for the storage ($0.125/GB-month) and on an IOPS basis ($0.10/IOPS-month). GigaOm had a pricing comparison for 500GB and 2000 IOPS indicating that Google SSD storage would cost $163/month and the AWS provisioned SSD storage would cost $263 ($62.50 for storage and $200 for the 2000 IOPS).
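
The arithmetic behind those two monthly numbers, using the prices quoted above:

```python
# GigaOm's 500GB / 2,000 IOPS comparison, reproduced from the quoted prices
gb, iops = 500, 2000

google_ssd = gb * 0.325              # $0.325/GB-month, IOPS included
aws_ssd = gb * 0.125 + iops * 0.10   # $0.125/GB-month storage + $0.10/IOPS-month

print(f"Google SSD:               ${google_ssd:,.2f}/month")  # ~$163
print(f"AWS provisioned-IOPS SSD: ${aws_ssd:,.2f}/month")     # $62.50 + $200 = $262.50
```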

The fact that you can drive the Google SSD to its limits without incurring any extra cost seems a serious advantage to me, and would be very appealing to most enterprise customers.

But where’s latency

It seems to me that after some IOPS level is attained, most mission critical applications are more interested in low latency IO (for more on why low latency matters, see my IO throughput vs. low latency post…). Many storage systems are capable of 100,000s of IOPS, but most shops don't run them that hard, ever. But with proper use of SSDs, most enterprise storage is now clocking sub-msec, low latency IO.

However, I have yet to see any Cloud storage pricing or QoS for that matter that was based on latency guarantees.  I think this is a serious omission.

In any event, SSDs in the cloud are a good thing; now they just need to offer flash caching, automated storage tiering and sophisticated QoS. I realize this is partially re-inventing enterprise storage in the cloud, but isn't that what everyone actually wants, at cloud storage pricing of course?

Comments?

Peak server, the cloud & NetApp storage for AWS

I was at a conference a month or so ago and one speaker mentioned that the number of x86 servers being sold has peaked and is dropping. I can imagine a number of reasons for this, the main one being server virtualization. But this speaker had a different view, and it seemed to be the cloud.

Peak server is here.

He said that three companies were purchasing over half the x86 servers these days. I feel that there should be at least four: Google, Facebook, Amazon & Microsoft, and maybe five if you add in Apple.

Something has happened over the past year or so. Enterprise IT has continued along its merry way but the adoption of cloud services is starting to take off.

I have seen this before, with mainframes, then mini-computers, and now client-server. Minicomputers came out and were so easy to use and develop/deploy applications on that people stopped creating new apps on the mainframe. Mainframes never died out, and probably have never really stopped shipping increasing MIPS every year. But the mainframe share of WW installed MIPS has been shrinking for decades and has never recovered.

Ultimately, the proprietary minicomputer was just a passing fad and only lasted about 25 years or so. It was wounded by the PC, and then killed off by proprietary Unix workstations.

Then it happened again; this time the new upstarts were Windows Server and Linux. Once again it was just easier to build apps on these new and cheaper servers than on any of the older Unix servers. Of course there's still plenty of business in proprietary Unix servers, but again I would venture to say that their share of WW installed MIPS has been shrinking for a long time.

Nowadays, the cloud is mortally wounding the server market. Server virtualization is helping a lot but it’s also enabling the cloud to eliminate many physical server sales. This is because new applications, new IT environments are being ported/moved/deployed onto the cloud.

Peak server means less enterprise networking, storage and server hardware

In this new, cloud world, customers need fewer servers, less networking and less enterprise class storage. Yes, not every application is suitable for cloud deployment, but that's why there are still mainframes, still Unix servers, and a continuing need for standalone, physical or virtual x86 servers in the enterprise. But their share of MIPS will start shrinking soon, if it hasn't already.

Ok, so the enterprise data center share of MIPS will start shrinking vis-a-vis cloud data centers. But what happens to networking and storage? My view is that networking becomes software defined, with a component of that operating on special purpose hardware. This will increase in shipments, but the more complex, enterprise class networking equipment will flatline and never see any more substantial growth.

And up until yesterday I felt much the same about enterprise class storage. Software defined storage is, in my view, the future: DAS and SSDs for the capacity, with the smarts in software, if at all. Today, most of the cloud and many service providers have been moving off enterprise class storage and onto DAS.

NetApp’s new enterprise storage in AWS

But yesterday I heard about NetApp private storage for the cloud. This is a configuration of NetApp storage installed in a CoLo facility with a “direct connection” to Amazon compute cloud. In this way, enterprise customers can maintain data stewardship/ownership/governance over their data while at the same time deploying applications onto AWS compute cloud.

This seems to be one of the sticking points for enterprise customers adopting the cloud. By having (data) storage owned lock, stock & barrel by the enterprise, it seems much easier and less risky to deploy new and old applications to the cloud.

Whether this pans out and can provide enough value to cover the added expense of the enterprise class storage, only the market can decide. But this is the first time I can remember, where any vendor has articulated a role for enterprise class storage in the cloud. Let’s hope it works.

Image: PDP8/s by ajmexico

Enterprise file synch

Strange Clouds by michaelroper (cc) (from Flickr)

Last fall at SNW in San Jose there were a few vendors touting enterprise file synchronization services, each having a slightly different version of the requirements. The one that comes most readily to mind is Egnyte, which supported file synchronization across a hybrid cloud (public cloud and network storage) and which we discussed in our Fall SNWUSA wrap-up post last year.

The problem with BYOD

With bring your own device (BYOD), corporate end users are quickly abandoning any pretense of IT control and turning to consumer class file synchronization services to help synch files across the desktop, laptop and all the mobile devices they haul around. But the problem with these solutions, such as Dropbox, Box, OxygenCloud and others, is that they are really outside of IT's control.

Which is why there's a real need today for enterprise class file synchronization solutions that exhibit the ease of use and setup available from consumer file synch systems, but offer IT security, compliance and control over the data that's being moved into the cloud and across corporate and end user devices.

EMC Syncplicity and EMC on premises storage

Last week EMC announced an enterprise version of their recently acquired Syncplicity software that supports on-premises Isilon or Atmos storage, EMC’s own cloud storage offering.

In previous versions of Syncplicity storage was based in the cloud and used Amazon Web Services (AWS) for cloud orchestration and AWS S3 for cloud storage. With the latest release, EMC adds on premises storage to host user file synchronization services that can span mobile devices, laptops and end user desktops.

New Syncplicity users must download desktop client software to support file synchronization or mobile apps for mobile device synchronization.  After that it’s a simple matter of identifying which if any directories and/or files are to be synchronized with the cloud and/or shared with others.

However, with the Business (read enterprise) edition one also gets the Security and Compliance console, which supports access controls to define the users and devices that can synchronize or share data, data retention policy enforcement, remote wipe of corporate data, and native support for single sign-on services. In addition, one also gets centralized user and group management services to grant, change or revoke user and group access to data. Also, one now obtains enterprise security with AES-256 data-at-rest encryption, separate key manager and data storage data centers, quadruple replication of data for high disaster fault tolerance, and SAS70 Type II compliant data centers.

If the client wants to use on premises storage, they would also need to deploy a virtual appliance (VM) somewhere in the data center to act as the gateway for file synchronization service requests. The file synch server would also presumably need access to the on premises storage, and it's unclear if the virtual appliance is in-band or out-of-band (see the discussion of Egnyte's solution options below).

Egnyte’s solution

Egnyte comes as a software-only solution, building a file server in the cloud for end user storage. It also includes an Egnyte app for mobile hardware and the ever-present web file browser. Desktop file access is provided via mapped drives which access the Egnyte cloud file server gateway running as a virtual appliance.

One major difference between Syncplicity and Egnyte is that Egnyte offers a combination of both cloud and on premises storage, but you cannot have just on premises storage. Syncplicity only offers one or the other for file data, i.e., file synchronization data can be either in the cloud or on local on premises storage, but cannot be in both locations.

The other major difference is that Egnyte operates with just about anybody’s NAS storage such as EMC, IBM, and HDS for the on premises file storage.  It operates as an in-band, software appliance solution that traps file activity going to your on premises storage. In this case, one would need to start using a new location or directory for data to be synchronized or shared.

But for NetApp storage only (today), they utilize ONTAP APIs to offer out-of-band file synchronization solutions.  This means that you can keep NetApp data where it resides and just enable synchronization/shareability services for the NetApp file data in current directory locations.

Egnyte promises enterprise class data security with AD, LDAP and/or SSO user authentication, AES-256 data encryption and their own secure data centers.  No mention of separate key security in their literature.

As for cloud backend storage, Egnyte has its own public cloud, or supports other cloud storage providers such as AWS S3, Microsoft Azure, NetApp StorageGRID and HP Public Cloud.

There's more to Egnyte's solution than just file synchronization and sharing, but that's the subject of today's post. Perhaps we can cover the rest at more length in a future post if there's interest.

File synchronization, cloud storage’s killer app?

The nice thing about these capabilities is that now IT staff can regain control over what is and isn't synched and shared across multiple devices. Up until now all this was happening outside the data center and external to IT control.

From Egnyte's perspective, they are seeing more and more enterprises wanting data both on premises, for performance and compliance, as well as in cloud storage, for ubiquitous access. They feel it's both a shareability demand, between an enterprise's far-flung team members and potentially client/customer personnel, as well as a need to access, edit and propagate siloed corporate information using the new mobile devices that everyone has these days.

In any event, Enterprise file synchronization and sharing is emerging as one of the killer apps for cloud storage.  Up to this point cloud gateways made sense for SME backup or disaster recovery solutions but IMO, didn’t really take off beyond that space.  But if you can package a robust and secure file sharing and synchronization solution around cloud storage then you just might have something that enterprise customers are clamoring for.

~~~~

Comments?