NetApp Analyst Summit Customer Panel – how to survive an EF5 tornado

NetApp brought three of its customer innovation award winners on stage for a panel discussion, with Dave Hitz moderating. All three had interesting deployments of NetApp storage systems:

  • Andrew Henderson from ING DIRECT talked about their need to deploy copies of the bank's IT environment for test, development, optimization and security testing. The first time they tried, the process took 12 weeks and produced only a single copy. They wanted to speed this up and be able to deploy 10 or more copies if necessary. Using Microsoft Hyper-V, System Center and NetApp FlexClones, Andrew transformed the process so that a copy of the entire bank's IT services can now be generated in under 10 minutes (see the sketch after this list). Since the new capabilities went into place, they have created over 400 copies of the bank (he called these bank-in-a-box) for various purposes.
  • Teresa Wahlert from the Iowa Workforce Development Agency was up next and talked about their VDI implementation. Iowa cut the agency's budget, which forced it to shut down a number of physical offices. But with VDI, VMware and NetApp storage, Workforce was able to disperse its services to over 3,000 locations, including prisons, libraries and other venues where it had no presence before. They put out a general call for all the tired, dying PCs in Iowa government and used these to host VDI services. Workforce services are now up 7×24, pretty amazing for government work. Apparently they had tried VDI before and their previous storage couldn't handle it. They moved to NetApp with FlashCache and it worked just fine, and that's when they rolled VDI services out to their customers and businesses. With NetApp they were able to implement VDI, reduce storage costs (via deduplication and other storage efficiency features) and increase department services.
  • Jeff Bell at Mercy Healthcare talked about the difficulties of rolling out electronic health records (EHR) and the challenges of integrating ~30 hospitals and ~400 medical clinics. They started with EHR fairly early, in 2006-2007, well before the latest governmental push. He mentioned Joplin, MO and last year's EF5 tornado, which just about wiped out their hospital there. He said that within 2 hours of the disaster, Mercy Healthcare was printing out the EHRs for the 183 patients in the hospital at the time who had to be moved to other care facilities. The promise of EHR is that the information travels with the patient, can be recovered in the event of a disaster and is immediately available. It seems that at least at Mercy Healthcare, EHR is living up to its promise. In addition, they just built a new data center, as they were running out of space, power and cooling at the old one. They installed new NetApp storage there, and for the first few months they had to run heaters to keep the data center livable, because the new power/cooling load was so far below what they had experienced previously. Looking back on what they had accomplished, Jeff was not so sure they would build a new data center again. With new cloud offerings coming out and the reduced power/cooling and increased density of NetApp storage, they could almost get by without another data center at all.
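For the storage geeks, here's a minimal sketch of how bank-in-a-box cloning might be scripted. The "vol clone create" syntax shown is 7-Mode-style and only for flavor; the filer, volume and snapshot names are all invented, and a real deployment would be orchestrated through System Center rather than raw ssh:

```python
# Hypothetical sketch: spin up N FlexClone copies of a "gold" environment
# volume. Names and exact command syntax are illustrative, not ING's setup.
import subprocess

FILER = "filer1.example.com"      # hypothetical storage controller
GOLD_VOLUME = "bank_gold"         # master copy of the bank's environment
GOLD_SNAPSHOT = "bank_gold_snap"  # snapshot the clones are based on

def create_bank_copy(n: int) -> str:
    """Create one near-instant, space-efficient copy of the gold volume."""
    clone = f"bank_copy_{n:03d}"
    # FlexClones reference the parent snapshot's blocks, so each copy
    # consumes almost no additional space until it is written to.
    subprocess.run(
        ["ssh", FILER, "vol", "clone", "create", clone,
         "-b", GOLD_VOLUME, GOLD_SNAPSHOT],
        check=True,
    )
    return clone

if __name__ == "__main__":
    copies = [create_bank_copy(i) for i in range(1, 11)]
    print("bank-in-a-box copies ready:", ", ".join(copies))
```

The point is that each copy is a metadata operation against a shared snapshot, which is why ten copies in ten minutes is plausible where ten full physical copies never were.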

That’s about it from the customer session.

NetApp execs spent the rest of the day on innovation, mostly at NetApp but also in the IT industry in general.

There was lots of discussion of the new release of Data ONTAP 8.1.1 with its latest cluster-mode features. NetApp positioned it as completing the transition to data/storage as an infrastructure, something IT has been pushing toward for the last decade or so, following in the grand tradition of what IBM did for computing infrastructure with the 360 and what Cisco and others did for networking infrastructure in the mid '80s.

Comments?

Data storage features for virtual desktop infrastructure (VDI) deployments

The Planet Data Center by The Planet (cc) (from Flickr)

Was talking with someone yesterday about one of my favorite topics: data storage for virtual desktop infrastructure (VDI) deployments. In my mind there are a few advanced storage features that help considerably with VDI implementations:

  • Deduplication – almost every one of your virtual desktops will share 75-90% of its O/S disk data with every other virtual desktop. Sub-file/sub-block deduplication can be a godsend for all this replicated data and can reduce O/S storage requirements considerably (a toy illustration follows this list).
  • Zero-storage snapshots/clones – another solution to the duplication of O/S data is to use some sort of space-conserving snapshot. For example, one creates a master (gold) disk image and makes 100s if not 1000s of snapshots of it, taking almost no additional space.
  • Highly available/highly reliable storage – when a lone desktop depends on DAS for its O/S, a device failure doesn't impact many users. However, when 100s to 1000s of users depend on DAS device(s) for their O/S software, any DAS failure could impact all of them at the same time. As such, one needs to move off DAS and invest in highly reliable, highly available external storage of some kind to sustain reasonable uptime for your user community.
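To make the deduplication point concrete, here's a toy sketch (block size and desktop contents are invented for illustration): every 4 KB block is fingerprinted and only unique blocks are stored. The same shared-block trick is what makes zero-storage clones of a gold image nearly free:

```python
# Toy illustration of sub-block deduplication: store each unique 4 KB block
# once, keyed by its content fingerprint.
import hashlib

BLOCK_SIZE = 4096  # illustrative block size

def dedupe(volumes: list[bytes]) -> tuple[int, int]:
    """Return (logical_blocks, unique_blocks) across all volumes."""
    store: set[str] = set()
    logical = 0
    for vol in volumes:
        for off in range(0, len(vol), BLOCK_SIZE):
            block = vol[off:off + BLOCK_SIZE]
            store.add(hashlib.sha256(block).hexdigest())
            logical += 1
    return logical, len(store)

# 100 desktops sharing the same O/S image plus a little unique user data.
os_image = b"".join(bytes([i]) * BLOCK_SIZE for i in range(256))
desktops = [os_image + f"user{i}".encode().ljust(BLOCK_SIZE)
            for i in range(100)]

logical, unique = dedupe(desktops)
print(f"logical blocks: {logical}, unique blocks: {unique}, "
      f"dedupe ratio: {logical / unique:.0f}:1")
```

With 100 desktops sharing one O/S image, the toy example stores ~72x fewer blocks than it presents logically, which is the whole argument in one number.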

Those seem to me to be the most important attributes for VDI storage, but there are a couple more features/facilities that can also help:

  • NAS systems with NFS – VDI deployments will generate lots of VMDKs for all the user desktop C: drives. Although this can be managed with block-level storage as separate LUNs or multi-VMDK LUNs, who wants to configure 100 to 1000 LUNs? NFS files can perform just as well and are much easier to create on the fly; thus, for VDI it's hard to beat NFS storage.
  • Boot storm enhancements – another problem with VDI is that everyone gets to work at 8am Monday and proceeds to boot up their (virtual) machines, which drives an awful lot of IO to their virtual C: drives. Deduplication and zero-storage snapshots can help manage the boot storm as long as these characteristics are retained throughout system cache, i.e., deduplication exists in cache as well as on backend disk (see the toy simulation after this list). But there are other approaches to the problem as well, available from various vendors, to better manage boot storms.
  • Anti-virus scan enhancements – similar to boot storms, A-V scans also typically happen around the same time for many desktop users and can be just as bad for virtual C: drive performance. Again, deduplication or zero-storage snapshots can help (with the above caveats), but some vendors' storage can offload these activities from the desktop altogether. Also, last week's VMworld release of VMware's vShield Edge (see VMworld 2010 review) supports some A-V scan enhancements. Any of these approaches should be able to help.
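Here's a toy simulation of why dedupe-aware caching tames a boot storm. It's deliberately simplistic (a real array maps logical addresses to shared physical blocks; this just keys the cache by block fingerprint), but it shows the effect: 1,000 clones booting the same O/S blocks generate almost no backend disk reads:

```python
# Toy boot-storm model: if the cache is keyed by block content, 1,000
# desktops cloned from one gold image mostly hit cache on boot.
import hashlib

class DedupeAwareCache:
    def __init__(self) -> None:
        self.cache: dict[str, bytes] = {}
        self.hits = 0
        self.misses = 0

    def read(self, block: bytes) -> bytes:
        key = hashlib.sha256(block).hexdigest()
        if key in self.cache:
            self.hits += 1
        else:
            self.misses += 1  # only the first reader goes to backend disk
            self.cache[key] = block
        return self.cache[key]

BOOT_BLOCKS = [bytes([i]) * 4096 for i in range(256)]  # shared boot set

cache = DedupeAwareCache()
for desktop in range(1000):   # Monday 8am: everyone boots at once
    for block in BOOT_BLOCKS:
        cache.read(block)

total = cache.hits + cache.misses
print(f"backend disk reads: {cache.misses} of {total} total reads "
      f"({100 * cache.hits / total:.1f}% served from cache)")
```

Without dedupe awareness, each desktop's blocks would look distinct to the cache and most of those 256,000 reads would land on disk at the same moment.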

Regular “dumb” block storage will always work, but it will require a lot more raw storage, performance will suffer just when everybody gets back to work, and the administrative burden will be much higher.
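Some back-of-envelope numbers on the raw-capacity difference (all figures made up, but consistent with the 75-90% sharing estimate above):

```python
# Back-of-envelope capacity math; every number here is illustrative.
desktops = 1000
os_image_gb = 20          # assumed per-desktop O/S image size
shared_fraction = 0.80    # 75-90% of O/S data shared between desktops

dumb_block_gb = desktops * os_image_gb
dedup_gb = os_image_gb + desktops * os_image_gb * (1 - shared_fraction)

print(f'"dumb" block storage: {dumb_block_gb:,} GB')
print(f"deduped/cloned:       {dedup_gb:,.0f} GB")
```

Roughly 20 TB versus 4 TB for the same 1,000 desktops, before counting user data.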

I may seem biased, but enterprise-class reliability and availability plus some of the advanced storage features described above can help make your deployment of VDI that much better for you and all your knowledge workers.

Anything I missed?

PC-as-a-Service (PCaaS) using VDI

IBM PC Computer by Mess of Pottage (cc) (from Flickr)

Last year at VMworld, VMware was saying that 2010 was the year for VDI (virtual desktop infrastructure); last week NetApp said that most large NY banks it talked with were looking at implementing VDI; and prior to that, HP StorageWorks announced a new VDI reference platform that could support ~1600 VDI images. It seems that VDI is gaining some serious interest.

While VDI works well for large organizations, there doesn't seem to be any similar solution for consumers. The typical consumer today usually runs down-level O/Ss, anti-virus, office applications, etc., and has neither the time nor the inclination to update such software. These consumers would be considerably better served by something like PCaaS, if such a thing existed.

PCaaS

Essentially, PCaaS would be a VDI-like service offering, using standard VDI tools or something similar: a lightweight kernel on the client, use of locally attached resources (printers, USB sticks, scanners, etc.), but with applications hosted elsewhere. PCaaS could provide all the latest O/S and application software along with enterprise-class reliability, support and backup/restore services.
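Since no such consumer service exists yet, here's a purely hypothetical sketch of what the heart of a PCaaS broker might look like, borrowing the VDI patterns above: a provider-patched gold image, a space-efficient clone per subscriber, and an endpoint handed back to the viewer client. Every name, class and port number in it is invented:

```python
# Hypothetical PCaaS connection broker: each subscriber gets a desktop
# cloned from a current, patched gold image; the broker returns an
# endpoint for the subscriber's viewer/browser client to connect to.
from dataclasses import dataclass, field
import itertools

@dataclass
class Desktop:
    owner: str
    image: str       # gold image this desktop was cloned from
    endpoint: str    # where the subscriber's viewer connects

@dataclass
class PCaaSBroker:
    gold_image: str = "consumer_desktop_gold_v42"  # patched by the provider
    _ids: itertools.count = field(default_factory=itertools.count)
    _sessions: dict[str, Desktop] = field(default_factory=dict)

    def connect(self, subscriber: str) -> Desktop:
        """Return the subscriber's desktop, provisioning one on first use."""
        if subscriber not in self._sessions:
            n = next(self._ids)
            # In a real service this would be a space-efficient clone
            # (FlexClone-style) of the gold image, not a full copy.
            self._sessions[subscriber] = Desktop(
                owner=subscriber,
                image=self.gold_image,
                endpoint=f"pcaas.example.com:{49152 + n}",
            )
        return self._sessions[subscriber]

broker = PCaaSBroker()
print(broker.connect("grandma"))   # first call provisions a fresh desktop
print(broker.connect("grandma"))   # later calls reattach to the same one
```

The clone step is where the storage-efficiency features from the previous post pay off: each subscriber's desktop would share nearly all of its blocks with the gold image.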

Broadband

One potential problem with PCaaS is the need for reliable broadband to the home. Just like other cloud services, without broadband, none of this will work.

Possibly this could be circumvented if a PCaaS viewer browser application were available (like VMware's Viewer). With this in place, PCaaS could be supplied at any internet-enabled location supporting browser access. Such a browser-based service may not support the same rich menu of local resources as a normal PCaaS client, but it would probably suffice when needed. The other nice thing about a viewer is that smart phones, iPads and other always-on, web-enabled devices supporting standard browsers could provide PCaaS services from anywhere mobile data or Wi-Fi were available.

PCaaS business model

As for businesses that could bring PC-as-a-Service to life, I see many potential providers:

  • Any current PC hardware vendor/supplier might want to supply PCaaS, as it could defer or reduce consumer hardware purchases, or rather shift such purchasing from consumers to the companies hosting the service.
  • Many SMB hosting providers could easily offer such a service.
  • Many local IT support services could deliver better and potentially less expensive services to their customers by offering PCaaS.
  • Any web hosting company would have the networking, server infrastructure and technical know-how to easily provide PCaaS.

This list ignores any new entrants that would see this as a significant opportunity.

Google, Microsoft and others seem to be taking small steps in this direction in piecemeal fashion, with cloud-enabled office/email applications. However, in my view what the consumer really wants is a complete PC, not just some select group of office applications.

As described above, PCaaS would bring enterprise-level IT desktop services to the consumer marketplace. Any substantive PCaaS business would free up untold numbers of technically astute individuals now providing unpaid, on-call support to millions, perhaps billions, of technically challenged consumers.

Now if someone would just come out with Mac-as-a-Service, I could retire from supporting my family’s Apple desktops & laptops…