7 grand challenges for the next storage century

Clock tower (4) by TJ Morris (cc) (from flickr)

I saw a recent IEEE Spectrum article on engineering’s grand challenges for the next century and thought something similar should be done for data storage. So this is a start:

  • Replace magnetic storage – most predictions give magnetic disk storage another 25 years, and magnetic tape another decade after that, before they run out of steam. Such end-dates have been wrong before, but it is unlikely we will be using disk or tape 50 years from now. Some sort of solid state device seems the most probable next evolution of storage. I doubt it will be NAND, considering its write endurance and other long-term reliability issues, but if such issues could be resolved maybe it could replace magnetic storage.
  • 1000 year storage – paper can be printed today with non-acidic ink and retain its image for over 1,000 years. Nothing in data storage today can claim much more than 100-year longevity. The world needs data storage that lasts much longer than 100 years.
  • Zero energy storage – today SSD/NAND and rotating magnetic media consume energy constantly in order to remain accessible. Ultimately, the world needs storage that consumes energy only when read or written; such storage would provide "online access with offline power consumption".
  • Convergent fabrics running divergent protocols – whether it's Ethernet, InfiniBand, FC, or something new, all fabrics should be able to handle any and all storage (and datacenter) protocols. The internet has become so ubiquitous because it handles just about any protocol we throw at it. We need the same, or something similar, for datacenter fabrics.
  • Securing data – securing books or paper is relatively straightforward today: just throw them in a vault or safety deposit box. Securing data seems like it should be just as simple, yet it is not widely practiced today. It doesn't have to be that way. We need better, longer lasting tools and methodologies to secure our data.
  • Public data repositories – libraries exist to provide access to the output of society in the form of books, magazines, papers and other printed artifacts. No such repository exists today for data. Society would be better served if library-like institutions existed that could store and retrieve data. Most of the obstacles here are legal, due to data ownership, but technological issues exist as well.
  • Associative accessed storage – sequential and random access have been around for over half a century now. Associative storage could complement these, offering another approach that allows data to be retrieved by its content. We can approximate this today by keywording and indexing data (see the sketch below). Biological memory is accessed via associations or linkages to other concepts; once accessed, memory seems almost sequential from there. Something comparable to biological memory may be required to build more intelligent machines.
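
For illustration, here is a minimal sketch of the keyword/indexing approximation of associative access mentioned in the last bullet, using a toy inverted index in Python. All names and data are made up; real content-addressable storage would be far more sophisticated.

```python
# Toy inverted index: an approximation of associative (content-addressed)
# retrieval using keywording, as described above.
from collections import defaultdict

index = defaultdict(set)   # keyword -> set of object ids
store = {}                 # object id -> content

def put(obj_id, content):
    """Store content and index it by its words (the 'keywords')."""
    store[obj_id] = content
    for word in content.lower().split():
        index[word].add(obj_id)

def associative_get(query):
    """Retrieve objects whose content matches all query words."""
    words = query.lower().split()
    if not words:
        return []
    ids = set.intersection(*(index[w] for w in words))
    return [store[i] for i in ids]

put("doc1", "storage grand challenges for the next century")
put("doc2", "biological memory is accessed by association")
print(associative_get("biological memory"))   # -> contents of doc2
```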

Some of these are already being pursued, while others receive no interest today. Nonetheless, I believe they all deserve investigation if storage is to continue to serve its primary role to society: a long-term storehouse for society's culture, thoughts and deeds.

Comments?

Storage strategic inflection points

EMC vs S&P 500 Stock price chart - 20 yrs from Yahoo Finance

Both EMC and Spectra Logic celebrated 30 years in business this month, and it got me thinking. Both companies started at the same time, but one is a ~$14B revenue ('09 projected) behemoth and the other a successful but mid-size storage company (Spectra Logic is private and does not report revenues). What's the big difference between these two? As far as I can tell, both companies have been adequately run for some time now by very smart people. Why is one two or more orders of magnitude bigger than the other? Recognizing strategic inflection points is key.

So what is a strategic inflection point? Andy Grove, who may have coined the term, describes a strategic inflection point as a point "… where the old strategic picture dissolves and gives way to the new." In my view EMC has been more successful at recognizing storage strategic inflection points than Spectra Logic, and this explains a major part of their success.

EMC’s history in brief

In listening this week to Joe Tucci's talk at EMC Analyst Days, he spoke about the rather humble beginnings of EMC. It started out selling furniture and memory for mainframes (I think), but Joe said it really took off in 1991, almost 12 years after it was founded. It seems they latched onto a DRAM-based, SSD-like storage technology and converted it to use disk as a RAID storage device, first in the mainframe and later in the open systems arena. RAID killed off the big (14″ platter) disk devices that had dominated storage at the time and, once started, could not be stopped. Whether by luck or smarts, EMC's push into RAID storage made them what they are today – probably a little of both.

It was interesting to see how this played out in the storage market. RAID used smaller disks, first 8″, then 5.25″ and now 3.5″. When first introduced, manufacturing costs for RAID storage were so low that one couldn't help but make a profit selling against big disk devices that held 14″ platters. The more successful RAID became, the more available and reliable the smaller disks became, which led to a virtuous cycle culminating in the highly reliable 3.5″ disk devices available today. I'm not sure Joe was at EMC at the time, but if he was he would probably have called that transition from big-platter disks to RAID a "strategic inflection point" in the storage industry.

Most of EMC's competitors and customers would probably say that aggressive marketing also helped propel EMC to the top of the storage heap. I am not sure which came first, the recognition of a strategic inflection point like RAID or the EMC marketing machine, but together they gave EMC a decided advantage that restructured the storage industry.

Spectra Logic’s history in brief

As far as I can tell, Spectra Logic has been in the backup software business for a long time and later started supporting tape technology, for which they are well known today. Spectra Logic has disk storage systems as well, but they seem better known for their tape and backup technology.

The big changes in tape technology over the past 30 years have been tape cartridges and robotics. Although tape cartridges were introduced by IBM (for the IBM 3480 in 1985), the first true tape automation was introduced by Storage Technology Corp. (with the STK 4400 in 1987). Storage Technology rode the wave of the robotics revolution from the late 80's into the mid 90's and was very successful for a time. Spectra Logic's entry into tape robotics came somewhat later (1995), but by the time they got onboard it was a very successful and mature technology.

Nonetheless, the revolution in tape technology and operations brought on by these two advances probably held off the decline of tape for a decade or two, yet it could not ultimately stem the decline in tape use apparent today (see my post on Repositioning of tape). Spectra Logic has recently introduced a new tape library.

Another strategic inflection point that helped EMC

Proprietary "Open" Unix systems started to emerge in the late 80's and early 90's, and by the mid 90's they were beginning to host most new and sophisticated applications. The FC interface also emerged in the early to mid 90's as a replacement for HPC HIPPI technology and for a while battled it out against IBM's SSA technology, but by 1997 it had emerged victorious. Once FC and the follow-on higher level protocols (resulting in the SAN) were available, proprietary Unix systems had the IO architecture to support any application the enterprise needed, and the two took off feeding on each other. This was yet another strategic inflection point. I am not sure EMC was the first entrant into this market, but they were certainly the biggest and, as such, quickly came to dominate it. In my mind EMC's real accelerated growth can be tied to this timeframe.

EMC’s future bets today

Again, today, EMC seems to be in the fray for the next inflection. Their latest bets are on virtualization technology with VMware, NAND-SSD storage and cloud storage. They bet large on the VMware acquisition and it's working well for them. They were the largest company and earliest to market with NAND-SSD technology in the broad market space and seem to enjoy a commanding lead. Atmos is not the first cloud storage offering out there, but once again EMC was one of the largest companies to go after this market.

One can't help but admire a company that swings for the bleachers every time they get a chance at bat. Not every swing goes out of the park, but when they connect, they can change whole industries.

The price of quality

At HPTechDay this week we had a tour of the EVA test lab in the south building of HP's Colorado Springs facility. I was pretty impressed, and I have seen more than my fair share of labs in my day.

Tony Green, HP's EVA Lab Manager
The fact that they have 1200 servers and 500 EVA arrays was pretty impressive, but they also happen to have about 20PB of storage across those 500 arrays. In my day a couple of dozen arrays and a 100 or so servers seemed enough to test a storage subsystem.

Nowadays that seems to have increased by an order of magnitude. Of course they have sold something like 70,000 EVAs over the years, and some of these 500 arrays are older subsystems used to validate problems and debug issues for the current field population.

Another picture of the EVA lab with older EVAs

They had some old Compaq equipment there, but I seem to have flubbed the picture of it, so this one will have to suffice. It shows both vertically and horizontally oriented drive shelves. I couldn't tell you which EVAs these were, but as they came earlier in the tour, I figured they were older equipment. It seemed that the farther you got into the tour, the closer you came to the current iterations of EVA. It was like an archaeological dig in reverse: instead of the most current layers coming first, they came last.

I asked Tony how many FC ports he had, and he said it was probably easiest to count the switch ports and double them, but something in the thousands seemed reasonable.

FC switch rack with just a small selection of switch equipment

Parts of the lab, deep in its bowels, were off limits to both cameras and bloggers. But we were talking about some of the remote replication support EVA had and how they tested it over distance. Tony said they had to ship their reel of 100 miles of FC up north (probably for some other testing), but they have a surrogate machine which can be programmed to create the proper FC delay to simulate any required distance.

FC delay generator box

The blue box in the adjacent picture seemed to be this magic FC delay-inducer box. It had interesting lights on it.

Nigel Poulton of Ruptured Monkeys and Devang Panchigar of StorageNerve Blog were also on the tour taking pictures and video. You can barely make out Devang in the picture next to Nigel. Calvin Zito from the HP StorageWorks Blog was also on the tour but is not in any of my pictures.

Nigel and Devang (not pictured) taking videos on EVA lab tour

Throughout our tour of the lab I can say I saw only one logic analyzer, although I am sure there were plenty more in the off-limits area.

Lonely logic analyzer in EVA lab
During HPTechDay they hit on the topic of storage-server convergence and the use of commodity x86 hardware for future storage systems. From the lack of logic analyzers I would have to concur with this analysis.

Nonetheless, I saw some hardware workstations, although this was another lonely workstation surrounded by a sea of EVAs.

Hardware workstation in the EVA lab, covered in parts and HW stuff
Believe it or not, I actually saw one stereo microscope but failed to take a picture of it. Yet another indicator of hardware's decline, and of my inadequacies as a photographer.

One picture shows an EVA obviously undergoing an error injection test, with drives tagged as removed and being rebuilt or reborn as part of RAID testing.

Drives tagged for removal during EVA test
In my day we would save particularly “squirrelly drives” from the field and use them to verify storage subsystem error handling. I would bet anything these tagged drives had specific error injection points used to validate EVA drive error handling.

I could go on, and I have a couple more decent lab pictures, but you get the gist of the tour.

For some reason I enjoy lab tours. You can tell a lot about an organization by how its labs look and how they are staffed, organized and set up. What HP's EVA lab tells me is that they spare no expense to ensure their product is literally bulletproof, bug proof, and works every time for their customer base. I must say I was pretty impressed.

At the end of the HPTechDay event, Greg Knieriemen of Storage Monkeys and Stephen Foskett of GestaltIT hosted an InfoSmack podcast to be broadcast next Sunday, 10/4/2009. There we talked a little more about commodity hardware versus purpose-built storage subsystem hardware; it was a brief but interesting counterpoint to the discussions earlier in the week and the evidence from our portion of the lab tour.

The coming hard drive capacity wall?

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)
Hard drives have been on a capacity tear lately, what with perpendicular magnetic recording and tunneling magnetoresistive heads. As evidence of this, Seagate just announced their latest Barracuda XT, a 2TB hard drive with 4 platters, ~500GB/platter, at a 368Gb/sqin recording density.

Read-head technology limits

Recently, I was at a Rocky Mountain IEEE Magnetics Society seminar where Bruce Gurney, Ph.D., from Hitachi Global Storage Technologies (HGST) said there was no viable read-head technology to support anything beyond 1Tb/sqin recording densities. In all fairness this was a public lecture and not under NDA, but it's obvious the (read-head) squeeze is on.

Assuming it's a relatively safe bet that densities of ~1Tb/sqin can be attained with today's technology, that means another ~3X in drive capacity is achievable using current read-heads. Hence, for a 4-platter configuration, we can expect a ~6TB drive in the near future using today's read-heads.
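
A quick back-of-the-envelope check of that estimate, using the Barracuda XT numbers cited earlier (a sketch of the arithmetic only, not vendor data):

```python
# Rough capacity scaling from areal density, using the numbers cited above.
current_density = 368    # Gb/sq.in., Barracuda XT
limit_density   = 1000   # Gb/sq.in., ~1 Tb/sq.in. read-head limit
per_platter_now = 0.5    # TB/platter (~500GB)
platters        = 4

scale = limit_density / current_density          # ~2.7X
max_drive = platters * per_platter_now * scale   # ~5.4 TB
print(f"density scale ~{scale:.1f}X, ~{max_drive:.1f} TB per 4-platter drive")
```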

But what happens next?

It's unclear to me how quickly drive vendors deliver capacity increases these days, but in the recent past a doubling in capacity occurred every 18-24 months. Assuming this holds for today's technology, somewhere in the next 24-36 months we will see a major transition to a new read-head technology.

Thus, today's drive industry must spend major R&D dollars to discover, develop, and engineer a brand new read-head technology to take drive capacity to the next chapter. What this means for write heads, media smoothness, and head flying height is unknown.

Read-head development

At the seminar mentioned above, Bruce had some interesting charts showing how long previous read-head technologies took to develop and how long they were used in drive production. According to Bruce, it has recently been taking about 9 years to take a read-head technology from discovery to drive production. However, while in the past a new read-head technology would last 10 years or more in production, nowadays they appear to last only 5 to 6 years before the production read-head technology changes. Thus, it takes twice as long to develop a read-head technology as it is used in production, which means the R&D expense must be amortized over a shorter timeframe. If anything, from my perspective, the production runs for new read-head technologies seem to be getting shorter.

Nonetheless, HGST and others most probably have a new head up their sleeve but are not about to reveal it until the time comes to bring it out in a new hard drive.

Elephant in the room

But that's not the real problem. If production runs continue to shrink in duration and the cost and time of developing new heads doesn't shrink accordingly, the industry must eventually reach a drive capacity wall. This won't be because some physical magnetic/electronic/optical constraint has been reached, but because a much more fundamental, inherent economic constraint has been reached: it just costs too much to develop new read-head technology and no one company can afford it.

There are a few ways out of this death spiral that I can see:

  • Lengthen read-head production duration,
  • Decrease the cost and time to develop new read-heads, or
  • Create an industry-wide read-head technology consortium that can track and fund future read-head development.

More than likely we will need some combination of all of these solutions if the drive industry is to survive for long.

What if there were no backup?

Data Center by Mathieu Ramage (Flickr)
If backup didn't exist and you had to start over to protect your data, how would you do it today?

I think five things are important to protect data in today's data center:

  • Any data ever created in the data center or on-the-road needs to be protected,
  • Data restores must be under end-user control,
  • Data needs to be copied/replicated/mirrored offsite to support disaster recovery,
  • Multiple data copies should exist only to satisfy some data protection policy – one copy is mandatory, two copies (not co-located) would be required to support higher availability, and
  • Data protection activities should not interfere with or interrupt ongoing data center operations

All this can be and is being done with backup and other systems today, but most of these products and features grew out of earlier phases of computing. With today's technology, many of these capabilities may no longer be necessary if one could just rethink data protection from the ground up.

Data Versioning

I think some form of data/file/block versioning could easily support the requirement of restoring any data ever created. Versioning systems have existed in the past and could certainly be re-constituted today with some sort of standards. The cost of storing all that data might be a concern, but storage costs continue to decrease, and if the multiple copies retained for data protection can be eliminated, it might just be a wash. Versioning could just as easily be provided for the laptop; once new versions of data are created, old versions could be moved off the laptop to the data center for safekeeping and to free up space.
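
To make the idea concrete, here is a minimal sketch of per-file versioning with restore-to-any-point, in Python. The class and method names are purely illustrative; a production versioning system would obviously need durable storage, deduplication, and much more.

```python
# Minimal sketch of file versioning: every write creates a new version,
# and any prior version remains restorable.
import time

class VersionedStore:
    def __init__(self):
        self._versions = {}            # path -> list of (timestamp, bytes)

    def write(self, path, data):
        self._versions.setdefault(path, []).append((time.time(), data))

    def versions(self, path):
        return [ts for ts, _ in self._versions.get(path, [])]

    def restore(self, path, timestamp=None):
        """Return the newest version at or before 'timestamp' (latest if None)."""
        history = self._versions.get(path, [])
        if timestamp is None:
            return history[-1][1] if history else None
        candidates = [d for ts, d in history if ts <= timestamp]
        return candidates[-1] if candidates else None

store = VersionedStore()
store.write("/home/ray/report.txt", b"draft 1")
store.write("/home/ray/report.txt", b"draft 2")
print(store.restore("/home/ray/report.txt"))    # -> b'draft 2'
```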

End-user visibility

End-user restoration requires some facility to explore the end-user's protected file-name and block space. Once this is available, identifying which version needs to be restored and where to restore it should be straightforward. All backup applications provide a backup directory, and a few even allow end-user access to perform data restores. While all this works well with files, having an end-user do this for block storage would require more sophistication. Nonetheless, both file and block restores seem entirely feasible once data versioning is in place.

Ubiquitous replication

The requirement to have data copies offsite is certainly feasible today. Replication can be done in hardware or software, synchronously, semi-synchronously, and/or asynchronously. Replication today can solve this problem, but replicating to a separate data center costs too much. Enter the storage cloud. With the storage cloud we could pay for just the data bandwidth and storage needed to support our data protection needs and no more. Old data versions could be replicated as new versions are created. Protecting data written to a new version is more problematic, but some sort of write splitter (a la CDP) could be used to create a replica of this data as well.
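
For illustration, a minimal sketch of that write-splitter idea: every write lands locally and is queued for asynchronous replication offsite. The CloudReplica class is a hypothetical stand-in for whatever cloud storage API would actually be used; a real CDP product would be far more involved.

```python
# Sketch of a CDP-style write splitter for protecting newly written data.
import queue
import threading

class CloudReplica:                      # hypothetical offsite target
    def __init__(self):
        self.blocks = {}
    def put(self, block_id, data):
        self.blocks[block_id] = data

class WriteSplitter:
    def __init__(self, local_store, cloud_replica):
        self.local = local_store
        self.cloud = cloud_replica
        self.q = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block_id, data):
        self.local[block_id] = data      # synchronous local write
        self.q.put((block_id, data))     # queued for the offsite copy

    def _drain(self):
        while True:
            block_id, data = self.q.get()
            self.cloud.put(block_id, data)   # asynchronous replication
            self.q.task_done()

splitter = WriteSplitter(local_store={}, cloud_replica=CloudReplica())
splitter.write("vol1:block42", b"new version data")
splitter.q.join()                        # wait for the replica to catch up
```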

Policy driven

Having a policy-driven data protection system that stores only a minimal number of copies of data seems difficult to support. Yet this seems to be what incremental-only backup software and archive products offer today. For other backup software, using a deduplicating VTL gets you something very similar. Adding some policy sophistication to coordinate multiple data protection copies across multiple (potentially cloud) nodes and deduplicate the unnecessary copies seems entirely feasible.
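
As a thought experiment, a minimal sketch of such a policy check, using the copy rules from the requirements list above (one copy mandatory, two non-co-located copies for higher availability). The site and tier names are invented for illustration.

```python
# Sketch of a data protection copy policy: how many copies are required
# and whether their placement satisfies the policy.
def required_copies(availability_tier):
    return 1 if availability_tier == "standard" else 2

def placement_ok(copies, tier):
    """copies: list of site names currently holding a copy."""
    need = required_copies(tier)
    if len(copies) < need:
        return False
    if need == 2 and len(set(copies)) < 2:   # copies must not be co-located
        return False
    return True

print(placement_ok(["denver"], "standard"))            # True
print(placement_ok(["denver", "denver"], "high"))      # False, co-located
print(placement_ok(["denver", "cloud-east"], "high"))  # True
```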

Operationally transparent

Not interrupting ongoing operations is also a tough nut to crack. Yet many storage vendors provide snapshot technologies that copy block and/or file data without interrupting operations. However, coordinating vendor snapshot technologies from some central data protection manager is an essential integration that continues to be lacking.

Can piece parts solve the problem?

Yes, most of these features are purchasable as separate product offerings (except data versioning), but what's missing is any one product that pulls all of this together and offers one integrated data protection solution as I have described it.

The problem, of course, is that such functionality probably belongs in the O/S or a hypervisor, but those vendors long ago relinquished any responsibility for data protection. Aside from the anti-trust and anti-competitive nature of such a future data protection O/S offering, I see only isolated steps and no coordinated attack on today's overall data protection problem.

Backup software vendors do a great job with what they have under their control, but they can't do it all; ditto for VTL providers, CDP vendors, replication products, etc. Piecemeal solutions can only take us so far down this path, but they're all we have today and, I fear, for the foreseeable future.

Dream time over for now, gotta backup some data…

IO Virtualization comes out

Snakes in a plane by richardmasoner (from flickr (cc))
Prior to last week's VMworld, I had never heard of IO virtualization products before – storage virtualization yes, but never IO virtualization. Then at VMworld I met with two vendors of IO virtualization products, Aprius and Virtensys.

IO virtualization takes the HBAs/CNAs/NICs that would normally be plugged into each server in a rack and shares them from a top-of-rack box. The top-of-rack box is connected to each of the rack's servers by extending each server's PCI-express bus.

Each individual server believes it has a local HBA/CNA/NIC card and acts accordingly. The top-of-rack box handles the mapping of each server to a portion of the HBA/CNA/NIC cards being shared. This all reminds me of server virtualization, which uses software to share the server's processor, memory and IO resources across multiple applications. But with one significant difference.

How IO virtualization works

Aprius depends on the new SR-IOV (Single Root I/O Virtualization [requires login]) standard. I am no PCI-express expert, but what this seems to do is allow an HBA/CNA/NIC PCI-express card to be a shared resource among a number of virtual servers executing within a physical server. What Aprius has done is a sort of "P2V in reverse", allowing a number of physical servers to share the same PCI-express HBA/CNA/NIC card in the top-of-rack solution.

Virtensys says its solution does not depend on the SR-IOV standard to provide IO virtualization. As such, it's not clear what's different, but the top-of-rack solution could conceivably share the hardware via software magic.
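
To make the mapping idea concrete, here is a purely conceptual toy model of a top-of-rack box assigning physical servers to shares of pooled CNA ports. This illustrates the concept only, is not how Aprius or Virtensys actually implement it, and all names and numbers are made up.

```python
# Toy model of top-of-rack IO virtualization: physical servers attached
# via PCIe extenders are mapped onto shared CNA ports.
class TopOfRackIOV:
    def __init__(self, cna_ports, servers_per_port):
        self.capacity = servers_per_port
        self.mapping = {p: [] for p in cna_ports}    # CNA port -> servers

    def attach(self, server):
        for port, servers in self.mapping.items():
            if len(servers) < self.capacity:
                servers.append(server)
                return port     # the server 'sees' this as its local CNA
        raise RuntimeError("no shared CNA capacity left")

tor = TopOfRackIOV(cna_ports=["cna0", "cna1"], servers_per_port=4)
for s in ["srv1", "srv2", "srv3", "srv4", "srv5"]:
    print(s, "->", tor.attach(s))
```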

From an FC and HBA perspective there seem to be a number of questions as to how all this works.

  • Does the top-of-rack box need to be powered on and booted up first?
  • How are FC zoning and LUN masking supported in a shared environment?

Similar networking questions arise, especially when one considers iSCSI boot capabilities.

Economics of IO virtualization

But the real question is one of economics. My lab-owner friends tell me that a CNA costs about $800/port these days. Now consider that with IO virtualization 4-8 servers could share each of these ports, and the economics become clearer. With a typical configuration of 6 servers:

  • For a non-IO-virtualized solution, each server would have 2 CNA ports at a minimum, so this would cost $1600/server or $9600 total.
  • For an IO-virtualized solution, each server requires a PCI-express extender, costing about $50/server or $300 total, plus at least one CNA (for the top-of-rack box) costing $1600, plus the cost of the top-of-rack box itself.

If the IO virtualization box costs less than $7.7K it would be economical. But IO virtualization providers also claim another savings: fewer switch ports need to be purchased because there are fewer physical network links. It's unclear to me what a 10GbE port with FCoE support costs these days, but my guess is about 2X what a CNA port costs, or another $1600/port; for the 6-server, dual-ported configuration that's ~$19.2K. Thus, the top-of-rack solution could cost almost $27K and still be more economical. When IO virtualization is also used to reduce separate HBAs and NICs, the top-of-rack solution could be even more economical. (The arithmetic is worked through in the sketch below.)
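
Working through the numbers above in one place (these are the rough per-port figures quoted in this post, not actual list prices):

```python
# Back-of-the-envelope IO virtualization economics for 6 servers.
servers          = 6
cna_port_cost    = 800     # $ per CNA port
ports_per_server = 2
extender_cost    = 50      # $ per server PCIe extender card
switch_port_cost = 1600    # assumed ~2X a CNA port for 10GbE/FCoE

# Conventional: every server gets its own dual-ported CNA
conventional = servers * ports_per_server * cna_port_cost          # $9,600

# IO virtualized: extenders plus one shared dual-ported CNA up top
iov_fixed = servers * extender_cost + ports_per_server * cna_port_cost  # $1,900
breakeven_box = conventional - iov_fixed                            # ~$7,700

# Add the switch ports no longer needed (one per server CNA port)
switch_savings = servers * ports_per_server * switch_port_cost      # $19,200
breakeven_with_switch = breakeven_box + switch_savings               # ~$26,900

print(breakeven_box, breakeven_with_switch)
```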

Although the economics may be in favor of IO virtualization – at the moment – time is running out. CNA, HBA and NIC ports are coming down in price as vendors ramp up production. The same factors will reduce switch port costs as well. Thus, the savings gained from sharing CNAs, HBAs and NICs across multiple servers will diminish over time. Also, the move to FCoE will eliminate separate HBAs and NICs and replace them with just CNAs, so there will be even fewer ports to amortize.

Moreover, PCI-express extender cards will probably never achieve volumes similar to HBAs, NICs, or CNAs, so extender card pricing should remain relatively flat. In contrast, any top-of-rack solution will share in the overall technology trends reducing server pricing, so the relative advantage of IO virtualization over top-of-rack switches should be a wash.

The critical question for the IO virtualization vendors is whether they can support a high enough fan-in (physical servers per top-of-rack box) to justify the additional capital and operational expense of their solution, and whether they can keep ahead of the pricing trends of their competition (top-of-rack switch ports and server CNA ports).

On one side, as CNAs, HBAs, and NICs become faster and more powerful, no single application can consume all the throughput being made available. On the other hand, server virtualization now runs more applications on each physical server and, as such, amortizes port hardware over more and more applications.

Does IO virtualization make sense today, with HBAs at 8GFC and NICs and CNAs at 10GbE? Would it make sense in the future with converged networks? It all depends on port costs. As port costs go down, these products will eventually be squeezed.

The significant difference between server and IO virtualization is that IO virtualization doesn't reduce the hardware footprint: one top-of-rack IO virtualization appliance replaces a top-of-rack switch, and the server PCI-express slots once used by CNAs/HBAs/NICs are now used by PCI-extender cards. In contrast, server virtualization reduced hardware footprint and costs from the start. The fact that IO virtualization doesn't reduce hardware footprint may doom this product.

What’s happening with MRAM?

16Mb MRAM chips from Everspin

At the recent Flash Memory Summit there were a few announcements showing continued development of MRAM technology, which can substitute for NAND or DRAM, has unlimited write cycles and is based on magnetism. My interest in MRAM stems from its potential use as a substitute storage technology for today's SSDs, which use SLC and MLC NAND flash memory with much more limited write cycles.

MRAM has the potential to replace NAND in SSDs because of its write speed (current prototypes write at 400MHz, i.e., in a few nanoseconds) and its potential to go up to 1GHz. At 400MHz, MRAM is already much, much faster than today's NAND. And with no write limits, MRAM technology should be very appealing to most SSD vendors.

The problem with MRAM

The only problem is that current MRAM chips use 150nm chip design technology whereas today's NAND ICs use 32nm chip design technology. This means that current MRAM chips hold about 1/1000th the capacity of today's NAND chips (16Mb MRAM from Everspin vs 16Gb NAND from multiple vendors). MRAM has to get onto the same (chip) design node as NAND to make a significant play for storage-intensive applications.
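
A rough way to decompose that ~1000X capacity gap, under my own simplifying assumption that bit density scales with the square of the process node; the remainder reflects cell size and architecture differences (MLC, smaller NAND cells), which this toy calculation lumps together.

```python
# Rough decomposition of the ~1000X capacity gap between today's
# 16Mb MRAM and 16Gb NAND chips (assumption-laden, for illustration only).
mram_node, nand_node = 150, 32            # nm design rules
capacity_gap = 16e9 / 16e6                # 16Gb vs 16Mb -> 1000X

litho_factor = (mram_node / nand_node) ** 2    # ~22X from scaling alone
other_factor = capacity_gap / litho_factor     # ~45X from cell/architecture

print(f"lithography ~{litho_factor:.0f}X, cell/architecture ~{other_factor:.0f}X")
```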

It's encouraging that somebody at least is starting to manufacture MRAM chips rather than the technology remaining a lab prototype. From my perspective, it can only get better from here…

What's holding back the cloud?

Cloud whisps by turtlemom4bacon

Steve Duplessie's recent post on how the lack of scarcity will be a game changer got me thinking. Free is good, but the simplicity of the user/administrative interface is worth paying for. And it's that simplicity that pays off for me.

Ease of use

I agree wholeheartedly with Steve about what and where people should spend their time today. TweetDeck, the Mac, and the iPhone are three key examples that make my business life easier (most of the time).

  • TweetDeck allows me to filter who I am following all while giving me access to any and all of them.
  • The Mac leaves me much more time to do what needs to be done and allows me to spend less time on non-essential stuff.
  • The iPhone has 1000's of apps which make my idle time that much more productive.

Nobody would say any of these things is easy to create, and for most of them (TweetDeck is free at the moment) I pay a premium. All these products contain significant complexity in order to offer the simple user and administrative interfaces they supply.

The iPhone is probably closest to the cloud from my perspective. But it performs poorly (compared to broadband) and service (AT&T?) is spotty. These are nuisances in a cell phone and can be lived with; if this were my only work platform they would be deadly.

Now the cloud may be easy to use because it removes the administrative burden, but that's only one facet of using it. I assume using most cloud services is as easy as signing up on the web and then recoding applications to use the cloud provider's designated API. That doesn't sound easy to me. (Full disclosure: I am not currently a cloud user and thus cannot speak to its ease of use.)

Storm clouds

However, today the cloud is not there yet for other reasons: availability concerns, security concerns, performance issues, etc. All of these are inhibitors that need to be resolved before the cloud can reach the mainstream, or perhaps become my platform of choice. I have also talked before about some other issues with the cloud.

Aside from those inhibitors, the other main problem with the cloud is the lack of applications I need to do business today. Google Apps and MS Office over the net are interesting but not sufficient. I'm not sure what would be sufficient, and that would depend on your line of business, but server and desktop platforms had the same problem when they started out. However, servers and desktops evolved over time from killer apps to providing the full range of needed application support. The cloud will no doubt follow, over time.

In the end, the cloud needs to both grow up and evolve to host my business model, and I presume many others as well. Personally, I don't care whether my data and apps are hosted in the cloud or on office machines. What matters to me is security, reliability, availability, and usability. When the cloud can support me the way the Mac can, then who hosts my applications will become a purely economic decision.

The cloud and net are just not there yet.