Analyzing SPECsfs2008 flash use in NFS performance – chart-of-the-month

(SCISFS120316-002) (c) 2012 Silverton Consulting, All Rights Reserved

For some time now I have been using OPS/drive to measure storage system disk drive efficiency but have so far failed to come up with anything similar for flash or SSD use.  The problem with flash in storage is that it can be used as a cache or as a storage device.  Even when used as a storage device under automated storage tiering, SSD advantages can be difficult to pin down.

In my March newsletter, as a first attempt to measure storage system flash efficiency, I introduced the new chart shown above, which plots the top 10 SPECsfs2008 results in NFS throughput operations per second per GB of NAND.

What’s with Avere?

Something different has occurred with the (#1) Avere FXT 3500 44-node system in the chart.  The 44-node Avere system used only ~800GB of flash, as a ZIL (ZFS intent log, per the SPECsfs report).  However, it also had ~7TB of DRAM across its 44 nodes, most of which was used for file IO caching.  If we incorporated storage system memory size along with flash GB in the above chart, it would have dropped the Avere numbers by a factor of ~9 while only dropping the others by a factor of ~2X, which would still give Avere a significant advantage, just not quite so stunning a one.  Also, the Avere system front-ends other NAS systems (this one running ZFS), so it's not quite the same as being a direct NAS storage system like the others on this chart.
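
To make that normalization concrete, here is a minimal sketch of the two metrics.  The throughput figure is purely illustrative (on the order of the published 44-node result) and the capacities are the approximate values quoted above, so the exact reduction factor will differ a bit from the newsletter's.

    nfs_ops_per_sec = 1_600_000      # illustrative NFS throughput, ops/sec (not the exact published number)
    flash_gb = 800                   # ~800GB of flash (the ZIL)
    dram_gb = 7 * 1024               # ~7TB of DRAM across the 44 nodes

    ops_per_flash_gb = nfs_ops_per_sec / flash_gb
    ops_per_flash_plus_dram_gb = nfs_ops_per_sec / (flash_gb + dram_gb)

    print(round(ops_per_flash_gb), round(ops_per_flash_plus_dram_gb))
    print(round(ops_per_flash_gb / ops_per_flash_plus_dram_gb, 1), "x reduction")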

The remainder of the chart (#2-10) belongs to NetApp and their FlashCache (or PAM) cards.  Even Oracle's Sun ZFS Storage 7320 appliance did not come close to either the Avere FXT 3500 system or the NetApp storage on this chart.  There were at least 10 other SPECsfs2008 NFS results that used some form of flash but were not fast enough to rank on this chart.

Other measures of flash effectiveness

This metric still doesn't quite capture flash efficiency.  I was discussing flash performance with another startup the other day and they suggested that SSD drive count might be a better alternative.  Such a measure would take into consideration that each SSD can only sustain a certain performance level, not unlike disk drives.

In that case, Avere's 44-node system had 4 such drives, and each NetApp system had two FlashCache cards, representing 2 SSDs per NetApp node.  I'll try that next time to see if it's a better fit.
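
As a rough sketch of that alternative metric, the device counts below come from the text above while the throughput values are placeholders to be filled in from the actual SPECsfs2008 disclosures.

    def ops_per_flash_device(nfs_ops_per_sec, flash_device_count):
        """NFS throughput normalized by SSD/flash device count rather than GB."""
        return nfs_ops_per_sec / flash_device_count

    print(ops_per_flash_device(1_600_000, 4))   # 44-node Avere, 4 flash drives (illustrative throughput)
    print(ops_per_flash_device(190_000, 2))     # hypothetical NetApp system with 2 FlashCache cards (placeholder throughput)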

~~~~

The complete SPECsfs2008 performance report went out in SCI’s March newsletter.  But a copy of the report will be posted on our dispatches page sometime next month (if all goes well). However, you can get the SPECsfs performance analysis now and subscribe to future free newsletters by just sending us an email or using the signup form above right.

For a more extensive discussion of current NAS or file system storage performance covering SPECsfs2008 (Top 20) results and our new ChampionChart™ for NFS and CIFS storage systems, please see SCI’s NAS Buying Guide available from our website.

As always, we welcome any suggestions or comments on how to improve our analysis of SPECsfs2008 results or any of our other storage performance analyses.


Latest SPECsfs2008 results, over 1 million NFS ops/sec – chart-of-the-month

Column chart showing the top 10 NFS throughput operations per second for SPECsfs2008
(SCISFS111221-001) (c) 2011 Silverton Consulting, All Rights Reserved

[We are still catching up on our charts for the past quarter but this one brings us up to date through last month]

There's just something about a million SPECsfs2008(r) NFS throughput operations per second that kind of excites me (weird, I know).  Yes, it takes 44 nodes of Avere FXT 3500 with over 6TB of DRAM cache, 140 nodes of EMC Isilon S200 with almost 7TB of DRAM cache and 25TB of SSDs, or at least 16 nodes of NetApp FAS6240 in Data ONTAP 8.1 cluster mode with 8TB of FlashCache to get to that level.

Nevertheless, a million NFS throughput operations is something worth celebrating.  It’s not often one achieves a 2X improvement in performance over a previous record.  Something significant has changed here.

The age of scale-out

We have reached a stage where scaling systems out can provide near-linear performance improvements, at least up to a point.  For example, the EMC Isilon and NetApp FAS6240 showed close to linear speed-up in performance as they added nodes, indicating (to me at least) there may be more there if they just throw more storage nodes at the problem.  Then again, maybe they saw some drop-off they didn't wish to show the world, or the costs became prohibitive and they had to stop someplace.  On the other hand, Avere only benchmarked a 44-node system with their current hardware (FXT 3500); they must have figured winning the crown was enough.

However, I would like to point out that throwing just any hardware at these systems doesn't necessarily increase performance.  Previously (see my CIFS vs NFS corrected post), we had shown the linear regression of NFS throughput against spindle count, and although the regression coefficient was good (~R**2 of 0.82), it wasn't perfect.  And of course we eliminated any SSDs from that prior analysis.  (Probably should consider eliminating any system with more than a TB of DRAM as well – but this was before the 44-node Avere result was out.)
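
For readers who want to reproduce that kind of fit, here is a minimal sketch of the regression; the (spindle count, NFS ops/sec) pairs below are made up for illustration, not the actual SPECsfs2008 data set.

    import numpy as np

    # Illustrative data points only -- substitute the real SPECsfs2008 results.
    spindles = np.array([56, 96, 224, 448, 672, 1152, 3360], dtype=float)
    nfs_ops  = np.array([40e3, 60e3, 140e3, 260e3, 390e3, 660e3, 1.6e6])

    slope, intercept = np.polyfit(spindles, nfs_ops, 1)
    predicted = slope * spindles + intercept
    r_squared = 1 - ((nfs_ops - predicted) ** 2).sum() / ((nfs_ops - nfs_ops.mean()) ** 2).sum()

    print(f"ops/sec per spindle (slope): {slope:,.0f}")
    print(f"R**2: {r_squared:.2f}")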

Speaking of disk drives, the FAS6240 system nodes had 72-450GB 15Krpm disks each, the Isilon nodes had 24-300GB 10Krpm disks each, and each Avere node had 15-600GB 7.2Krpm SAS disks.  However, the Avere system also had 4 Solaris ZFS file storage systems behind it, each of which had another 22-3TB (7.2Krpm, I think) disks.  Given all that, the 16-node NetApp, 140-node Isilon and 44-node Avere systems had a total of 1152, 3360 and 748 disk drives respectively.  Of course, this doesn't count the system disks for the Isilon and Avere systems, nor any of the SSDs or FlashCache in the various configurations.
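
The spindle totals fall straight out of the configurations just described:

    # Spindle counts implied by the configurations above.
    netapp_disks = 16 * 72            # 16 FAS6240 nodes x 72 disks each        = 1152
    isilon_disks = 140 * 24           # 140 S200 nodes x 24 disks each          = 3360
    avere_disks  = 44 * 15 + 4 * 22   # 44 FXT nodes x 15 + 4 ZFS backends x 22 =  748

    print(netapp_disks, isilon_disks, avere_disks)   # 1152 3360 748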

I would say that with this round of SPECsfs2008 benchmarks, scale-out NAS systems have come into their own.  It's too bad that neither NetApp nor Avere released comparable CIFS benchmark results, which would have helped in my perennial discussion of CIFS vs. NFS.

But there’s always next time.

~~~~

The full SPECsfs2008 performance report went out to our newsletter subscribers last December.  A copy of the full report will be up on the dispatches page of our site sometime later this month (if all goes well). However, you can see our full SPECsfs2008 performance analysis now and subscribe to our free monthly newsletter to receive future reports directly, by just sending us an email or using the signup form above right.

For a more extensive discussion of file and NAS storage performance covering top 30 SPECsfs2008 results and NAS storage system features and functionality, please consider purchasing our NAS Buying Guide available from SCI’s website.

As always, we welcome any suggestions on how to improve our analysis of SPECsfs2008 results or any of our other storage system performance discussions.

Comments?

Will Hybrid drives conquer enterprise storage?

Toyota Hybrid Synergy Drive Decal: RAC Future Car Challenge by Dominic's pics (cc) (from Flickr)

I saw that Seagate announced the next generation of their Momentus XT Hybrid (SSD & disk) drive this week.  We haven't discussed Hybrid drives much on this blog but they have become a viable product family.

I am not planning on describing the new drive specs here as there was an excellent review by Greg Schulz at StorageIOblog.

However, the question some in the storage industry have had is whether Hybrid drives can supplant data center storage.  I believe the answer to that is no, and I will tell you why.

Hybrid drive secrets

The secret to Seagate’s Hybrid drive lies in its FAST technology.  It provides a sort of automated disk caching that moves frequently accessed OS or boot data to NAND/SSD providing quicker access times.

Caching logic has been around in storage subsystems for decades now, ever since the IBM 3880 Mod 11 & 13 storage control systems came out last century.  However, these algorithms have gotten much more sophisticated over time and today can make a significant difference in storage system performance.  This can be easily witnessed by the wide variance in storage system performance on a per disk drive basis (e.g., see my post on Latest SPC-2 results – chart of the month).
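
To give a flavor of what such logic does at its simplest, here is a toy least-recently-used read cache; real controller caching tracks far more (prefetch streams, write destaging, access frequency, and so on), so this is only a sketch.

    from collections import OrderedDict

    class LRUReadCache:
        """Toy LRU read cache -- a stand-in for the far more sophisticated
        caching logic real storage controllers employ."""

        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()              # block number -> data

        def read(self, block_number, backend_read):
            if block_number in self.blocks:          # cache hit
                self.blocks.move_to_end(block_number)
                return self.blocks[block_number]
            data = backend_read(block_number)        # cache miss: fetch from disk
            self.blocks[block_number] = data
            if len(self.blocks) > self.capacity:     # evict the least recently used block
                self.blocks.popitem(last=False)
            return data

    cache = LRUReadCache(capacity_blocks=2)
    cache.read(7, lambda blk: f"data-{blk}")         # miss, goes to "disk"
    cache.read(7, lambda blk: f"data-{blk}")         # hit, served from cache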

Enterprise storage use of Hybrid drives?

The problem with using Hybrid drives in enterprise storage is that caching algorithms depend on some predictability in access/reference patterns.  When a Hybrid drive is directly connected to a server or a PC, it can view a significant portion of the server's IO (at least to the boot/OS volume), but more importantly, that boot/OS data is statically allocated, i.e., it doesn't move around all that much.  This means that one PC session looks pretty much like the next, and as such, the hybrid drive can learn an awful lot about the next IO session just by remembering the last one.

However, enterprise storage IO changes significantly from one storage session (day?) to another.  Not only are the end-user generated database transactions moving around the data, but the data itself is much more dynamically allocated, i.e., moves around a lot.

Backend data movement is especially true for automated storage tiering used in subsystems that contain both SSDs and disk drives. But it's also true in systems that map data placement using log-structured file systems.  NetApp's Write Anywhere File Layout (WAFL) is a prominent example of this approach, but other storage systems do this as well.

In addition, any fixed, permanent mapping of a user data block to a physical disk location is becoming less useful over time as advanced storage features make dynamic or virtualized mapping a necessity.  Just consider snapshots based on copy-on-write technology: all it takes is a write for a snapshot block to be moved to a different location.

Nonetheless, the main problem is that all the smarts about what is happening to data on backend storage primarily lies at the controller level not at the drive level.  This not only applies to data mapping but also end-user/application data access, as cache hits are never even seen by a drive.  As such, Hybrid drives alone don’t make much sense in enterprise storage.

Maybe, if they were intricately tied to the subsystem

I guess one way this could all work better is if the Hybrid drive caching logic were somehow controlled by the storage subsystem.  In this way, the controller could provide hints as to which disk blocks to move into NAND.  Perhaps this is a way to distribute storage tiering activity to the backend devices, without the subsystem having to do any of the heavy lifting, i.e., the hybrid drives would do all the data movement under the guidance of the controller.
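
Purely to make the idea concrete, here is what such a hint might look like.  This is a hypothetical, non-standard interface of my own invention; as noted below, no such command exists today.

    from dataclasses import dataclass
    from enum import Enum

    class HintAction(Enum):
        PIN_TO_NAND = 1        # controller expects these blocks to stay hot
        EVICT_FROM_NAND = 2    # controller knows these blocks just moved or went cold

    @dataclass
    class CacheHint:
        """Hypothetical controller-to-drive hint; not a real SCSI/SATA command."""
        action: HintAction
        start_lba: int         # first logical block address the hint covers
        block_count: int       # number of blocks covered

    # e.g., after migrating an extent to another tier, the controller could tell
    # the hybrid drive to drop its now-stale copy from NAND:
    hint = CacheHint(HintAction.EVICT_FROM_NAND, start_lba=0x7FFF00, block_count=2048)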

I don't think this is likely because it would take industry standardization to define any new "hint" commands and they would be specific to Hybrid drives.  Barring standards, it's an interface between one storage vendor and one drive vendor.  That would probably be ok if you made both the storage subsystem and the hybrid drives, but there aren't any vendors left that do both drives and storage controllers.

~~~~

So, given the state of enterprise storage today and its continuing proclivity to move data around across its backend storage, I believe Hybrid drives won't be used in enterprise storage anytime soon.

Comments?

 

SCI’s latest SPC-1&-1/E LRT results – chart of the month

(c) 2010 Silverton Consulting, Inc., All Rights Reserved

It’s been a while since we reported on Storage Performance Council (SPC) Least Response Time (LRT) results (see Chart of the month: SPC LRT[TM]).  This is one of the charts we produce for our monthly dispatch on storage performance (quarterly report on SPC results).

Since our last blog post on this subject there have been 6 new entries in the LRT Top 10 (#3-6 & 9-10).  As can be seen here, which combines SPC-1 and SPC-1/E results, response times vary considerably.  7 of these top 10 LRT results come from subsystems which either are all SSDs (#1-4, 7 & 9) or have a large NAND cache (#5).  The newest members on this chart were the NetApp FAS3270A and the Xiotech Emprise 5000 (300GB disk drives), both published recently.

The NetApp FAS3270A, a mid-range subsystem with 1TB of NAND cache (512GB in each controller), seemed to do pretty well here, with some all-SSD systems doing better than it and a pair of all-SSD systems doing worse.  Coming in under 1msec LRT is no small feat.  We are certain the NAND cache helped NetApp achieve their superior responsiveness.

What the Xiotech Emprise 5000-300GB storage subsystem is doing here is another question.  They have always done well on an IOPS/drive basis (see SPC-1&-1/E results IOPs/Drive – chart of the month), but being top ten in LRT had not previously been their forte.  How one coaxes a 1.47 msec LRT out of a 20-drive system that costs only ~$41K, 12X lower than the median price (~$509K) of the other subsystems here, is a mystery.  Of course, they were using RAID 1, but so were half of the subsystems on this chart.

It's nice to see some turnover in this top 10 LRT.  I still contend that response time is an important performance metric for many storage workloads (see my IO throughput vs. response time and why it matters post) and improvement over time validates my thesis.  Also, I received many comments discussing the merits of database latencies for ESRP v3 (Exchange 2010) results (see my Microsoft Exchange Performance ESRP v3.0 results – chart of the month post).  You can judge the results of that lengthy discussion for yourselves.

The full performance dispatch will be up on our website in a couple of weeks but if you are interested in seeing it sooner just sign up for our free monthly newsletter (see upper right) or subscribe by email and we will send you the current issue with download instructions for this and other reports.

As always, we welcome any constructive suggestions on how to improve our storage performance analysis.

Comments?

Data compression lives on

Macroblocking: demolish the eerie ▼oid by Rosa Menkman (cc) (from Flickr)

Last week NetApp announced the availability of data compression on many of their unified storage platforms, covering both block and file storage.  Earlier this year EMC announced data compression for LUNs on CLARiiON and Celerra.  I must commend both of them for reintegrating data compression into primary storage systems, where it has been missing since IBM and Sun stopped marketing RVA and SVA.

Data compression algorithms

Essentially, data compression is an algorithm that eliminates redundancy in data streams.  Data compression can be "lossy" or "loss-less".  Data compression in storage subsystems is typically loss-less, which means that the original data can be reconstructed without any loss of information.  One sees lossy algorithms in video/audio data compression, where the loss doesn't noticeably impact video/audio fidelity unless a significant amount of data is thrown away.

One simple example of loss-less data compression is Run-Length Encoding, which substitutes a trigger, count, and character for any run of a character repeated more than 4 times in a block of data.  This compresses well any text strings with lots of blanks, numerical data with lots of 0's, and initial format data written with a repeating character.
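
Here is a minimal sketch of that scheme; the trigger byte and run limit are arbitrary choices for illustration, not how any particular storage system actually encodes runs.

    TRIGGER = 0x1B   # an arbitrary escape byte for this sketch

    def rle_encode(data: bytes) -> bytes:
        """Toy run-length encoder along the lines described above: runs longer than
        4 bytes become (trigger, count, byte); everything else passes through as
        literals.  (A real codec would also escape literal trigger bytes.)"""
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            if run > 4:
                out += bytes([TRIGGER, run, data[i]])
            else:
                out += data[i:i + run]
            i += run
        return bytes(out)

    print(len(rle_encode(b"ABC" + b" " * 40 + b"0" * 12)))   # 55 bytes in -> 9 bytes out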

There are other, more sophisticated compression algorithms like Huffman Coding, which identify the most frequent bytes in a block of data and replace those bytes with shorter bit patterns. For example, if ~50% of the characters in a text file are the letters "a", "e", "i", "o", "t", and "n" (see Wikipedia, Frequency Analysis), then these characters can take up much less space if we encode them in 4 or fewer bits rather than the 8 bits in a byte.
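
A compact sketch of how such a code gets built follows; this is a textbook Huffman construction, not any vendor's implementation, and the sample string is just for demonstration.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        """Build Huffman codes for the characters in text; the most frequent
        characters end up with the shortest bit strings."""
        heap = [[freq, i, chars] for i, (chars, freq) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        codes = {chars: "" for _, _, chars in heap}
        next_id = len(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            for ch in lo[2]:
                codes[ch] = "0" + codes[ch]      # left branch gets a 0 prefix
            for ch in hi[2]:
                codes[ch] = "1" + codes[ch]      # right branch gets a 1 prefix
            heapq.heappush(heap, [lo[0] + hi[0], next_id, lo[2] + hi[2]])
            next_id += 1
        return codes

    codes = huffman_codes("this is an example of a huffman tree")
    print(sorted(codes.items(), key=lambda kv: len(kv[1]))[:5])   # shortest codes = most common characters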

I am certain that both EMC and NetApp are using much more sophisticated algorithms than either of these, and it wouldn't surprise me to learn they are using something like the open source algorithms Zlib (gzip) or Bzip2 (see my Poor deduplication with Oracle RMAN compressed backups post for an explanation), which build on Huffman Coding and add even more sophistication.  Data compression algorithms like these could offer something like 50% compression, i.e., your data could be stored in 50% less space.
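
You can get a feel for such ratios with Python's zlib (DEFLATE, i.e., LZ77 plus Huffman coding).  Note the sample below is highly repetitive and so compresses far better than typical mixed data, which lands nearer the ~2:1 figure above; this is only a sketch, not a claim about what any storage array would achieve.

    import zlib

    # Highly repetitive input is a best case; real-world ratios vary widely.
    original = ("The quick brown fox jumps over the lazy dog. " * 200).encode()
    compressed = zlib.compress(original, 6)

    print(len(original), len(compressed), f"{len(compressed) / len(original):.1%}")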

Data compression is often confused with data deduplication, but it's not the same. Deduplication looks for duplicate data across different data blocks and files, while data compression only examines the data stream within a single block or file and doesn't depend on any other data.

Storage system data compression

In the past, data compression was relegated to a separate appliance, tape storage systems, and/or host software.  By integrating these algorithms into their main storage engines, both NetApp and EMC are taking advantage of the processor speed increases embedded in recent systems to offer this offline functionality for online data.

Historically, compression algorithms such as these were implemented in hardware, but nowadays they can easily be done in software by relegating them to run during off-peak IO time or to execute as the lowest priority task in the storage system. As such, there can be no guarantee of when your data will finally be compressed, but it will be compressed eventually.

Data compression like this is great for data that isn't modified frequently.  It takes some processing time to compress data, and that work would need to be repeated after every modification of a compressed block or file.  So if the data isn't modified that much, compression's processing cost can be amortized over a longer data lifetime.

Further, data compression must be undone at read time, i.e., the data needs to be de-compressed and handed off to the IO requesting it.  De-compression is a much faster algorithm than compression because in the case of something like Huffman Coding the character dictionary is already known and as such, it’s just a matter of table lookup and bit field isolation. It would be convenient if this data were sitting in the system DRAM someplace but lacking that, moving it from cache to DRAM could be done quickly enough, processed there, and then moved back before final transfer to the requesting IO.

As such, data compression may impact response time for compressed data reads.  How much of an impact is yet TBD.

Data writes will not be impacted at all because the compression activity is done much later. Whether the data stays in cache until compressed or is brought back in at some later time is another algorithm question which may impact cache hit rates/compression performance but this doesn’t have to be a serious impediment.

NetApp is able to offer this capability for both block and file storage because of its WAFL backend data structure, which essentially allows it to create variable length blocks for file and block data.  EMC only offers this for LUN data (block storage) as yet, but it's probably just a matter of time before it's available for other data as well.

Any questions?

Primary storage compression can work

Dans la nuit des images (Grand Palais) by dalbera (cc) (from flickr)

Since IBM announced its intent to purchase StorWize there has been much discussion of whether primary storage data compression can be made to work.  As far as I know, StorWize only offered primary storage compression for file data, but there is nothing that prohibits doing something similar for block storage as long as you have some control over how blocks are laid down on disk.

Although secondary block data compression has been around for years in enterprise tape, and more recently in some deduplication appliances, primary storage compression pre-dates secondary storage compression.  STK delivered primary storage data compression with Iceberg in the early 90's, but it wasn't until a couple of years later that they introduced compression on tape.

In both primary and secondary storage, data compression works to reduce the space needed to store data.  Of course, not all data compresses well, most notably image data (as it’s already compressed) but compression ratios of 2:1 were common for primary storage of that time and are normal for today’s secondary storage.  I see no reason why such ratios couldn’t be achieved for current primary storage block data.

Implementing primary block storage data compression

There is significant interest in implementing deduplication for primary storage, as NetApp has done, but supporting data compression is not much harder.  I believe much of the effort to deduplicate primary storage lies in creating a method to address partial blocks out of order, which I would call data block virtual addressing and which requires some sort of storage pool.  The remaining effort to deduplicate data involves implementing the chosen (dedupe) algorithm, indexing/hashing, and other administrative activities.  These latter activities aren't readily transferable to data compression, but the virtual addressing and space pooling should be usable by data compression.

Furthermore, block storage thin provisioning requires some sort of virtual addressing as does automated storage tiering.  So in my view, once you have implemented some of these advanced capabilities, implementing data compression is not that big a deal.

The one question that remains is whether one implements compression with hardware or software (see Better storage through hardware for more). Considering that most deduplication is done via software today, it seems that data compression in software should be doable.  The compression phase could run in the background sometime after the data has been stored.  Real-time decompression in software might take some work, but would cost considerably less than any hardware solution.  Still, the intensive bit fiddling required to perform data compression/decompression may argue for some sort of hardware assist.

Data compression complements deduplication

The problem with deduplication is that it needs duplicate data.  This is why it works so well for secondary storage (backing up the same data over and over) and for VDI/VMware primary storage (with duplicated O/S data).

But data compression is an orthogonal, complementary technique which uses the inherent redundancy in information to reduce storage requirements.  For instance, entropy coding (such as the Huffman stage used in LZ-based compressors like gzip) takes advantage of the fact that in text some letters occur more often than others (see letter frequency). In English, 'e', 't', 'a', 'o', 'i', and 'n' represent over 50% of the characters in most text documents.  By using shorter bit combinations to encode these letters one can reduce the bit-length of any (English) text string substantially.  Another example is run-length encoding, which takes any repeated character string and substitutes a trigger character, the character itself, and a count of the number of repetitions.
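
You can check the letter-frequency claim on any passage with a few lines of Python; the exact share of the top 6 letters varies from text to text, so treat the sample below as illustrative.

    from collections import Counter

    text = ("But data compression is an orthogonal or complementary technique which "
            "uses the inherent redundancy in information to reduce storage "
            "requirements").lower()
    letters = [c for c in text if c.isalpha()]
    top6 = Counter(letters).most_common(6)

    print(top6)
    print(f"top 6 letters cover {sum(n for _, n in top6) / len(letters):.0%} of this passage")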

Moreover, the nice thing about data compression is that all these techniques can be readily combined to generate even better compression rates.  And of course compression could be applied after deduplication to reduce storage footprint even more.

Why would any vendor compress data?

For a few reasons:

  • Compression not only reduces storage footprint but with hardware assist it can also increase storage throughput. For example, if 10GB of data compresses down to 5GB, it should take ~1/2 the time to read.
  • Compression reduces the time it would take to clone, mirror or replicate data.
  • Compression increases the amount of data that can be stored, which should incentivize customers to pay more for your storage.

In contrast, with data compression vendors might sell less storage.  But the advantage of enterprise storage lies in the advanced functionality/features and higher reliability/availability/performance on offer.  I see data compression as just another advantage of enterprise class storage; as a feature, the user could enable or disable it and see how well it works for their data.

What do you think?

SPECsfs2008 CIFS ORT performance – chart of the month

(c) 2010 Silverton Consulting Inc., All Rights Reserved

The above chart on SPECsfs(R) 2008 results was discussed in our latest performance dispatch that went out to SCI’s newsletter subscribers last month.  We have described Apple’s great CIFS ORT performance in previous newsletters but here I would like to talk about NetApp’s CIFS ORT results.

NetApp had three new CIFS submissions published this past quarter, all using the FAS3140 system but with varying drive counts/types and Flash Cache installed.  Recall that Flash Cache used to be known as PAM-II and is an onboard system card which holds 256GB of NAND memory used as an extension of system cache.  This differs substantially from using NAND in a SSD as a separate tier of storage as many other vendors currently do.  The newly benchmarked NetApp systems included:

  • FAS3140 (FCAL disks with Flash Cache) – used 56-15Krpm FC disk drives with 512GB of Flash Cache (2-cards)
  • FAS3140 (SATA disks with Flash Cache) – used 96-7.2Krpm SATA disk drives with 512GB of Flash Cache
  • FAS3140 (FCAL disks) – used 242-15Krpm FC disk drives and had no Flash Cache whatsoever

If I had to guess, the point of this exercise was to show that one can offset fast-performing hard disk drives either by using Flash Cache with significantly fewer (~1/4 as many) fast disk drives, or by using Flash Cache with somewhat more SATA drives.  In another chart from our newsletter one could see that all three systems delivered very similar CIFS throughput results (CIFS Ops/Sec.), but in CIFS ORT (see above), the differences between the 3 systems are much more pronounced.

Why does Flash help CIFS ORT?

As one can see, the best CIFS ORT performance of the three came from the FAS3140 with FCAL disks and Flash Cache, which managed a response time of ~1.25 msec.  The next best performer was the FAS3140 with SATA disks and Flash Cache, with a CIFS ORT of just under ~1.48 msec.  The worst performer of the bunch was the FAS3140 with only FCAL disks, which came in at ~1.84 msec CIFS ORT.  So why the different ORT performance?

Mostly, the better performance is due to the increased cache available in the Flash Cache systems.  If one looks at the SPECsfs 2008 workload, one finds that less than 30% of it is read and write data activity and the rest is what one might call meta-data requests (query path info @21.5%, query file info @12.9%, create = close @9.7%, etc.).  While read data may not be very cache friendly, most of the meta-data activity and all of the write activity are cache friendly.  Meta-data activity is more cache friendly primarily because it's relatively small in size, and any write data goes to cache before being destaged to disk.  As such, this more cache friendly workload generates, on average, better response times when one has larger amounts of cache.

For proof, one need look no further than the relative ORT performance of the FAS3140 with SATA and Flash Cache vs. the FAS3140 with just FCAL disks.  The Flash Cache/SATA drive system had ~25% better ORT results than the FCAL-only system, even with significantly slower and far fewer disk drives.

The full SPECsfs 2008 performance report will go up on SCI’s website later this month in our dispatches directory.  However, if you are interested in receiving this report now and future copies when published, just subscribe by email to our free newsletter and we will email the report to you now.

PC-as-a-Service (PCaaS) using VDI

IBM PC Computer by Mess of Pottage (cc) (from Flickr)

Last year at VMworld, VMware was saying that 2010 was the year for VDI (virtual desktop infrastructure); last week NetApp said that most of the large NY banks they talked with were looking at implementing VDI; and prior to that, HP StorageWorks announced a new VDI reference platform that could support ~1600 VDI images.  It seems that VDI is gaining some serious interest.

While VDI works well for large organizations, there doesn't seem to be any similar solution for consumers. The typical consumer today usually runs down-level OS's, anti-virus, office applications, etc., and has neither the time nor the inclination to update such software.  These consumers would be considerably better served with something like PCaaS, if such a thing existed.

PCaaS

Essentially, PCaaS would be a VDI-like service offering, using standard VDI tools or something similar: a lightweight kernel on the client, use of locally attached resources (printers, USB sticks, scanners, etc.), but with applications hosted elsewhere.  PCaaS could provide all the latest O/S and applications and deliver enterprise class reliability, support and backup/restore services.

Broadband

One potential problem with PCaaS is the need for reliable broadband to the home. Just like other cloud services, without broadband, none of this will work.

Possibly this could be circumvented if a PCaaS viewer browser application were available (like VMware’s Viewer). With this in place, PCaaS could be supplied over any internet enabled location supporting browser access.   Such a browser based service may not support the same rich menu of local resources as a normal PCaaS client, but it would probably suffice when needed. The other nice thing about a viewer is that smart phones, iPads and other always-on web-enabled devices supporting standard browsers could provide PCaaS services from anywhere mobile data or WI-FI were available.

PCaaS business model

As for a businesses that could bring PC-as-a-Service to life, I see many potential providers:

  • Any current PC hardware vendor/supplier may want to supply PCaaS, as it may defer/reduce consumer hardware purchases or rather shift such purchases from consumers to companies.
  • Many SMB hosting providers could easily offer such a service.
  • Many local IT support services could deliver better and potentially less expensive services to their customers by offering PCaaS.
  • Any web hosting company would have the networking, server infrastructure and technical know-how to easily provide PCaaS.

This list ignores any new entrants that would see this as a significant opportunity.

Google, Microsoft and others seem to be taking small steps to do this in a piecemeal fashion, with cloud enabled office/email applications. However, in my view what the consumer really wants is a complete PC, not just some select group of office applications.

As described above, PCaaS would bring enterprise level IT desktop services to the consumer marketplace. Any substantive business in PCaaS would free up untold numbers of technically astute individuals providing un-paid, on-call support to millions, perhaps billions of technically challenged consumers.

Now if someone would just come out with Mac-as-a-Service, I could retire from supporting my family’s Apple desktops & laptops…