Whatever happened to holographic storage?

InPhase Technologies Drive & Media (c) 2010 InPhase Technologies, All Rights Reserved (From their website)

Although InPhase Technologies and a few other startups had taken a shot at holographic storage over time, there has not been any recent innovation here that I can see.

Ecosystems matter

The real problem (which InPhase was trying to address) is to build up an ecosystem around their technology.  In magnetic disk storage, you have media companies, head companies, and interface companies; in optical disk (Blu-Ray, DVDs, CDs) you have drive vendors, media vendors, and laser electronic providers; in magnetic tape, you have drive vendors, tape head vendors, and tape media vendors, etc.  All of these corporate ecosystems are driving their respective technologies with joint and separate R&D funding, as fast as they can and gaining economies of scale from specialization.

Any holographic storage, or any new storage technology for that matter, would have to enter the data storage market with a competitive product, but the real trick is maintaining that competitiveness over time. That's where an ecosystem and all its specialized R&D funding can help.

Market equivalence is fine, but technology trend parity is key

So let's say holographic storage enters the market with a 260GB disk platter to compete against something like Blu-ray. Today Blu-ray technology supports 26GB of data storage on single-layer media costing about $5 each, and a drive costs about $60-$190. So to match today's Blu-ray capabilities, holographic media would need to cost ~$50 and a holographic drive ~$600-$1900. But that's just today: dual-layer Blu-ray is already coming on line and, in the labs, a 16-layer Blu-ray recording was demonstrated in 2008. To keep up with Blu-ray, holographic storage would need to demonstrate more than 4TB of data on a platter in their labs (16x its 260GB starting point, to hold its 10x capacity edge over a ~416GB, 16-layer disc) and be able to maintain similar cost multipliers for their media and drives. Hard to do with limited R&D funding.

As such, I believe it's not enough to achieve parity with the technologies currently available. Any new storage technology really has to be at least (in my estimation) 10x better in cost and performance right at the start in order to gain some sort of foothold that can be sustained. To do this against Blu-ray, optical holographic storage would need to start at a 260GB platter for $5 with a drive at $60-$190 – it's just not there yet.
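For the record, here's the parity arithmetic in a few lines of Python – a back-of-the-envelope sketch using the illustrative numbers above, not vendor pricing:

```python
# Rough cost-parity math for a hypothetical 260GB holographic platter
# vs. single-layer Blu-ray (illustrative numbers from the discussion above).
bluray_gb = 26
bluray_media_cost = 5.0            # ~$5 per disc
bluray_drive_cost = (60, 190)      # ~$60-$190 per drive

holo_gb = 260                      # hypothetical holographic platter

scale = holo_gb / bluray_gb        # 10x the capacity per platter
print(scale * bluray_media_cost)   # media at cost parity per GB: ~$50
print([scale * c for c in bluray_drive_cost])   # drive at parity: ~$600-$1900

# A sustainable 10x advantage instead means matching Blu-ray's *prices*
# ($5 media, $60-$190 drive) while delivering 10x the capacity.
```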

But NAND Flash/SSDs did it!

Yes, but the secret with NAND/SSDs was that they emerged from EPROMs, a small but lucrative market, and later their technology was used in consumer products as a lower cost/lower power/more rugged alternative to the extremely small form factor disk devices that were just starting to come online.  We don't hear about extremely small form factor disk drives anymore because NAND flash won out.  Once NAND flash held that market, consumer product volumes were able to drive costs down and entice the creation of a valuable multi-company/multi-continent ecosystem.  From there, it was only a matter of time before NAND technologies became dense and cheap enough to be used in SSDs addressing the more interesting and potentially more lucrative enterprise data storage domain.

So how can optical holographic storage do it?

Maybe the real problem for holographic storage was its aim at the enterprise data storage market. Perhaps if they could have gone after some specialized or consumer market and carved out a niche, they could have created an ecosystem.  Media and entertainment has some pretty serious data storage requirements which might be a good match.  InPhase was making some inroads there but couldn't seem to put it all together.

So what's left for holographic technology to go after – perhaps medical imaging.  It would play to holographic storage's strengths (the ability to densely record multiple photographs). It's very niche-like, with a few medical instrument players developing MRI, CAT scan and other imaging technology that all require lots of data storage, and long-term retention is a definite plus.  Perhaps, if holographic technology could collaborate with a medical instrument consortium to establish a beachhead and develop some sort of multi-company ecosystem, it could move out from there.  Of course, magnetic disk and tape are also going after this market, so this isn't a certainty, but there may be other markets like this out there, e.g., check imaging, satellite imagery, etc.  Something specialized like this could be just the place to hunker down, build an ecosystem and, in 5-7 years, emerge to attack general data storage again.


SOHO backup options

© 2010 RDX Storage Alliance. All Rights Reserved. (From their website)

I must admit, even though I have disparaged DVD archive life (see CDs and DVDs longevity questioned), I still backup my work desktops/family computers to DVD and DVDdl disks.  It's cheap (on sale 100 DVDs cost about $30 and DVDdls ~2.5x that much) and it's convenient (no need for additional software, outside storage fees, or additional drives).  For offsite backups I take the monthly backups and store them in a safety deposit box.

But my partner (and wife) said “Your time is worth something, every time you have to swap DVDs you could be doing something else.” (… like helping around the house.)

She followed up by saying “Couldn't you use something that you could start and then forget about until it was done?”

Well this got me to thinking (as did the multiple media errors in my latest DVDdl full backup) that there's got to be a better way.

The options for SOHO (small office/home office) Offsite backups look to be as follows: (from sexiest to least sexy)

  • Cloud storage for backup – Mozy, Norton Backup, Gladinet, Nasuni, and no doubt many others can provide secure, cloud based backup of desktop and laptop data for Mac and Windows systems.  Some of these would require a separate VM or server to connect to the cloud while others would not.  Using the cloud might require the office systems to be left on at night but that would be a small price to pay to backup your data offsite.  Benefits of cloud storage approaches are that they get the backups offsite, can be automatically scheduled/scripted to take place off-hours and require no (or minimal) user intervention to perform.  Disadvantages of this approach are that the office systems would need to be left powered on, backup data is out of your control, and bandwidth and storage fees would need to be paid.
  • RDX devices – these are removable NFS-accessed disk storage which can support from 40GB to 640GB per cartridge. The devices claim a 30yr archive life, which should be fine for SOHO purposes.  The cost of cartridges is probably RDX's greatest issue BUT, unlike DVDs, you can reuse RDX media if you want to.  Benefits are that RDX would require minimal operator intervention for anything less than 640GB of backup data, backups would be faster (45MB/s), and the data would be under your control.  Disadvantages are the cost of the media (640GB Imation RDX cartridge ~$310) and drives (?), data would not be encrypted unless encrypted at the host, and you would need to move the cartridge data offsite.
  • LTO tape – To my knowledge there is only one vendor out there that makes an iSCSI LTO tape drive and that is my friends at Spectra Logic, but they also make a SAS (6Gb/s) attached LTO-5 tape drive.  It's unclear which level of LTO technology is supported with the iSCSI drive, but even one or two generations down would work for many SOHO shops.  Benefits of LTO tape are minimal operator intervention, long archive life, enterprise class backup technology, faster backups and drive data encryption.  Disadvantages are the cost of the media ($27-$30 for LTO-4 cartridges), drive costs (?), interface costs (if any) and the need to move the cartridges offsite.  I like the iSCSI drive because all one would need is iSCSI initiator software, which can be had easily enough for most desktop systems.
  • DAT tape – I thought these were dead but my good friend John Obeto informed me they are alive and well.  DAT drives support USB 2.0, SAS or parallel SCSI interfaces. Although it's unclear whether they have drivers for Mac OS X, Windows shops could probably use them without problem. Benefits are similar to LTO tape above but not as fast and not as long an archive life.  Disadvantages are cartridge cost (320GB DAT cartridge ~$37), drive costs (?) and one would have to move the media offsite.
  • (Blu-ray, Blu-ray dl), DVD, or DVDdl – These are ok but their archive life is miserable (under 2yrs for DVDs at best, see post link above). Benefits are that they're very cheap to use, the lowest cost removable media (100GB of data would take ~22 DVDs or ~12 DVDdls, which at $0.30/DVD or $0.75/DVDdl is ~$6.60 to $9 per backup – see the back-of-the-envelope sketch after this list), and the lowest cost drive (it comes optional on most desktops today). Disadvantages are high operator intervention (to swap out disks), more complexity to keep track of each DVD's portion of the backup, more complex media storage (you have a lot more of it), it takes forever (burning 7.7GB to a DVDdl takes around an hour, or ~2.1MB/sec), data encryption would need to be done at the host, and one has to take the media offsite.  I don't have similar performance data for using Blu-ray for backups other than Blu-ray dl media costs about $11.50 each (50GB).
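Here's that disc math spelled out – a back-of-the-envelope sketch using the numbers above, assuming 4.7GB per DVD and 8.5GB nominal (about 7.7GB usable) per DVDdl:

```python
import math

# Back-of-the-envelope SOHO backup math (illustrative numbers from the post).
backup_gb = 100

dvds   = math.ceil(backup_gb / 4.7)   # ~22 single-layer DVDs
dvddls = math.ceil(backup_gb / 8.5)   # ~12 dual-layer DVDs

print(dvds * 0.30, dvddls * 0.75)     # ~$6.60 vs ~$9.00 of media per full backup

burn_rate_mb_per_s = 2.1              # observed DVDdl burn rate
one_dvddl_minutes = 7.7 * 1024 / burn_rate_mb_per_s / 60
print(one_dvddl_minutes)              # ~62 minutes to burn one 7.7GB DVDdl
```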

Please note this post only discusses Offsite backups. Many SOHOs do not provide offsite backup (risky??) and for online backups I use a spare disk drive attached to every office and family desktop.

Probably other alternatives exist for offsite backups, not the least of which is NAS data replication.  I didn't list this because most SOHO customers are unlikely to have a secondary location where they could host the replicated data copy, and the cost of a 2nd NAS box would need to be added along with the bandwidth between the primary and secondary site.  BUT for those sophisticated SOHO customers out there already using a NAS box for onsite shared storage, data replication might make sense. Deduplication backup appliances are another possibility, but they suffer similar disadvantages to NAS box replication and are even less likely to already be in use by SOHO customers.


Ok, where to now?  Given all this, I'm hoping to get a Blu-ray dl writer in my next iMac.  Let's see, that would cut my DVDdl swaps down by ~3.2X for single-layer Blu-ray and ~6.5X for dl Blu-ray (rough math below).  I could easily live with that until I quadruple my data storage, again.
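The swap-reduction figures come from the same kind of rough math (assuming ~7.7GB usable per DVDdl and the nominal 25GB/50GB Blu-ray capacities):

```python
# How many fewer media swaps Blu-ray would mean vs. DVDdl (illustrative).
dvddl_gb, bd_gb, bd_dl_gb = 7.7, 25, 50
print(bd_gb / dvddl_gb)      # ~3.2x fewer swaps with single-layer Blu-ray
print(bd_dl_gb / dvddl_gb)   # ~6.5x fewer swaps with dual-layer Blu-ray
```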

Although an iSCSI LTO-5 tape transport would make a real nice addition to the office…


Top 10 storage technologies over the last decade

Aurora's Perception or I Schrive When I See Technology by Wonderlane (cc) (from Flickr)

Some of these technologies were in development prior to 2000, some were available in other domains but not in storage, and some were in a few subsystems but had yet to become popular as they are today.  In no particular order here are my top 10 storage technologies for the decade:

  1. NAND based SSDs – DRAM and other technology solid state drives (SSDs) were available last century but over the last decade NAND Flash based devices have dominated SSD technology and have altered the storage industry forever more.  Today, it’s nigh impossible to find enterprise class storage that doesn’t support NAND SSDs.
  2. GMR heads – Giant Magneto Resistance disk heads have become commonplace over the last decade and have allowed disk drive manufacturers to double data density every 18-24 months.  Now GMR heads are starting to transition over to tape storage and will enable that technology to increase data density dramatically.
  3. Data deduplication – Deduplication technologies emerged over the last decade as a complement to higher density disk drives and as a means to more efficiently backup data.  Deduplication technology can be found in many different forms today, ranging from file and block storage systems and backup storage systems to backup software only solutions.
  4. Thin provisioning – No one would argue that thin provisioning emerged last century but it took the last decade to really find its place in the storage pantheon.  One almost cannot find a data center class storage device that does not support thin provisioning today.
  5. Scale-out storage – Last century if you wanted to get higher IOPS from a storage subsystem you could add cache or disk drives but at some point you hit a subsystem performance wall.  With scale-out storage, one can now add more processing elements to a storage system cluster without having to replace the controller to obtain more IO processing power.  The link reference talks about the use of commodity hardware to provide added performance but scale-out storage can also be done with non-commodity hardware (see Hitachi’s VSP vs. VMAX).
  6. Storage virtualization – Server virtualization has taken off as the dominant data center paradigm over the last decade, and its counterpart in storage has become more viable as well.  Storage virtualization was originally used to migrate data from old subsystems to new storage but today can be used to manage and migrate data over PBs of physical storage, dynamically optimizing data placement for cost and/or performance.
  7. LTO tape – When IBM dominated IT in the mid to late last century, the tape format du jour always matched IBM's tape technology.  As the decade dawned, IBM was no longer the dominant player and tape technology was starting to diverge into a babble of differing formats.  As a result, IBM, Quantum, and HP put their technology together and created a standard tape format, called LTO, which has become the new dominant tape format for the data center.
  8. Cloud storage – It's unclear just when over the last decade cloud storage emerged, but it seemed to be a supplement to cloud computing, which also appeared this past decade.  Storage service providers had existed earlier but, due to bandwidth limitations and storage costs, didn't survive the dotcom bubble.  Over this past decade both bandwidth and storage costs have come down considerably and cloud storage has now become a viable technological solution to many data center issues.
  9. iSCSI – SCSI has taken on many forms over the last couple of decades, but iSCSI has altered the dominant block storage paradigm from a single, pure FC based SAN to a plurality of technologies.  Nowadays, SMB shops can have block storage without the cost and complexity of FC SANs, over the LAN networking technology they already use.
  10. FCoE – One could argue that this technology is still maturing today, but once again SCSI has opened up another way to access storage. FCoE has the potential to offer all the robustness and performance of FC SANs over data center Ethernet hardware, simplifying and unifying data center networking onto one technology.

No doubt others would differ on their top 10 storage technologies over the last decade, but I strove to find technologies that significantly changed data storage from where it was in 2000 to where it is today.  These 10 seemed to me to fit the bill better than most.


SCI’s latest SPC-1&-1/E LRT results – chart of the month

(c) 2010 Silverton Consulting, Inc., All Rights Reserved

It’s been a while since we reported on Storage Performance Council (SPC) Least Response Time (LRT) results (see Chart of the month: SPC LRT[TM]).  This is one of the charts we produce for our monthly dispatch on storage performance (quarterly report on SPC results).

Since our last blog post on this subject there have been 6 new entries in the LRT Top 10 (#3-6 & 9-10).  As can be seen in the chart, which combines SPC-1 and SPC-1/E results, response times vary considerably.  7 of these top 10 LRT results come from subsystems which either are all SSD (#1-4, 7 & 9) or have a large NAND cache (#5).  The newest members on this chart are the NetApp FAS3270A and the Xiotech Emprise 5000 (300GB disk drives), whose results were published recently.

The NetApp FAS3270A, a mid-range subsystem with 1TB of NAND cache (512GB in each controller), seemed to do pretty well here, with some all-SSD systems doing better than it and a pair of all-SSD systems doing worse.  Coming in under 1msec LRT is no small feat.  We are certain the NAND cache helped NetApp achieve their superior responsiveness.

What the Xiotech Emprise 5000-300GB storage subsystem is doing here is another question.  They have always done well on an IOPS/drive basis (see SPC-1&-1/E results IOPs/Drive – chart of the month) but being top ten in LRT had not been their forte previously.  How one coaxes a 1.47 msec LRT out of a 20-drive system that costs only ~$41K, 12X lower than the median price (~$509K) of the other subsystems here, is a mystery.  Of course, they were using RAID 1, but so were half of the subsystems on this chart.

It's nice to see some turnover in this top 10 LRT.  I still contend that response time is an important performance metric for many storage workloads (see my IO throughput vs. response time and why it matters post) and improvement over time validates my thesis.  Also, I received many comments discussing the merits of database latencies for ESRP v3 (Exchange 2010) results (see my Microsoft Exchange Performance ESRP v3.0 results – chart of the month post).  You can judge the results of that lengthy discussion for yourselves.

The full performance dispatch will be up on our website in a couple of weeks but if you are interested in seeing it sooner just sign up for our free monthly newsletter (see upper right) or subscribe by email and we will send you the current issue with download instructions for this and other reports.

As always, we welcome any constructive suggestions on how to improve our storage performance analysis.


One platform to rule them all – Compellent&EqualLogic&Exanet from Dell

Compellent drive enclosure (c) 2010 Compellent (from Compellent.com)

Dell and Compellent may be a great match because Compellent uses commodity hardware combined with specialized software to create their storage subsystem. If there's any company out there that can take advantage of commodity hardware, it's probably Dell. (Of course commodity hardware always loses in the end, but that's another story.)

Similarly, Dell’s EqualLogic iSCSI storage system uses commodity hardware to provide its iSCSI storage services.  It doesn’t take a big leap of imagination to have one storage system that combines the functionality of EqualLogic’s iSCSI and Compellent’s FC storage capabilities.  Of course there are others already doing this including Compellent themselves which have their own iSCSI support already built into their FC storage system.

Which way to integrate?

Does EqualLogic survive such a merger?  I think so.  It's easy to imagine that EqualLogic may have the bigger market share today. If that's so, the right thing might be to merge Compellent FC functionality into EqualLogic.  If Compellent has the larger market, the correct approach is the opposite. The answer probably lies with a little of both.  It seems easier to add iSCSI functionality to an FC storage system than the converse, but the FC-into-iSCSI approach may be the optimum path for Dell, because of the popularity of their EqualLogic storage.

What about NAS?

The only thing missing from this storage system is NAS.  Of course Compellent storage offers a NAS option through the use of a separate Windows Storage Server (WSS) front end.  Dell's EqualLogic does much the same to offer NAS protocols for their iSCSI system.  Neither of these is a bad solution, but they are not a fully integrated NAS offering such as is available from NetApp and others.

However, there is a little discussed piece of this, the Dell-Exanet acquisition, which happened earlier this year. Perhaps the right approach is to integrate Exanet with Compellent first and target this at the high end enterprise/HPC marketplace, keeping EqualLogic at the SMB end of the marketplace.  It's been a while since I have heard about Exanet, and nothing since the acquisition earlier this year.  Does it make sense to back-end a clustered NAS solution with FC storage – probably.


Much of this seems doable to me, but it all depends on making the right moves once the purchase is closed.  If I look at where Dell is weakest (barring their OEM agreement with EMC), it's in the high end storage space.  Compellent probably didn't have much of a footprint there, possibly due to their limited distribution and support channel.  A Dell acquisition could easily eliminate these problems and open up this space without having to do much other than start marketing, selling and supporting Compellent.

In the end, a storage solution supporting clustered NAS, FC, and iSCSI that combined functionality equivalent to Exanet, Compellent and EqualLogic based on commodity hardware (ouch!) could make a formidable competitor to what’s out there today if done properly. Whether Dell could actually pull this off and in a timely manner even if they purchase Compellent, is another question.


Storage throughput vs. IO response time and why it matters

Fighter Jets at CNE by lifecreation (cc) (from Flickr)

Lost in much of the discussions on storage system performance is the need for both throughput and response time measurements.

  • By IO throughput I generally mean data transfer speed in megabytes per second (MB/s or MBPS); however, another definition of throughput is IO operations per second (IO/s or IOPS).  I prefer the MB/s designation for storage system throughput because it's very complementary with respect to response time, whereas IO/s can often be confounded with response time.  Nevertheless, both metrics qualify as storage system throughput (see the sketch after this list for how the two relate).
  • By IO response time I mean the time it takes a storage system to perform an IO operation from start to finish, usually measured in milliseconds, although lately some subsystems have dropped below the 1msec threshold.  (See my last year's post on SPC LRT results for information on some top response time results.)
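To make the relationship between the two throughput views concrete, here is a minimal sketch with made-up workload numbers:

```python
# Relating the two throughput views: MB/s = IOPS x average transfer size.
def mb_per_s(iops: float, avg_xfer_kb: float) -> float:
    return iops * avg_xfer_kb / 1024.0

# An OLTP-like workload: lots of small IOs, modest data rate.
print(mb_per_s(iops=50_000, avg_xfer_kb=8))      # ~390 MB/s
# A sequential workload: far fewer, much larger IOs, higher data rate.
print(mb_per_s(iops=2_000, avg_xfer_kb=1024))    # ~2000 MB/s
```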

Benchmark measurements of response time and throughput

Both the Standard Performance Evaluation Corporation's SPECsfs2008 and the Storage Performance Council's SPC-1 provide response time measurements, although they measure substantially different quantities.  The problem with SPECsfs2008's measurement of ORT (overall response time) is that it's calculated as a mean across the whole benchmark run rather than a strict measurement of least response time at low file request rates.  I believe any response time metric should measure the minimum response time achievable from a storage system, although I can understand SPECsfs2008's point of view.

On the other hand, SPC-1's measurement of LRT (least response time) is just what I would like to see in a response time measurement.  SPC-1 provides the time it takes to complete an IO operation at very low request rates.
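A toy illustration, with entirely made-up response time samples, of why a mean-over-the-run ORT and a low-load least response time can tell different stories:

```python
# Response times sampled as a benchmark ramps load from light to heavy (made up).
samples_ms = [0.9, 1.0, 1.3, 2.2, 3.8, 6.5]

ort = sum(samples_ms) / len(samples_ms)   # SPECsfs2008-style overall mean: ~2.6 ms
lrt = min(samples_ms)                     # SPC-1-style least response time: 0.9 ms
print(ort, lrt)
```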

In regards to throughput, once again SPECsfs2008's measurement leaves something to be desired, as it's strictly a measurement of NFS or CIFS operations per second.  This includes a large number (>40%) of non-data-transfer requests as well as data transfers, which confounds any measurement of how much data can be transferred per second.  But, from their perspective, a file system needs to do more than just read and write data, which is why they mix these other requests in with their measurement of NAS throughput.

The Storage Performance Council's SPC-1 reports throughput results as IOPS and provides no direct measure of MB/s; for that one must look to their SPC-2 benchmark results.  SPC-2 reports a direct measure of MBPS, which is an average of three different data intensive workloads: large file access, video-on-demand and a large database query workload.

Why response time and throughput matter

Historically, we used to say that OLTP (online transaction processing) performance was entirely dependent on response time – the better the storage system response time, the better your OLTP systems performed.  Nowadays it's a bit more complex, as some of today's database queries can depend as much on sequential database transfers (or throughput) as on individual IO response time.  Nonetheless, I feel that there is still a large set of response-time-critical workloads out there that perform much better with shorter response times.

On the other hand, high throughput has its growing gaggle of adherents as well.  When it comes to high sequential data transfer workloads such as data warehouse queries, video or audio editing/download or large file data transfers, throughput as measured by MB/s reigns supreme – higher MB/s can lead to much faster workloads.

The only question that remains is who needs higher throughput as measured by IO/s rather than MB/s.  I would contend that mixed workloads which contain components of random as well as sequential IOs and typically smaller data transfers can benefit from high IO/s storage systems.  The only confounding matter is that these workloads obviously benefit from better response times as well.   That’s why throughput as measured by IO/s is a much more difficult number to understand than any pure MB/s numbers.


Now there is a contingent of performance gurus today that believe that IO response times no longer matter.  In fact if one looks at SPC-1 results, it takes some effort to find its LRT measurement.  It’s not included in the summary report.

Also, in the post mentioned above there appears to be a definite bifurcation of storage subsystems with respect to response time, i.e., some subsystems are focused on response time while others are not.  I would have liked to see some more of the top enterprise storage subsystems represented in the top LRT subsystems but alas, they are missing.

1954 French Grand Prix - Those Were The Days by Nigel Smuckatelli (cc) (from Flickr)

Call me old fashioned but I feel that response time represents a very important and orthogonal performance measure with respect to throughput of any storage subsystem and as such, should be much more widely disseminated than it is today.

For example, there is a substantive difference between a fighter jet's or race car's top speed and its maneuverability.  I would compare top speed to storage throughput and maneuverability to IO response time.  Perhaps this doesn't matter as much for a jet liner or family car, but it can matter a lot in the right domain.

Now do you want your storage subsystem to be a jet fighter or a jet liner – you decide.

The future of data storage is MRAM

Core Memory by teclasorg

We have been discussing NAND technology for quite a while now, but this month I ran across an article in IEEE Spectrum titled “a SPIN to REMEMBER – Spintronic memories to revolutionize data storage“. The article discussed a form of magneto-resistive random access memory, or MRAM, that uses quantum mechanical spin effects, or spintronics, to record data. We have talked about MRAM technology before and progress has been made since then.

Many in the industry will recall that current GMR (giant magneto-resistance) and next generation TMR (tunnel magneto-resistance) disk read heads already make use of spintronics to detect magnetized bit values in disk media. GMR heads detect bit values on media through changes in the head's electrical resistance.

Spintronics, however, can also be used to record data as well as read it. These capabilities are being exploited in MRAM technology, which uses a ferro-magnetic material to record data as magnetic spin alignment – spin up means 0; spin down means 1 (or vice versa).

The technologists claim that when MRAM reaches its full potential it could conceivably replace DRAM, SRAM, NAND, and hard disk drives – that is, all current electrical and magnetic data storage. Some of MRAM's advantages include unlimited write passes, fast reads and writes and data non-volatility.

MRAM reminds me of old fashioned magnetic core memory (in photo above) which used magnetic polarity to record non-volatile data bits. Core was a memory mainstay in the early years of computing before the advent of semi-conductor devices like DRAM.

Back to future – MRAM

However, the problems with MRAM today are that it is low density, takes lots of power and is very expensive. But technologists are working on all these problems with the view that the future of data storage will be MRAM. In fact, researchers in North Carolina State University's (NCSU) Electrical Engineering department have been having some success with reducing power requirements and increasing density.

As for data density, NCSU researchers now believe they can record data in cells approximately 20nm across, better than current bit patterned media, which is the next generation disk recording media. However, reading data out of such a small cell will prove to be difficult and may require a separate read head on top of each cell. The fact that all of this is created with normal silicon fabrication methods makes doing so at least feasible, but the added chip costs may be hard to justify.

Regarding high power, their most recent design records data by electronically controlling the magnetism of a cell. They are using a dilute magnetic semiconductor material doped with gallium manganese, which can hold spin value alignment (see the article for more information). They are also using a semiconductor p-n junction on top of the MRAM cell. Apparently, at the p-n junction they can control the magnetization of the MRAM cells by applying -5 volts or removing it. Today the magnetization is temporary, but they are working on solutions for this as well.

NCSU researchers would be the first to admit that none of this is ready for prime time and they have yet to demonstrate an MRAM memory device with 20nm cells in the lab, but the feeling is it's all just a matter of time and lots of research.

Fortunately, NCSU has lots of help. It seems Freescale, Honeywell, IBM, Toshiba and Micron are also looking into MRAM technology and its applications.


Let's see, using electron spin alignment in a magnetic medium to record data bits, needing a read head to read out the spin values – couldn't something like this be used in some sort of next generation disk drive that uses the ferromagnetic material as a recording medium? Hey, aren't disks already using a ferromagnetic material for recording media? Could MRAM be fabricated/laid down as a form of magnetic disk media? Maybe there's life in disks yet….

What do you think?

What’s wrong with the iPad?

Apple iPad (wi-fi) (from apple.com)

We have been using the wi-fi iPad for just under 6 months now and I have a few suggestions to make it even easier to use.


Aside from the lack of Flash support, there are a few things that would make web surfing easier on the iPad:

  • Tabbed windows option – I use tabbed windows on my desktop/laptop all the time but for some reason on the iPad Apple chose to use a grid of distinct windows accessible via a special purpose Safari icon.  While this approach probably makes a lot of sense for the iPhone/iPod, there is little reason to do it this way on the iPad.  There is ample screen real estate to show tabs selectable with the touch of a finger.  As it is now, it takes two touches to select an alternate screen for web browsing, not to mention some time to paint the thumbnail screens when you have multiple web pages open.
  • Non-mobile mode – It seems that many websites nowadays detect whether one is accessing a web page from a mobile device or not and, if so, shrink their text/window displays to accommodate the much smaller display screen.  On the iPad this shows up as wasted screen space and requires more screen paging than necessary to get to data that would be retrievable on a single screen with a desktop/laptop.  I'm not sure whether the problem is in the web server or in how the iPad signals what device it is (a rough sketch of that detection follows this list), however it seems to me that if the iPad/Safari could signal to web servers that it is a laptop/small desktop, web browsing could be better.
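For what it's worth, here is a minimal sketch of the kind of server-side detection I suspect is going on – the User-Agent values and marker strings are hypothetical stand-ins, not any real site's logic:

```python
# Toy example of serving different layouts based on the User-Agent request header.
def choose_layout(user_agent: str) -> str:
    mobile_markers = ("iPhone", "iPod", "Mobile")   # hypothetical marker list
    if any(marker in user_agent for marker in mobile_markers):
        return "mobile"      # cramped, small-screen layout
    return "desktop"         # full layout

# The iPad's User-Agent typically includes "Mobile", so many sites lump it in with phones.
print(choose_layout("Mozilla/5.0 (iPad; ...) Mobile/..."))       # -> mobile
print(choose_layout("Mozilla/5.0 (Macintosh; ...) Safari/..."))  # -> desktop
```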

Other Apps

There are a number of Apps freely available on the iPhone/iPod that are not available on the iPad without purchase.  For some reason, I find I can’t live without some of these:

  • Clock app – On the iPhone/iPod I use the clock app at least 3 times a day.  I time my kids' use of video games, my own time before having to do something, how much time I am willing/able to spend on a task, and myriad other things.  It's one reason why I keep the iPhone on my body or close by whenever I am at home.  I occasionally use the clock app as a stopwatch and a world clock, but what I really need on the iPad is a timer of some sort.  I have been unable to find an equivalent app for the iPad that matches the functionality of the iPhone/iPod Clock app.
  • Calculator app – On the iPhone/iPod I use the calculator sporadically, mostly when I am away from my desktop/office (probably because I have a calculator on my desk).  However, I don't have other calculators that are easily accessible throughout my household, and having one on the iPad would just make my life easier.  BTW, I ended up purchasing a calculator app that Apple says is equal to the iPhone Calc app, which works fine, but it should have come free.
  • Weather app – This is probably the next most popular app on my iPhone.  I know this information is completely available on the web, but by the time I enter the URL or scan my bookmarks it takes at least 3-4 touches to get the current weather forecast.  With the Weather app on the iPhone it takes just one touch to get this same information.  I believe there is some way to turn a web page into an app icon on the iPad, but it's not the same.

IOS software tweaks

There are some things I think could make IOS much better from my standpoint.  I assume all the stuff in IOS 4.2 will be coming shortly, so I won't belabor those items:

  • File access – This is probably heresy but I would really like a way to cross application boundaries to access all files on the iPad.  That is, have something besides Mail, iBooks and Pages be able to access PDF files, and Mail, Photos, and Pages/Keynote be able to access photos. Specifically, some of the FTP upload utilities should be able to access any file on the iPad.  Not sure where this belongs, but there should be some sort of data viewer at the IOS level that can allow access to any file on the iPad.
  • Dvorak soft keypad – Ok, maybe I am a bit weird, but I spent the time and effort to learn the Dvorak keyboard layout to be able to type faster and would like to see this same option available for the iPad soft keypad.  I currently use Dvorak with the iPad’s external BT keyboard hardware but I see no reason that it couldn’t work for the soft keypad as well.
  • Widgets – The Weather app discussed above looks to me like the weather widget on my desktop iMac.  It's unclear why IOS couldn't also support other widgets so that app developers/users could easily create and use their desktop widgets on the iPad.

iPad hardware changes

There are some things that scream out to me for hardware changes.

  • Ethernet access – I have been burned before and wish not to be burned again, but some sort of adaptor that would allow an Ethernet plug connection would make the tethered iPad a much more complete computing platform.  I don't care if such a thing comes as a Bluetooth converter or has to use the same plug as the power adaptor, but having this would just make accessing the internet (under some circumstances) that much easier.
  • USB access – This just opens up another whole dimension to storage access and information/data portability that is sorely missing from the iPad.  It would probably need some sort of “file access” viewer discussed above but it would make the iPad much more extensible as a computing platform.
  • Front facing camera – I am not an avid user of FaceTime (yet) but if I were, I would really need a front camera on the iPad.  Such a camera would also provide some sort of snapshot capability with the iPad (although a rear facing camera would make more sense for this).  In any event, a camera is a very useful device to record whiteboard notes, scan paper documents, and record other items of the moment and even a front-facing one could do this effectively.
  • Solar panels – Probably off the wall, but having to lug a power adaptor everywhere I go with the iPad is just another thing to misplace/lose.  Of course, when traveling to other countries, one also needs a plug adaptor for each country as well.  It seems to me that some sort of solar panel on the back or front that could provide adequate power to charge the iPad would make things that much simpler.


Well that’s about it for now.   We are planning on taking a vacation soon and we will be taking both a laptop and the iPad (because we can no longer live without it).  I would rather just leave the laptop home but can’t really do that given my problems in the past with the iPad.  Some changes described above could make hauling the laptop on vacation a much harder decision.

As for how the iPad fares on the beach, I will have to let you know…