Disk rulz, at least for now

Last week WDC announced their next generation technology for hard drives, MAMR or Microwave Assisted Magnetic Recording. This is in contrast to HAMR, Heat (laser) Assisted Magnetic Recording. Both techniques add energy so that data can be written as smaller bits on a track.

Disk density drivers

Current hard drive technology uses PMR or Perpendicular Magnetic Recording, with or without SMR (Shingled Magnetic Recording) and TDMR (Two Dimensional Magnetic Recording), both of which we have discussed in prior posts.

The problem with PMR-SMR-TDMR is that the maximum achievable disk density is starting to flatline as it approaches the “writeability limit” of the head-media combination.

That is, even with TDMR, SMR and PMR heads, the highest density that can be achieved is ~1.1Tb/sq.in. The writeability limit for the current PMR head-media technology is ~1.4Tb/sq.in. As a result, most disk density increases over the past few years have been accomplished by adding platters and heads to hard drives.

MAMR and HAMR both seem able to get disk drives to >4.0Tb/sq.in. densities by adding energy to the magnetic recording process, which allows the drive to record more data in the same (grain) area.

There are two factors which drive disk drive density (Tb/sq.in.): bits per inch (BPI) and tracks per inch (TPI). Both SMR and TDMR are techniques to add more TPI.

I believe MAMR and HAMR increase BPI beyond what’s available today by writing data on smaller magnetic grain sizes (pitch in the chart) and thus fitting more bits in the same area. At 7nm grain sizes or below, PMR becomes unstable, but HAMR and MAMR can record on grain sizes of 4.5nm, which would equate to >4.5Tb/sq.in.
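If we assume areal density scales roughly as the inverse square of grain pitch (each bit spanning a fixed number of grains), a quick sketch shows how the grain sizes above map to densities. The ~9nm reference pitch for today's PMR is my assumption, not a WDC figure:

```python
# Rough sanity check of grain pitch vs. areal density (my assumptions, not WDC's):
# density scales as 1/pitch^2, with ~1.1Tb/sq.in. PMR pegged at a ~9nm grain pitch.

def density_tb_per_sqin(pitch_nm, ref_pitch_nm=9.0, ref_density_tb=1.1):
    """Scale a reference areal density by the square of the grain-pitch ratio."""
    return ref_density_tb * (ref_pitch_nm / pitch_nm) ** 2

print(density_tb_per_sqin(9.0))   # ~1.1 Tb/sq.in. -- today's PMR
print(density_tb_per_sqin(4.5))   # ~4.4 Tb/sq.in. -- MAMR/HAMR grain sizes
```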

HAMR hurdles

It turns out that HAMR, as it uses heat to add energy, heats the media to much higher temperatures than what’s normal for a disk drive, something like 400C-700C. Normal operating temperature for disk drives is ~50C. HAMR heat levels will play havoc with drive reliability. The view from WDC is that HAMR has 100X worse reliability than MAMR.

In order to generate that much heat, HAMR needs a laser to expose the area to be written. Of course the laser has to be in the head to be effective. Having to add a laser and optics will increase the cost of the head, increase the steps to manufacture the head, and require new suppliers/sourcing organizations to supply the componentry.

HAMR also requires a different media substrate. It's unclear why, but HAMR seems to require a glass substrate, with the magnetic media (many layers) deposited on top of the glass. This requires a new media manufacturing line and probably new suppliers, and getting glass to disk drive specifications (flatness-bumpiness, rotational integrity, vibrational integrity) will take time.

There are probably more than a half dozen other issues with having laser light inside a hard disk drive, but suffice it to say that HAMR was going to be a very difficult transition to perform correctly while continuing to provide today’s drive reliability levels.

MAMR merits

MAMR uses microwaves to add energy to the spot being recorded. The microwaves are generated by a Spin Torque Oscillator (STO), which is a solid state device, compatible with CMOS fabrication techniques. This means that the MAMR head assembly (PMR & STO) can be fabricated on current head lines and within current head mechanisms.

MAMR doesn’t add heat to the recording area; it uses microwaves to add energy. As such, there’s no temperature change in MAMR recording, which means the reliability of MAMR disk drives should be about the same as today’s disk drives.

MAMR uses today’s aluminum substrates. So, current media manufacturing lines and suppliers can be used, and media specifications shouldn’t have to change much (?) to support MAMR.

MAMR has just about the same maximum recording density as HAMR, so there’s no added benefit to going to HAMR, if MAMR works as expected.

WDC’s technology timeline

WDC says they will have sample MAMR drives out next year and production drives out in 2019. They also predict an enterprise 40TB MAMR drive by 2025. They have high confidence in this schedule because of MAMR’s compatibility with current drive media and head manufacturing processes.

WDC discussed their IP position on HAMR and MAMR. They have 400+ issued HAMR patents with another 100+ pending and 75 issued MAMR patents with 46 more pending. Quantity doesn’t necessarily equate to quality, but their current IP position on both MAMR and HAMR looks solid.

WDC believes that by 2020, ~90% of enterprise data will be stored on hard drives. However, this is predicated on maintaining a continuing, 10X cost differential between disk drives and (QLC 3D) flash.

What comes after MAMR is the subject of much speculation. I’ve written on one alternative which uses liquid nitrogen temperatures with molecular magnets, which I called CAMR (cold assisted magnetic recording), but it’s way too early to tell.

And we have yet to hear from the other big disk drive leader, Seagate. It will be interesting to hear whether they follow WDC’s lead to MAMR, stick with HAMR, or go off in a different direction.

Comments?

 

Photo Credit(s): WDC presentation

Disk density hits new record, 1Tb/sqin with HAMR

Seagate has achieved 1Tb/sqin recording (source: http://www.gizmag.com)

Well, I thought 36TB on my Mac was going to be enough.  Then along comes Seagate with this week's announcement of reaching 1Tb/sqin (1 trillion bits per square inch) using their new HAMR (heat assisted magnetic recording) technology.

Current LFF drive technology runs at about 620Gb/sqin, providing a 3.5″ drive capacity of around 3TB, or about 500Gb/sqin for 2.5″ drives, supporting ~750GB.  The new 1Tb/sqin drives will easily double these capacities.

But the exciting part is that with the new HAMR or TAR (thermally assisted recording) heads and media, the long term potential is even brighter.  This new technology should be capable of 5 to 10Tb/sqin, which means 3.5″ drives of 30 to 60TB and 2.5″ drives of 10 to 20TB.

HAMR explained

HAMR uses both lasers and magnetic heads to record data in even smaller spaces than current PMR (perpendicular magnetic recording), or vertical recording, heads do today.   You may recall that PMR was introduced in 2006 and now, just 6 years later, we are already seeing the next generation head and media technologies in labs.

Denser disks require smaller bits, and with smaller bits disk technology runs into three problems: readability, writeability and stability, AKA the magnetic recording trilemma.  Smaller bits require better stability, but better stability makes it much harder to write or change a bit's magnetic orientation.  Enter the laser in HAMR: with laser heating, the bits become much more malleable.  These warmed bits can be more easily written, bypassing the stability-writeability problem, at least for now.

However, just as in any big technology transition there are other competing ideas with the potential to win out.  One possibility we have discussed previously is shingled writes using bit patterned media (see my Sequential only disk post) but this requires a rethinking/re-architecting of disk storage.  As such, at best it’s an offshoot of today’s disk technology and at worst, it’s a slight detour on the overall technology roadmap.

Of course PMR is not going away any time soon. Other vendors (and probably Seagate) will continue to push PMR technology as far as it can go.  After all, it’s a proven technology, inside millions of spinning disks today.  But, according to Seagate, it can achieve 1Tb/sqin and go no further.

So when can I get HAMR disks?

There was no mention in the press release as to when HAMR disks would be made available to the general public, but the drive industry has typically been doubling densities every 18 to 24 months.  Assuming they continue this trend across a head/media technology transition like HAMR, we should have those 6TB hard disk drives sometime around 2014, if not sooner.
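For what it's worth, here's that doubling trend projected forward from the ~3TB LFF drives mentioned above (a sketch assuming a clean doubling every ~24 months, the optimistic end of the range):

```python
# Project drive capacity assuming a doubling every ~24 months from a 2012, ~3TB baseline.
capacity_tb, year = 3, 2012
for _ in range(3):
    capacity_tb *= 2
    year += 2
    print(year, f"{capacity_tb}TB")   # 2014 6TB, 2016 12TB, 2018 24TB
```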

HAMR technology will likely make its first appearance in 7200rpm drives.  Bigger capacities seem to always come out first in slower performing disks (see my Disk trends, revisited post).

HAMR performance wasn’t discussed in the Seagate press release, but with 2Mb per linear track inch and 15Krpm disk drives, the transfer rates would seem to need to be on the order of at least 850MB/sec at the OD (outer diameter) for read data transfers.
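Here's the back-of-the-envelope math behind that, using my own assumptions (roughly 2Mb per linear inch, a 15Krpm spindle, and a ~3.5″ outer recording diameter); these inputs give ~690MB/sec, and a slightly higher linear density or larger effective OD pushes the figure toward 850MB/sec:

```python
# OD transfer rate sketch: bits per revolution times revolutions per second, in bytes.
from math import pi

bits_per_inch = 2.0e6        # assumed linear density at the outer diameter
rpm = 15_000
od_diameter_in = 3.5         # assumed outer recording diameter (my guess, not Seagate's spec)

bits_per_rev = bits_per_inch * pi * od_diameter_in
mb_per_sec = bits_per_rev * (rpm / 60) / 8 / 1e6
print(f"~{mb_per_sec:.0f} MB/sec at the OD")   # ~690 MB/sec with these assumptions
```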

How quickly HAMR heads can write data is another matter. The fact that the laser heats the media before the magnetic head can write it seems to call for a magnetic-plus-optical head contraption where the laser is in front of the magnetics (see picture above).

How long it takes to heat the media to enable magnetization is one critical question in write performance. But this could potentially be mitigated by the strength of the laser pulse and how far the laser sits in front of the recording head.

With all this talk of writing, there hasn’t been lots of discussion on read heads. I guess everyone’s assuming the current PMR read heads will do the trick, with a significant speed up of course, to handle the higher linear densities.

What’s next?

As for what comes after HAMR, check out another post I did on using lasers to magnetize (write) data (see Magnetic storage using lasers alone).  The advantage of this new “laser-only” technology was a significant speed up in transfer speeds.  It seems to me that HAMR could easily be an intermediate step on the path to laser-only recording, having both laser optics and magnetic recording/reading heads in one assembly.

~~~~

Let's see: 6TB in 2014, 12TB in 2016 and 24TB in 2018. Maybe I won’t need that WD Thunderbolt drive string as quickly as I thought.

Comments?

 

 

Magnetic storage using lasers alone

Lasers by dmuth (cc) (from Flickr)

Read an article today on AAAS Science Now online magazine (See Hot Idea for a Faster Hard Drive) on using lasers alone to toggle magnetic moments in specially designed ferro-magnetic materials.

The disk industry has been experimenting with bit patterned media/shingled writes (see our post on Sequential Only Disk)  and thermally or heat assisted magnetic recording (TAR or HAMR) heads for some time now.  The TAR/HAMR heads use both magnetization and heat to quickly change magnetic moments in ferro-magnetic material (see our post on When will disks become extinct).

Lasers can magnetize!!

The new study seems to do away with the magnetic recording mechanism, changing a bit's magnetic value with a short, focused laser burst alone.   But what does this mean for the future of disk drives?

Well one thing the article highlights is that with the new technology disks can transfer data much faster than today.   Apparently magnetic recording takes a certain interval of time (1 nanosecond) per bit and getting below that threshold was previously unattainable.

That is, until this new laser magnetization came along.  According to the article they can reliably change a bit's magnetic value in 1/1000th of a nanosecond with heat alone.  This may enable disk data transfers 100X faster than available today.

Seagate’s 600GB, 15Krpm Cheetah 15K.7 disk has a sustained data transfer rate of 122 to 204 MB/sec (see their 15K.7 Cheetah drive data sheet).  At 100 times that, we will need a much faster interface than 16Gb/s FC, which probably only transfers data at ~1600MB/sec burst; these drives would need something like 128Gb/s FC.  In addition to the data transfer speed up, the laser pulse alone is much more energy efficient than HAMR heads, which need both magnetics and a laser.
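Spelling out that interface arithmetic (a sketch using only the figures quoted above): a 100X transfer speed-up lands at roughly 13X more bandwidth than a 16Gb/s FC link can burst, which is why something in the 128Gb/s-or-faster class comes up at all:

```python
# Required interface bandwidth if drive transfers speed up ~100X.
sustained_mb_per_sec = 204       # Cheetah 15K.7 max sustained rate from the data sheet
speedup = 100                    # the ~100X speed-up from laser-only magnetization
fc16_burst_mb_per_sec = 1_600    # ~16Gb/s FC burst rate quoted above

required_mb_per_sec = sustained_mb_per_sec * speedup       # ~20,400 MB/sec
print(required_mb_per_sec / fc16_burst_mb_per_sec)         # ~12.75X beyond 16Gb/s FC
```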

How soon such advances will make their way into disk drives is another question.

Is today’s 15Krpm disk speed limit due to writing speeds?

I have wondered for some time now why 3.5″ disk drives never went faster than 15Krpm.  I had always surmised it was something to do with the material mechanics at the outer diameter that limited the rotational speed.

Then when drives were shrunk to 2.5″ I thought we would see some faster rotational speeds, but it never happened.  Perhaps magnetic write speeds are the problem.   At 204MB/sec we are reading bits in under a nanosecond each, but sustained write data transfer is another question.  Maybe there will be a 22Krpm disk in my future?

Fixed head disks déjà vu

Ok, now that that’s settled, we need to work on speeding up seek times.  I could see some sort of rotating diffraction grating or diffraction comb taking the laser and splitting it up into multiple beams to cover each track at almost the same time, sort of like a fixed head disk of old (see IBM 2305).  This would allow disks to seek to any track in microseconds rather than milliseconds and write data in picoseconds rather than nanoseconds.

How to do something like this for reading data off a track is yet another question.  It’s too bad we couldn’t use the laser alone to read the magnetic information as well as write it.

If you could do that and use a similar diffraction grating/comb for reading data, one could conceivably create a cost effective, competitive alternative to the performance of SSD technology.  And that would be a very interesting device indeed!

Comments?

 

 

Will Hybrid drives conquer enterprise storage?

Toyota Hybrid Synergy Drive Decal: RAC Future Car Challenge by Dominic's pics (cc) (from Flickr)

I saw where Seagate announced the next generation of their Momentus XT Hybrid (SSD & disk) drive this week.  We haven’t discussed Hybrid drives much on this blog, but they have become a viable product family.

I am not planning on describing the new drive specs here as there was an excellent review by Greg Schulz at StorageIOblog.

However, the question some in the storage industry have had is: can Hybrid drives supplant data center storage?  I believe the answer to that is no, and I will tell you why.

Hybrid drive secrets

The secret to Seagate’s Hybrid drive lies in its FAST technology.  It provides a sort of automated disk caching that moves frequently accessed OS or boot data to NAND/SSD providing quicker access times.

Storage subsystem caching logic has been around for decades now, ever since the IBM 3880 Mod 11 & 13 storage control units came out last century.  However, these algorithms have gotten much more sophisticated over time and today can make a significant difference in storage system performance.  This can be easily witnessed by the wide variance in storage system performance on a per disk drive basis (e.g., see my post on Latest SPC-2 results – chart of the month).

Enterprise storage use of Hybrid drives?

The problem with using Hybrid drives in enterprise storage is that caching algorithms depend on some predictability in access/reference patterns.  When a Hybrid drive is directly connected to a server or a PC, it can view a significant portion of server IO (at least to the boot/OS volume), but more importantly, that boot/OS data is statically allocated, i.e., it doesn’t move around all that much.   This means that one PC session looks pretty much like the next, and as such, the hybrid drive can learn an awful lot about the next IO session just by remembering the last one.

However, enterprise storage IO changes significantly from one storage session (day?) to another.  Not only are the end-user generated database transactions moving around the data, but the data itself is much more dynamically allocated, i.e., moves around a lot.

Backend data movement is especially prevalent with automated storage tiering, used in subsystems that contain both SSDs and disk drives. But it’s also true in systems that map data placement using log structured file systems.  NetApp's Write Anywhere File Layout (WAFL) is a prominent example of this approach, but other storage systems do this as well.

In addition, any fixed, permanent mapping of a user data block to a physical disk location is becoming less useful over time as advanced storage features make dynamic or virtualized mapping a necessity.  Just consider snapshots based on copy-on-write technology: all it takes is a write to have a snapshot block move to a different location.

Nonetheless, the main problem is that all the smarts about what is happening to data on backend storage primarily lies at the controller level not at the drive level.  This not only applies to data mapping but also end-user/application data access, as cache hits are never even seen by a drive.  As such, Hybrid drives alone don’t make much sense in enterprise storage.

Maybe, if they were intricately tied to the subsystem

I guess one way this could all work better is if the Hybrid drive caching logic were somehow controlled by the storage subsystem.  In this way, the controller could provide hints as to which disk blocks to move into NAND.  Perhaps this is a way to distribute storage tiering activity to the backend devices, without the subsystem having to do any of the heavy lifting, i.e., the hybrid drives would do all the data movement under the guidance of the controller.

I don’t think this is likely because it would take industry standardization to define any new “hint” commands, and they would be specific to Hybrid drives.  Barring standards, it’s an interface between one storage vendor and one drive vendor.  That would probably be ok if you made both the storage subsystem and the hybrid drives, but there aren’t any vendors left that do both drives and storage controllers.

~~~~

So, given the state of enterprise storage today and its continuing proclivity to move data around across its backend storage, I believe Hybrid drives won’t be used in enterprise storage anytime soon.

Comments?

 

Disk capacity growing out-of-sight

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)

Last week, Hitachi Global Storage Technologies (acquired by Western Digital, closing in 4Q2011) and Seagate announced some higher capacity disk drives for desktop applications.

Most of us in the industry have become somewhat jaded with respect to new capacity offerings. But last week's announcements may give one pause.

Hitachi announced that they are shipping 3.5″ platters holding over 1TB each, using 569Gb/sqin technology.  In the past, full height 3.5″ disk drives have shipped with 4-6 platters.  Given the platter capacity available now, 4-6TB drives are certainly feasible or just around the corner. Both Seagate and Samsung beat HGST to 1TB platter capacities, which they announced in May of this year and began shipping in drives in June.

Speaking of 4TB drives, Seagate announced a new 4TB desktop external disk drive.  I couldn’t locate any information about the number of platters or the Gb/sqin of their technology, but 4 platters are certainly feasible, and as a result a 4TB disk drive is available today.

I don’t know about you, but 4TB disk drives for a desktop seem about as much as I could ever use. But when looking seriously at my desktop environment, my CAGR for storage (measured as fully compressed TAR files) is ~61% year over year.  At that rate, I will need a 4TB drive for backup purposes in about 7 years, and if I assume a 2X compression rate, then a 4TB desktop drive will be needed in ~3.5 years (darn music, movies, photos, …).  And we are not heavy digital media consumers; others that shoot and edit their own video probably use orders of magnitude more storage.
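The growth math is simple enough to sketch out. The starting backup size below is a hypothetical round number I picked to illustrate the calculation, not my actual figure:

```python
# Years until a given drive capacity is needed, at a fixed compound annual growth rate.
from math import log

cagr = 0.61          # ~61% year-over-year growth
start_tb = 0.15      # hypothetical current compressed backup size (TB)
target_tb = 4.0      # the 4TB drive in question

years = log(target_tb / start_tb) / log(1 + cagr)
print(f"~{years:.1f} years until a {target_tb:.0f}TB backup drive is needed")  # ~6.9 years
```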

Hard to believe, but given current trends it seems inevitable: a 4TB disk drive will become a necessity for us within the next 4 years.

~~~~

Comments?


Are SSDs an invasive species?

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)

I was reading about pythons becoming an invasive species in the Florida Everglades and that brought to mind SSDs.  The current ecological niche in data storage has rotating media as the most prolific predator with tape going on the endangered species list in many locales.

So where do SSDs enter into the picture?  We have written before on SSD shipments starting to take off, but that was looking at the numbers from another direction. Given recent announcements, it appears that in the enterprise, SSDs are taking over the place formerly held by 15Krpm disk devices.  These were formerly the highest performing and most costly storage around.  But today, SSDs, as a class of storage, are easily the most costly storage and have the highest performance currently available.

The data

Seagate announced yesterday that they had shipped almost 50M disk drives last quarter, up 8% from the prior quarter, or ~96M drives over the past 6 months.  Now Seagate is not the only enterprise disk provider (Hitachi, Western Digital and others also supply this market) but they probably have the lion’s share.  Nonetheless, Seagate did mention that the last quarter was supply constrained and believed that the total addressable market was 160-165M disk drives.  That puts Seagate’s market share (in unit volume) at ~31%, and at that rate the last 6 months' total disk drive production should have been ~312M units.

In contrast, IDC reports that SSD shipments last year totaled 11M units. In both the disk and SSD cases we are not just talking enterprise class devices; the numbers include PC storage as well.  If we divide this number in half, we have a comparable 5.5M SSDs for the last 6 months, giving SSDs less than a 2% market share (in units).

Back to the ecosystem.  In the enterprise, there are 15Krpm, 10Krpm and 7.2Krpm rotating media disks.  As speed goes down, capacity goes up.  In Seagate’s last annual report they stated that approximately 10% of the drives they manufactured were shipped to the enterprise.  At that rate, of the 312M drives, maybe 31M were enterprise class (this probably overstates the number but is usable as an upper bound).

As for SSDs, the IDC report cited above mentioned two primary markets for SSD penetration: PC and enterprise.  In that same Seagate annual report, they said their desktop and mobile markets were around 80% of disk drives shipped.  If we use that proportion for SSDs, then of the 5.5M units shipped last half year, 4.4M were in the PC space and 1.1M were for the enterprise.  Given that, enterprise class SSDs represent ~3.4% of the enterprise class disk drives shipped.  This is over 10X more than my prior estimate of SSDs being <0.2% of enterprise disk drives.  Reality probably lies somewhere between these two estimates.
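For the record, here's the unit-share arithmetic behind those percentages, using only the figures quoted above (it lands at ~3.5%, close to the ~3.4% I cited):

```python
# Reconstructing the disk vs. SSD unit-share estimates from the numbers in this post.
seagate_qtr = 50e6                 # Seagate drives shipped last quarter
tam_qtr = 162.5e6                  # midpoint of the 160-165M total addressable market
seagate_share = seagate_qtr / tam_qtr                    # ~31% unit share

industry_half_year = 96e6 / seagate_share                # ~312M drives over 6 months
enterprise_disks = industry_half_year * 0.10             # ~31M enterprise-class drives

ssd_half_year = 11e6 / 2                                 # ~5.5M SSDs over 6 months
enterprise_ssds = ssd_half_year * (1 - 0.80)             # ~1.1M enterprise SSDs

print(f"SSD share of all drives:        {ssd_half_year / industry_half_year:.1%}")   # ~1.8%
print(f"SSD share of enterprise drives: {enterprise_ssds / enterprise_disks:.1%}")    # ~3.5%
```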

I wrote a research report a while back which predicted that SSDs would never take off in the enterprise; I was certainly wrong then.  If these numbers are correct, capturing 10% of the enterprise disk market in a little under 2 years can only mean that high-end, 15Krpm drives are losing ground faster than anticipated.  Which brings up the analogy of the invasive species: SSDs seem to be winning a significant beachhead in the enterprise market.

In the meantime, drive vendors are fighting back by moving from the 3.5″ to the 2.5″ form factor, offering both 15Krpm and 10Krpm drives.   This probably means that the 15Krpm 3.5″ drive’s days are numbered.

I made another prediction almost a decade ago that 2.5″ drives would take over the enterprise around 2005 – wrong again, but only by about 5 years or so. I've got to stop making predictions, …

Seagate launches their Pulsar SSD

Seagate's Pulsar SSD (seagate.com)

Today Seagate announced their new SSD offering, named the Pulsar SSD.  It uses SLC NAND technology and comes in a 2.5″ form factor at 50, 100 or 200GB capacity.  The fact that it uses a 3Gb/s SATA interface seems to indicate that Seagate is going after the server market rather than the high-end storage marketplace, but different interfaces can be added over time.

Pulsar SSD performance

The main fact that makes the Pulsar interesting is its peak write rate of 25,000 4KB aligned writes per second versus a peak read rate of 30,000.  The 30:25 ratio of peak reads to peak writes represents a significant advance over prior SSDs, and presumably this is through the magic of buffering.  But once we get beyond peak IO buffering, sustained 128KB writes drop to 2,600, 5,300, or 10,500 ops/sec for the 50, 100, and 200GB drives respectively.  Kind of interesting that this rate drops as capacity drops, implying that adding capacity also adds parallelism. Sustained 4KB reads for the Pulsar are spec'd at 30,000.

In contrast, STEC’s Zeus drive is spec'd at 45,000 random reads and 15,000 random writes sustained, and 80,000 peak reads and 40,000 peak writes.  So performance wise, the Seagate Pulsar (200GB) SSD has about ~37% of the peak read and ~63% of the peak write performance, with ~67% of the sustained read and ~70% of the sustained write performance of the Zeus drive.
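Those ratios are easy enough to double check from the spec numbers quoted above:

```python
# Pulsar (200GB) vs. STEC Zeus, as a percentage of the Zeus IOPS specs quoted in this post.
pulsar = {"peak_read": 30_000, "peak_write": 25_000, "sust_read": 30_000, "sust_write": 10_500}
zeus   = {"peak_read": 80_000, "peak_write": 40_000, "sust_read": 45_000, "sust_write": 15_000}

for metric, pulsar_iops in pulsar.items():
    print(f"{metric}: Pulsar is {pulsar_iops / zeus[metric]:.1%} of the Zeus")
# peak_read 37.5%, peak_write 62.5%, sust_read 66.7%, sust_write 70.0%
```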

Pulsar reliability

The other item of interest is that Seagate states a 0.44% annual failure rate (AFR), so for a 100-Pulsar-drive storage subsystem, one Pulsar drive will fail every ~2.27 years.  Also, the Pulsar bit error rate (BER) is specified at <1 in 10E16 bits when new and <1 in 10E15 at end of life.  As far as I can tell, both of these specifications are better than STEC’s specs for the Zeus drive.
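The failure-interval arithmetic is straightforward: a 0.44% AFR across 100 drives works out to 0.44 expected failures per year, or one failure roughly every 2.27 years:

```python
# Expected time between drive failures in a 100-drive subsystem at a 0.44% AFR.
afr = 0.0044
drives = 100

failures_per_year = afr * drives                                # 0.44 expected failures/year
print(f"~{1 / failures_per_year:.2f} years between failures")   # ~2.27 years
```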

Both the Zeus and Pulsar drives support a 5 year limited warranty.  But if the Pulsar is indeed a more reliable drive as indicated by their respective specifications, vendors may prefer the Pulsar as it would require less service.

All this seems to say that reliability may become a more important factor in vendor SSD selection. I suppose once you get beyond 10K read or write IOPS per drive, performance differences just don’t matter that much. But a BER of 10E14 vs 10E16 may make a significant difference to product service cost and, as such, may justify changing SSD vendors much more easily. Seagate seems to be opening up a new front in the SSD wars – drive reliability.

Now if they only offered 6Gb/s SAS or 4GFC interfaces…

The coming hard drive capacity wall?

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)

Hard drives have been on a capacity tear lately, what with perpendicular magnetic recording and tunneling magnetoresistive heads. As evidence of this, Seagate just announced their latest Barracuda XT, a 2TB hard drive with 4 platters at ~500GB/platter and a 368Gb/sqin recording density.

Read-head technology limits

Recently, I was at a Rocky Mountain IEEE Magnetics Society seminar where Bruce Gurney, Ph.D., from Hitachi Global Storage Technologies (HGST) said there was no viable read head technology to support anything beyond 1Tb/sqin recording densities. Now, in all fairness, this was a public lecture and not under NDA, but it’s obvious the (read head) squeeze is on.

Assuming it’s a relatively safe bet that densities of ~1Tb/sqin can be attained with today’s technology, another ~3X in drive capacity is achievable using current read-heads. Hence, for a 4 platter configuration, we can expect a ~6TB drive in the near future using today’s read-heads.
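Here's that projection as simple arithmetic (all inputs are the figures above; the straight ratio gives ~5.4TB, which rounds up to the ~6TB estimate):

```python
# Scale today's 4-platter, 2TB Barracuda XT by the read-head areal density limit.
current_density_gb_sqin = 368      # Barracuda XT areal density
readhead_limit_gb_sqin = 1_000     # ~1Tb/sqin read-head limit cited at the seminar
current_capacity_tb = 2.0          # 4-platter Barracuda XT capacity

scale = readhead_limit_gb_sqin / current_density_gb_sqin
print(f"~{scale:.1f}X -> ~{current_capacity_tb * scale:.1f}TB for a 4-platter drive")  # ~2.7X -> ~5.4TB
```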

But what happens next?

It's unclear to me how quickly drive vendors deliver capacity increases these days, but in the recent past a doubling in capacity occurred every 18-24 months. Assuming this holds for today’s technology, somewhere in the next 24-36 months we will see a major transition to a new read-head technology.

Thus, today’s drive industry must spend major R&D dollars to discover, develop, and engineer a brand new read-head technology to take drive capacity to the next chapter. What this means for write heads, media smoothness, and head flying height is unknown.

Read-head development

At the seminar mentioned above, Bruce had some interesting charts which showed how long previous read-head technologies took to develop and how long they were used to produce drives. According to Bruce, it has recently been taking about 9 years to move read-head technology from discovery to drive production. However, while in the past a new read-head technology would last for 10 years or more in production, nowadays they appear to last only 5 to 6 years before the production read-head technology changes. Thus, it takes twice as long to develop read-head technology as it is used in production, which means that the R&D expense must be amortized over a shorter timeframe. If anything, from my perspective, the production runs for new read-head technology seem to be getting shorter?!

Nonetheless, most probably, HGST and others have a new head up their sleeve but are not about to reveal it until the time comes to bring it out in a new hard drive.

Elephant in the room

But that’s not the problem. If production runs continue to shrink in duration, and the cost and time of developing new heads doesn’t shrink accordingly, the industry will eventually reach a drive capacity wall. This won’t be because some physical magnetic/electronic/optical constraint has been reached, but because a much more fundamental, inherent economic constraint has been reached – it just costs too much to develop new read-head technology and no one company can afford it.

There are a couple of ways out of this death spiral that I see:

  • Lengthen read-head production duration,
  • Decrease the cost/time to develop new read-heads, or
  • Create an industry-wide read-head technology consortium that can track and fund future read-head development.

More than likely we will need some combination of all of these solutions if the drive industry is to survive for long.