Hard drives have been on a capacity tear lately, what with perpendicular magnetic recording and tunneling magnetoresistive heads. As evidence, Seagate just announced its latest Barracuda XT, a 2TB hard drive with four platters at ~500GB/platter and a 368Gb/sqin recording density.
Read-head technology limits
Recently, I was at a Rocky Mountain IEEE Magnetics Society seminar where Bruce Gurney, Ph.D., of Hitachi Global Storage Technologies (HGST) said there was no viable read-head technology to support anything beyond 1Tb/sqin recording densities. Now, in all fairness, this was a public lecture and not under NDA, but it's obvious the (read-head) squeeze is on.
Assuming it's a relatively safe bet that densities of ~1Tb/sqin can be attained with today's technology, that means another ~3X in drive capacity is achievable using current read-heads. Hence, for a four-platter configuration, we can expect a ~6TB drive in the near future using today's read-heads.
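The density-to-capacity arithmetic above can be sanity-checked with a quick back-of-the-envelope calculation. All figures here are the post's approximations, not vendor specifications:

```python
# Rough check of the density-to-capacity math (approximate figures from the post).
current_density = 368    # Gb/sqin, Barracuda XT today
limit_density = 1000     # Gb/sqin, ~1Tb/sqin read-head ceiling
platter_capacity_gb = 500
platters = 4

headroom = limit_density / current_density                      # remaining density growth
max_drive_tb = platters * platter_capacity_gb * headroom / 1000 # four-platter ceiling

print(f"density headroom: ~{headroom:.1f}X")
print(f"max 4-platter drive: ~{max_drive_tb:.1f}TB")
```

The ~2.7X headroom lands a four-platter drive in the 5-6TB range, consistent with the rounded "~3X / ~6TB" figures above.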
But what happens next?
It's unclear to me how quickly drive vendors deliver capacity increases these days, but in the recent past a doubling in capacity occurred every 18-24 months. Assuming this holds for today's technology, somewhere in the next 24-36 months we will see a major transition to a new read-head technology.
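The 24-36 month estimate follows from the doubling cadence. A minimal sketch, assuming the same ~1Tb/sqin ceiling and the historical 18-24 month doubling rate:

```python
import math

# How many capacity doublings remain before the ~1Tb/sqin read-head ceiling,
# and how long that takes at the historical doubling cadence.
headroom = 1000 / 368            # ~2.7X density growth remaining
doublings = math.log2(headroom)  # ~1.4 doublings needed

for months_per_doubling in (18, 24):
    months = doublings * months_per_doubling
    print(f"{months_per_doubling}-month cadence: ~{months:.0f} months to the wall")
```

At 18 months per doubling the ceiling arrives in roughly 26 months; at 24 months, roughly 35 — matching the 24-36 month window above.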
Thus, today's drive industry must spend major R&D dollars to discover, develop, and engineer a brand-new read-head technology to take drive capacity to the next chapter. What this means for write heads, media smoothness, and head flying height is unknown.
At the seminar mentioned above, Bruce showed some interesting charts on how long previous read-head technologies took to develop and how long they were then used in drive production. According to Bruce, it has recently been taking about 9 years to move a read-head technology from discovery to drive production. However, while in the past a new read-head technology would last 10 years or more in production, nowadays it appears to last only 5 to 6 years before being replaced. Thus, it takes roughly twice as long to develop a read-head technology as it's used in production, which means the R&D expense must be amortized over a shorter timeframe. If anything, from my perspective, the production runs for new read-head technologies seem to be getting shorter.
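To make the amortization squeeze concrete, here is an illustrative sketch. The development cost is a made-up placeholder; only the production-run durations come from Bruce's charts:

```python
# Illustrative amortization comparison; rnd_cost is a hypothetical figure,
# the production-run lengths are from Bruce's charts.
rnd_cost = 1_000_000_000     # hypothetical $1B to develop a read-head technology
old_production_years = 10    # past: 10+ years in production
new_production_years = 5.5   # now: 5-6 years (midpoint)

old_burden = rnd_cost / old_production_years
new_burden = rnd_cost / new_production_years
print(f"old era: ${old_burden / 1e6:.0f}M/year to recoup R&D")
print(f"new era: ${new_burden / 1e6:.0f}M/year to recoup R&D")
```

Halving the production run roughly doubles the annual revenue burden needed to recoup the same R&D spend, whatever the actual dollar figure is.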
Nonetheless, most probably, HGST and others have a new head up their sleeve but are not about to reveal it until the time comes to bring it out in a new hard drive.
Elephant in the room
But that's not the problem. If production runs continue to shrink in duration, and the cost and time to develop new heads don't shrink accordingly, the industry must eventually reach a drive-capacity wall. This won't be because some physical magnetic/electronic/optical constraint has been reached, but because of a much more fundamental, inherent economic constraint – it just costs too much to develop new read-head technology, and no one company can afford it.
There are a couple of ways out of this death spiral that I see:
- Lengthen read-head production duration,
- Decrease the cost/time to develop new read-heads
- Create an industry-wide read-head technology consortium that can track and fund future read-head development.
More than likely we will need some combination of all of these solutions if the drive industry is to survive for long.
2 thoughts on “The coming hard drive capacity wall?”
Doesn’t matter. Who’s going to buy a drive that large? Not enterprise customers; 1TB drive RAID rebuild times are already measured in hours, during which you can have another drive failure. You end up scaling out using additional parity stripes and having hot spares so that you can start rebuild right away, but that doesn’t scale as drive sizes increase. (Hence the slow adoption of 2TB drives in the enterprise.)
It’ll be interesting to see what space these large drives occupy in the enterprise. What really needs to be increased isn’t read head density but practical read/write and seek speed to get the array rebuild times down.
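The rebuild-time concern in this comment is easy to quantify with a naive best-case estimate: the time to sequentially rewrite one whole drive. Real rebuilds under production load are slower, and the 100MB/s figure is a rough era-appropriate SATA assumption, not a measurement:

```python
# Best-case rebuild time: sequentially rewriting one whole drive.
# mb_per_sec=100 is an assumed rough SATA streaming rate, not a spec.
def rebuild_hours(capacity_tb, mb_per_sec=100):
    return capacity_tb * 1_000_000 / mb_per_sec / 3600

for tb in (1, 2, 6):
    print(f"{tb}TB drive: ~{rebuild_hours(tb):.1f}h minimum rebuild")
```

Even this best case scales linearly with capacity: a 1TB drive takes nearly 3 hours, so a 6TB drive would take the better part of a day, during which the array runs degraded.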
There are other options to RAID5-6. Some of these can speed up rebuild rates. Let’s not forget that RAID was originally intended to help small disks compete with the reliability of large (expensive) disks. More reliability from your disk drives could also minimize the time spent in rebuilding….
Who uses RAID5/6 with SATA drives to store anything important? I think those types of RAID will go out of fashion – check out the IBM XIV for instance.
Thinker, so XIV uses RAID 1 protection. Does that mean RAID 5/6 couldn't be used? I don't think so. In my mind, RAID 1 provides somewhat faster rebuild times, but it costs you 50% of your available storage to get there. Can it be done with less? I believe so, and the techniques I discuss in my Are RAID's days numbered post can also help.
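The capacity trade-off in this reply can be sketched with a simple usable-fraction calculation. The group sizes here are illustrative, not tied to any particular array:

```python
# Usable capacity of mirroring vs parity RAID across an N-drive group.
# The 8-drive group size is an illustrative example, not a product config.
def usable_fraction(n_drives, redundancy_drives):
    return (n_drives - redundancy_drives) / n_drives

print(f"RAID 1 (2-way mirror): {usable_fraction(2, 1):.0%} usable")
print(f"RAID 5, 8-drive group: {usable_fraction(8, 1):.0%} usable")
print(f"RAID 6, 8-drive group: {usable_fraction(8, 2):.0%} usable")
```

Mirroring surrenders half the raw capacity, while single- or double-parity schemes over wider groups keep 75-88% of it – the price being the longer, riskier rebuilds discussed above.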