Two dimensional magnetic recording (TDMR)

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)

I attended a Rocky Mountain IEEE Magnetics Society meeting a couple of weeks ago where Jonathan Coker, HGST’s Chief Architect and an IEEE Magnetics Society Distinguished Lecturer, discussed HGST’s research into TDMR heads.

It seems that disk track density is getting so high, and track pitch so small, that magnetic read heads have become wider than the data tracks they read. Because of this, read heads are starting to pick up more inter-track noise, and it’s getting harder to obtain a decent signal-to-noise ratio (SNR) from a high-density disk platter with a single read head.

TDMR read heads counteract this extraneous noise by using multiple read heads per data track, and as such help create a better signal-to-noise ratio during read back.

What are TDMR heads?

TDMR heads are any configuration of multiple read heads used to read a single data track. There seemed to be two popular configurations of HGST’s TDMR heads:

  • In-series, where one head is directly behind another head. This provides double the signal for the same (relative) amount of random (electronic) noise.
  • In-parallel (side by side), where three heads are configured across the data track and the two inter-track bands. That is, one head sits directly over the data track with portions spanning the inter-track gap on each side, one head sits halfway across the data track and the next higher track, and a third head sits halfway across the data track and the next lower track.

At first, the in-series configuration seemed to make the most sense to me. You could conceivably average the two signals coming off the heads and filter out the random noise. However, the “random noise” seemed to come mostly from the inter-track zone, and it wasn’t so much random electronic noise as random magnetic noise coming off the disk platter between the data tracks.
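The distinction matters because averaging only helps against noise that differs between the two reads. A toy simulation makes the point; this is my own sketch, not HGST’s model, and the signal shape and noise levels are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data-track waveform (illustrative only, not real recording data).
signal = np.sign(np.sin(np.linspace(0, 40 * np.pi, 4000)))

def snr_db(clean, noisy):
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

# Case 1: independent electronic noise per head -- averaging two in-series
# reads halves the noise power, an ~3 dB SNR gain.
e1 = signal + rng.normal(0, 0.5, signal.shape)
e2 = signal + rng.normal(0, 0.5, signal.shape)
single1, avg1 = snr_db(signal, e1), snr_db(signal, (e1 + e2) / 2)

# Case 2: magnetic media noise -- both in-series heads fly over the same
# platter, so the noise is identical in both reads and averaging gains nothing.
media = rng.normal(0, 0.5, signal.shape)
m1, m2 = signal + media, signal + media
single2, avg2 = snr_db(signal, m1), snr_db(signal, (m1 + m2) / 2)

print(f"electronic noise: {single1:.1f} dB -> {avg1:.1f} dB after averaging")
print(f"media noise:      {single2:.1f} dB -> {avg2:.1f} dB after averaging")
```

The first case shows the ~3 dB gain you’d hope for; in the second, the averaged SNR is exactly the single-head SNR, which is why correlated platter noise defeats the in-series approach.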

In-parallel wins the SNR race

So, much of the discussion was on the in-parallel configuration. The researcher had a number of simulated magnetic recordings which were then read by simulated, in-parallel, tripartite read heads. The idea here was that the readings from the two side-band heads, which include inter-track noise, could be used as noise references to filter the middle head’s reading of the data track. In this way they could effectively increase the SNR across the three signals and thus get a better data signal from the data track.
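One simple way to use side readings as noise references is a least-squares cancellation: estimate how much of each side-band reading leaks into the middle head and subtract it out. The sketch below is my own illustration of that idea under invented assumptions (leakage fractions, noise levels, and the premise that the side heads pick up negligible track signal); it is not HGST’s algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data-track waveform and inter-track magnetic noise (assumed levels).
signal = np.sign(np.sin(np.linspace(0, 40 * np.pi, 4000)))
n_up = rng.normal(0, 0.6, signal.shape)   # inter-track noise above the track
n_dn = rng.normal(0, 0.6, signal.shape)   # inter-track noise below the track

# Middle head: the data track plus leakage from both inter-track bands.
middle = signal + 0.5 * n_up + 0.5 * n_dn
# Side heads: mostly inter-track noise, plus a little electronic noise.
side_up = n_up + rng.normal(0, 0.1, signal.shape)
side_dn = n_dn + rng.normal(0, 0.1, signal.shape)

# Least-squares estimate of each side reading's leakage into the middle head,
# then subtract the estimated leakage out.
A = np.column_stack([side_up, side_dn])
coef, *_ = np.linalg.lstsq(A, middle, rcond=None)
cleaned = middle - A @ coef

def snr_db(clean, noisy):
    return 10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2))

print(f"middle head alone:  {snr_db(signal, middle):.1f} dB")
print(f"after cancellation: {snr_db(signal, cleaned):.1f} dB")
```

Because the track signal is uncorrelated with the side readings, the least-squares fit locks onto the leaked noise rather than the data, and the cleaned signal comes out with a substantially higher SNR than the raw middle-head read.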

Originally, TDMR was going to be the technology needed to get the disk industry to 100Tb/sqin. But what they are finding at HGST and elsewhere is that even today, at “only” ~5Tb/sqin (HGST helium drives), there’s an increasing need to reduce the noise picked up by read heads.

Disk density increases have been slowing lately but are still on a march to double every 2 years or so. As such, a 1TB platter today will be a 2TB platter in 2 years and a 4TB platter in 4 years, etc. TDMR heads may be just the thing that gets the industry to that 4TB platter (20Tb/sqin) in 4 years.
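Taking the figures quoted above at face value (a ~5Tb/sqin, 1TB platter today, doubling every 2 years), the projection works out like so:

```python
# Quick projection from the figures quoted in this post: a 1TB platter at
# ~5Tb/sqin today, with density doubling roughly every 2 years.
platter_tb, density_tbpsi = 1.0, 5.0
for years in (2, 4):
    factor = 2 ** (years / 2)          # one doubling per 2 years
    print(f"in {years} years: {platter_tb * factor:.0f}TB/platter "
          f"at ~{density_tbpsi * factor:.0f}Tb/sqin")
```

Two doublings in 4 years gives the 4TB platter at ~20Tb/sqin cited above.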

The only problem is what’s going to get them to 100Tb/sqin now?

Comments?

 

The coming hard drive capacity wall?

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)
Hard drives have been on a capacity tear lately what with perpendicular magnetic recording and tunneling magnetoresistive heads. As evidence of this, Seagate just announced their latest Barracuda XT, a 2TB hard drive with 4 platters at ~500GB/platter and a 368Gb/sqin recording density.

Read-head technology limits

Recently, I was at a Rocky Mountain IEEE Magnetics Society seminar where Bruce Gurney, Ph.D., of Hitachi Global Storage Technologies (HGST) said there was no viable read-head technology to support anything beyond 1Tb/sqin recording densities. Now in all fairness this was a public lecture and not under NDA, but it’s obvious the (read-head) squeeze is on.

Assuming it’s a relatively safe bet that densities of ~1Tb/sqin can be attained with today’s technology, that means another ~3X in drive capacity is achievable using current read-heads. Hence, for a 4-platter configuration, we can expect a ~6TB drive in the near future using today’s read-heads.
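That figure can be sanity-checked against the Barracuda XT numbers above; a rough sketch, assuming capacity scales linearly with areal density:

```python
# Scale today's ~500GB/platter at 368Gb/sqin up to 1Tb/sqin (1000Gb/sqin),
# assuming platter capacity scales linearly with areal density.
gb_per_platter = 500
scale = 1000 / 368                      # ~2.7x density headroom
platter_gb = gb_per_platter * scale     # ~1.36TB per platter
drive_tb = 4 * platter_gb / 1000        # 4 platters per drive
print(f"~{platter_gb:.0f}GB/platter, ~{drive_tb:.1f}TB drive")
```

The linear scaling lands at roughly 5.4TB, which rounds up to the ~6TB ballpark once per-platter capacities are binned to convenient sizes.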

But what happens next?

It’s unclear to me how quickly drive vendors deliver capacity increases these days, but in the recent past a doubling in capacity occurred every 18-24 months. Assuming this holds for today’s technology, somewhere in the next 24-36 months we will see a major transition to a new read-head technology.

Thus, today’s drive industry must spend major R&D $’s to discover, develop, and engineer a brand-new read-head technology to take drive capacity to the next chapter. What this means for write heads, media smoothness, and head flying height is unknown.

Read-head development

At the seminar mentioned above, Bruce had some interesting charts which showed how long previous read-head technologies took to develop and how long they were used to produce drives. According to Bruce, recently it’s been taking about 9 years to take a read-head technology from discovery to drive production. However, while in the past a new read-head technology would last 10 years or more in production, nowadays they appear to last only 5 to 6 years before the production technology changes again. Thus, it takes twice as long to develop read-head technology as it’s used in production, which means the R&D expense must be amortized over a shorter timeframe. If anything, from my perspective, the production runs for new read-head technologies seem to be getting shorter?!

Nonetheless, most probably, HGST and others have a new head up their sleeve but are not about to reveal it until the time comes to bring it out in a new hard drive.

Elephant in the room

But that’s not the problem. If production runs continue to shrink in duration, and the cost and time of developing new heads don’t shrink accordingly, the industry must eventually hit a drive capacity wall. This won’t be because some physical magnetic/electronic/optical constraint has been reached, but because a much more fundamental, inherent economic constraint has been reached: it just costs too much to develop new read-head technology, and no one company can afford it.

There are a couple of ways out of this death spiral that I see:

  • Lengthen read-head production duration,
  • Decrease the cost/time to develop new read-heads
  • Create an industry-wide read-head technology consortium that can track and fund future read-head development

More than likely we will need some combination of all of these solutions if the drive industry is to survive for long.