Disk density hits new record, 1Tb/sqin with HAMR

Seagate has achieved 1Tb/sqin recording (source: http://www.gizmag.com)

Well, I thought 36TB on my Mac was going to be enough.  Then along comes Seagate with this week’s announcement of reaching 1Tb/sqin (1 trillion bits per square inch) using their new HAMR (heat assisted magnetic recording) technology.

Current LFF drive technology runs at about 620Gb/sqin providing a  3.5″ drive capacity of around 3TB or about 500Gb/sqin for 2.5″ drives supporting ~750GB.  The new 1Tb/sqin drives will easily double these capacities.

But the exciting part is that with the new HAMR or TAR (thermally assisted recording) heads and media, the long term potential is even brighter.  This new technology should be capable of 5 to 10Tb/sqin, which means 3.5″ drives of 30 to 60TB and 2.5″ drives of 10 to 20TB.

HAMR explained

HAMR uses both lasers and magnetic heads to record data in even smaller spaces than current PMR (perpendicular magnetic recording) or vertical recording heads do today.   You may recall that PMR was introduced in 2006 and now, just 6 years later we are already seeing the next generation head and media technologies in labs.

Denser disks require smaller bits, and with smaller bits disk technology runs into three problems: readability, writeability and stability, AKA the magnetic recording trilemma.  Smaller bits require better stability, but better stability makes it much harder to write or change a bit’s magnetic orientation.  Enter the laser in HAMR: with laser heating, the bits become much more malleable.  These warmed bits can be more easily written, bypassing the stability-writeability problem, at least for now.

However, just as in any big technology transition there are other competing ideas with the potential to win out.  One possibility we have discussed previously is shingled writes using bit patterned media (see my Sequential only disk post) but this requires a rethinking/re-architecting of disk storage.  As such, at best it’s an offshoot of today’s disk technology and at worst, it’s a slight detour on the overall technology roadmap.

Of course PMR is not going away any time soon. Other vendors (and probably Seagate) will continue to push PMR technology as far as it can go.  After all, it’s a proven technology, inside millions of spinning disks today.  But, according to Seagate, PMR can achieve 1Tb/sqin but go no further.

So when can I get HAMR disks?

There was no mention in the press release as to when HAMR disks would be made available to the general public, but typically the drive industry has been doubling densities every 18 to 24 months.  Assuming they continue this trend across a head/media technology transition like HAMR, we should have those 6TB hard disk drives sometime around 2014, if not sooner.
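A quick sketch of what that cadence implies, assuming capacity simply doubles every two years from a 6TB HAMR drive in 2014 (the starting point and cadence are this post’s assumptions, not Seagate’s):

```python
# Back-of-the-envelope capacity roadmap, assuming areal density (and so
# drive capacity) doubles roughly every 2 years from a 6TB HAMR drive
# in 2014 -- this post's projection, not Seagate's.
capacity_tb, year = 6, 2014
for _ in range(3):
    print(f"{year}: ~{capacity_tb}TB")
    capacity_tb *= 2
    year += 2
```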

HAMR technology will likely make its first appearance in 7200rpm drives.  Bigger capacities seem to always come out first in slower performing disks (see my Disk trends, revisited post).

HAMR performance wasn’t discussed in the Seagate press release, but with 2Mb per linear track inch and 15Krpm disk drives, the transfer rates would seem to need to be on the order of at least 850MB/sec at the OD (outer diameter) for read data transfers.
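The arithmetic behind that estimate is simple: media transfer rate is linear bit density times outer-track circumference times revolutions per second. Here’s a rough sketch; the ~3.7″ outer-track diameter is my assumption and the answer scales directly with it:

```python
import math

# OD media rate = linear density x outer-track circumference x revs/sec.
# The 2Mb/inch linear density is from the post; the outer-track diameter
# is an assumed value for a 3.5" drive.
linear_density_bpi = 2_000_000      # bits per linear track inch
od_diameter_in = 3.7                # assumed outer-track diameter, inches
rpm = 15_000

bits_per_rev = linear_density_bpi * math.pi * od_diameter_in
mb_per_sec = bits_per_rev * (rpm / 60) / 8 / 1e6
print(f"~{mb_per_sec:.0f} MB/sec at the OD")  # ~730MB/sec, same order as 850MB/sec
```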

How quickly HAMR heads can write data is another matter. The fact that the laser heats the media before the magnetic head can write it seems to call for a magnetic-plus-optical head contraption where the laser is in front of the magnetics (see picture above).

How long it takes to heat the media to enable magnetization is one critical question in write performance. But this could potentially be mitigated by the strength of the laser pulse and how far the laser has to be in front of the recording head.

With all this talk of writing, there hasn’t been much discussion of read heads. I guess everyone’s assuming the current PMR read heads will do the trick, with a significant speed up of course, to handle the higher linear densities.

What’s next?

As for what comes after HAMR, check out another post I did on using lasers to magnetize (write) data (see Magnetic storage using lasers alone).  The advantage of this new “laser-only” technology was a significant speed up in transfer rates.  It seems to me that HAMR could easily be an intermediate step on the path to laser-only recording, having both laser optics and magnetic recording/reading heads in one assembly.

~~~~

Let’s see: 6TB in 2014, 12TB in 2016 and 24TB in 2018. Maybe I won’t need that WD Thunderbolt drive string as quickly as I thought.

Comments?


Magnetic storage using lasers alone

Lasers by dmuth (cc) (from Flickr)

Read an article today in AAAS Science Now online magazine (see Hot Idea for a Faster Hard Drive) on using lasers alone to toggle magnetic moments in specially designed ferro-magnetic materials.

The disk industry has been experimenting with bit patterned media/shingled writes (see our post on Sequential Only Disk)  and thermally or heat assisted magnetic recording (TAR or HAMR) heads for some time now.  The TAR/HAMR heads use both magnetization and heat to quickly change magnetic moments in ferro-magnetic material (see our post on When will disks become extinct).

Lasers can magnetize!!

The new study seems to do away with the magnetic recording mechanism entirely, changing a bit’s magnetic value with a short, focused laser burst alone.   But what does this mean for the future of disk drives?

Well one thing the article highlights is that with the new technology disks can transfer data much faster than today.   Apparently magnetic recording takes a certain interval of time (1 nanosecond) per bit and getting below that threshold was previously unattainable.

That is, until this new laser magnetization came along.  According to the article, they can reliably change a bit’s magnetic value in 1/1000th of a nanosecond with heat alone.  This may enable disk data transfers 100X faster than available today.

Seagate’s 600GB-15Krpm 15.7 Cheetah disk has a sustained data transfer rate of 122 to 204MB/sec (see their 15K.7 Cheetah drive data sheet).  At 100 times that, we would need a much faster interface than 16Gb/s FC, which probably only transfers data at ~1600MB/sec burst; these drives would need something like 128Gb/s FC.  In addition to the data transfer speed up, a laser pulse alone is much more energy efficient than HAMR heads, which need both magnetics and a laser.
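Putting numbers on that, a small sketch using the post’s figures (FC usable throughput is approximated here at ~100MB/sec per Gb/sec of line rate):

```python
# Interface arithmetic for a 100X disk speed-up, using the post's figures.
cheetah_mb_per_sec = 204                  # 15K.7 Cheetah max sustained rate
needed = cheetah_mb_per_sec * 100         # 100X faster -> 20,400 MB/sec
fc16_mb_per_sec = 1_600                   # ~16Gb/s FC burst throughput
print(f"needed ~{needed:,} MB/sec vs ~{fc16_mb_per_sec:,} MB/sec for 16Gb/s FC")

# And today's read bit time: 204MB/sec is ~1.6Gb/sec, or ~0.6ns per bit.
ns_per_bit = 1e9 / (cheetah_mb_per_sec * 1e6 * 8)
print(f"~{ns_per_bit:.2f} ns per bit at {cheetah_mb_per_sec}MB/sec")
```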

How soon such advances will make their way into disk drives is another question.

Is today’s 15Krpm disk speed limit due to writing speeds?

I have wondered for some time now why 3.5″ disk drives never went faster than 15Krpm.  I had always surmised it was something to do with the material mechanics at the outer diameter that limited the rotational speed.

Then when drives were shrunk to 2.5″ I thought we would see some faster rotational speeds, but it never happened.  Perhaps magnetic write speeds are the problem.   At 204MB/sec we are reading bits in under a nanosecond, but sustained write data transfer is another question.  Maybe there will be a 22Krpm disk in my future?

Fixed head disks déjà vu

Ok, now that that’s settled, we need to work on speeding up seek times.  I could see some sort of rotating diffraction grating or diffraction comb taking the laser and splitting it into multiple beams to cover each track at almost the same time, sort of like a fixed head disk of old (see IBM 2305).  This would allow disks to seek to any track in microseconds rather than milliseconds and write data in picoseconds rather than nanoseconds.

How to do something like this for reading data off a track is yet another question.  It’s too bad we couldn’t use the laser alone to read the magnetic information as well as write it.

If you could do that and use a similar diffraction grating/comb for reading data, one could conceivably create a cost effective, competitive solution to the performance of SSD technology.  And that would be a very interesting device indeed!

Comments?


Disk drive density multiplying by 6X

Sodium Chloride by amandabhslater (cc) (From Flickr)

In a news story out of Singapore’s Institute of Materials Research and Engineering (IMRE), Dr. Joel Yang has demonstrated 6X the current density on disk platter media, or up to 3.3 Terabits/square inch (Tb/sqin). And it all happens due to salt (sodium chloride) crystals.

I have previously discussed some of the problems the disk industry faces in making the next technology transition to continue current density trends.  At the time, the best solution appeared to be bit-patterned media (BPM) and shingled writes, discussed in my Sequential Only Disk!? and Disk trends, revisited posts.  However, that may have been premature.

Just add salt

It turns out that adding salt to the lithographic process used to disperse magnetic particles onto disk platters for BPM makes the particles more regularly spaced. In contrast, today’s process used in current disk media manufacturing causes the particles to be randomly spaced.

More regular magnetic particle spacing on media provides two immediate benefits for disk density:

  • More particles can be packed in the same area. With increased magnetic particles located in a square inch of media, more data can be recorded.
  • Bigger particles can be used for recording data. With larger grains, data can be recorded using a single structure rather than using multiple, smaller particles, increasing density yet again.

Combining these two attributes increases disk platter capacities by a factor of 6 without having to alter read-write head technology.  The IMRE team demonstrated 1.9Tb/sqin recording density and fabricated media with particles at levels that could provide 3.3Tb/sqin.  Currently, the disk industry is demonstrating 0.5Tb/sqin.

Other changes needed

I suppose other changes will also be needed to accommodate the increased capacity, not the least of which is speeding up the read-write channels to support 6X more bits being accessed per revolution.  Probably other items need to be changed as well,  but these all come with increased disk density.

Before this technique came along, the next density level was turning out to be a significant issue. But now that salt is in use, we can all rest easy knowing that disk capacity trends can continue with today’s recording head technology.

Using the recent 4TB 7200RPM hard drives (see my Disk capacity growing out-of-sight post) but moving to salt and BPM, the industry could potentially create a 24TB 7200RPM drive, or, for the high performance 600GB 15KRPM drives, 3.6TB high performance disks!  Gosh, not too long ago 24TB of storage was a good sized storage system for SMB shops; with this technology, it’s just a single disk drive.
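The capacity arithmetic is simple, assuming drive capacity scales linearly with areal density and nothing else changes (a sketch, not a product projection):

```python
# Scale today's drives by the ~6X areal density gain from salt-based BPM,
# assuming capacity scales linearly with density.
density_gain = 6
for name, capacity_tb in [("4TB 7200RPM", 4.0), ("600GB 15KRPM", 0.6)]:
    print(f"{name}: ~{capacity_tb * density_gain:.1f}TB")
```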

—-

Comments?

Disk trends, revisited

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)

An interesting guest post on Claus’s Blog (Claus Mikkelsen of HDS) by Ian Vogelesang of HGST provided some technical/economic insights on why specific disk drives are more economically feasible than others.

It’s a bit hard going and more technical than a typical blog post, but it certainly makes a number of interesting points.

  1. There is an interaction between recording density, performance and $/GB when introducing a new, smaller form factor.  Most often drive vendors are trying to maximize GB per drive while IO performance is not as much of a concern.  So they usually first come out with the densest drive they can in any new form factor.  I think this is what we have seen with SFF disks today, i.e., most vendors came out with 10Krpm drives, leaving their faster drives in the LFF.  As recording density for a new technology continues to improve, GB/drive is no longer the driving factor, and performance rises to the top. At that point we see the introduction of higher speed drives in a form factor.
  2. Enterprise SATA drives perform worse than equivalent capacity SAS drives. In HDS’s case there are two reasons for this: 1) for enterprise storage they append ECC plus other LBA integrity checks to each 512-byte block; however, SATA doesn’t support anything but 2**n block sizes, thus multiple IOs are required to read/validate a block; and 2) SAS hardware supports a larger tagged command queue than SATA and thus a better optimized IO queue for multiple IO requests.
  3. Global access density requirements are 600 IOPS/TB of storage. This is stated as a matter of fact in the post without any background information, but is another key factor driving disk changes.

I would love to know more about that last point, 600 IOPS/TB, but there wasn’t much else there.  (It seems to me this should have changed over time. It’s certainly worthy of a research study if anybody’s listening out there.)
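To see why that access density number matters, here’s a hypothetical sketch; the per-drive random IOPS figures are my own rough rules of thumb, not from the post:

```python
# Usable capacity per drive if deployed storage must deliver 600 IOPS/TB.
# Per-drive random IOPS figures are rough assumptions for illustration.
ACCESS_DENSITY_IOPS_PER_TB = 600

drives = [
    ("600GB 15Krpm SAS", 0.6, 200),   # (name, capacity TB, ~random IOPS)
    ("4TB 7200rpm SATA", 4.0, 120),
]
for name, capacity_tb, iops in drives:
    delivered = iops / capacity_tb
    usable_tb = iops / ACCESS_DENSITY_IOPS_PER_TB
    print(f"{name}: delivers {delivered:.0f} IOPS/TB; "
          f"only {usable_tb:.2f}TB usable at 600 IOPS/TB")
```

If those ballpark figures are anywhere close, ever-bigger drives can only satisfy such a requirement with caching or SSDs in front of them.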

Shingled writes

One other thing I found interesting is a few statements at the end regarding emerging disk recording technology.  It seems thermally assisted recording (TAR) is not coming along as fast as everyone in the industry thought it would.  As such, the disk industry is considering moving to shingled writes (see my post Sequential Only Disk) which may cause them to abandon random writes.

But there is another solution to non-random writes besides sequential-only disk, and that is implementing a log structured file for blocks on the disk.  Similar to NetApp’s Data ONTAP, the system would support random writes but actually write data on the disk drive sequentially.

This requires more smarts in the drive controller, but it’s nothing like what’s in SSDs today for wear leveling, and is a viable alternative.  The nice thing about a log structured file on disk is that there is no need to change any IO drivers: the disk drive continues to support random writes (from the server/storage system perspective) but writes sequentially on the platter.
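Here’s a minimal sketch of the idea, assuming nothing more than an append-only log and a block map; this is my illustration of a log structured block layout, not NetApp’s or any drive vendor’s actual design:

```python
# Minimal log-structured block layer: the host issues random writes by LBA,
# but the device appends every write sequentially and remembers where each
# LBA's latest copy lives.

class LogStructuredDisk:
    BLOCK_SIZE = 512

    def __init__(self):
        self.log = []          # the platter, written strictly append-only
        self.block_map = {}    # LBA -> index of its latest copy in the log

    def write(self, lba: int, data: bytes) -> None:
        assert len(data) == self.BLOCK_SIZE
        self.block_map[lba] = len(self.log)   # remap LBA to the log tail
        self.log.append(data)                 # sequential write on media

    def read(self, lba: int) -> bytes:
        return self.log[self.block_map[lba]]

disk = LogStructuredDisk()
disk.write(42, b"a" * 512)    # random LBAs from the host...
disk.write(7,  b"b" * 512)
disk.write(42, b"c" * 512)    # ...including an overwrite...
assert disk.read(42) == b"c" * 512   # ...all land sequentially in the log
```

Overwrites leave stale copies behind in the log, so a real implementation would also need garbage collection, which is where the extra controller smarts come in.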

I would suspect most drive vendors considering shingled writes are busily working on doing something similar to this and it wouldn’t surprise me to see the next generation disks support shingled writes using an onboard log structured file.

What this will do for read sequential IO is another question entirely.

Luckily, data that is read sequentially is often written sequentially and even with a log structured file layout on disk, will more than likely be positioned close together on a disk platter.

—-

Comments?


Sequential only disk?!

St Sadurni d'Anoia - Cordoniu Grid - Shoes on Wires by Shoes on Wires (from flickr) (cc)

Was at a Rocky Mountain Magnetics (IEEE) seminar a couple of weeks ago and a fellow from Hitachi GST was discussing recent advances in bit patterned media (BPM).  They had shown some success at 45nm by 45nm bit cells, which corresponded to about 380Gb/sqin, a little less than current technology is capable of without BPM.  The session covered some of the methodology used to create BPM, some of the magnetic characteristics and parameters BPM is capable of and some other aspects of the “challenges” inherent in moving to BPM.   I have written before on some of the challenges inherent in the coming hard drive capacity wall.
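As a sanity check, one bit per cell converts to areal density as follows. A quick sketch (cells need not be exactly square, so the square-cell estimate differs a bit from the quoted 380Gb/sqin; the 12nm case anticipates the 4Tb/sqin discussion further below):

```python
# Areal density from bit-cell dimensions, at one bit per cell.
NM_PER_INCH = 25.4e6   # nanometers per inch

def gb_per_sqin(cell_w_nm: float, cell_h_nm: float) -> float:
    bits = (NM_PER_INCH / cell_w_nm) * (NM_PER_INCH / cell_h_nm)
    return bits / 1e9

print(f"45nm x 45nm cells: ~{gb_per_sqin(45, 45):.0f} Gb/sqin")       # ~319
print(f"12nm x 12nm cells: ~{gb_per_sqin(12, 12) / 1000:.1f} Tb/sqin")  # ~4.5
```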

But one thing that caught my interest was that even at the 45x45nm spacing, they were forced to use shingled writes to modify the bit cells.  Apparently today’s read-write heads are bigger than 45x45nm, at least in the width dimension.  Thus, they were forced to write two tracks at a time and then go back and re-write the 2nd (and 3rd) track on the next pass, then the 3rd and 4th track, etc.  In this fashion they shingle-wrote the whole media sample.

This seems to imply that the only way BPM can be written with today’s head technology is sequentially.  What would this mean to the world of data processing?  There are already other media today that only support sequential access, i.e., tape and optical.  And yet one significant advantage of disk, at least in the past, was that it could support random writes.

Today’s disk, at least SATA high capacity disk, is already taking over from tape in first tier backup solutions. A sequential-only disk with even higher capacities would be a likely future revision of current SATA disks in this application.

However there is more to data processing than purely backup.

How would we use a sequential only disk device?

Perhaps this would be an opening to support a hybrid disk-like device, one that could support a limited amount of randomly written data while supporting a vast sequential address space.  This sounds like a new device architecture, which would take some time to support, but it’s not that different from database and file system structures that exist today.

For file systems, file data is written sequentially through a contiguous sequence of blocks.  File meta-data, e.g., directory entries with file name, date, location, etc., is written randomly.

Database systems are a bit more complex.  Yes, there are indexes similar to the file meta-data above, and tables are typically created sequentially.  But table data can also be updated randomly.  It might take some effort to change this to be purely sequentially updated, but that’s what would be needed to support such a sequential-only disk.

Time for hybrid disks to re-appear

A couple of years back, when SSDs were expensive and relatively unknown, there was a version of disk for PCs and laptops that combined a relatively small amount of SSD and a large disk in a single 3.5″ form factor.  This was known as a hybrid disk and had some of the performance of pure SSD with the economics of disk.

Now BPM combined with SSDs could be configured as a similar device, with the SSD portion supporting the randomly written data and the BPM disk supporting the sequentially written data.  But one difference between the old hybrid disk and this one is that the random data would only (maybe) exist in the SSD storage alone.
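A toy sketch of how such a device might route IO, assuming the simplest possible policy of sending in-order appends to the BPM region and everything else to the SSD portion (entirely my illustration, not any vendor’s design):

```python
# Hypothetical SSD+BPM hybrid: the BPM region accepts only appends, so any
# out-of-order (random) write is diverted to the SSD portion instead.
class HybridDisk:
    def __init__(self):
        self.bpm = {}            # sequential-only bulk region
        self.bpm_next_lba = 0    # next LBA the BPM region can append
        self.ssd = {}            # small random-write region

    def write(self, lba: int, data: bytes) -> None:
        if lba == self.bpm_next_lba:        # in-order: BPM can take it
            self.bpm[lba] = data
            self.bpm_next_lba += 1
        else:                               # random: only the SSD can
            self.ssd[lba] = data

    def read(self, lba: int) -> bytes:
        # The SSD holds the newest copy of any randomly re-written block.
        return self.ssd[lba] if lba in self.ssd else self.bpm[lba]
```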

However, with BPM it’s possible that a portion (maybe a zone or two) of the disk surface could be BPM with the rest using (non-BPM) current technology.  This could also be done on a surface-by-surface basis if mixing the two on the same platter was too complex.  Such a device would also support hybrid random and sequential write operations without the need for NAND flash.

In any event, this is all relatively new and depends on the relative sizes of write heads and BPM bit cells.  But in order to get to 4Tb/sqin or higher, technologists are talking BPM bit cells of 12nm by 12nm.  At that size, shingled writes with today’s head size would span 8 or 9 tracks at a time.  Even taking current write head dimensions down by a factor of 5 would still leave one with a dual track width head.
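The track-span arithmetic, assuming a write head somewhere around 100nm wide (my assumption; the talk only established that heads exceed 45nm):

```python
# How many 12nm BPM tracks would a single shingled write pass span?
head_width_nm = 100      # assumed write-head width (not from the talk)
track_pitch_nm = 12      # proposed BPM bit-cell pitch

print(f"today's head:   ~{head_width_nm / track_pitch_nm:.0f} tracks per pass")
print(f"head shrunk 5X: ~{head_width_nm / 5 / track_pitch_nm:.1f} tracks per pass")
```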

One technique to reduce write size is to use thermally assisted recording (TAR) heads. This involves using a focused laser to heat up a single bit cell for writing.  The laser beam can be focused much smaller than the write head and could be used to isolate the writing to a single track.  Of course TAR heads are yet another new technology that would then have to be integrated into the new disk package.  But maybe this is the way to get back to a truly randomly written disk device.

Who knows, this is all new technology and what’s published may not be a true representation of what’s available in the labs. But to get beyond today’s capacity limitations, there may be a new storage technology architecture on our horizon…