Disk capacity growing out-of-sight

A head assembly on a Seagate disk drive by Robert Scoble (cc) (from flickr)

Last week, Hitachi Global Storage Technologies (being acquired by Western Digital, with the deal closing in 4Q2011) and Seagate announced higher-capacity disk drives for desktop applications.

Most of us in the industry have become somewhat jaded with respect to new capacity offerings, but last week's announcements may give one pause.

Hitachi announced that they are now shipping 3.5″ platters holding over 1TB each, using 569Gb/sqin areal density technology.  In the past, full-height 3.5″ disk drives have shipped with 4-6 platters.  Given the platter capacity now available, 4-6TB drives are certainly feasible or just around the corner. Both Seagate and Samsung beat HGST to 1TB platter capacities, which they announced in May of this year and began shipping in drives in June.
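As a rough sanity check on that 1TB/platter claim, a back-of-the-envelope calculation gets you there from the announced areal density. The recording-band radii below are my assumptions (roughly a 0.75″ inner radius to a 1.8″ outer radius on a 3.5″-form-factor platter, recorded on both surfaces), not figures from the announcement:

```python
import math

AREAL_DENSITY_GB_PER_SQIN = 569     # gigabits per square inch (announced)
INNER_R, OUTER_R = 0.75, 1.8        # assumed recording band radii, inches

band_area = math.pi * (OUTER_R**2 - INNER_R**2)          # usable area per surface
gbits_per_surface = AREAL_DENSITY_GB_PER_SQIN * band_area
tb_per_platter = 2 * gbits_per_surface / 8 / 1000        # 2 surfaces, Gb -> GB -> TB
print(round(tb_per_platter, 2))                          # roughly 1.2 (TB/platter)
```

Under those assumptions the math lands comfortably above 1TB per platter, consistent with the announcement.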

Speaking of 4TB drives, Seagate announced a new 4TB desktop external disk drive.  I couldn't locate any information about the number of platters or the Gb/sqin of its technology, but 4 platters are certainly feasible, and as a result a 4TB disk drive is available today.

I don’t know about you, but a 4TB disk drive for a desktop seems about as much as I could ever use. But looking seriously at my own desktop environment, my storage CAGR (measured from fully compressed TAR backup files) is ~61% year over year.  At that rate, I will need a 4TB drive for backup purposes in about 7 years, and if I assume a 2X compression rate, a 4TB desktop drive will be needed in ~3.5 years (darn music, movies, photos, …).  And we are not heavy digital media consumers; others who shoot and edit their own video probably use orders of magnitude more storage.
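The projection is just compound growth. A minimal sketch of the calculation, where the ~0.14TB starting point is a hypothetical figure back-solved from the ~7-year estimate (the 61% CAGR is the measured number):

```python
import math

def years_until(target_tb, current_tb, cagr):
    """Years until a dataset growing at `cagr` per year reaches `target_tb`."""
    return math.log(target_tb / current_tb) / math.log(1 + cagr)

# Assumed current backup size of ~0.14TB, growing 61% per year:
print(round(years_until(4.0, 0.14, 0.61), 1))   # about 7 years to fill 4TB
```

At 61% growth the data roughly doubles every year and a half, which is why the drive-buying horizon moves in so quickly.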

Hard to believe, but given current trends it seems inevitable: a 4TB disk drive will become a necessity for us within the next 4 years.

—-

Comments?


Graphene Flash Memory

Model of graphene structure by CORE-Materials (cc) (from Flickr)

I have been thinking about writing a post on “Is Flash Dead?” for a while now, at least since talking with IBM Research a couple of weeks ago about the new memory technologies they have been working on.

But then a new Technology Review article came out discussing recent research on graphene-based flash memory.

Problems with NAND Flash

As we have discussed before, NAND flash memory has some serious limitations as it’s shrunk below 11nm or so. For instance, write endurance plummets, memory retention times are reduced and cell-to-cell interactions increase significantly.

These issues are not that much of a problem with today’s flash at 20nm or so. But to continue to follow Moore’s law and drop the price of NAND flash on a $/Gb basis, it will need to shrink below 16nm.  At that point or soon thereafter, current NAND flash technology will no longer be viable.

Other non-NAND based non-volatile memories

That’s why IBM and others are working on different types of non-volatile storage, such as PCM (phase change memory), MRAM (magnetic RAM), FeRAM (ferroelectric RAM) and others.  All of these have the potential to improve general reliability characteristics beyond where NAND flash is today, and where it will be tomorrow as chip geometries shrink even more.

IBM seems to be betting on MRAM or racetrack memory technology because it has near-DRAM performance, extremely low power draw, and can store far more data in the same amount of space. It reminds me of delay line memory, where bits were stored on a wire and read out as they passed across a read/write circuit. Only in the case of racetrack memory, the delay line is etched as an indentation in a silicon circuit, with the read/write head implemented at the bottom of the cleft.

Graphene as the solution

Then along comes graphene-based flash memory.  Graphene can apparently be used as a substitute for the storage layer in a flash memory cell.  According to the report, graphene stores data using less power and with better stability over time, both crucial problems for NAND flash memory as it’s shrunk below today’s geometries.  The research is being done at UCLA and is supported by Samsung, a significant manufacturer of NAND flash memory today.

Current demonstration chips are much larger than would be useful.  However, given graphene’s material characteristics, the researchers believe there should be no problem scaling it down below where NAND Flash would start exhibiting problems.  The next iteration of research will be to see if their scaling assumptions can hold when device geometry is shrunk.

The other problem is getting graphene, a new material, into current chip production.  The materials used in chip manufacturing lines are very tightly controlled, and building hybrid graphene devices to the same level of manufacturing tolerances and control will take some effort.

So don’t look for graphene flash memory to show up anytime soon. But given that 16nm chip geometries are only a couple of years out, and 11nm a couple of years beyond that, it wouldn’t surprise me to see graphene-based flash memory introduced in about 4 years or so.  Then again, I am no materials expert, so don’t hold me to this timeline.

—-

Comments?

GE Research announces new holographic media

HoloDisc1 (from http://www.gereports.com website)

When last we discussed holographic storage it was over the decline of the industry as a whole and what could be done about it.

Perhaps I posted too soon.  GE Research just announced a new media formulation offering the possibility of 500GB per single disk platter, broadening the holographic storage ecosystem.

GE also mentioned that there was no need for the holographic storage to be in the form of a disk.  InPhase Technologies also had talked of other form factors besides rotating media.

Do rectangular form factors make sense?

Some of these non-disk form factors remind me of the storage cards in Star Trek or the memory cards for old programmable electronic calculators.  But can they gain any traction?

The main reason a disk makes sense is that, with rotating media, the heads need only travel in one direction (in an arc with today’s magnetic disks, in a line with today’s CD and DVD devices) to access a track of data.  The rotation of the platter moves the rest of the data on a track underneath the read/write heads.

With a card or any other rectangular form factor, the heads and/or media would need to travel in at least two directions to access data. Of course, magnetic tape is a rectangular form factor; today tape heads move in one dimension (across the tape width) while the media flows linearly under the heads in the complementary direction.

So would some form of holographic optical tape make sense? Probably, but the multiple layers needed for holographic storage require some amount of depth to be dense enough.  Tape’s current volumetric density may be hard to exceed substantially with this multi-layer optical media.

On the other hand, cards could be inserted into a card reader to supply one of these two directions of data access. But this would be hard to do manually at the fine-grained track and/or data cell dimensions of today’s data densities.  Hardware to automatically move a card along a track of data can certainly be built; it just takes engineering.

Holographic disks

All of that suggests that disks probably make more sense.  The fact that, with GE’s new media, holographic disk drives could read/write today’s CDs, DVDs, and Blu-ray disks would make it much easier to gain market traction.

With its entry into holographic storage, GE is also possibly looking to use the technology for medical imaging.  At the densities being discussed, lots of X-rays, CAT scans, MRI scans, etc. could easily fit on a single piece of holographic media.

—-

As an industry, we have been talking about holographic storage since the early 90’s.  The promise of the technology has always been significantly more data per square inch than currently available technologies.  But it has been a difficult technology to get working properly.  There’s just a lot that has to be mastered to make it happen, e.g., media, heads, page digitizers, etc.

Nevertheless, holographic storage continues onward.

Comments?