Racetrack memory gets rolling

A recent MIT study showed how a new technology can be used to control and write magnetized bits in nano-structures using voltage alone. The technique also consumes much less power than writing with magnets or magnetic fields.

They envision a sort of nano-circuit, -wire or -racetrack with a series of transistor-like structures spaced at regular intervals above it. Bits would race around these nano-wires as a series of magnetized domains. The transistor-like devices would act as a sort of on-ramp for the bits as well as stop-lights/speed limits for the racetrack.

Magnet-based racetrack memory issues

The problem with using magnets to write bits on a nano-racetrack is that magnetism casts a wide shadow and can impact adjacent racetracks, somewhat like shingled writes (which we last discussed in Shingled magnetic recording disks). The other problem has been finding a way to (magnetically) control the speed of the racing bits so they can be isolated and read or written effectively.

Magneto-ionic racetrack memory solutions

But MIT researchers have discovered a way to use voltage to change the magnetic orientation of a bit on a racetrack. They also found a way, again using voltage, to precisely control the position of magnetic bits speeding around the track and to electronically isolate and select a single bit.

What they have created is a sort of transistor for magnetized domains, built from ion-rich materials. Voltage can be used to attract or repel those ions, and the ions in turn interact with the flowing magnetic domains to speed up or slow down their movement.

Thus, a transistor-like device can be set to attract (speed up) magnetized domains, slow them down or stop them altogether, and can also be used to change the magnetic orientation of a domain. The MIT researchers call these magneto-ionic devices.
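
To make the moving parts concrete, here is a minimal toy model of the racetrack described above: a shift register of 0/1 domains with gate devices that can pass, pin or rewrite the domain beneath them. This is purely illustrative (the class, method names and behavior are my own sketch, not MIT's device design), but it mimics the shift / stop / write behavior the researchers describe.

```python
from enum import Enum

class Gate(Enum):
    PASS = "pass"   # let the domains flow by
    STOP = "stop"   # pin the whole train of domains in place

class Racetrack:
    """Toy shift-register model of a racetrack with gate devices above it.

    Domains are just 0/1 orientations; the real magneto-ionic physics is far
    richer, this only mimics the shift / stop / write behavior described above.
    """
    def __init__(self, length, gate_positions):
        self.domains = [0] * length                         # magnetic orientations
        self.gates = {p: Gate.PASS for p in gate_positions}

    def shift(self):
        """Push every domain one step along the wire, unless a gate says STOP."""
        if any(g is Gate.STOP for g in self.gates.values()):
            return False                                    # domains are pinned
        self.domains = [self.domains[-1]] + self.domains[:-1]
        return True

    def write(self, pos, value):
        """Use the gate at `pos` to set the orientation of the domain beneath it."""
        self.domains[pos] = value

    def read(self, pos):
        """A (speculative) read at a gate position, per the question raised below."""
        return self.domains[pos]

# Write a bit at gate 0, shift it four places down the track, pin it, read it back.
track = Racetrack(length=16, gate_positions=[0, 4, 8, 12])
track.write(0, 1)
for _ in range(4):
    track.shift()
track.gates[4] = Gate.STOP
print(track.read(4))    # -> 1
```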

Racetrack memory redefined

So now we have a way to (electronically) seek to bit data on a racetrack, a way to precisely (electronically) select bits on the racetrack, and a way to precisely (electronically) write data on the racetrack. And presumably, with an appropriate (magnetic) read head, a way to read this data. As an added bonus, data once written on the racetrack apparently requires no additional power to stay magnetized.

So the transistor-like devices are a combination of write heads, motors and brakes for the racetrack memory. I'm not sure, but if they can write, slow down and speed up magnetic domains, why couldn't they read them as well? That way the transistor-like devices could serve as read heads too.

Why do they need more than one write head per track? It seems to me that one should suffice for a fairly long track, not unlike disk drives. I suppose more of them would make the track faster to write, but they would all have to operate in tandem, speeding up or stopping the racing bits on the track together and then starting them all back up together again. Maybe this way they can write a byte, a word or a whole chunk of data at the same time.
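
Some rough, back-of-the-envelope arithmetic on why more access devices per track might help (the track size and device counts below are made up for illustration): with evenly spaced devices and one-directional shifting, the worst-case number of shifts needed to bring a target domain under a device is roughly the spacing between devices, so more devices means shorter "seeks".

```python
def shifts_to_access(track_domains, access_devices):
    """Worst-case and average shift steps to bring a target domain under the
    nearest access device, assuming even spacing and one-way domain motion."""
    worst = track_domains // access_devices
    return worst, worst / 2

# Hypothetical 1,024-domain track served by 1, 8 or 64 access devices.
for n in (1, 8, 64):
    worst, avg = shifts_to_access(1024, n)
    print(f"{n:3d} devices: worst {worst:5d} shifts, average ~{avg:.0f}")
```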

In any event, it seems racetrack memory has taken a quantum leap forward with this new research out of MIT.

Racetrack memory futures

IBM has been talking about racetrack memory for some time now, and this might be the last hurdle to overcome in getting there (we last discussed it in A “few exabytes-a-day” from SKA post).

In addition, there don't appear to be any write-cycle limits, data-retention limits or whole-page erase issues with this type of technology. So as the underlying storage for a new sort of solid state drive (SSD), it has significant inherent advantages.

Not to mention that it's all based on nano-scale device sizes, which means it can pack a lot of bits into very little area or volume. So SSDs based on racetrack memory technology would be denser, faster and require less energy – what more could you want?

Image: Nürburgring 2012 by Juriën Minke

 

ReRAM to the rescue

I was at the Solid State Storage Symposium a couple of weeks ago where Robin Harris (StorageMojo) gave the keynote presentation. In his talk, Robin mentioned a new technology on the horizon, resistive random access memory (ReRAM or RRAM), which holds the promise of replacing DRAM, SRAM and NAND.

If so, ReRAM will join the technological race that already pits MRAM, graphene flash, PCM and racetrack memory against one another as follow-ons to NAND. But none of those contenders is aimed at replacing DRAM.

Problems with NAND

There are a few problems with NAND today, but the main one affecting future NAND technology is that as devices shrink they lose endurance. For instance, today's SLC NAND has an endurance of ~100K P/E (program/erase) cycles, MLC NAND can endure around 5,000 P/E cycles and eMLC sits somewhere in between. Newly emerging TLC (three bits/cell) NAND has even less endurance than MLC.

But that’s all at 30nm or larger.  The belief is that as NAND feature size shrinks below 20nm its endurance will get much worse, perhaps orders of magnitude worse.

While MLC may be OK for enterprise storage today, much less than 5,000 P/E cycles could become a problem and would require ever more sophistication to work around the limitation. That's why most enterprise-class, MLC NAND based storage uses specialized algorithms and NAND controller functionality to support storage reliability and durability.
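
As a rough worked example of why those P/E numbers matter: a drive's life is roughly its capacity times the P/E cycle rating, derated by write amplification, divided by the daily write volume. The SLC and MLC cycle counts come from above; the capacity, write amplification, daily write volume, eMLC and TLC figures below are my own illustrative placeholders.

```python
def drive_life_years(capacity_gb, pe_cycles, write_amp, host_writes_gb_per_day):
    """Rough endurance estimate: total host data the NAND can absorb over its
    life (capacity x P/E cycles, derated by write amplification), divided by
    the daily host write volume."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_host_writes_gb / host_writes_gb_per_day / 365

# Hypothetical 400 GB drive, write amplification of 3, 2 TB of host writes per day.
for name, pe in (("SLC", 100_000), ("eMLC", 30_000), ("MLC", 5_000), ("TLC", 1_000)):
    print(f"{name:4s}: ~{drive_life_years(400, pe, 3, 2_000):.1f} years")
```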

ReRAM solves NAND, DRAM and NVRAM problems

Enter ReRAM. It has the potential to be faster than PCM, has smaller features than MRAM (which means more bits per square inch) and uses lower voltage than racetrack memory or NAND. Another nice thing about ReRAM is that it seems readily scalable below 30nm feature geometries. Also, as it's a static memory, it doesn't have to be refreshed like DRAM and thus uses less power.

In addition, it appears that ReRAM is much more flexible than NAND or DRAM, in that it can be designed and/or tailored to support different memory requirements. Thus, one ReRAM design can be focused on standard DRAM applications while another can be targeted at mass storage or solid state drives (SSDs).

On the negative side there are still some problems with ReRAM, most notably a large “sneak” parasitic current (current that leaks through neighboring cells sharing the same row and column lines in the crossbar array) that disturbs adjacent bit cells and drains power. There are a few solutions to this problem, but none is yet completely satisfactory.
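
For what it's worth, here's a simplified first-order picture of what that sneak current does to a read in a crossbar array without per-cell selectors: the cell being sensed sits in parallel with parasitic paths that each run through three neighboring cells, so if enough neighbors are in their low-resistance state the measurement collapses toward the sneak resistance and the stored bit is masked. The resistance values and path counts below are invented for illustration only.

```python
def measured_resistance(r_target, r_lrs, sneak_paths):
    """Target cell in parallel with `sneak_paths` parasitic paths, each modeled
    as three neighboring low-resistance cells in series (the classic worst case
    for a selector-less crossbar)."""
    r_sneak = (3 * r_lrs) / sneak_paths              # sneak paths in parallel
    return (r_target * r_sneak) / (r_target + r_sneak)

R_LRS, R_HRS = 10e3, 1e6   # illustrative 10 kOhm / 1 MOhm resistance states

# Reading a high-resistance cell while its neighbors sit in low resistance:
for paths in (1, 10, 100):
    r = measured_resistance(R_HRS, R_LRS, paths)
    print(f"{paths:3d} sneak paths: measured ~{r/1e3:6.1f} kOhm (cell alone is 1000 kOhm)")
```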

But it’s a ways out, isn’t it?

No, it's not. The BBC and Tech-On reported that Panasonic will start sampling devices soon and plans to reach volume manufacturing next year. Elpida-Sharp and HP-Hynix are also at work on ReRAM (or memristor) devices and expect to ship sometime in 2013. But for the moment it appears that Panasonic is ahead of the pack.

At first, these devices will likely emerge in low-power applications, but as vendors ramp up development and mass production it's unclear where they will ultimately end up.

The allure of ReRAM is significant in that it holds out the promise of replacing both the RAM and the NAND used in consumer devices and IT equipment with a single technology. When you consider that the combined current market for DRAM and NAND is over $50B, people start to notice.

~~~~

Whether ReRAM will meet all of its objectives is still TBD. But we seldom see any one technology with this much potential. The one remaining question is why everybody else, like Samsung, Toshiba and Intel-Micron, isn't going after ReRAM as well.

I have to thank StorageMojo and the Solid State Storage Symposium team for bringing ReRAM to my attention.

[Update] @storagezilla (Mark Twomey) said that “… Micron’s aquisition of Elpida gives them a play there.”

I wasn't aware of that, but yes, they are definitely in the hunt now.

Comments?

Image: Memristor by Luke Kilpatrick

 

How has IBM research changed?

IBM Neuromorphic Chip (from Wired story)

What do Watson, neuromorphic chips and racetrack memory have in common? They have all emerged out of IBM research labs.

I have been wondering for some time now how a company known for its cutting-edge research but lack of product breakthroughs has transformed itself into an innovation machine.

There has been a sea change in research at IBM that is behind the recent productization of its technology.

Talking over the past couple of days with various IBMers at STG's Smarter Computing Forum, I have formulated a preliminary hypothesis.

At first I heard that there was a change in the way research is reviewed for product potential. Nowadays, it almost takes a business case for a research project to be approved and funded, and the business case needs to contain a plan for how the project will eventually reach profitability.

In the past it was often said that IBM invented a lot of technology but productized only a little of it. Much of its technology would emerge in other people's products, and IBM would not receive anything for its efforts (other than some belated recognition for the research contribution).

Nowadays, it's more likely that research not productized by IBM is at least licensed out after IBM has patented the crucial technologies that underpin the advance. And if the research has anything to do with IT, it's just as likely the project will end up as a product.

One executive at STG sees three phases to IBM research spanning the last 50 years or so.

Phase I: The ivory tower

IBM research during the ivory tower era looked a lot like that of a research university, but without the tenure of true professorships. Much of the research of this era was in materials science and pure mathematics.

I suppose one example from this period was Mandelbrot and fractals. The work probably had plenty of applications, but few of them ended up in IBM products; mostly it advanced the theory and practice of pure mathematics/systems science.

Such research had little to do with the problems of IT or IBM's customers. The fact that it created pretty pictures and a way of seeing nature in a different light was an advance for mankind, but it didn't have much, if any, impact on IBM's bottom line.

Phase II: Joint project teams

In IBM research's phase II, the decision process on which research to move forward involved not just IBM research but also people from the product divisions. At least now there could be a discussion across IBM's various divisions about how a technology could enhance customer outcomes. I am certain profitability wasn't often discussed, but at least it was no longer purposefully ignored.

I suppose over time these discussions became more grounded in facts and business cases rather than just a belief in the value of research for research's sake. Technology roadmaps and projects were now evaluated on how well they could impact customer outcomes and how the technology could enable new products and solutions to come to market.

Phase III: Researchers and product people intermingle

The final step in IBM's transformation of research involved the human element. People started moving around.

Researchers were assigned to the field and to product groups, and product people were brought into the research organization. By doing this, ideas could cross-fertilize, applications could be envisioned, and the finishing touches needed by a new technology could be identified, funded and implemented. This probably led to the most productive transition of researchers into product developers.

On the flip side, when researchers returned from their multi-year product/field assignments they brought back a newfound appreciation of the problems encountered in the real world. That, combined with their in-depth understanding of where technology could go, helped show the path that could take research projects into new, more fruitful (at least to IBM customers) arenas. This movement of people provided the final piece in grounding research in areas that could solve customer problems.

In the end, many research projects at IBM may fail, but those that succeed have the potential to change IT as we know it.

I heard that there are 700 to 800 projects in IBM research today. If any of them have the potential we see in the products shown so far, like Watson in healthcare and neuromorphic chips, exciting times are ahead.

Graphene Flash Memory

Model of graphene structure by CORE-Materials (cc) (from Flickr)

I have been thinking about writing a post on “Is Flash Dead?” for a while now, at least since talking with IBM research a couple of weeks ago about the new memory technologies they have been working on.

But then a new Technology Review article came out discussing recent research on graphene flash memory.

Problems with NAND Flash

As we have discussed before, NAND flash memory has some serious limitations as it’s shrunk below 11nm or so. For instance, write endurance plummets, memory retention times are reduced and cell-to-cell interactions increase significantly.

These issues are not that much of a problem with today’s flash at 20nm or so. But to continue to follow Moore’s law and drop the price of NAND flash on a $/Gb basis, it will need to shrink below 16nm.  At that point or soon thereafter, current NAND flash technology will no longer be viable.
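
As a rough, first-order illustration of why the shrink matters for $/Gb (this ignores cell architecture changes, 3D stacking and yield, and just assumes bits per unit area scale as 1/F²):

```python
def relative_density(feature_nm, baseline_nm=20):
    """First-order planar scaling: bits per unit area go roughly as 1/F^2."""
    return (baseline_nm / feature_nm) ** 2

for f in (20, 16, 11):
    print(f"{f:2d} nm: ~{relative_density(f):.1f}x the areal density of 20 nm NAND")
```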

Other non-NAND based non-volatile memories

That's why IBM and others are working on different types of non-volatile storage, such as PCM (phase change memory), MRAM (magnetic RAM) and FeRAM (ferroelectric RAM). All of these have the potential to improve general reliability characteristics beyond where NAND flash is today, and where it will be tomorrow as chip geometries shrink even more.

IBM seems to be betting on MRAM or racetrack memory technology because it offers near-DRAM performance, extremely low power draw and the ability to store far more data in the same amount of space. It sort of reminds me of delay-line memory, where bits were stored on a wire and read out as they passed across a read/write circuit. Only in the case of racetrack memory, the delay line is etched into an indentation in a silicon circuit with the read/write head implemented at the bottom of the cleft.

Graphene as the solution

Then along comes graphene-based flash memory. Graphene can apparently be used as a substitute for the storage layer in a flash memory cell. According to the report, graphene stores data using less power and with better stability over time, both of which are crucial problems for NAND flash memory as it's shrunk below today's geometries. The research is being done at UCLA and is supported by Samsung, a significant manufacturer of NAND flash memory today.

Current demonstration chips are much larger than would be useful.  However, given graphene’s material characteristics, the researchers believe there should be no problem scaling it down below where NAND Flash would start exhibiting problems.  The next iteration of research will be to see if their scaling assumptions can hold when device geometry is shrunk.

The other problem is getting graphene, a new material, into chip production. The materials used in today's chip manufacturing lines are very tightly controlled, and building hybrid graphene devices to the same level of manufacturing tolerances and control will take some effort.

So don't look for graphene flash memory to show up anytime soon. But given that 16nm chip geometries are only a couple of years out, and 11nm a couple of years beyond that, it wouldn't surprise me to see graphene-based flash memory introduced in about four years or so. Then again, I am no materials expert, so don't hold me to this timeline.

 

—-

Comments?