Research reveals ~liquid nitrogen temperature molecular magnets with 100X denser storage


I must be on a materials science binge these days. I read another article this week in Phys.org, “Major leap towards data storage at the molecular level”, reporting on a Nature article, “Molecular magnetic hysteresis at 60K”, where researchers from the University of Manchester, led by Dr David Mills and Dr Nicholas Chilton from the School of Chemistry, have come up with a new material that provides molecular-level magnetics at almost liquid nitrogen temperatures.

Previously, molecular magnets only operated at 4 to 14K (kelvin), based on research done over the last 25 years or so, but this new research shows similar effects operating at ~60K, close to liquid nitrogen temperatures. Nitrogen freezes at 63K and boils at ~77K and, I would guess, is liquid somewhere in between those temperatures.

What's the new material?

The new material, “hexa-tert-butyldysprosocenium complex—[Dy(Cpttt)2][B(C6F5)4], with Cpttt = {C5H2tBu3-1,2,4} and tBu = C(CH3)3“, dysprosocenium for short, was designed (?) by the researchers at Manchester and was shown to exhibit magnetism at the molecular level at 60K.

The storage effect is hysteresis, which is a material's ability to remember the last (magnetic/electrical/?) field it was exposed to, and the magnetic field is measured in oersteds.

The researchers claim the new material provides magnetic hysteresis at a field sweep rate of 22 oersteds per second. I take this to mean a molecule of the material can be magnetized as the applied field is swept at that rate, and then retains its magnetization over time.

Reports of disk's death have been greatly exaggerated

While there seems to be no end in sight for the densities of flash storage these days with 3D NAND (see my 3D NAND, how high can it go post or listen to our GBoS FMS2017 wrap-up with Jim Handy podcast), the disk industry lives on.

Disk industry researchers have been investigating HAMR ([laser] heat assisted magnetic recording; see my Disk density hits new record … post) for some time now to increase disk storage density. But to my knowledge HAMR has not come out in any generally available disk device on the market yet. HAMR was supposed to provide the next big increase in disk storage densities.

Maybe they should be looking at CAMMR, or cold assisted magnetic molecular recording (heard it here, 1st).

According to Dr Chilton, using the new material at 60K in a disk device would increase capacity by 100X. Western Digital just announced a 20TB MyBook Duo disk system for desktop storage and backup. With this new material, at 100X current densities, we could have a 2PB MyBook Duo storage system on your desktop.
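Just to sanity check that scaling (the 100X factor is Dr Chilton's claim; the rest is simple arithmetic on the 20TB MyBook Duo figure above):

```python
# Back-of-envelope: scale today's 20TB MyBook Duo by the claimed 100X density gain
current_capacity_tb = 20      # WD MyBook Duo capacity (TB)
density_gain = 100            # Dr Chilton's claimed density improvement at 60K
future_capacity_tb = current_capacity_tb * density_gain
print(f"{future_capacity_tb} TB = {future_capacity_tb / 1000} PB")   # -> 2000 TB = 2.0 PB
```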

That should keep my ever-increasing video-photo-music library in fine shape and everything else backed up for a little while longer.

Comments?

Photo Credit(s): Molecular magnetic hysteresis at 60K, Nature article

 

Nantero emerges from stealth with CNT based NRAM

Nantero just came out of stealth this week and bagged $31.5M in a Series E funding round. Apparently, Nantero has been developing a new form of non-volatile RAM (NRAM), based on carbon nanotubes (CNTs), which seems to work like an old T-bar switch, only at the nanometer scale and using CNTs for the wiring.

They were founded in 2001, and are finally ready to emerge from stealth. Nantero already has 175+ issued patents, with another 200 patents pending. The NRAM is currently in production at 7 CMOS fabs and they are sampling 4Mb NRAM chips to a number of customers.

NRAM vs. NAND

Performance of the NRAM is on a par with DRAM (~100 times faster than NAND), it can be configured in 3D and it supports MLC (multi-bits per cell) configurations. NRAM also supports orders of magnitude more accesses (I assume they mean writes) and retains data much longer than NAND.

The only question is capacity: with shipping NAND on the order of 200Gb, the 4Mb NRAM is roughly 2**16X (about 50,000X) behind NAND. Nantero claims that their CNT-NRAM CMOS process can be scaled down to <5nm, which is one or two generations below the current NAND scale factor. Assuming they can pack as many bits in the same area, they should be able to compete well with NAND. They claim that their NRAM technology is capable of terabit capacities (assumed to be at the 5nm node).
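Here's a rough sketch of that capacity gap, using the 200Gb shipping NAND and 4Mb NRAM sample figures mentioned above (treating Gb and Mb as binary units is my assumption):

```python
import math

nand_bits = 200 * 1024**3    # ~200Gb shipping NAND die (Gb taken as 2**30 bits)
nram_bits = 4 * 1024**2      # 4Mb NRAM sample chips (Mb taken as 2**20 bits)
gap = nand_bits / nram_bits
print(f"NRAM trails NAND by ~{gap:,.0f}X, i.e. ~2**{math.log2(gap):.0f}")
# -> NRAM trails NAND by ~51,200X, i.e. ~2**16
```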

The other nice thing is that Nantero says the new NRAM uses less power than DRAM, which means that in addition to higher capacities and DRAM-like access times, it should also reduce power consumption.

It seems a natural for mobile applications. The press release claims it has already been tested in space and there are customers looking at the technology for automobiles. The company claims the total addressable market is ~$170B USD, which probably includes DRAM and NAND together.

CNT in CMOS chips?

Key to Nantero's technology was incorporating CNTs into CMOS processes, so that chips can be manufactured on current fab lines. It's probably just the start of CNT use in electronic chips, but it's one that could potentially pay for the technology development many times over. CNTs have a number of characteristics that would be beneficial to other electronic circuitry beyond NRAM.

How quickly they can ramp the capacity up from 4Mb seems to be a significant factor, which is no doubt why they went out for Series E funding.

So we have another new non-volatile memory technology. On the other hand, these guys seem to be a long way past the lab, with something that works today and the potential to go all the way down to 5nm.

It should be interesting, as the other NV technologies start to emerge, to see which one generates sufficient market traction to succeed in the long run. Especially as NAND doesn't seem to be slowing down much.

Comments?

Picture Credits: Wikimedia.com

Acoustic Assisted Magnetic Recording is invented

Read an article today about Acoustic Assisted Magnetic Recording (See Oregon State University article Researchers invent “acoustic-assisted magnetic recording”).

Just like heat assisted magnetic recording (HAMR, see our Disk density hits new record… post) which uses laser beams, acoustic assisted magnetic recording (AAMR) uses ultrasound to heat up a spot on media to help it be magnetized.

Why heat up media?

The problem with the extremely dense storage coming out of the labs these days is that the bits are becoming so small that it's increasingly hard to ensure that nearby bits aren't disturbed when a bit is modified. This has led to an interest in shingled writes, which we discussed in our Sequential only disks and Shingled magnetic recorded disks posts.

But another possibility is to add heat to the process to isolate a bit on magnetic media. In this way a heated bit will be changed while its cooler neighbors are left alone.

I was at the dental hygienist the other day and she was using a new probe which used ultrasound to break up plaque. In this case, it was also spewing water to cool the tip. In any event, it appears as if ultrasound can be used to heat things up, break stuff apart and image soft tissue; pretty versatile technology.

Is AAMR better than HAMR?

The nice thing about AAMR is that it can potentially be made with all solid state electronics and, as such, wouldn't require any optical componentry like HAMR does. In the race against HAMR this could be a crucial edge, as it could be much easier to fabricate for use in tomorrow's disk drives.

I foresee some possible problems with the technology, such as the size of the heated spot and whether the ultrasound emitter will need any cooling (like the dental probe).

But it all seems like a reasonable and logical extension of the HAMR technologies being developed in labs today. Also, AAMR could quite probably make use of the same thermally activated media developed for HAMR applications. Not having to come up with a new media formulation should help it get out of the lab even quicker. That is, if its other problems can be worked out.

In the post on HAMR, it had achieved 1Tb/sqin in the lab, the new media density high watermark. As far as I could tell from the information published on AAMR, no new density records were discussed. However, if AAMR is able to achieve anything close to HAMR densities, we are in for larger capacity disk drives for another decade or so.

Comments?

Photo Credit: AAMR head assembly by Oregon State University

 

Super long term archive

Read an article this past week in Scientific American about a new fused silica glass storage device from Hitachi Ltd., announced last September. The new media is recorded with lasers burning dots, which represent binary ones, or leaving spaces, which represent binary zeros, in the media.

As can be seen in the photos above, the data can readily be read by microscope which makes it pretty easy for some future civilization to read the binary data. However, knowing how to decode the binary data into pictures, documents and text is another matter entirely.

We have discussed the format problem before in our Today's data and the 1000 year archive as well as Digital Rosetta stone vs. 3D barcodes posts. And this new technology would compete with the currently available M-disc long-term archivable DVD technology from Millenniata, which we have also talked about before.

Semi-perpetual storage archive!!

Hitachi tested the new fused silica glass storage media at 1000°C for several hours, which they say indicates it can survive several hundred million years without degradation. At this level it can provide a 300 million year storage archive (M-disc only claims 1,000 years). They are calling their new storage device "semi-perpetual" storage. If hundreds of millions of years is semi-perpetual, I gotta wonder what perpetual storage might look like.

At CD recording density, with higher densities possible

They were able to achieve CD levels of recording density with a four-layer approach, which amounted to about 40Mb/sqin. DVD technology is on the order of 330Mb/sqin and Blu-ray is ~15Gb/sqin, but neither of those technologies claims even a million-year lifetime. Also, there is the possibility of even more layers, so the 40Mb/sqin could potentially double or quadruple.

But data formats change every few years nowadays

My problem with all this is the data format issue: we will need something like a digital Rosetta stone for every data format ever conceived in order to make this a practical digital storage device.

Alternatively, we could plan to use it more like an analogue storage device, with black and white or greyscale photographs of the information to be retained imprinted in the media. That way, a simple microscope could be used to see the photo image. I suppose color photographs could be implemented using a different plate per color, similar to four-color magazine production processing. Text could be handled by just taking a black and white photo of a document and printing it in the media.

According to a post I read about the size of the collection at the Library of Congress, they currently have about 3PB of digital data in their collections, which in 650MB CD chunks would be about 4.6M CDs. So if there is an intent to copy this data onto the new semi-perpetual storage media for the year 300,002,012, we probably ought to start now.
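The CD count is just the 3PB figure divided into 650MB chunks; a quick check (decimal units assumed):

```python
loc_digital_bytes = 3e15     # ~3PB of digital data at the Library of Congress
cd_bytes = 650e6             # 650MB per CD
cds_needed = loc_digital_bytes / cd_bytes
print(f"~{cds_needed / 1e6:.1f} million CDs")   # -> ~4.6 million CDs
```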

Another tidbit to add to the discussion: at last month's Hitachi Data Systems Influencers Summit, HDS was showing off some of their recent lab work and had an optical jukebox on display that they claimed would be used for long-term archive. I get the feeling that maybe they plan to commercialize this technology soon. Stay tuned for more.

 

~~~~

Image: Hitachi.com website (c) 2012 Hitachi, Ltd.,

Disk density hits new record, 1Tb/sqin with HAMR

Seagate has achieved 1Tb/sqin recording (source: http://www.gizmag.com)

Well, I thought 36TB on my Mac was going to be enough. Then along comes Seagate with this week's announcement of reaching 1Tb/sqin (1 trillion bits per square inch) using their new HAMR (heat assisted magnetic recording) technology.

Current LFF drive technology runs at about 620Gb/sqin, providing a 3.5″ drive capacity of around 3TB; 2.5″ drives run at about 500Gb/sqin, supporting ~750GB. The new 1Tb/sqin drives will easily double these capacities.

But the exciting part is that with the new HAMR or TAR (thermally assisted recording) heads and media, the long term potential is even brighter. This new technology should be capable of 5 to 10Tb/sqin, which means 3.5″ drives of 30 to 60TB and 2.5″ drives of 10 to 20TB.
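For a feel for where those capacities come from, here's a straight areal-density scaling from today's 620Gb/sqin, 3TB LFF baseline quoted above. This ignores any change in platter count, so it lands a bit below the 30 to 60TB figures, which presumably assume a few extra platters as well:

```python
# Scale 3.5" drive capacity linearly with areal density (platter count held constant)
baseline_density_gb = 620     # Gb/sqin, current LFF technology
baseline_capacity_tb = 3      # TB for a 3.5" drive at that density

for hamr_density_gb in (1000, 5000, 10000):   # 1, 5 and 10 Tb/sqin
    capacity_tb = baseline_capacity_tb * hamr_density_gb / baseline_density_gb
    print(f"{hamr_density_gb / 1000:.0f}Tb/sqin -> ~{capacity_tb:.0f}TB per 3.5\" drive")
# -> ~5TB, ~24TB and ~48TB respectively
```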

HAMR explained

HAMR uses both lasers and magnetic heads to record data in even smaller spaces than current PMR (perpendicular magnetic recording), or vertical recording, heads do today. You may recall that PMR was introduced in 2006 and now, just 6 years later, we are already seeing the next generation of head and media technologies in labs.

Denser disks require smaller bits, and with smaller bits disk technology runs into three problems: readability, writeability and stability, AKA the magnetic recording trilemma. Smaller bits require better stability, but better stability makes it much harder to write or change a bit's magnetic orientation. Enter the laser in HAMR: with laser heating, the bits become much more malleable. These warmed bits can be written more easily, bypassing the stability-writeability problem, at least for now.

However, just as in any big technology transition there are other competing ideas with the potential to win out.  One possibility we have discussed previously is shingled writes using bit patterned media (see my Sequential only disk post) but this requires a rethinking/re-architecting of disk storage.  As such, at best it’s an offshoot of today’s disk technology and at worst, it’s a slight detour on the overall technology roadmap.

Of course PMR is not going away any time soon. Other vendors (and probably Seagate) will continue to push PMR technology as far as it can go. After all, it's a proven technology, inside millions of spinning disks today. But, according to Seagate, PMR can achieve 1Tb/sqin but go no further.

So when can I get HAMR disks

There was no mention in the press release as to when HAMR disks would be made available to the general public, but typically the drive industry has been doubling densities every 18 to 24 months. Assuming they continue this trend across a head/media technology transition like HAMR, we should have those 6TB hard disk drives sometime around 2014, if not sooner.

HAMR technology will likely make its first appearance in 7200rpm drives. Bigger capacities always seem to come out first in slower performing disks (see my Disk trends, revisited post).

HAMR performance wasn't discussed in the Seagate press release, but with 2Mb per linear track inch and 15Krpm disk drives, the read transfer rate would seem to need to be on the order of 700 to 850MB/sec at the OD (outer diameter).
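Here's the back-of-envelope version of that estimate. The 2Mb/inch linear density and 15Krpm spindle speed are from the paragraph above; the outer-track diameter is my guess, and the result moves up or down with it:

```python
import math

linear_density_bits_per_in = 2e6   # 2Mb per linear track inch (from above)
rpm = 15000                        # 15Krpm spindle speed
od_diameter_in = 3.7               # assumed outer track diameter for a 3.5" platter

bits_per_track = linear_density_bits_per_in * math.pi * od_diameter_in
bytes_per_sec = bits_per_track * (rpm / 60) / 8
print(f"~{bytes_per_sec / 1e6:.0f} MB/sec at the OD")   # -> ~730 MB/sec
```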

How quickly HAMR heads can write data is another matter. The fact that the laser heats the media before the magnetic head can write it seems to call for a magnetic-plus-optical head contraption where the laser is in front of the magnetics (see picture above).

How long it takes to heat the media to enable magnetization is one critical question in write performance. But this could potentially be mitigated by the strength of the laser pulse and how far the laser has to be in front of the recording head.

With all this talk of writing, there hasn’t been lots of discussion on read heads. I guess everyone’s assuming the current PMR read heads will do the trick, with a significant speed up of course, to handle the higher linear densities.

What’s next?

As for what comes after HAMR, check out another post I did on using lasers to magnetize (write) data (see Magnetic storage using lasers alone). The advantage of this new "laser-only" technology was a significant increase in transfer speeds. It seems to me that HAMR could easily be an intermediate step on the path to laser-only recording, having both laser optics and magnetic recording/reading heads in one assembly.

~~~~

Let's see: 6TB in 2014, 12TB in 2016 and 24TB in 2018. Maybe I won't need that WD Thunderbolt drive string as quickly as I thought.

Comments?

 

 

NSA’s huge (YBs) new data center to turn on in 2013

 

National Security Agency seal

Ran across a story in Wired about the new NSA Utah data center today which is scheduled to be operational in September of 2013.

This new data center is intended to house copies of all communications intercepted by the NSA. We have talked about this data center before and how it's going to store YBs of data (see my Yottabytes by 2015?! post).

One major problem with having a YB of communications intercepts is that you need to have multiple copies of it for protection in case of human or technical error.

Apparently, NSA has a secondary data center in San Antonio to back up its Utah facility. That's one copy. We also wrote another post on protecting and indexing all this data (see my Protecting the Yottabyte Archive post).

NSA data centers

The Utah facility has enough fuel onsite to power and cool the data center for 3 days.  They have a special power station to supply the 65MW of power needed.   They have two side by side raised floor halls for servers, storage and switches, each with 25K square feet of floor space. That doesn’t include another 900K square feet of technical support and office space to secure and manage the data center.

In order to help collect and temporarily store all this information, apparently the agency has been undergoing a data center building boom, renovating and expanding its data centers throughout the states. The article discusses some of the other NSA information collection points/data centers, in Texas, Colorado, Georgia, Hawaii, Tennessee and, of course, Maryland.

New NSA super computers

In addition to the communication intercept storage, the article also talks about a special purpose decrypting supercomputer that NSA has developed over the past decade, which will also be housed in the Utah data center. The NSA seems to have created a machine that dwarfs today's best Cray XT5 supercomputer clusters, which operate at 1.75 petaflops.

I suppose what with all the encrypted traffic now being generated, NSA would need some way to decrypt this information in order to understand it.  I was under the impression that they were interested in the non-encrypted communications, but I guess NSA is even more interested in any encrypted traffic.

Decrypting old data

With all this data being stored, the thought is that data now encrypted with unbreakable AES-128, -192 or -256 encryption will eventually become decipherable. At that time, foreign government and other secret communications will all be readable.

By storing these secret communications now, they can scan the treasure trove for patterns that eventually occur and, once found, such patterns will ultimately lead to decrypting the data. Now we know why they need YBs of storage.

So NSA will at least know what was going on in the past.  However, how soon they can move that up to do real time decryption of communications today is another question.  But knowing the past, may help in understanding what’s going on today.

~~~~

So be careful what you say today even if it’s encrypted.  Someone (NSA and its peers around the world) will probably be listening in and someday soon, will understand every word that’s been said.

Comments?

12 atoms per bit vs 35 bits per electron

Six atom pairs in a row, with blue coloration for interstitial space and yellow for the external facets of the atoms (from the Technology Review article)

Read a story today in Technology Review on Magnetic Memory Miniaturized to Just 12 Atoms by a team at  IBM Research that created a (spin) magnetic “storage device” that used 12 iron atoms  to record a single bit (near absolute zero and just for a few hours).  The article said it was about 100X  denser than the previous magnetic storage record.

Holographic storage beats that

Wikipedia’s (soon to go dark for 24hrs) article on Memory Storage Density mentioned research at Stanford that in 2009 created an electronic quantum holographic device that stored 35 bits/electron using a sheet of copper atoms to record the letters S and U.

The Wikipedia article went on to equate 35 bits/electron to ~3 Exabytes [10**18 bytes]/in**2. (Although how Wikipedia was able to convert from bits/electron to EB/in**2 I don't know, I'll accept it as a given.)

Now an iron atom has 26 electrons and copper has 29 electrons.  If 35 bits/electron is 3 EB/in**2 (or ~30Eb/in**2), then 1 bit per 12 iron atoms (or 12*26=312 electrons) should be 0.0032bits/electron or ~275TB/in**2 (or ~2.8Pb/in**2).   Not quite to the scale of the holographic device but interesting nonetheless.
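The arithmetic behind that conversion, using the electron counts and Wikipedia's EB/in**2 figure quoted above:

```python
# Convert 12 iron atoms per bit into an equivalent areal density estimate
electrons_per_fe_atom = 26
electrons_per_bit = 12 * electrons_per_fe_atom        # 312 electrons per bit
bits_per_electron = 1 / electrons_per_bit             # ~0.0032 bits/electron

holo_bits_per_electron = 35     # Stanford holographic result (from above)
holo_density_eb_per_in2 = 3     # ~3 EB/in**2 per Wikipedia

# 1 EB = 1e6 TB, so scale the holographic density by the bits/electron ratio
spin_density_tb_per_in2 = (holo_density_eb_per_in2 * 1e6 *
                           bits_per_electron / holo_bits_per_electron)
print(f"{bits_per_electron:.4f} bits/electron -> ~{spin_density_tb_per_in2:.0f} TB/in**2")
# -> 0.0032 bits/electron -> ~275 TB/in**2 (~2.8 Pb/in**2)
```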

What can that do for my desktop?

Given that today’s recording head/media has demonstrated ~3.3Tb/in**2 (see our Disk drive density multiplying by 6X post), the 12 atoms per bit  is a significant advance for (spin) magnetic storage.

With today’s disk industry shipping 1TB/disk platters using ~0.6Tb/in**2 (see our Disk capacity growing out of sight post), these technologies, if implemented in a disk form factor, could store from 4.6PB to 50EB in a 3.5″ form factor storage device.
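And the platter scaling behind the 4.6PB and 50EB figures, assuming capacity scales linearly with areal density from today's 1TB platter at ~0.6Tb/in**2:

```python
# Scale a 1TB platter at ~0.6Tb/in**2 up to the two densities discussed above
platter_tb = 1.0                # today's 1TB platter
today_density_tbits = 0.6       # Tb/in**2

for name, density_tbits in (("12 atoms/bit (~2.8Pb/in**2)", 2.8e3),
                            ("35 bits/electron (~30Eb/in**2)", 30e6)):
    scaled_tb = platter_tb * density_tbits / today_density_tbits
    print(f"{name}: ~{scaled_tb:,.0f} TB per platter")
# -> ~4,667 TB (~4.7PB) and ~50,000,000 TB (~50EB), roughly the figures above
```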

So there appears to be a limit to (spin) magnetic storage, and it's about 11,000X lower than the holographic limit. Once again holographic storage proves it can store significantly more data than magnetic storage, if only it could be commercialized. (Probably a subject to cover in a future post.)

~~~~

I don't know about you, but a 4.6PB drive is probably more than enough storage for my lifetime and then some. But then again, those new 4K high definition videos may take up a lot more space than my (low definition) DVD collection.

Comments?

 


Tape still alive, well and growing at Spectra Logic

T-Finity library at SpectraLogic's test facility (c) 2011 Silverton Consulting, All Rights Reserved

Today I met with Spectra Logic execs and some of their Media and Entertainment (M&E) customers, and toured their manufacturing, test labs and briefing center. The tour was a blast, and the customers, Kyle Knack from National Geographic (Nat Geo) Global Media, Toni Perez from Medcom (a Panama based entertainment company) and Lee Coleman from Entertainment Tonight (ET), all talked about their use of T-950 Spectra Logic tape libraries in their media ingest, editing and production processes.

Mr. Coleman from ET spoke almost reverently about their T-950 and how it has enabled ET to access over 30 years of video interviews, movie segments and other media they can now use to put together clips on just about any entertainment subject imaginable.

He  talked specifically about the obit they did for Michael Jackson and how they were able to grab footage from an interview they did years ago and splice it together with more recent media to show a more complete story.  He also showed a piece on some early Eddie Murphy film footage and interviews they had done at the time which they used in a recent segment about his new movie.

All this was made possible by moving to digital file formats and placing digital media in their T-950 tape libraries.

Spectra Logic T-950 (I think) with TeraPack loaded in robot (c) 2011 Silverton Consulting, All Rights Reserved

Mr. Knack from Nat Geo Media said every bit of media they get now automatically goes into the library archive and becomes the "original copy" of the media, used in case other copies are corrupted or lost. Nat Geo started out only putting important media in the library but found it cost so much less to store it in the tape archive that they decided it made more sense to move all media to the tape library.

Typically they keep two copies in their tape library and important media is also copied to tape and shipped offsite (3 copies for this data).  They have a 4-frame T-950 with around 4000 slots and 14 drives (combination of LTO-4 and -5).  They use FC and FCoE storage for their primary storage and depend on 1000s of SATA drives for primary storage access.

He said they only use SSDs for some metadata support for their web site. He found that SATA drives can handle their big-block sequential workload and provide consistent throughput and, especially important to M&E companies, consistent latency.

3D printer at Spectra Logic (for mechanical parts fabrication) (c) 2011 Silverton Consulting, All Rights Reserved

Mr. Perez from Medcom had much the same story. They are in the process of moving off a proprietary video tape format (Sony Betacam) to LTO media and digital files. The process is still ongoing, although they are more than halfway there for current production.

They still have a lot of old media in Betacam format, which will take them years to convert to digital files, but they are at least starting this activity. He said a recent move from one site to another revealed that many of the Betacam tapes were no longer readable. Digital files on LTO tape should solve that problem for them when they finally get there.

Matt Starr, Spectra Logic CTO, talked about the history of tape libraries at Spectra Logic, which was founded in 1998 and has been laser focused on tape data protection and tape libraries.

I find it pleasantly surprising that a company today can just supply tape libraries with software and make an ongoing concern of it. Spectra Logic must be doing something right: revenue grew 30% YoY last year, they are outgrowing the (88K sq ft) office, lab and manufacturing building they just moved into earlier this year, and they have just signed to occupy another building providing 55K sq ft of additional space.

T-Series robot returning TeraPack to shelf (c) 2011 Silverton Consulting, All Rights Reserved

Molly Rector, Spectra Logic CMO, talked about the shift in the market from peta-scale (10**15 bytes) storage repositories to exa-scale (10**18 bytes) ones. Ms. Rector believed that today's cloud storage environments can take advantage of these large tape-based archives to provide much more economical storage for their users without suffering any performance penalty.

At lunch, Matt Starr, Fred Moore (Horison Information Strategies), Mark Peters (Enterprise Strategy Group) and I were talking about HPSS (High Performance Storage System), developed by IBM in conjunction with 5 US national labs, which supports vast amounts of data residing across primary disk and tape libraries.

Matt said that there are about a dozen large HPSS sites (the HPSS website shows at least 30 sites using it) that store a significant portion of the world's 1ZB (10**21 bytes) of digital data created this past year (see my 3.3 exabytes of data a day!? post). Later that day, talking with Nathan Thompson, Spectra Logic CEO, he said these large HPSS sites probably store ~10% of the world's data, or 100EB. I find it difficult to comprehend that much data at only ~12 sites, but the national labs do have lots of data on hand.

Nowadays you can get a Spectra Logic T-Finity tape complex with 122K slots, using LTO-4/-5 or IBM TS1140 (enterprise class) tape drives. This large a T-Finity has 4 rows of tape libraries and uses the 'Skyway' to transport a TeraPack of tape cartridges from one library row to another. All Spectra Logic libraries are built around a tape cartridge package they call the TeraPack, which contains 10 LTO cartridges or (I think) 9 TS1140 tape cartridges (they are bigger than LTO tapes). The TeraPack is used to import or export tapes from the library and fills all the tape slots in the library.

The software used to control all this is called BlueScale and is used in everything from their T50e, a small 50-slot library, all the way up to the 122K-slot T-Finity tape complex. There are some changes for configuration, robotics and other personalization for each library type, but the UI looks exactly the same across all of their libraries. Moreover, BlueScale offers the same enterprise-level functionality (e.g., drive and media life management) for all Spectra Logic tape libraries.

Day 1 of SpectraPRDay closed with the lab tour and dinner. Day 2 will start discussing futures and will be under NDA, so there won't be much to talk about right away. But from what I can see, Spectra Logic seems to be breaking down the barriers inhibiting tape use and providing tape library systems that people almost revere.

I haven’t seen that sort of reaction about a tape library since the STK 4400 first came out last century.

—-

Comments?