Photonic or Optical FPGAs on the horizon

Read an article this past week (Toward an optical FPGA – programable silicon photonics circuits) on a new technology that could underpin optical FPGAs. The technology is based on implantable waveguides and uses silicon-on-insulator technology, which is compatible with current chip fabrication.

How does the Optical FPGA work

Their Optical FPGA is based on an erasable directional coupler (DC) built using Ge (germanium) ion implantation. A DC is formed when two optical waveguides are placed close enough together that optical energy (photons) in one waveguide couples over to the other, nearby waveguide.

As can be seen in the figure, the red (erasable, implanted) and blue (conventional) waveguides are fabricated on the FPGA. The red waveguide performs the function of a DC between the two conventional waveguides. The diagram shows both a single-stage and a dual-stage DC.

By using implantable (erasable) DCs, one can change the path of a photonic circuit just by erasing the implanted waveguide(s).

The Ge ion implanted waveguides are erased by passing a laser over them, annealing (heating) the implanted regions.

Once erased, the implanted waveguide no longer works as a DC. The chart on the left of the figure above shows how long the implanted waveguide needs to be to work: once erased to shorter than about 4-5µm, it no longer acts as a DC.
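
To see why coupler length matters, here’s a minimal sketch of the textbook coupled-mode picture of a lossless directional coupler, where the fraction of power crossing over goes as sin²(κL). This is generic coupler physics, not the device model from the paper, and the coupling coefficient below is an arbitrary illustrative value.

```python
import math

def cross_coupled_power(length_um, kappa_per_um):
    """Fraction of optical power transferred to the adjacent waveguide in an
    ideal, lossless directional coupler (textbook coupled-mode theory)."""
    return math.sin(kappa_per_um * length_um) ** 2

# Illustrative coupling coefficient chosen so full transfer occurs at ~20µm;
# NOT a value taken from the paper.
kappa = math.pi / (2 * 20.0)   # radians per µm

for length in (2, 5, 10, 20):
    print(f"coupler length {length:2d}µm -> {cross_coupled_power(length, kappa):.0%} crossed over")
```

The point is simply that below some minimum length almost no power couples across, which is why shortening the implanted section (by erasing part of it) effectively removes the coupler from the circuit.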

It’s not clear how one directs the laser to the proper place on the Optical FPGA to anneal the implanted waveguide, but that’s a question of servos and mirrors.

Previous attempts at optical FPGAs required applying a continuous voltage to maintain the switched photonic circuits. Once the voltage was withdrawn, the photonics reverted to their original configuration.

But once an implanted waveguide is erased (annealed) in their approach, the changes to the Optical FPGA are permanent.

FPGAs today

Electronic FPGAs have never gone out of favor with customers doing hardware innovation. By enabling Optical FPGAs, the techniques in the paper would allow for much more photonics innovation as well.

Optics are primarily used in communications and storage (CDs/DVDs) today. But quantum computing could potentially use photonics, and there’s been talk of a 100% optical computer for a long time. As more and more photonic circuitry comes online, the need for an optical FPGA grows. The fact that it can be fabricated on today’s fab lines makes it even more appealing.

But an FPGA is more than just directional control over (electronic or photonic) energy. One needs to have other circuitry in place on the FPGA for it to do work.

For example, if this were an electronic FPGA, gates, adders, muxes, etc. would all be somewhere on the FPGA.

However, once additional optical componentry is placed on the FPGA, photonic directional control becomes the glue that makes the Optical FPGA programmable.

Comments?

Photo Credit(s): All photos from Toward an optical FPGA – programable silicon photonics circuits paper


5D storage for humanity’s archive

A group of researchers at the University of Southampton in the UK have invented a new type of optical recording, based on femtosecond laser pulses and silica/quartz media, that can store up to 300TB per (1″ diameter) disc platter, with thermal stability at up to 1000°C and a media life of up to 13.8B years at 190°C. The claim is that the memory device could outlive humanity and maybe the universe.

The new media/recording technique was used recently to create copies of text files (the Holy Bible, pictured above). Other significant humanitarian, political and scientific treatises have also been stored on the new media. The new device has been nicknamed the “Superman Memory Crystal”, due to the memory glass’s (quartz) likeness to Superman’s memory crystals.

We have written before on long term archives (see the Super Long Term Archive and Today’s data and the 1000 year archive posts) but this one beats them all by many orders of magnitude.

Interesting sessions at SNIA DSI Conference 2015

I attended the SNIA Data Storage Innovation (DSI) Conference in Santa Clara, CA last week, where I ran into a number of old friends and met a few new ones. A few sessions in particular brought the conference to life for me.

Microsoft Software Defined Storage Journey

Jose Barreto, Principal Program Manager at Microsoft, spent a little time on what’s currently shipping with Scale-out File Server, Storage Spaces and other storage components of Windows software defined storage solutions. Essentially, what Microsoft is learning from Azure cloud deployments is slowly but surely being implemented in Windows Server software and other solutions.

Microsoft’s vision is that customers can have their own private cloud storage with partner storage systems (SAN & NAS), with Microsoft SDS (Scale-out File Server with Storage Spaces), with hybrid cloud storage (StorSimple with Azure storage) and with public cloud storage (Azure storage).

Jose also mentioned other recent innovations like the Cloud Platform System using Microsoft software, Dell compute, Force 10 networking and JBOD (PowerVault MD3060e) storage in a rack.

Some recent Microsoft SDS innovations include:

  • HDD and SSD storage tiering;
  • Shared volume storage;
  • System Center volume and unified storage management;
  • PowerShell integration;
  • Multi-layer redundancy across nodes, disk enclosures, and disk devices; and
  • Independent scale-out of compute or storage.

Probably a few more I’m missing here but these will suffice.

Then, Jose broke some news on what’s coming next in Windows Server storage offerings:

  • Quality of service (QoS) – Windows Server provides QoS capabilities that allow one to limit IO activity, specifying min and max IOPS or latency at a VM or VHD level. The scale-out storage service will balance IO activity across the cluster to meet this QoS specification. Apparently the balancing algorithm came from Microsoft Research, but Jose didn’t go into great detail on what it did differently other than being “fairer” in applying QoS constraints (a toy illustration of IOPS capping follows this list).
  • Rolling upgrades – Windows Server now supports a cluster running different versions of software. Now one can take a cluster node down and update its software and re-activate it into the same cluster. Previously, code upgrades had to take a whole cluster down at a time.
  • Synchronous replication – Windows Server now supports synchronous Storage Replica at the volume level. Previously, Storage Replica was limited to asynchronous replication.
  • Higher VM storage resiliency – Windows will now pause a VM rather than terminate it during transient storage interruptions. This allows VMs to sustain operations across transient outages. VMs are in PausedCritical state until the storage comes back and then they are restarted automatically.
  • Shared-nothing Storage Spaces – Windows Storage Spaces can be configured across cluster nodes without shared storage. Previously, Storage Spaces required shared JBOD storage between cluster nodes. This feature removes that configuration constraint and allows JBOD storage to be accessible from only a single node.

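Since min/max IOPS caps are easier to picture with something concrete, below is a toy token-bucket limiter in Python. It is purely illustrative of the per-VHD max-IOPS idea; it is not Microsoft’s balancing algorithm, and all names in it are made up.

```python
import time

class IopsLimiter:
    """Toy token-bucket cap on IOPS for a single (hypothetical) VHD."""

    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last = time.monotonic()

    def allow_io(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, never exceeding one second's worth.
        self.tokens = min(self.max_iops, self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True    # issue the IO now
        return False       # defer/queue the IO to stay under the cap

vhd = IopsLimiter(max_iops=500)
admitted = sum(vhd.allow_io() for _ in range(10_000))
print(f"IOs admitted in one burst: ~{admitted}")   # roughly the bucket size, i.e. ~500
```

A real cluster-wide implementation also has to enforce minimums and rebalance across nodes, which is presumably where the Microsoft Research “fairness” work comes in.
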
Jose did not say what this “Vnext” release was going to be called and didn’t provide a specific time frame other than that it’s coming out shortly.

Archival Disc Technology

Yasumori Hino from Panasonic and Jun Nakano from Sony presented information on a brand new removable media technology for cold storage. Prior to their session there was another one from HDS Federal Corporation on their BluRay jukebox, but Yasumori’s and Jun’s session was more noteworthy. The new Archive Disc is the next iteration in optical storage beyond BluRay and is targeted at long term archive or “cold” storage.

As a prelude to the Archive Disc discussion, Yasumori played a CD pressed in 1982 (Billy Joel’s 52nd Street album) on his current generation laptop to show the inherent downward compatibility of optical disc technology.

In 1980, IBM 3380 disk drives were refrigerator sized, multi $10K devices that held 2.3GB. As far as I know there aren’t any of these still in operation. And IBM/STK tape was reel to reel and took up a whole rack. There may be a few of those devices still operating these days, but not many. I still have a CD collection (but then I am a GreyBeard 🙂) that I still listen to occasionally.

The new Archive Disc includes:

  • Media that is more resilient to high humidity, high temperature, salt water, EMP and other magnetic disturbances. As proof, a BluRay disc was submerged in sea water for 5 weeks and could still be read. Data on BluRay and the new Archive Disc is recorded without using electromagnetics and is stored in a very stable oxide recording material layer. They project that the new Archive Disc has a media life of 50 years at 50°C and 1000 years at 25°C under high humidity conditions.
  • Dual sided, triple layered media which uses land and groove recording to provide 300GB of data storage. BluRay also uses a land and groove disc format but only records on the land portion of the disc. Track pitch for BluRay is 320nm whereas for the Archive Disc it’s only 225nm.
  • Data transfer speeds of 90MB/sec with two optical heads, one per side. Each head can read/write data at 45MB/sec. They project doubling or quadrupling this data transfer rate by using more pairs of optical heads (see the quick fill-time calculation after this list).
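
For context, here’s a quick back-of-the-envelope calculation (my arithmetic, not from the presentation) of how long it would take to fill one disc at the quoted transfer rate:

```python
def fill_time_hours(capacity_gb, mb_per_sec):
    """Hours to write a full disc at a sustained transfer rate."""
    return capacity_gb * 1000 / mb_per_sec / 3600

# 90MB/sec assumes one head pair per the presentation; more pairs would scale this up.
for gen, capacity_gb in (("1st gen", 300), ("2nd gen", 500), ("3rd gen", 1000)):
    print(f"{gen} {capacity_gb:4d}GB disc at 90MB/sec: {fill_time_hours(capacity_gb, 90):.1f} hours to fill")
```

Roughly an hour for the 300GB disc and about three hours for the 1TB generation, which says something about how many drives a jukebox needs to sustain meaningful ingest rates.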

They also presented a roadmap for a 2nd gen 500GB and a 3rd gen 1TB Archive Disc using higher linear density and better signal processing technology.

Cold storage is starting to get more interest these days, what with all the power consumption going into today’s data centers and the never ending data tsunami. Archive Disc and BluRay optical storage consume no power at rest and only consume power when mounting/dismounting, spinning and reading/writing. Also, with optical discs’ imperviousness to high temperature and humidity, optical storage could be kept outside of air conditioned data centers.

The Storage Revolution

The final presentation of interest to me was by Andrea Nelson from Intel. Intel has lately been focusing on helping partners and vendors provide more effective storage offerings. These aren’t storage solutions but rather storage hardware, components and software, developed in collaboration with storage vendors and partners, that make it easier for them to offer storage solutions using Intel hardware. One example of this collaboration is IBM’s hardware assist Real Time Compression available on the new V7000 and FlashSystem V9000 storage hardware.

As the world turns to software defined storage, Intel wants those solutions to make use of their hardware. (Although, at the show I heard from another new SDS vendor that was planning to use x86 as well as ARM servers.)

Intel has:

  • QuickAssist Acceleration technology – such as hardware assist data compression,
  • Storage Acceleration software libraries – open source erasure coding and other low-level compute intensive functions, and
  • Cache Acceleration software – uses Intel SSDs as a data cache for storage applications.

There wasn’t as much of a technical description of these capabilities as in other DSI sessions, but with the industry moving more and more to SDS, Intel’s got a vested interest in seeing it implemented on their hardware.

~~~~

That’s about it. I sat in on quite a number of other sessions, but nothing else stuck out as significant or interesting to me as these three sessions.

Comments?

Holograms, not just for storage anymore

A recent article I read (Holograms put storage capacity in a spin) discusses a novel approach to holographic data storage, this time using magnetic spin waves to encode holographic information on magnetic memory.

It turns out holograms can be made with any wave-like phenomenon, and optical holograms aren’t the only way to go. Magnetic (spin) waves can also be used to create and read holograms.

These holograms are made in magnetic semiconductor material rather than photographic material. And because spin waves have a much shorter wavelength than light at comparable frequencies, there is the potential for even greater densities than corresponding optical holographic storage.

A new memory emerges

The device is called a Magnonic Holographic Memory and it seems to work by applying spin waves through a magnetic substrate and reading (sensing) the resulting interference patterns below the device.

According to the paper, the device is theoretically capable of reading the magnetic (spin) state of hundreds of thousands of nano-magnetic bits in parallel. (Let’s see, that would be about 100KB of information in parallel). Which must have something to do with the holographic nature of the read out I would guess.

I haven’t the foggiest notion how all this works, but it seems to be a fallout of some earlier spintronics work the researchers were doing. The paper showed a set of three holograms read out of a grid. The prototype device seems to require a grid (almost core-like) of magnetic material on top of the substrate, which acts as the write head. It’s not clear whether there was a duplicate of this grid below the material to read the spin waves, but something had to be there.

The researchers indicated some future directions to shrink the device, primarily by shrinking what appears to be the write head and maybe the read heads even further. It’s also not clear what the magnetic substrate being read/written actually is, and whether it can be shrunk any further.

The researchers said that although spin wave holographics cannot compete with optical holographic storage in terms of propagation delays and seem to be noisier, spin wave holographics do appear to be much more appropriate for nm-scale direct integration with electronic circuits.

Is this the new generation of solid state storage?

Photo Credits: Spinning Top by RusselStreet

Optical discs for Facebook cold storage

I heard last week that Facebook is implementing BluRay libraries for cold storage. Each BluRay disc holds ~100GB and they figure they can store 10,000 discs or ~1PB in a rack.

They bundle 12 discs in a cartridge and 36 cartridges in a magazine, placing 24 magazines in a cabinet, with BluRay drives and a robotic arm. The robot arm sits in the middle of the cabinet with the magazines/cartridges located on each side.
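
Checking the math on that rack, using the numbers Facebook quoted (my arithmetic):

```python
discs_per_cartridge     = 12
cartridges_per_magazine = 36
magazines_per_cabinet   = 24
gb_per_disc             = 100

discs = discs_per_cartridge * cartridges_per_magazine * magazines_per_cabinet
print(f"{discs} discs per cabinet")                        # 10,368 discs
print(f"{discs * gb_per_disc / 1e6:.2f} PB per cabinet")   # ~1.04 PB
```

So the 10,000 disc / ~1PB per rack figures hang together.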

It’s unclear what Amazon Glacier uses for its storage but a retrieval time of 3-5 hours indicates removable media of some type.  I haven’t seen anything on Windows Azure offering a similar service but Google has released Durable Reduced Availability (DRA) storage which could potentially be hosted on removable media as well.  I was unable to find any access times specifications for Google DRA.

Why the interest in cold storage?

The article mentioned that Facebook is testing the new technology first on its compliance data. After that, Facebook will start using it for cold photo storage. Facebook also said that it will be using different storage technologies for its cold storage repository, mentioning “bad flash” as another alternative.

BluRay supports both re-writeable and WORM (write once, read many times) technologies. WORM discs cannot be modified, only destroyed, which would be very useful for anyone’s compliance data. The rewritable BluRay discs might be more effective for cold photo storage; however, the fact that people on Facebook rarely delete photos says WORM would work well there too.

100GB is a pretty small storage bucket these days but for compliance documents, such as email, invoices, contracts, etc. it’s plenty large.

Can Blu Ray optical provide data center cold storage?

Facebook didn’t discuss the specs of the robot arm they plan to use, but with ~10K discs it has a lot of work to do. Tape library robots move a single cartridge in about 11 seconds or so. If the optical robot could do as well (no information to the contrary), one robot arm could support ~4K disc swaps per day. But that assumes enterprise class robotics and a 100% duty cycle; more likely 1/2 to 1/4 of this would be considered good for an off the shelf system like this. So maybe 1000 to 2000 disc picks per day.

If we use 22 seconds per disc swap (two disc moves), a single robot/rack could support a maximum of 100 to 200TB of data writes per day (assuming robot speed were the only bottleneck). In the video (see about 30 minutes in), the robot didn’t look all that fast compared to a tape library robot, but maybe I am biased.

Near as I can tell, a 12x BluRay drive can write at ~35MB/sec (SATA drive, writing a single layer 25GB disc; I’ll assume this can be sustained for a 4-layer or dual-sided 2-layer 100GB disc). So writing a full 100GB disc would take ~48 minutes, and if you add the 22 seconds of disc swap time, one SATA drive running 100% flat out could maybe write 30 discs per day or ~3TB/day.

In the video, the BluRay drives appear to be located in an area above the disc magazines along each side. There appear to be two drives per column with 6 columns per side, for a maximum of 24 drives. With 24 drives, one rack could write about 72TB/day or 720 discs per day, which would still fit within our 22 seconds per swap. At 72TB/day it’s going to take ~14 days to fill up a cabinet. I could be off on the drive count (they didn’t show the whole cabinet in the video), so it’s possible they have 12 columns per side, 48 drives per cabinet and 144TB/day.
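
Pulling the drive and robot numbers above into one back-of-the-envelope model (all of the inputs are my assumptions from the video and the discussion above, not Facebook’s specs):

```python
SECONDS_PER_DAY = 24 * 3600
CABINET_DISCS, DISC_GB = 10_368, 100

def cabinet_write_rate(drives, write_mb_s=35, swap_secs=22, duty_cycle=1.0):
    """Rough daily write throughput for one cabinet, drive-limited."""
    secs_per_disc = DISC_GB * 1000 / write_mb_s + swap_secs        # ~48 min write + swap
    discs_per_day = drives * SECONDS_PER_DAY * duty_cycle / secs_per_disc
    return discs_per_day, discs_per_day * DISC_GB / 1000           # discs/day, TB/day

for drives in (24, 48):
    discs, tb = cabinet_write_rate(drives)    # 100% duty cycle; see the caveat below
    print(f"{drives} drives: ~{discs:.0f} discs/day, ~{tb:.0f}TB/day, "
          f"~{CABINET_DISCS / discs:.0f} days to fill the cabinet")
```

With 24 drives this works out to roughly 720 discs and 72TB per day (about two weeks to fill a rack), and double that with 48 drives, matching the estimates above.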

All this assumes a 100% duty cycle on the drives, which is unreasonable for an enterprise class tape drive, let alone a consumer class BluRay drive. This is also write speed; I assume that read speed is the same or better. Also, I didn’t see any servers in the cabinet, and something has to be reading, writing and controlling the optical library. So these servers need to be somewhere close by, but they could easily be located in a separate rack near the library.

So it all makes some amount of sense from a system throughput perspective. Given what we know about the drive speed, cartridge capacity and robot capabilities, it’s certainly possible that the system could sustain the disc swaps and data transfer necessary to provide data center cold storage archive.

And the software

But there’s plenty of software that has to surround an optical library to make it useful. Somehow we would want to be able to identify a file as a candidate for cold storage, then have it moved to some cold storage disc(s), cataloged, and then deleted from the non-cold storage repository. Of course, we probably want 2 or more copies to be written, and maybe these redundant copies should go to different facilities or at least different cabinets. The catalog to the cold storage repository is all important and needs to be available 24X7, so it needs to be redundant/protected, updated with extreme care, and, from my perspective, kept on some sort of high-speed storage to handle archives of 3EB.
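
To make that workflow concrete, here’s a rough sketch of the steps in code. Every function and name here is a hypothetical stand-in (this is not Facebook’s software), just the sequence described above:

```python
import uuid

# Hypothetical stand-ins for the surrounding storage systems.
def read_from_warm_storage(file_id):          return f"<contents of {file_id}>"
def delete_from_warm_storage(file_id):        print(f"deleted warm copy of {file_id}")
def pick_distinct_facilities(n):              return [f"facility-{i}" for i in range(n)]
def write_to_optical_library(facility, data): return str(uuid.uuid4())   # disc/slot id

def archive_to_cold_storage(file_id, catalog, copies=2):
    """Write redundant optical copies, record them in the catalog, then drop the warm copy."""
    data = read_from_warm_storage(file_id)
    locations = [(f, write_to_optical_library(f, data)) for f in pick_distinct_facilities(copies)]
    catalog[file_id] = locations          # the catalog itself must be redundant and available 24X7
    delete_from_warm_storage(file_id)     # only after the catalog update is durable
    return locations

catalog = {}
archive_to_cold_storage("photo-000001", catalog)
print(catalog)
```

The ordering matters: the catalog update has to be safely recorded before the warm copy is deleted, otherwise the data is effectively lost.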

What about OpenStack? Although there have been some rumblings by Oracle and others about providing tape support in OpenStack, nothing seems to be out yet. However, it’s not much of a stretch to see removable media support in OpenStack, if some large company were to put some effort into it.

Other cold storage alternatives

In the video, Facebook says they currently have 30PB of cold storage at one facility and are already in the process of building another. They said that they should have 150PB of cold storage online shortly and that each cold storage facility is capable of holding 3EB or 3,000PB of cold storage.

A couple of years back at Hitachi in Japan, we were shown a BluRay optical disc library using 50GB discs. This was just a prototype, but they were getting pretty serious about it then. We also saw an update of this at an analyst meeting at HDS a year or so later. So there’s at least one storage company working on this technology.

Facebook seems to have decided they were better off developing their own approach. It’s probably more dense/space efficient and maybe even more power efficient, but telling for sure would take some spec comparisons which aren’t available from Facebook or HDS just yet.

Why not magnetic tape?

I see these large storage repository sizes and wonder if Facebook might not be better off using magnetic tape. It has a much larger capacity per cartridge, and I believe magnetic tape (LTO or enterprise) would provide better volumetric density (bytes/in³) than the BluRay cabinet they showed in the video.

Facebook said that BluRay discs have a 50 year lifetime. I believe enterprise and LTO tape vendors say their cartridges have a 30 year lifetime. That might be one consideration driving them to optical.

The reality is that new LTO technology is coming out every 2-3 years or so, and new drives read only 2 generations back and write only the current technology. With that quick a turnover, a data center would probably have to migrate data from old to new tape technology every decade or so before old tape drives go out of warranty.

I have not seen any BluRay technology roadmaps, so it’s hard to make a comparison, but to date, PC based BluRay drives can typically read and write CDs, DVDs and current BluRay discs (which covers probably 4 to 5 generations back). So optical has a better reputation for backward compatibility over time.

Tape technology roadmaps are so quick because tape competes with disk, which doubles capacity every 18 months or so. I am sure tape drive and media vendors would be happy not to upgrade their technology so fast but then disk storage would take over more and more tape storage applications.

If Blu Ray were to become a data center storage standard, as Facebook seems to want, I believe that Blu Ray technology would fall under similar competitive pressures from both disk and tape to upgrade optical technology at a faster rate. When that happens, it would be interesting to see how quickly optical drives stop supporting the backward compatibility that they currently support.

Comments?

Photo Credit: [73/366] Grooves by Dwayne Bent [Ed. note, picture of DVD, not Blu Ray disc]


“… would consume nearly half the world’s digital storage capacity.”

A National Geographic article on recent research into the brain (February 2014) said something which I find intriguing: “Producing an image of an entire human brain at the same resolution [as a mouse brain] would consume nearly half of the world’s current digital storage capacity.”

They were imaging slices of a mouse brain with an electron microscope, in slices one millimeter square and a micron deep, representing just a thousand cubic microns per image. Such a scan of the full mouse brain would require 450,000 TB (0.45 EB, exabyte = 10^18 bytes) of storage for the images.

Getting an equivalent resolution image of a single human brain would require 1.3 billion TB (or 1.3 ZB, zettabyte = 10^21 bytes). They went on to say that the world’s digital storage was just 2.7 billion TB (or 2.7 ZB), which is where they came up with the “… nearly half the world’s digital storage capacity” figure.
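
The scaling from the mouse figures to the human figures, and the “nearly half” claim, check out (my restatement of the article’s numbers):

```python
mouse_brain_tb = 450_000          # 0.45EB for a full mouse brain at EM resolution
human_brain_tb = 1_300_000_000    # 1.3ZB for a human brain at the same resolution
world_tb       = 2_700_000_000    # the article's 2.7ZB world capacity figure

print(f"human/mouse scan ratio: ~{human_brain_tb / mouse_brain_tb:,.0f}x")  # ~2,900x more data
print(f"share of world capacity: {human_brain_tb / world_tb:.0%}")          # ~48%, i.e. 'nearly half'
```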

So how much digital storage is there in the world today

Setting aside the need for such a detailed map for the moment, let’s talk about the world’s digital storage.

  • Tape – I don’t have much information about the enterprise tape capacity currently out there on IBM TS1120/TS1130 or Oracle T10000C/B/A cartridges, but a relatively recent article indicated that the 225 millionth LTO cartridge was shipped sometime in 3Q13, which represents a capacity of 90,000 PB (or 90 EB, exabyte = 10^18 bytes) of storage.
  • Disk – Although I couldn’t find a reasonable estimate of installed disk capacity, IDC reported that 2012 disk capacity shipments were 20EB and that 24.3EB had shipped through 3Q13. It’s probably safe to assume that capacity shipments were ~8.3EB or more in 4Q13, so ~32.5EB of disk capacity shipped in 2013. One estimate (also provided by IDC) is that worldwide disk storage capacity doubles every two years, which puts installed disk capacity as of the end of 4Q13 at something on the order of 113.6EB.

I won’t delve into optical storage as that’s even more difficult to get a handle on, but my guess is it’s not quite at the level of LTO digital storage, so maybe another 90EB there, for a total of ~0.3ZB of digital storage in disk, LTO tape and optical.

However, back in February of 2010, researchers reported in Science that the world’s information storage capacity was 2.0 ZB. Also, last October IDC reported that the US alone had a digital storage capacity of 2.6 ZB and that the US held somewhere between 24 and 40% of the world’s storage. Using 33% for simplicity’s sake, this would put the world’s digital capacity at around 7.8ZB of storage according to IDC.

Thankfully, a human brain scan at the resolutions above would take only a sixth of the world’s digital storage based on my estimates.
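
Here’s the arithmetic behind those estimates in one place (the inputs are the figures cited above; the 4Q13 shipment guess, the optical guess and the 33% US share are mine):

```python
# Bottom-up estimate, in exabytes (EB)
lto_eb            = 90       # 225M LTO cartridges shipped through 3Q13
disk_installed_eb = 113.6    # my installed-base estimate, assuming capacity doubles every 2 years
optical_eb        = 90       # my guess: roughly no more than the LTO install base
bottom_up_zb = (lto_eb + disk_installed_eb + optical_eb) / 1000
print(f"bottom-up estimate: ~{bottom_up_zb:.1f}ZB")            # ~0.3ZB

# Top-down estimate from IDC: US has 2.6ZB and ~33% of the world's capacity
top_down_zb = 2.6 / 0.33
print(f"top-down estimate:  ~{top_down_zb:.1f}ZB")             # ~7.9ZB

# A 1.3ZB human brain scan against the top-down figure
print(f"brain scan share:   ~1/{top_down_zb / 1.3:.0f} of the world's capacity")   # ~1/6
```

The two approaches disagree by more than an order of magnitude, which says more about how fuzzy these installed-capacity numbers are than anything else.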

But, we really need to talk about data reduction techniques

I think we need to start discussing some form of data reduction: data compression, fractal compression or even graphical encoding. For example, with appropriate software and compute power, the neural scans could be encoded at appropriate levels of detail into a graphical representation. Hopefully, this would be many orders of magnitude less storage intensive, so maybe only 1/600th to 1/60,000th of all the world’s digital storage.

Another approach might be to use a form of fractal compression similar to that used on motion pictures/photographic images. Perhaps I am being naive, but it seems to me that there ought to be some form of fractal encoding of neural branching. Most of nature’s branching structures have an underlying fractal basis, and I see nothing in neural anatomy that would show me it’s any different.

Of course, I am not a neural biologist, but I am a storage expert and there’s got to be a way to reduce this data load somehow.

Comments?

Photo Credit: Microscopic embryonic mouse brain (DAPI, GFP) by Joseph Elsbernd

Super long term archive

Read an article this past week in Scientific American about a new fused silica glass storage device from Hitachi Ltd., announced last September. The new media is recorded with lasers: a burned dot represents a binary one and an empty space represents a binary zero.

As can be seen in the photos above, the data can readily be read with a microscope, which makes it pretty easy for some future civilization to read the binary data. However, knowing how to decode that binary data into pictures, documents and text is another matter entirely.

We have discussed the format problem before in our Today’s data and the 1000 year archive and Digital Rosetta stone vs. 3D barcodes posts. And this new technology would compete with the currently available M-disc long term archive-able DVD technology from Millenniata, which we have also talked about before.

Semi-perpetual storage archive!!

Hitachi tested the new fused silica glass storage media at 1000°C for several hours, which they say indicates that it can survive several hundred million years without degradation. At this level it can provide a 300 million year storage archive (M-disc only claims 1000 years). They are calling their new storage device “semi-perpetual” storage. If hundreds of millions of years is semi-perpetual, I gotta wonder what perpetual storage might look like.

At CD recording density, with higher densities possible

They were able to achieve CD levels of recording density with a four layer approach, which amounted to about 40MB/sqin. DVD technology is on the order of 330MB/sqin and BluRay is ~15Gb/sqin, but neither of those technologies claims even a million year lifetime. Also, there is the possibility of even more layers, so the 40MB/sqin could potentially double or quadruple (see the quick comparison below).
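
For comparison’s sake, here are those densities restated in a common unit (the conversion and the layer scaling are my arithmetic; the quoted figures are approximate):

```python
# Approximate areal densities in MB per square inch
densities_mb_sqin = {
    "Hitachi silica (4 layers)": 40,
    "DVD": 330,
    "BluRay": 15_000 / 8,        # ~15Gb/sqin expressed in MB/sqin (~1,875MB/sqin)
}

base = densities_mb_sqin["Hitachi silica (4 layers)"]
for media, d in densities_mb_sqin.items():
    print(f"{media:26s} {d:7.0f} MB/sqin  ({d / base:5.1f}x the silica prototype)")

# Doubling or quadrupling the layer count scales the silica figure proportionally
for layers in (8, 16):
    print(f"{layers:2d} layers -> ~{base * layers / 4:.0f} MB/sqin")
```

Even at 16 layers the silica media would still be more than an order of magnitude less dense than BluRay, but density isn’t the point here; longevity is.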

But data formats change every few years nowadays

My problem with all this is the data format issue: we will need something like a digital Rosetta stone for every data format ever conceived in order to make this a practical digital storage device.

Alternatively, we could plan to use it more like an analogue storage device, with something like black and white or grey scale photographs of the information to be retained imprinted in the media. That way, a simple microscope could be used to see the photo image. I suppose color photographs could be implemented using different plates per color, similar to four color magazine production processes. Text could be handled by just taking a black and white photo of a document and printing it in the media.

According to a post I read about the size of the collection at the Library of Congress, they currently have about 3PB of digital data in their collections, which in 650MB CD chunks would be about 4.6M CDs. So if there is an intent to copy this data onto the new semi-perpetual storage media for the year 300,002,012, we probably ought to start now.
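
The CD-chunk arithmetic, for what it’s worth (my numbers):

```python
loc_collection_pb = 3       # Library of Congress digital collections, ~3PB
cd_capacity_mb    = 650

cds = loc_collection_pb * 1e9 / cd_capacity_mb      # PB -> MB, then divide by CD size
print(f"~{cds / 1e6:.1f} million CD-sized chunks")  # ~4.6 million
```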

Another tidbit to add to the discussion: at last month’s Hitachi Data Systems Influencers Summit, HDS was showing off some of their recent lab work, and they had an optical jukebox on display that they claimed would be used for long term archive. I get the feeling that maybe they plan to commercialize this technology soon – stay tuned for more.


~~~~

Image: Hitachi.com website (c) 2012 Hitachi, Ltd.

Million year optical disk

Read an article the other day about scientists creating an optical disk that would be readable in a million years or so. The article in Science Mag, titled A million-year hard disk, was intended to warn people about potential dangers in the far future that are being created today.

A while back I wrote about a 1000 year archive, which was predominantly about disappearing formats. At the time, I believed that given the growth in data density, information could easily be copied and saved over time, but the formats for that data would be long gone by the time someone tried to read it.

The million year optical disk eliminates the format problem by using pixelated images etched on media, which works just dandy if you happen to have a microscope handy.

Why would you need a million year disk

The problem is how do you warn people in the far future not to mess with radioactive waste deposits buried below. If the waste is radioactive for a million years, you need something around to tell people to keep away from it.

Stone markers last for a few thousand years at best, but they get overgrown and wear down in time. For instance, my grandmother’s tombstone in Northern Italy has already been worn down so much that it’s almost unreadable. And it’s not even 80 years old yet.

But a sapphire hard disk that could easily be read with any serviceable microscope might do the job.

How to create a million year disk

This new disk is similar to the old StorageTek 100K year optical tape. Both would depend on microscopic impressions, something like bits physically marked on media.

For the optical disk, the bits are created by etching a sapphire platter with platinum. Apparently the prototype costs €25K, but they’re hoping the price goes down with production.

There are actually two 20cm (7.9in) wide disks that are molecularly fused together, and each disk can store 40K miniaturized pages that can hold text or images. They are doing accelerated life testing on the sapphire disks by bathing them in acid to ensure a 10M year life for the media and message.

Presumably the images are grey tone (or in this case, platinum tone). If I assume 100Kbytes per page, that’s about 4GB per disk, something around a single layer DVD in a much larger form factor.
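
Working through that estimate (the 100Kbytes per page figure is my assumption, as noted above):

```python
pages_per_disk = 40_000
bytes_per_page = 100_000      # assumed ~100KB per grey-tone page image

per_disk_gb = pages_per_disk * bytes_per_page / 1e9
print(f"{per_disk_gb:.0f}GB per disk, {2 * per_disk_gb:.0f}GB for the fused pair")
# ~4GB per disk -- roughly a single layer DVD's worth in a much larger form factor
```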

Why sapphire

It appears that sapphire is readily available from industrial processes and seems impervious to the wear that harms other materials. But that’s what they are trying to prove.

It’s unclear why they decided to “molecularly” fuse two platters together. It seems to me this could easily be a weak link in the technology over the course of a dozen millennia or so. On the other hand, more storage is always a good thing.

~~~~

In the end, creating dangers today that last millions of years requires some serious thought about how to warn future generations.

Image: Clock of the Long Now by Arenamontanus