New PCM could supply 36PB of memory to CPUs

Read an article this past week on how quantum geometry can enable a new form of PCM (phase change memory) based on stacks of metallic layers (SciTech Daily article: Berry curvature memory: quantum geometry enables information storage in metallic layers). That article referred to a Nature article (Berry curvature memory through electrically driven stacking transitions) behind a paywall, but I found a pre-print of it, Berry curvature memory through electrically driven stacking transitions.

Figure 1| Signatures of two different electrically-driven phase transitions in WTe2. a, Side view (b–c plane) of unit cell showing possible stacking orders in WTe2 (monoclinic 1T’, polar orthorhombic Td,↑ or Td,↓) and schematics of their Berry curvature distributions in momentum space. The spontaneous polarization and the Berry curvature dipole are labelled as P and D, respectively. The yellow spheres refer to W atoms while the black spheres represent Te atoms. b, Schematic of dual-gate h-BN capped WTe2 device. c, Electrical conductance G with rectangular-shape hysteresis (labeled as Type I) induced by external doping at 80 K. Pure doping was applied following Vt/dt = Vb/db under a scan sequence indicated by black arrows. d, Electrical conductance G with butterfly-shape switching (labeled as Type II) driven by electric field at 80 K. Pure E field was applied following -Vt/dt = Vb/db under a scan sequence indicated by black arrows. Positive E⊥ is defined along +c axis. Based on the distinct hysteresis observations in c and d, two different phase transitions can be induced by different gating configurations.

The number one challenge in IT today is that data just keeps growing: 2+ Exabytes today and much more tomorrow.

All that information takes storage, bandwidth and ultimately some form of computation to take advantage of it. While computation, bandwidth, and storage density all keep going up, at some point the energy required to read, write, transmit and compute over all these Exabytes of data will become a significant burden to the world.

PCM and other forms of NVM, such as Intel’s Optane PMEM, have brought a step change in how much data can be stored close to server CPUs today. And since Optane PMEM doesn’t require refresh, it has also reduced the energy required to store and sustain that data relative to DRAM. I have no doubt that density, energy consumption and performance will continue to improve for these devices over the coming years, if not decades.

In the meantime, researchers are actively pursuing different classes of materials that could replace or improve on PCM with even less power, better performance and higher densities. Berry Curvature Memory is the first I’ve seen that has several significant advantages over PCM today.

Berry Curvature Memory (BCM)

I spent some time trying to gain an understanding of Berry curvature. As best I can gather, it’s a quantum-mechanical geometric effect that quantifies the topological characteristics of the entanglement of electrons in a crystal. Suffice it to say, it’s something that can be measured as an electromagnetic field and that gives rise to phase transitions (on-off states) in a metallic crystal at the topological level.

In the case of BCM, they used three to five atomically thin monolayers of WTe2 (tungsten ditelluride), a Type II Weyl semi-metal that exhibits superconductivity, high magnetoresistance, and the ability to alter interlayer sliding through the use of terahertz (THz) radiation.

It appears that using BCM as a memory has several advantages (listed after Fig. 4 below):

Fig. 4| Layer-parity selective Berry curvature memory behavior in the Td,↑ to Td,↓ stacking transition. a, The nonlinear Hall effect measurement schematics. An applied current flow along the a axis results in the generation of a nonlinear Hall voltage along the b axis, proportional to the Berry curvature dipole strength at the Fermi level. b, Quadratic amplitude of nonlinear transverse voltage at 2ω as a function of longitudinal current at ω. c, d, Electric field dependent longitudinal conductance (upper figure) and nonlinear Hall signal (lower figure) in trilayer WTe2 and four-layer WTe2 respectively. Though similar butterfly-shape hysteresis in longitudinal conductance is observed, the sign of the nonlinear Hall signal is reversed in the trilayer while remaining unchanged in the four-layer crystal. Because the nonlinear Hall signal (V⊥,2ω / (V//,ω)2) is proportional to Berry curvature dipole strength, this indicates the flipping of the Berry curvature dipole only occurs in the trilayer. e, Schematics of layer-parity selective symmetry operations effectively transforming Td,↑ to Td,↓. The interlayer sliding transition between these two ferroelectric stackings is equivalent to an inversion operation in odd layers and a mirror operation with respect to the ab plane in even layers. f, g, Calculated Berry curvature Ωc distribution in the 2D Brillouin zone at the Fermi level for Td,↑ and Td,↓ in trilayer and four-layer WTe2. The symmetry operation analysis and first principles calculations confirm that the Berry curvature and its dipole reverse sign in the trilayer while remaining invariant in the four-layer, leading to the observed layer-parity selective nonlinear Hall memory behavior.
  • To alter a memory cell takes “a few meV/unit cell, two orders of magnitude less than conventional bond rearrangement in phase change materials” (PCM). In layman’s terms, that means it takes ~100X less energy to change a bit than PCM (see the rough calculation sketched after this list).
  • To alter a memory cell, it uses terahertz (THz) radiation, i.e., pulses of light or other electromagnetic radiation lasting on the order of a picosecond or less. This is ~1,000X faster than PCM devices that exist today.
  • To construct a BCM memory cell takes between 13 and 16 W and Te atoms, arranged as 3 to 5 layers of atomically thin WTe2 semi-metal.
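As a rough back-of-the-envelope check on that first bullet, here’s a minimal Python sketch that converts the paper’s “a few meV per unit cell” figure into joules per switching event and compares it to a PCM baseline assumed to be 100X higher. The 5 meV value and the one-unit-cell-per-bit assumption are mine, not the paper’s.

```python
# Back-of-the-envelope energy comparison for BCM vs. PCM bit switching.
# Assumptions (mine, not the paper's): ~5 meV per unit cell to switch,
# one unit cell per stored bit, and PCM needing ~100X more energy per bit.

EV_TO_JOULES = 1.602e-19       # 1 electron-volt in joules

bcm_mev_per_cell = 5.0         # "a few meV/unit cell", per the pre-print
bcm_joules_per_bit = bcm_mev_per_cell * 1e-3 * EV_TO_JOULES

pcm_joules_per_bit = bcm_joules_per_bit * 100    # "two orders of magnitude" more

print(f"BCM switching energy: ~{bcm_joules_per_bit:.2e} J/bit")
print(f"PCM switching energy: ~{pcm_joules_per_bit:.2e} J/bit")
print(f"Ratio (PCM/BCM): {pcm_joules_per_bit / bcm_joules_per_bit:.0f}X")
```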

While it’s hard to see in the figure above, the way this memory works is that the inner layer slides left to right (with respect to the picture), and it’s this realignment of atoms between the three or five layers that gives rise to the changes in Berry curvature, providing the on-off switching.

Getting from the lab to a product is a long road, but the fact that it has density, energy and speed advantages measured in multiple orders of magnitude certainly bodes well for its potential to disrupt current PCM technologies.

Potential problems with BCM

Nonetheless, even though it exhibits superior performance characteristics with respect to PCM, there are a number of possible issues that could limit its use.

One concern (on my part) is that the interlayer sliding may induce some sort of fatigue. Although I’ve heard that mechanical fatigue at the atomic level is not nearly as much of a concern as one sees in larger (greater than atomic scale) structures, I must assume this sliding induces some stress and, as such, limits the write-cycle endurance of BCM.

Another possible concern is how to shrink the spot size of the THz radiation so that it writes only a small area of the material. Yes, one memory cell can be measured by the width of 3 atoms, but the next question is how far away the next memory cell needs to be. The laser used in BCM was focused down to ~1.5 μm. At this size, it’s 1,000X bigger than the BCM memory cell width (~1.5 nm).
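To put that mismatch in perspective, here’s a quick sketch of the areal density penalty if cells could only be written on a pitch set by the ~1.5 µm spot rather than the ~1.5 nm cell width. The one-cell-per-write-spot, square-packing assumption is mine, purely for illustration.

```python
# Areal density penalty if cells have to be spaced at the write-spot pitch.
# Assumption (mine): one cell per write spot, square packing.

cell_width_nm = 1.5        # ~3 atom wide BCM cell (from the post)
spot_width_nm = 1500.0     # ~1.5 um focused write spot

linear_penalty = spot_width_nm / cell_width_nm   # per dimension
areal_penalty = linear_penalty ** 2              # per unit area

print(f"Linear density penalty: {linear_penalty:,.0f}X")
print(f"Areal density penalty:  {areal_penalty:,.0f}X")
```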

Yet another potential problem is that current BCM must be embedded in a continuous flow of liquid nitrogen (at 80K). It’s unclear how much of a requirement this temperature is for BCM to function, but few computers nowadays require this level of cooling.

Figure 3| Td,↑ to Td,↓ stacking transitions with preserved crystal orientation in Type II hysteresis. a, In-situ SHG intensity evolution in the Type II phase transition, driven by a pure E field sweep on four-layer and five-layer Td-WTe2 devices (indicated by the arrows). Both show butterfly-shape SHG intensity hysteresis responses as a signature of ferroelectric switching between upward and downward polarization phases. The intensity minima at turning points in the four-layer and five-layer crystals differ significantly in magnitude, consistent with the layer dependent SHG contrast in 1T’ stacking. This suggests changes in stacking structure take place during the Type II phase transition, which may involve 1T’ stacking as the intermediate state. b, Raman spectra of both interlayer and intralayer vibrations of fully poled upward and downward polarization phases in the 5L sample, showing nearly identical characteristic phonons of polar Td crystals. c, SHG intensity of fully poled upward and downward polarization phases as a function of analyzer polarization angle, with fixed incident polarization along the p direction (or b axis). Both the polarization patterns and lobe orientations of these two phases are almost the same and can be well fitted based on the second order susceptibility matrix of the Pm space group (Supplementary Information Section I). These observations reveal that the transition between Td,↑ and Td,↓ stacking orders is the origin of the Type II phase transition, through which the crystal orientations are preserved.

Finally, from my perspective, can such a memory be stacked vertically, with a higher number of layers? Yes, there are three to five layers of WTe2 used in BCM, but can you put another three to five layers on top of that, and then another? Although the researchers used three, four and five layer configurations, it appears that while this changed the amplitude of the Berry curvature effect, it didn’t seem to add more states to the transition. If we were to add more layers of WTe2, would we be able to discern, say, 16 different states (like QLC NAND today)?

~~~~

So there’s a ways to go to productize BCM. But, aside from eliminating the low-temperature requirements, everything else looks pretty doable, at least to me.

I think it would open up a whole new dimension of applications if we had, say, 60TB of memory to compute with, don’t you think?

Comments?

[Updated the title from 60TB to PB to 36PB as I understood how much memory PMEM can provide today…, the Eds.]


Open source ASICs – Hardware vs. Software innovation (round 5)

A good friend of mine sent me an article yesterday (Produce your own physical chips for free, in the open) that announced a collaboration between Google, Skywater Technology Foundry and the FOSSi (Free and Open Source Silicon) Foundation that ultimately supplies a completely open source set of tools to create ASICs at the 130nm node. The last piece of this toolkit was open source PDK (Process Design Kit) data produced by Google-Skywater Technologies, together with their offer of free fab services to manufacture chips designed with the tool set.

Layout snapshots of 2D and 3D ICs designed in 130-nm process technology: (a) 2D IC (2D-130); (b) the top and bottom tiers of a 3D IC using macro-level partitioning (3D-MP-130); and (c) the top and bottom tiers of a 3D IC using pipeline-level partitioning (3D-PP-130). 

The industry and I have had a long term discussion in this blog and elsewhere about the superiority of hardware innovation vs. software innovation using commodity hardware (e.g., see TPU and hardware vs. software innovation (Round 3) and Hardware vs. software innovation – Round 4). Most of the tech industry believes that software innovation on commodity hardware is better than hardware innovation. We beg to differ; in our mind, it’s the combination of hardware AND software innovation that is remaking the world.

Much of this can be seen with smart phone technology. The smart phone would not be possible without significant hardware innovation and has supplied ubiquitous computing for the world. That is, it has connected billions to the internet who had no connection before.

But historically, hardware innovation has been hard to do, taken a long time, and cost a lot vs. software innovation with commodity hardware, which, by definition, is easier to do, takes almost no time (with continuous innovation, even less) and costs almost nothing, especially when using open source.

The one innovation that emerged over the last few decades to make new hardware creation easier has been the FPGA. FPGAs allow for “programming” hardware logic in the lab (sometimes in the field) rather than having it be set in silicon in the fab. The toolchains to support FPGA programming can be proprietary, but some are also available in open source. For example, SymbiFlow (open source) takes in Verilog (the IEEE standard hardware definition language) and converts it into a binary bit stream used to program most (Xilinx-7 and Lattice) FPGAs.

But this recent announcement makes the process to create ASICs completely open source and much easier and cheaper to do.

ASIC design flow

Prior to this announcement, most PDKs were expensive and specific to a particular fab and process node. With Google’s and Skywater’s release of open source PDK data (on GitHub), designers and engineers now have a completely open source tool kit: RTL (register-transfer level, hardware description logic) design tools, EDA tools and PDK data to create their own ASICs. And with this toolkit, Skywater together with Google will manufacture ASICs for you, at no cost.

The FOSSi Dial-Up talk (embedded in the announcement above) goes into much detail about the FPGA and ASIC tool chains. But prior to this announcement, the PDK data, which is used to help the RTL and EDA tools simulate, verify and determine the optimum layout for the hardware design, was always proprietary.

Open source RTL tools have been available for years now, starting with OpenCores, OpenRISC, RISC-V and now OpenPOWER. RISC-V and OpenPOWER include RTL to implement sophisticated instruction set CPUs. OpenRISC is RTL for a precursor to RISC-V, and OpenCores supplies the RTL for a number of other (CPU) cores. But this is just a sample of the RTL that’s available in open source.

EDA tools are also available in open source. The most recent incarnation would be the DARPA funded OpenROAD project. OpenROAD will ultimately provide a completely open source EDA tool set for electronic design. The first component of this is a set of EDA tools that convert RTL to GDSII (the industry standard graphical design stream description of an IC chip’s componentry and layout). GDSII streams are used to create masks for IC fabrication.

And now with the open source Google-Skywater PDK data, one has a complete open source tool chain to create ASICs at the 130nm node level for the Skywater Fab in Minnesota.

A PDK contains a lot of data about the ASIC fabrication process including process design rules, analog and digital design cells and models, behavioral models for analog and digital design, extracted data for simulation and other supporting functionality.

The Google-Skywater Technologies open source PDK is Apache 2.0 Licensed. The PDK is used in the SKY130 process node, which includes 130nm technologies, high voltage support, 5 metal layers and one interconnect layer.

At the moment the PDK includes standard digital cell support (“nor” gates, “and” gates, flip flops, etc.), but over time they are planning to add analog cells, IO & periphery cells, and analog RF, as well as fully automated design rule checking, with SRAM/flash build spaces.
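To give a feel for what the synthesis step of the flow does with those standard cells, here’s a toy Python sketch (not the real OpenROAD/OpenLane tools) that maps a one-bit XOR, the sort of logic you would describe in RTL, onto 2-input NAND standard cells like those in the PDK’s digital cell library, and then checks the mapping against the original truth table.

```python
# Toy illustration of logic synthesis / technology mapping: express a piece of
# RTL-level behavior (a 1-bit XOR) using only a standard-cell primitive (a
# 2-input NAND), then verify against the original truth table. This is a
# conceptual sketch only, not the real open source ASIC flow.

def nand(a, b):            # a 2-input NAND standard cell
    return 1 - (a & b)

def xor_rtl(a, b):         # the "RTL" description of the behavior we want
    return a ^ b

def xor_mapped(a, b):      # the same function mapped onto four NAND cells
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

for a in (0, 1):
    for b in (0, 1):
        assert xor_mapped(a, b) == xor_rtl(a, b)
print("Gate-level mapping matches the RTL truth table: 4 NAND cells per XOR.")
```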

The PDK does include standard SRAM bit cells, and in combination with the OpenRAM project, one can use these cells to create SRAM memory for the ASIC.

Google-Skywater are going to be fabricating, for free, up to 40 ASIC designs starting in the Fall of 2020, and then six months later, they will start fabricating ~40 ASICs every 3 months.

However, to qualify for free fabrication, your design has to be completely open source (located on GitHub). To submit your ASIC, you need to send your public GitHub repository URL to efabless and they will perform verification processes on it. If it passes, they will respond with an email that it was accepted. If more than 40 designs are submitted for a run, the Google-Skywater team will decide which 40 will be manufactured.

The 16mm² ASIC automatically comes with a RISC-V CPU, RAM and power plus ~40 IOs. There is another 10mm² of space for all of your ASIC-specific logic. If successful, you will get back ~100 to 400 packaged chips.

~~~~

ASICs were always lengthy and costly to design, and then fabrication took more money and time before you got anything back to test. With open source tool kits, design should no longer cost anything but engineering time, and with the sophistication available in today’s toolchains, it should not be that lengthy. And if you’re one of the lucky 40 designs, ASIC fabrication is free. And then, starting next year, fabrication runs will occur every 3 months. So you could potentially get your design back in an ASIC in as little as 3 months.

And while the 130nm technology node dates back to 2001-2003, there were plenty of sophisticated ASICs made during those years (at a previous job, we did a couple ourselves). And of course, with your very own RISC-V CPU inside, you could pretty much do anything you want with your ASIC. Yeah, RAM, SRAM and other constraints may limit you, but that’s what hardware innovation is all about: deal with the physical constraints but open up a whole new architectural world.

Welcome to a new era of ASIC (hardware) innovation.


Breaking optical data transmission speed records

Read an article this week about records being set in optical transmission speeds (see IEEE Spectrum, Optical labs set terabit records). Although these are all lab based records, the (data center) single mode optical transmission speed record shown below is not that far ahead of the single mode fibre speeds commercially available today. But the multi-mode long haul (undersea transmission) speed record below will probably take a while longer until it’s ready for prime time.

First up, data center optical transmission speeds

Not sure what your data center transmission rates are, but it seems pretty typical to see 100Gbps these days, and 200Gbps inter-switch links are commercially available. Last year at the annual Optical Fiber Communications (OFC) conference, the industry announced commercial availability of 400Gbps and was pushing to achieve 800Gbps soon.

Since then, researchers at Nokia Bell Labs have been able to transmit 1.52Tbps through a single mode fiber over an 80 km distance. (It’s unclear why a data center needs an 80km single mode fibre link, but maybe this is more for a metro area than just a data center.)

Diagram of a single mode (SM) optical fiber: 1.- Core 8-10 µm; 2.- Cladding 125 µm; 3.- Buffer 250 µm; & 4.- Jacket 400 µm

The key to transmitting data faster across single mode fibre is how quickly one can encode/decode data (symbols), both on the digital-to-analog encoding (transmitting) end and the analog-to-digital decoding (receiving) end.

The team at Nokia used a new generation silicon-germanium chip (55nm CMOS process) able to generate 128 gigabaud symbol transmission (encoding/decoding) with 6.2 bits per symbol across single mode fiber.
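A quick sanity check on those numbers: symbol rate times bits per symbol gives the line rate per polarization, and coherent systems typically carry two polarizations. The dual-polarization assumption (and ignoring FEC overhead) is my reading, not something spelled out in the article.

```python
# Rough line-rate check for the Nokia Bell Labs result.
# Assumption (mine): dual-polarization transmission, FEC/overhead ignored.

symbol_rate_gbaud = 128       # symbols per second, in billions
bits_per_symbol = 6.2
polarizations = 2

per_polarization_gbps = symbol_rate_gbaud * bits_per_symbol
raw_total_gbps = per_polarization_gbps * polarizations

print(f"Per polarization: {per_polarization_gbps:.0f} Gbps")
print(f"Raw total over {polarizations} polarizations: {raw_total_gbps / 1000:.2f} Tbps")
# ~1.59 Tbps raw, in the same ballpark as the reported 1.52 Tbps net rate.
```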

Using optical erbium amplifiers, the team at Nokia was able to achieve 1.4Tbps over 240km of single mode fibre.

A wall-mount cabinet containing optical fiber interconnects. The yellow cables are single mode fibers; the orange and aqua cables are multi-mode fibers: 50/125 µm OM2 and 50/125 µm OM3 fibers respectively.

Used to be that transmitting data across single mode fibre was all about how quickly one could turn a laser/light on and off. These days, with coherent transmission, data is encoded/decoded using amplitude modulation, phase modulation and polarization (see the Coherent data transmission defined article).

Nokia Bell Labs is attempting to double the current 800Gbps data transmission speed, i.e., reach 1.6Tbps. At 1.52Tbps, they’re not far off that mark.

It’s somewhat surprising that optical single mode fibre technology is advancing so rapidly and yet, at the same time, commercially available technology is not that far behind.

Long haul optical transmission speed

Undersea or long haul optical transmission uses multi-core/multi-mode fibre to transmit data across continents or an ocean. With multi-core/multi-mode fibre, researchers at Japan’s National Institute of Information and Communications Technology (NICT) have demonstrated a 3 core, 125 micrometer wide, long haul optical fibre transmission system able to transmit 172Tbps.

The new technology utilizes closely-coupled multi-core fibre, where signals in each individual core end up intentionally coupled with one another, creating a sort of optical MIMO (multi-input/multi-output) transmission mechanism that can be disentangled with less complex electronics.
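Here’s a toy numpy illustration of the MIMO idea (not NICT’s actual DSP): three signals launched into three coupled cores get mixed by a coupling matrix, and the receiver recovers them by inverting that matrix, assuming the coupling can be estimated. The coupling values are made up for illustration.

```python
# Toy model of coupled-core MIMO: 3 transmitted signals are linearly mixed by
# inter-core coupling; the receiver undoes the mixing with the inverse matrix.
# This is a zero-forcing illustration only, not NICT's actual DSP.
import numpy as np

rng = np.random.default_rng(0)
tx = rng.choice([-1.0, 1.0], size=(3, 8))     # 3 cores, 8 BPSK-like symbols each

# Assumed coupling matrix: strong diagonal (own core) plus cross-coupling.
H = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0]])

rx = H @ tx                                   # what the 3 receivers actually see
recovered = np.linalg.inv(H) @ rx             # unmix with the estimated coupling

print("Max recovery error:", np.max(np.abs(recovered - tx)))   # ~0 (no noise added)
```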

Although the technology is not ready for prime time, the closest competing technology is a 6-core fiber transmission cable which can transmit 144Tbps. Deployments of that cable are said to be starting soon.

Shouldn’t there be a Moore’s law for optical transmission speeds

Ran across this chart in a LightTalk Blog discussing how Moore’s law and optical transmission speeds are tracking one another. It seems to me that there’s a need for a Moore’s law for optical cable bandwidth. The blog post suggests that there’s a high correlation between Moore’s law and optical fiber bandwidth.

Indeed, any digital-to-analog optical encoding/decoding involves transistor logic, so by definition there’s at least a high correlation between the speed of electronic switching/processing and bandwidth. But transistor counts (as the chart shows) don’t have as obvious a connection to optical bandwidth, with the possible exception that processing speed is highly correlated with transistor counts these days, which probably explains the correlation.

But the chart above shows that optical bandwidth and transistor counts are tracking each other very closely.

~~~~

So, we all thought 100Gbps was great, 200Gbps was extraordinary and anything over that was wishful thinking. With 400Gbps, 800Gbps and 1.6Tbps all rolling out soon, data center transmission bottlenecks will become a thing of the past.


New science used to combat COVID-19 disease

Read an article last week in Science Magazine (A completely new culture of doing research…) on how the way science is done to combat disease has changed over the last few years.

In the olden days (~3-5 years ago), disease outbreaks would generate a slew of research papers to be written, submitted for publication and there they would sit, until peer-reviewed, after which they might get published for the world to see for the first time. Estimates I’ve seen say that the scientific research publishing process takes anywhere from one month (very fast) to 4-8 months, assuming no major revisions are required.

With the emergence of the Zika virus and recent Ebola outbreaks, more and more biological research papers have become available through pre-print servers. These are websites which accept research before publication (pre-print), posting it for all to see, comment on and understand.

Open science via pre-print

Most of these pre-print servers focus on specific areas of science. For example bioRxiv is a pre-print server focused on Biology and medRxiv is for health sciences. On the other hand, arXiv is a pre-print server for “physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.” These are just a sampling of what’s available today.

In the past, scientific journals would not accept research that had been published before. But this has slowly changed as well. Now most scientific journals have policies on pre-print publication and will also publish pre-printed research if they deem it worthwhile (see the Wikipedia article List of academic journals by pre-print policies).

As of today (9 March 2020), on bioRxiv there are 423 papers with the keyword “coronavirus” and 52 papers with the keyword “COVID-19”; some of these may be the same. The newest (Substrate specificity profiling of SARS-CoV-2 Mpro protease provides basis for anti-COVID-19 drug design) was published on 3/7/2020. The last sentence in its abstract says: “The results of our work provide a structural framework for the design of inhibitors as antiviral agents or diagnostic tests.” The oldest on bioRxiv is dated 23 January 2020. Similarly, there are 326 papers on medRxiv with the keyword “coronavirus”, the newest published 5 March 2020.

Pre-print research is getting out in the open much sooner than ever before. But the downside is that pre-print papers may have serious mistakes or omissions in them, as they are not peer-reviewed. So the cost of rapid openness is the possibility that some research may be outright wrong, badly done, or lead researchers down blind alleys.

However, the upside is any bad research can be vetted sooner, if it’s open to the world. We see similar problems with open source software, some of it can be buggy or outright failure prone. But having it be open, and if it’s popular, many people will see the problems (or bugs) and fixes will be rapidly created to solve them. With pre-print research, the comment list associated with a pre-print can be long and often will identify problems in the research.

Open science through open journals

In addition to pre-print servers, we are also starting to see the increasing use of open scientific journals, such as PLOS, to publish formal research.

PLOS has a number of open journals focused on specific arenas of research, such as PLOS Biology, PLOS Pathogens, PLOS Medicine, etc.

Researchers or their institutions have to pay a nominal fee to publish in PLOS. But all PLOS publications are fully peer-reviewed by experts. Unlike research from, say, Nature, IEEE or other scientific journals, PLOS papers are free to anyone and are widely available. (However, I just saw that SpringerNature is making all their coronavirus research free.)

Open science via open data(sets)

Another aspect of scientific research that has undergone change of late is the sharing and publication of data used in the research.

Nature has a list of recommended data repositories. All these data repositories seem to be hosted by FAIRsharing at the University of Oxford and run by their Data Readiness Group. They list 1,349 databases, of which the vast majority (1,250) are for the natural sciences, with over 1,380 standards used to register data with FAIRsharing.

We’ve discussed similar data repositories in the past (please see Data banks, data deposits and data withdrawals, UK BioBank, Big open data leads to citizen science, etc). Having a place to store data used in research papers makes it easier to understand and replicate science.

Collaboration software

The other change to research activities is the use of collaboration software such as Slack. Researchers at UW Madison were already using Slack to collaborate on research, but when the coronavirus went public, they realized Slack could help here too. So they created a group (or channel) on their Slack site called “Wu-han Clan” and invited 69 researchers from around the world. The day after they created it, they held their first teleconference.

Other collaboration software exists today but Slack seems most popular. We use Slack for communications in our robotics club, blogging group, a couple of companies we work with, etc. Each has a number of invite-only channels, where channel members can post text, (data) files, links and just about anything else of interest to the channel.

Although I have not been invited to participate in Wu-han Clan (yet), I assume they use Slack to discuss and vet (pre-print) research, discuss research needs, and explore other ways to avert the pandemic.

~~~~

So there you have it. Coronavirus scientific research is happening at warp speed compared to diseases of yore. Technologies to support this sped up research have all emerged over the last five to 10 years but are now being put to use more than ever before. Such technological advancement should lead to faster diagnosis, lower worldwide infection/mortality rates and a quicker medical solution.


AI processing at the edge

Read a couple of articles over the past few weeks (TechCrunch: Google is making a fast, specialized TPU chip for edge devices … and IEEE Spectrum: Two startups use processing in flash for AI at the edge) about chips for AI at the IoT edge.

The two startups, Syntiant and Mythic, are moving to analog only or analog-digital solutions to provide AI processing needed at the edge while Google is taking their TPU technology to the edge.  We have written about Google’s TPU before (see: TPU and hardware vs. software  innovation (round 3) post).


The major challenge in AI processing at the edge is power consumption. Both startups attack the power problem by using flash and other analog circuitry to provide power-efficient compute.

Google attacked the power problem with their original TPU by reducing computational precision from 64- to 8-bits. By reducing transistor counts, they lowered power requirements proportionally.

AI today is based on neural networks (NN), which connect simulated neurons via simulated synapses, with weights attached to indicate whether to boost or decrease the signal being transmitted. AI learning is done by setting those weights and creating the connections between simulated neurons and synapses. So learning is setting weights and establishing connections. Actual inference (using AI to do something) is a process of exciting input simulated neurons/synapses and letting the signal flow through the NN, with each weight determining the output(s).

AI with standard compute

The problem with doing AI learning or inferencing with normal CPUs or even CUDA GPUs is that the NN does thousands, if not millions, of multiply-accumulate operations at each simulated synapse-neuron connection. Doing all these multiply-accumulates takes power. CPUs and GPUs can do these sorts of operations on 32 or 64 bit integers or even floating point numbers, but it still takes power.
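To make the multiply-accumulate load concrete, here’s a small numpy sketch of a two-layer dense network’s forward pass, along with a count of the MAC operations it performs. The layer sizes are arbitrary examples, not any particular product’s network.

```python
# Dense NN inference is dominated by multiply-accumulate (MAC) operations:
# every synapse (weight) contributes one multiply and one add per inference.
import numpy as np

layer_sizes = [784, 256, 10]          # arbitrary example: input, hidden, output
rng = np.random.default_rng(0)
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def infer(x):
    """One forward pass: each layer is a weight matrix-vector MAC plus a ReLU."""
    for W in weights:
        x = np.maximum(W @ x, 0.0)    # W @ x is where all the MACs happen
    return x

macs = sum(W.size for W in weights)   # one MAC per weight per inference
print("MACs per inference:", macs)    # 784*256 + 256*10 = 203,264
print("Output:", infer(rng.standard_normal(784)).round(2))
```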

AI processing power

AI processing power is measured in trillions of (multiply-accumulate) operations per second per watt (TOPS/W). Mythic believes it can perform 4 TOPS/W and Syntiant says it can do 20 TOPS/W. In comparison, the NVIDIA Volta V100 can do about 0.4 TOPS/W (according to the article). Although comparing Syntiant-Mythic TOPS to NVIDIA TOPS is a little like comparing apples to oranges.

A current Intel Xeon Platinum 8180M (2.5GHz, 28 core processor, 205W) can probably do (assuming one multiply-accumulate per hertz per core) about 2.5 billion X 28 cores = 70 billion ops/second / 205W, or ~0.3 GOPS/W (source: Platinum 8180M data sheet).

As for Google’s TPU TOPS/W, the TPU2 is rated at 45 TFLOPS/chip and the best guess for power consumption is between 160W and 200W, say 180W. With power at that level, TPU2 should hit ~0.25 TFLOPS/W. TPU3 is coming out with 8X the processing power, but it uses water cooling (read: LOTS MORE POWER).
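Pulling the post’s numbers together, here’s a small comparison computed from them. All figures are the approximate, back-of-the-envelope values quoted above, and TOPS vs. TFLOPS are not strictly comparable, so treat the ratios as order-of-magnitude only.

```python
# Rough efficiency comparison using the figures quoted in this post.
# All numbers are approximate back-of-the-envelope values; TOPS and TFLOPS
# are not strictly comparable, so treat the ratios as order-of-magnitude only.

tops_per_watt = {
    "Syntiant (claimed)":        20.0,
    "Mythic (claimed)":           4.0,
    "NVIDIA V100 (per article)":  0.4,
    "Google TPU2 (estimated)":    0.25,    # 45 TFLOPS / ~180 W
    "Intel Xeon 8180M (est.)":    0.0003,  # ~70 GOPS / 205 W
}

baseline = tops_per_watt["NVIDIA V100 (per article)"]
for name, eff in sorted(tops_per_watt.items(), key=lambda kv: -kv[1]):
    print(f"{name:28s} {eff:9.4f} TOPS/W  ({eff / baseline:9.4f}X vs V100)")
```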

Nonetheless, it appears that Mythic and Syntiant are one to two orders of magnitude better than the best that NVIDIA and TPU2 can do today, and many orders of magnitude better than Intel x86.

Improving TOPS/W

Using NAND as an analog memory to read, write and hold NN weights is an easy way to reduce power consumption. Combine that with analog circuitry that can do multiplication and addition with those flash values, and you have an AI NN processor. This way you reduce the need to hold weights in memory and do compute in registers, by collapsing both compute and memory into the same componentry.
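Here’s a toy numpy model of that idea: weights stored as cell conductances, inputs applied as voltages, and each output read as the summed current on a column, with Ohm’s law plus Kirchhoff’s current law doing the multiply-accumulate. The conductance values and the differential-pair trick shown are illustrative, not any vendor’s actual design.

```python
# Toy model of analog in-memory multiply-accumulate: store NN weights as cell
# conductances G, apply inputs as voltages V, and each output column's current
# is G @ V, i.e. the dot product happens in the analog domain.
# Values are illustrative only, not any vendor's design.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.uniform(-1, 1, size=(4, 8))     # 4 outputs, 8 inputs

# Map signed weights onto non-negative conductances with a differential pair
# of cells per weight (one array for positive parts, one for negative parts).
g_pos = np.clip(weights, 0, None)
g_neg = np.clip(-weights, 0, None)

inputs = rng.uniform(0, 1, size=8)            # input activations as voltages

i_pos = g_pos @ inputs                        # summed column currents (positive array)
i_neg = g_neg @ inputs                        # summed column currents (negative array)
analog_result = i_pos - i_neg                 # differential sensing

digital_result = weights @ inputs             # what a CPU/GPU would compute
print("Max difference vs digital MAC:", np.max(np.abs(analog_result - digital_result)))
```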

The major difference between Syntiant and Mythic seems to be the amount of analog circuitry they use. Mythic seems to relegate the analog circuitry to an accelerator, while Syntiant makes more extensive use of analog circuitry throughout their chip. That’s probably why it can perform 5X the TOPS/W of Mythic’s IPU.

IBM and others have been working on neuromorphic chips, some of which are analog based and others all digital. We’ve written extensively on IBM’s and somewhat on MIT’s approaches (for the latest on IBM see: More power efficient deep learning through IBM and PCM, and for MIT see: MIT builds an analog synapse chip); follow the links there to learn more.

~~~~

Special purpose AI hardware is emerging from the labs and finally reaching reality. IBM R&D has been playing with it for a long time. Google is working on TPU3, so there’s no stopping them. And startups are seeing an opening and are taking everyone on. Stay tuned, we’re in for a good long ride before someone rises above the crowd and becomes the next chip giant.

Comments?

Photo Credit(s): TechCrunch  Google is making a fast, specialized TPU chip for edge devices … article

Introduction to Digital Design Verification at Mythic, Medium.com Article

Images from Google Cloud Platform Blog on the TPU

Two startups use processing in flash for AI at the edge, IEEE Spectrum article courtesy of Mythic

Photonic or Optical FPGAs on the horizon

Read an article this past week (Toward an optical FPGA – programable silicon photonics circuits) on a new technology that could underpin optical FPGAs. The technology is based on implantable wave guides and uses silicon-on-insulator technology, which is compatible with current chip fabrication.

How does the Optical FPGA work

Their Optical FPGA is based on an erasable directional coupler (DC) built using Ge (germanium) ion implantation. A DC is used when two optical wave guides are placed close enough together that optical energy (photons) on one wave guide is switched over to the other, nearby wave guide.

As can be seen in the figure, the red (erasable, implanted) and blue (conventional) wave guides are fabricated on the FPGA. The red wave guide performs the function of a DC between the two conventional wave guides. The diagram shows both a single stage and a dual stage DC.

By using implantable (erasable) DCs, one can change the path of a photonic circuit just by erasing the implanted wave guide(s).

The Ge ion implanted wave guides are erased by passing a laser over them, annealing (melting) them.

Once erased, the implanted wave guide DC no longer works. The chart on the left of the figure above shows how long the implanted wave guide needs to be to work. As shown, once erased down to shorter than 4-5µm, it no longer acts as a DC.
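That length dependence is consistent with the standard coupled-mode picture of a directional coupler, where the fraction of optical power transferred to the adjacent wave guide goes roughly as sin²(κL); below some length, almost nothing couples across. The coupling coefficient in the sketch below is an arbitrary illustrative value, not one from the paper.

```python
# Standard coupled-mode estimate of power transfer in a directional coupler:
# fraction transferred ~ sin^2(kappa * L). The kappa value here is made up
# purely for illustration; the paper's 4-5 um threshold depends on the actual
# waveguide geometry.
import math

kappa_per_um = math.pi / (2 * 10.0)   # assume full transfer at L = 10 um

for length_um in (2, 4, 6, 8, 10):
    transferred = math.sin(kappa_per_um * length_um) ** 2
    print(f"L = {length_um:2d} um -> {transferred * 100:5.1f}% of power coupled across")
```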

It’s not clear how one directs the laser to the proper place on the Optical FPGA to anneal the implantable wave guide but that’s a question of servos and mirrors.

Previous attempts at optical FPGAs required applying a continuous voltage to maintain the switched photonic circuits. Once the voltage was withdrawn, the photonics reverted back to their original configuration.

But once an implantable wave guide is erased (annealed) in their approach, the changes to the Optical FPGA are permanent.

FPGAs today

Electronic FPGAs have never gone out of favor with customers doing hardware innovation. By supplying Optical FPGAs, the techniques in the paper would allow for much more photonics innovation as well.

Optics are primarily used in communications and storage (CD-DVDs) today. But quantum computing could potentially use photonics, and there’s been talk of a 100% optical computer for a long time. As more and more photonic circuitry comes online, the need for an optical FPGA grows. The fact that it can be built on today’s fab lines makes it even more appealing.

But an FPGA is more than just directional control over (electronic or photonic) energy. One needs to have other circuitry in place on the FPGA for it to do work.

For example, if this were an electronic FPGA, gates, adders, muxes, etc. would all be somewhere on the FPGA.

However, once having placed additional optical componentry on the FPGA, photonic directional control would be the glue that makes the Optical FPGA programmable.

Comments?

Photo Credit(s): All photos from Toward an optical FPGA – programable silicon photonics circuits paper

 

Skyrmion and chiral bobber solitons for racetrack storage

Read an article this week in Science Daily (Magnetic skyrmions: Not the only one of their class; …) about new magnetic structures that could lend themselves to creating a new type of moving, non-volatile storage.  (There’s more information in the press release and the Nature paper [DOI: 10.1038/s41565-018-0093-3], behind a paywall).

Skyrmions and chiral bobbers are both considered magnetic solitons: types of magnetic structures only 10s of nm wide that can move around, in a sort of racetrack configuration.

Delay line memories

Early in computing history, there was a type of memory called delay line memory, which used various mechanisms (mercury, magnetostriction, capacitors, etc.) arranged along a circular line, such as a wire, and had moving pulses of memory that raced around it.

One problem with delay line memory was that it was accessed sequentially, rather than randomly like core memory. To change a bit, one had to wait until the bit came under the read/write head. It usually took microseconds for a bit to rotate around the memory line, and delay line memories had a capacity of a few thousand bits (256-512 bytes) per line, in today’s vernacular.

Delay lines predate computers and had been used for decades to delay any electronic or acoustic signal before retransmission.

A new racetrack

Solitons are being investigated for use in a new form of delay line memory, called racetrack memory. Skyrmions had been discovered a while ago, but the existence of chiral bobbers was only theoretical until researchers discovered them in their lab.

Previously, the thought was that one would encode digital data with only skyrmions and spaces. But the discovery of chiral bobbers, and the fact that they can co-exist with skyrmions, means that chiral bobbers and skyrmions can be used together in racetrack fashion to record digital data. And the fact that both can move or migrate through a material makes them ideal for racetrack storage.

It’s unclear whether chiral bobbers and skyrmions have only two states or more, but the more the merrier for storage. I am assuming that bit density or reliability is increased by having chiral bobbers in the chain rather than spaces.

Unlike disk devices, with both rotating media and moving read-write heads, the motion of skyrmion-chiral bobber racetrack storage is controlled by a very weak pulse of current and requires no moving/mechanical parts prone to wear and tear. Moreover, as solid state devices, racetrack memory is not sensitive to induced/organic vibration or shock. So, theoretically, these devices should have higher reliability than disk devices.

There was no information comparing the new racetrack memory’s reliability to NAND or 3D XPoint/PCM SSDs, but there may be some advantage here as well. I suppose one would need to understand how to miniaturize the read-erase-write head to the right form factor for nm-scale racetracks to understand how it compares.

And I didn’t see anything describing how long it takes to rotate through the bits on a skyrmion-chiral bobber racetrack. Of course, this would depend on the number of bits on a racetrack, but some indication of how long it takes a bit to move one position on the racetrack would help in estimating its rotational latency.
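As a placeholder for those missing numbers, here’s a tiny latency model: with an assumed per-position shift time and track length, the average access latency is roughly half the bits on the track times the shift time. Both input values are guesses for illustration only.

```python
# Illustrative racetrack access-latency model. Both parameters are guesses,
# not values from the paper; the point is only how latency scales.

bits_per_track = 512             # assumed track capacity
shift_time_ns = 1.0              # assumed time to shift the solitons one position

worst_case_ns = bits_per_track * shift_time_ns
average_ns = worst_case_ns / 2   # on average the target bit is half a track away

print(f"Average access latency:    {average_ns:.0f} ns")
print(f"Worst case access latency: {worst_case_ns:.0f} ns")
```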

~~~~

At the moment, reading and writing skyrmions and the newly discovered chiral bobbers takes a lot of advanced equipment and is only done in major labs. As such, I don’t see a skyrmion-chiral bobber racetrack storage device arriving on my desktop anytime soon. But the fact that there’s a long way to go before we run out of magnetic storage options, even if it is on a chip rather than magnetic media, is comforting to know. Even if we don’t ever come up with an economical way to produce it.

I wonder if you could synchronize rotational timing across a number of racetrack devices; at least that way you could be reading/erasing/writing a whole byte, word, double word, etc. at a time, rather than a single bit.

Comments?

Photo Credit(s): From Experimental observation of chiral magnetic bobbers in B20 Type FeGe paper

From Experimental observation of chiral magnetic bobbers in B20 Type FeGe paper

From Timeline of computer history Magnetoresistive delay lines

From Experimental observation of chiral magnetic bobbers in B20 Type FeGe paper

Atomristors, a new single (atomic) layer memristor

Read an article the other day about the “Atomristor: non-volatile resistance switching in atomic sheets of transition metal dichalcogenides” (TMDs), an ACS publication. The article describes research that discovered an atomic-sheet-level version of a memristor. The device is an atomic sheet of TMD sandwiched between two (gold, silver or graphene) electrodes.

They refer to the device switching non-volatile resistance (NVR) from low to high or vice versa, but from our perspective it could just as easily be considered a non-volatile device usable for memory, storage, or electronic circuitry.
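Reading such an NVR device amounts to applying a small read voltage and thresholding the measured current to decide whether the cell is in its low or high resistance state. The sketch below models that; the resistance and threshold values are arbitrary placeholders, not measurements from the ACS paper.

```python
# Toy model of reading a resistive (NVR) memory cell: apply a small read
# voltage, measure the current, and threshold it to recover the stored bit.
# Resistance values are arbitrary placeholders, not from the ACS paper.

READ_VOLTAGE = 0.2                 # volts, well below the set/reset voltage
LOW_RESISTANCE = 1e4               # ohms, the "on" state  -> logical 1
HIGH_RESISTANCE = 1e7              # ohms, the "off" state -> logical 0
CURRENT_THRESHOLD = 1e-6           # amps, between the two expected read currents

def read_bit(cell_resistance_ohms):
    current = READ_VOLTAGE / cell_resistance_ohms   # Ohm's law
    return 1 if current > CURRENT_THRESHOLD else 0

for r in (LOW_RESISTANCE, HIGH_RESISTANCE):
    print(f"R = {r:>12,.0f} ohms -> bit = {read_bit(r)}")
```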

Prior to this research, it was believed that such resistance switching could not be accomplished with single atomic, sub-nanometre (0.7nm) sized, sheet of material.

NVR atomristor technological properties

The researchers discovered that NVR switching can occur at different device temperatures, sheet areas, compliance currents, voltage sweep rates, and layer thicknesses. In all, five degrees of freedom were tested to show that TMD atomristors have wide applicability and tolerate significant environmental and electronic variability.

Not only was the effect extremely versatile, the researchers identified multiple materials which could be used for the atomic sheet. In fact, TMDs are a class of materials, and they showed 4 different TMD materials that exhibited the NVR effect.

Surprisingly, some TMD materials exhibited the NVR effect using unipolar voltages, some using bipolar voltages, and some could use both.

The researchers went a long way toward showing that the NVR effect was due to the atomic sheet. In one instance they specifically used non-lithographic measures to fabricate the devices. This process utilized graphene-manufacturing-like methods to produce an atomic sheet on top of gold foil and then deposited another gold layer on top of that.

But they also used standard fabrication techniques to build the atomristor devices. Using these different fabrication methods, they were able to show the NVR effect with different electrode types, testing gold, silver, and graphene, all of which worked similarly.

The paper talked of using atomristors in software defined radio, as an electronic circuit/crossbar switch, or as a memory/storage device. But they also indicated that it could easily be used in a neuromorphic computer, effectively simulating neuron-like computations.

There’s much more information in the ACS article.

How does it compare to flash?

As compared to flash, atomristor NVR devices should be able to provide higher density (bits per mm²). And due to the lower voltage (~1V) required for (bipolar) NVR set, read and reset operations, there’s a lower probability of leakage of stored charge as they’re scaled down to nm sizes.

And of course it comes in 2D sheets, so it’s just as amenable to 3D arrays as NAND and 3DX are today. That means that as fabs scale 3D NAND up in layers, atomristor NVR devices should be able to follow that technology roadmap and be scaled up just as high.

Atomristor computers, storage or switch devices

Going from the “lab” to an IT shop is a multifaceted endeavour that takes a lot of time. There are many steps to needed to get to commercialization and many lab breakthroughs never make it that far because of complexity, economics, and other factors.

For instance, memristors were first proposed in 1971, and HP(E) researchers first discovered a material that could produce the memristor effect in 2008. In March 2012, HRL fabricated the first memristor chip on CMOS. In Dec. 2017, >9 years later, at their Discover conference, HPE showed off “The Machine”, a prototype of a memristor based computer, to the public. But we are still waiting to see one on the market for sale.

That being said, memristor technologies didn’t exist before 2008, so the use of these devices in a computer took some time to be understood. The fact that atomristors are “just” an extremely thin version of memristors should help them get to market faster than the original memristor technologies. But how much faster than 9-12 years is anyone’s guess.

~~~~

Comments?

Picture Credit(s): All graphics and pictures are from the article in ACS