New era of graphical AI is near #AIFD2 @Intel

I attended AIFD2 (videos of the sessions are available here) a couple of weeks back, and in the last session Intel presented information on two things they have been working on: new graph-optimized compute cores, and a partnership with Katana Graph, which supplies a highly optimized graph analytics processing tool set using latest-generation Xeon compute and Optane PMEM.

What’s so special about graphs

The challenge with graph processing is that it’s nothing like standard 2D tables/images or 3D-oriented data sets. A graph is essentially a non-Euclidean data space made up of nodes and the edges that connect them.

But graphs are everywhere we look today, for instance, “friend” connection graphs, “terrorist” networks, page rank algorithms, drug impacts on biochemical pathways, cut points (single points of failure in networks or electrical grids), and of course optimized routing.

The challenge is that large graphs aren’t easily processed with standard scale-up or scale-out architectures. Part of this is that graphs are very sparse: one node could point to a single other node or to millions of them. Due to this sparsity, standard cache prefetch logic (such as fetching everything adjacent to a requested memory location) and standard vector processing (the same instruction applied to data laid out in sequence) don’t work very well at all. Standard branch prediction logic doesn’t work well either (not sure why, but apparently branching in graph processing depends more on the data at the node or in the edge connecting nodes).
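
To make the sparsity point concrete, here’s a minimal sketch (in Python, my own illustration rather than anything from Intel) of a graph stored as an adjacency list. Notice that a traversal’s next memory access depends entirely on the data it just read, which is exactly what defeats prefetchers and vector units:

    # Minimal sketch: a sparse graph as an adjacency list (node -> neighbor list).
    # Node degrees vary wildly, and the next memory access in a traversal depends
    # entirely on the data just read -- no regular stride for a prefetcher to exploit.
    from collections import deque

    graph = {
        "A": ["B"],                     # one node may have a single edge...
        "B": ["A", "C", "D", "E"],      # ...another may have many
        "C": ["B"],
        "D": ["B", "E"],
        "E": ["B", "D"],
    }

    def bfs(graph, start):
        """Breadth-first traversal: the classic irregular, pointer-chasing access pattern."""
        seen, order, queue = {start}, [], deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for neighbor in graph[node]:    # which node is touched next is data dependent
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return order

    print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D', 'E']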

Intel talked about a new compute core they’ve been working on, which was developed in response to a DARPA-funded activity to speed up graph processing and analytics 1000X over current CPU/GPU hardware capabilities.

Intel presented on their PIUMA core technology, which was also described in a 2020 research paper (Programmable Integrated and Unified Memory Architecture) and a YouTube video (Programmable Unified Memory Architecture).

Intel’s PIUMA Technology

DARPA’s goals became public in 2017 when they described their Hierarchical Identify Verify Exploit (HIVE) program. HIVE is DOD’s description of a graph analytics processor and is a multi-institutional initiative to speed up graph processing.

Intel PIUMA cores come with a multitude of 64-bit RISC processor pipelines, a global (shared) address space, memory and network interfaces optimized for 8-byte data transfers, a (globally addressed) scratchpad memory, and an offload engine for common operations like scatter/gather memory access.

Each multi-thread PIUMA core has a set of instruction caches, small data caches and register files to support each thread (pipeline) in execution. And a number of these multi-thread cores are connected together, as described below.

PIUMA cores are optimized for TTEPS (Tera Traversed Edges Per Second) and attempt to balance IO, memory and compute for graph workloads. PIUMA multi-thread cores are tied together into a (completely connected) clique to form a tile, multiple tiles are connected within a single node, and multiple nodes are tied together with an 8-byte-transfer-optimized network into a PIUMA system.

P[I]UMA (labeled PUMA in the video) multi-thread cores apparently eschew extensive data and instruction caching in favor of a large number of relatively simple cores that can process a multitude of threads at the same time. Most of these threads will be waiting on memory, so the more threads executing, the less likely the whole pipeline will need to sit idle, and hopefully the more processing speedup will result.
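
A rough way to see why piling on threads helps (my own back-of-envelope numbers, not Intel’s): if each thread stalls for hundreds of nanoseconds on a memory access, Little’s law says a pipeline needs roughly latency divided by issue interval threads in flight to stay busy.

    # Back-of-envelope latency hiding with assumed (illustrative) numbers:
    # a thread stalls ~300 ns per memory access, while an unstalled pipeline
    # could issue one access every ~1 ns.
    memory_latency_ns = 300     # assumed average memory/network access latency
    issue_interval_ns = 1       # assumed issue interval if nothing ever stalled

    threads_needed = memory_latency_ns / issue_interval_ns
    print(f"~{threads_needed:.0f} threads in flight to hide a {memory_latency_ns} ns latency")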

Performance of the P[I]UMA architecture vs. a standard Xeon compute architecture on graph analytics and other graph-oriented tasks was simulated, with some results presented below.

Simulated speedups for a single node with P[I]UMA technology vs. Xeon range anywhere from 3.1x to 279x and depend on the amount of computation required at each node (or edge). (Intel saw no speedup going from a single Xeon node to multiple Xeon nodes, whereas the speedup for 16 P[I]UMA nodes was 16X that of a single P[I]UMA node.)

Having a global address space across all PIUMA nodes in a system is pretty impressive. We guess this is intrinsic to their (large) graph processing performance and depends on their use of photonic HyperX networking between nodes for low-latency, small (8-byte) data accesses.

Katana Graph software

Another part of Intel’s session at AIFD2 was on their partnership with Katana Graph, a scale-out graph analytics software provider. Katana Graph takes advantage of ubiquitous Xeon compute and Optane PMEM to speed up and scale out graph processing, and it uses Intel’s oneAPI.

Katana Graph is architected to support some of the largest graphs around. They tested it with the WDC12 (Web Data Commons 2012) page crawl, a graph with 3.5B nodes (pages) and 128B connections (links) between nodes.
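
To get a feel for what a graph that size means for memory, here’s a back-of-envelope calculation (my own assumptions: 8-byte node IDs and a simple compressed-sparse-row layout, not Katana’s actual in-memory format):

    # Rough memory footprint of the WDC12 graph under an assumed CSR-style layout:
    # one 8-byte entry per edge plus one 8-byte offset per node (no properties).
    nodes = 3.5e9          # pages
    edges = 128e9          # links
    bytes_per_id = 8       # assumed 64-bit identifiers/offsets

    footprint_tb = (edges * bytes_per_id + nodes * bytes_per_id) / 1e12
    print(f"~{footprint_tb:.1f} TB just for the bare graph structure")   # ~1.1 TB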

Katana runs on the AWS, Azure and GCP hyperscaler environments as well as on prem, and can scale out to 256 systems.

Katana Graph performance results for Graph Neural Networks (GNNs) are shown below. GNNs are similar to AI/ML/DL CNNs but use graph data rather than images. One can take a graph, reduce (convolve) and summarize segments of it, and classify them. Moreover, GNNs can be used to understand whether two nodes are connected and whether two (sub)graphs are equivalent/similar.
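
For the curious, here’s a toy sketch of a single graph-convolution (message passing) layer in Python/NumPy. It’s my own illustration of the general idea, not Katana’s GNN implementation: each node averages its neighbors’ feature vectors and pushes the result through a learned weight matrix.

    import numpy as np

    def gnn_layer(adj, features, weights):
        """One toy graph-convolution layer: average neighbor (and self) features,
        then apply a learned linear transform and a ReLU nonlinearity."""
        adj_self = adj + np.eye(adj.shape[0])            # include each node's own features
        degree = adj_self.sum(axis=1, keepdims=True)
        aggregated = (adj_self @ features) / degree      # mean over the neighborhood
        return np.maximum(aggregated @ weights, 0.0)     # ReLU

    # Tiny 4-node example with made-up numbers.
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 1],
                    [0, 1, 0, 0],
                    [0, 1, 0, 0]], dtype=float)
    features = np.random.rand(4, 8)    # 8 input features per node
    weights = np.random.rand(8, 2)     # project down to 2 outputs (e.g., 2 classes)
    print(gnn_layer(adj, features, weights).shape)   # (4, 2) per-node class scores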

In addition to GNNs, Katana Graph supports Graph Transformer Networks (GTNs), which can analyze meta-paths within a larger, heterogeneous graph. The challenge with large graphs (say friend/terrorist networks) is that there are a large number of distinct sub-graphs within the graph. GTNs can break heterogeneous graphs into sub- or meta-graphs, which can then be used to understand these relationships at smaller scales.

At AIFD2, Intel also presented an update on their Analytics Zoo, which is Intel’s MLops framework. But that will need to wait for another time.

~~~~

It was sort of a revelation to me that graph data is not amenable to normal compute core processing using today’s GPUs or CPUs. DARPA (and Intel) saw this shortfall as calling for a completely different, brand-new compute architecture.

Even so, Intel’s partnership with Katana Graph shows that even today’s compute environments can provide higher performance on graph data with suitable optimizations.

It would be interesting to see what Katana Graph could do using PIUMA technology and appropriate optimizations.

In any case, we shouldn’t need to wait long; Intel indicated in the video that P[I]UMA technology chips could be here within the next year or so.

Comments?

Photo Credit(s):

  • From Intel’s AIFD2 presentations
  • From Intel’s PUMA YouTube video

Breaking optical data transmission speed records

Read an article this week about records being set in optical transmission speeds (see IEEE Spectrum, Optical labs set terabit records). Although these are all lab-based records, the (data center) single-mode optical transmission speed record described below is not that far ahead of the single-mode fibre speeds commercially available today. But the multi-core, long-haul (undersea transmission) speed record below will probably take a while longer until it’s ready for prime time.

First up, data center optical transmission speeds

Not sure what your data center transmission rates are, but it seems pretty typical to see 100Gbps these days, and 200Gbps inter-switch links are commercially available. Last year at their annual Optical Fiber Communications (OFC) conference, the industry announced commercial availability of 400Gbps and was pushing to achieve 800Gbps soon.

Since then, researchers at Nokia Bell Labs have been able to transmit 1.52Tbps through a single-mode fiber over an 80 km distance. (Unclear why a data center needs an 80km single-mode fibre link, but maybe this is more for a metro area than just a data center.)

Diagram of a single mode (SM) optical fiber: 1.- Core 8-10 µm; 2.- Cladding 125 µm; 3.- Buffer 250 µm; & 4.- Jacket 400 µm

The key to transmitting data faster across single-mode fibre is how quickly one can encode/decode data (symbols), both on the digital-to-analog encoding (transmitting) end and the analog-to-digital decoding (receiving) end.

The team at Nokia used a new-generation silicon-germanium chip (55nm CMOS process) able to handle 128 gigabaud symbol transmission (encoding/decoding) at 6.2 bits per symbol across single-mode fiber.
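
A quick sanity check on those numbers (my own arithmetic, assuming the link is dual-polarization coherent as described below, so the 6.2 bits/symbol figure applies to each polarization):

    # Rough line-rate arithmetic for the Nokia result.
    baud_rate_gbaud = 128      # symbols per second, in billions
    bits_per_symbol = 6.2      # per polarization (assumed)
    polarizations = 2          # assumed dual-polarization coherent transmission

    rate_tbps = baud_rate_gbaud * bits_per_symbol * polarizations / 1000
    print(f"~{rate_tbps:.2f} Tbps")   # ~1.59 Tbps, in the ballpark of the reported 1.52 Tbps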

Using optical erbium amplifiers, the team at Nokia was able to achieve 1.4Tbps over 240km of single mode fibre.

A wall-mount cabinet containing optical fiber interconnects. The yellow cables are single mode fibers; the orange and aqua cables are multi-mode fibers: 50/125 µm OM2 and 50/125 µm OM3 fibers respectively.

It used to be that transmitting data across single-mode fibre was all about how quickly one could turn the laser light on and off. These days, with coherent transmission, data is encoded/decoded using amplitude modulation, phase modulation and polarization (see the Coherent data transmission defined article).

Nokia Bell Labs is attempting to double the current 800Gbps data transmission speed, i.e., to reach 1.6Tbps. At 1.52Tbps, they’re not far off that mark.

It’s somewhat surprising that optical single mode fibre technology is advancing so rapidly and yet, at the same time, commercially available technology is not that far behind.

Long haul optical transmission speed

Undersea or long-haul optical transmission uses multi-core/multi-mode fibre to transmit data across continents or an ocean. Using multi-core fibre, researchers at Japan’s National Institute of Information and Communications Technology (NICT) have demonstrated a 3-core, 125-micrometer-wide, long-haul optical fibre transmission system able to transmit 172Tbps.

The new technology utilizes closely coupled multi-core fibre, where signals in each individual core end up intentionally coupled with one another, creating a sort of optical MIMO (Multi-Input/Multi-Output) transmission mechanism that can be disentangled with less complex electronics.

Although the technology is not ready for prime time, the closest competing technology is a 6-core fiber transmission cable which can transmit 144Tbps. Deployments of that cable are said to be starting soon.
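
Dividing the headline numbers out (simple arithmetic on the figures above, nothing more) shows why the coupled-core approach is interesting on a per-core basis:

    # Per-core throughput implied by the two headline figures above.
    coupled_3core_tbps = 172
    competing_6core_tbps = 144

    print(f"coupled 3-core:   {coupled_3core_tbps / 3:.1f} Tbps per core")    # ~57.3
    print(f"competing 6-core: {competing_6core_tbps / 6:.1f} Tbps per core")  # 24.0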

Shouldn’t there be a Moore’s law for optical transmission speeds

Ran across this chart in a LightTalk Blog discussing how Moore’s law and optical transmission speeds are tracking one another. It seems to me that there’s a need for a Moore’s law for optical cable bandwidth. The blog post suggests that there’s a high correlation between Moore’s law and optical fiber bandwidth.

Indeed, any digital-to-analog optical encoding/decoding involves transistor logic by definition, so at least some correlation between the speed of electronic switching/processing and optical bandwidth is to be expected. But transistor count (which is what the chart actually plots) and optical bandwidth don’t have as obvious a connection, except that processing speed has been highly correlated with transistor counts over the years.

But the chart above shows that optical bandwidth and transistor counts are tracking each other very closely.

~~~~

So, we all thought 100Gbps was great, 200Gbps was extraordinary and anything over that was wishful thinking. With 400Gbps, 800Gbps and 1.6Tbps all rolling out soon, data center transmission bottlenecks will become a thing of the past.

Picture Credit(s):

Using jell-o (hydrogel) for new form of photonics computing

Read an article the other day which blew me away, Researchers Create “Intelligent” interaction between light and material – New form of computing, which discussed the use of a hydrogel (like raspberry Jell-O) that could be used both as a photonics switch for optical communications and as a modifiable material to create photonics circuits. The research paper on the topic is also available on PNAS, Opto-chemo-mechanical transduction in photoresponsive gels elicits switchable self-trapped beams with remote interactions.

Apparently researchers have created this gel (see B in the graphic above) which, when exposed to laser light, interacts to a) trap the beam within a narrow cylinder and/or b) when exposed to parallel beams, boost the intensity of one of the beams. They still have some work to do to show more interactions with laser beam(s), but the trapping of the laser beams is well documented in the PNAS paper.

Jell-o optical fibres

Most laser beams broaden as they travel through space, but when a laser beam is sent through the new gel it becomes trapped in a narrow volume, almost as if sent through a pipe.

The beam trapping experiment used a hydrogel cube of ~4mm per side. They sent a focused laser beam with a ~20µm diameter through a 4mm empty volume and measured the beam’s spread to be ~130µm in diameter. Then they did the same experiment, only this time shining the laser beam through the hydrogel cube, and over time (>50 seconds) the beam diameter narrows to ~22µm. In effect, the gel over time constructs (drills) a self-made optical fibre, or cylindrical microscopic waveguide, for the laser beam.

A similar process works with multiple laser beams going through the gel. More below on what happens with 2 parallel laser beams.

The PNAS article has a couple of movies showing the effect from the side of the hydrogel, with both single and multiple laser beams.

Apparently, as the beam propagates through the hydrogel, it alters the opto-mechanical properties of the material such that the refractive index within the beam diameter is higher than outside it. Over time, as this material change takes place, the beam diameter narrows back down to almost the size of the incoming beam. The light-responsive molecules that drive this refractive index change are called chromophores.

It appears that the self-trapping effectiveness is a function of the beam intensity. That is, higher intensity incoming laser beams (6.0W in C above) cause the exit beam to narrow, while lower intensity (0.37W) incoming laser beams don’t narrow as much.

This self-created optical wave-guide (fibre) through the gel can be reset or reversed (> 45 times) by turning off the laser and leaving the gel in darkness for a time (200 seconds or so). This allows the material to be re-used multiple times to create other optical channels or to create the same one over and over again.

Jell-o optical circuits

It turns out that when two parallel laser beams are illuminated through the gel, their distance apart changes how they interact, even though they never cross.

When the two beams are around 200µm apart, they each self-channel to about ~40µm in size (incoming beams at ~20µm). But the intensities of the two beams are not the same at the exit as they were at the entrance to the gel. One beam’s intensity is boosted by a factor of 12 or so and the other by a factor of 9, providing an asymmetric intensity boost. It’s unclear how the higher intensity beam is selected, but if I read the charts right, the more intensely boosted beam is turned on after the less intensely boosted beam (so the 2nd one in gets the higher boost).

When one of the beams is disabled (turned off/blocked), the intensity of the remaining beam is boosted on the order of 20X. This boosting effect can be reversed by illuminating (turning back on/unblocking) the blocked laser. But, oddly, the asymmetric boosting is no longer present after this point. The process seemingly can revert back to the 20X intensity boost just by disabling the other laser beam again.

When the two beams are within 25µm of each other, they emerge with the same (or nearly the same) intensity (symmetric boosting), and as you block one beam the other increases in intensity, but not as much as with the farther-apart beams (only 9X).

How to use this effect to create an optical circuit is beyond me. They haven’t documented any experiments where the beams collide, or are close together but at 90-180 degrees from one another. And what happens when a 3rd beam is introduced? So there’s much room for more discovery.

~~~~

Just in case you want to try this at home, here is the description of how to make the gel from the PNAS article: “The polymerizable hydrogel matrix was prepared by dissolving acrylamide:acrylic acid or acrylamide:2-hydroxyethyl methacrylate (HEMA) in a mixture of dimethyl sulfoxide (DMSO):deionized water before addition of the cross-linker. Acrylated SP (for tethered samples) or hydroxyl-substituted SP was then added to the unpolymerized hydrogel matrix followed by an addition of a catalyst. Hydrogel samples were cured in a circular plastic mold (d = 10 mm, h = 4 mm thick).”

How long it will take to get the gel from the lab to your computer is anyone’s guess. It seems to me they have quite a ways to go to be able to implement the “NOR” or “NAND” universal logic gates widely used to create electronic circuits today.

On the other hand, using the gel in optical communications may come earlier. Having a self trapping optical channel seems useful for a number of applications. And the intensity boosting effect would seem to provide an all optical amplifier.

I see two problems:

  1. The time it takes to establish a self-trapping channel, ~50sec, is long, and it will probably take longer as the size of the material increases.
  2. The size of the material seems large for optical (or electronic) circuitry. 4mm may not sound like much, but it’s astronomical compared to the nanometers used in electronic circuits.

The size may not be a real concern, as the movies don’t seem to show the beam changing across the material once trapped, so maybe a 1mm or even 1µm cube of material could be used instead. The time is a more significant problem. Then again, there may be another gel recipe that acts quicker. But going from 50sec down to something like 50nsec is nine orders of magnitude, so there’s a lot of work to do here.

Comments?

Photo Credit(s): all charts are from the PNAS article, Opto-chemo-mechanical transduction in photoresponsive gel…

Made in space

Read an article in IEEE Spectrum recently titled, 4 Products it makes sense to manufacture in space. The 4 products identified in the article include:

1) Metal alloys – because of micro-gravity, the mixture of metals that go into an alloy should be much more even and, as a result, should yield a more uniform alloy at the end of the process.

2) Fibre optic cables – the article says ZBLAN, a heavy-metal fluoride glass fibre, could have 1/10th the signal loss of current cable but is hard to manufacture on Earth due to micro-crystal formation. Apparently, when manufactured (mixed and drawn) in micro-gravity, there’s less of this defect in the glass.

3) Printed human organs – the problem with printing biological organs (hearts, lungs, livers, etc.) is that they require scaffolding for the cells to adhere to, which needs to be bio-degradable and in the shape of whatever organ is needed. However, in micro-gravity there should be less need for any scaffolding.

4) Artificial meat – similar to printed human organs above; by being able to build (3D print) biological products, one could create a steak or other cuts of meat through biological 3D printing.

Problems with space manufacture

One problem with manufacturing metal alloys and fibre optic cable in space is the immense heat required. Glass melts at 1400C, metals anywhere from 650C to 3400C. Getting rid of all that heat in space could present a significant problem, not to mention that the vessels required to hold molten materials weigh a lot.

And metal and glass manufacturing processes can also create waste, such as hot metal/glass particulates that settle on the floor on Earth, but who knows where in space. To manufacture metal or glass on the ISS would require a very heat-tolerant, protected environment or capsule, lots of power to provide heat, and radiator surfaces to release said heat.

And of course, delivering raw materials for metals and glass to space (LEO) would cost a lot (SpaceX $2.7K/kg, Atlas V $13.2K/kg). As such, the business case for metal alloy manufacturing in space doesn’t appear positive.

But given the reduced product weight and the potentially higher prices one can charge for the product, fibre optic glass may make business sense. Especially if you could get by with 1/10th the glass because it has 1/10th the signal loss.

And if you don’t have to ship raw materials from Earth (using the Moon or asteroids instead), it would improve both business cases. That is, assuming raw material discovery and shipping costs are 1/6th or less of shipping from Earth.

As for organs, since they can’t be manufactured on Earth (yet), they could be the “killer app” for made-in-space. But it’s sort of a race against time. Doing this in space may be a lot easier today, but more research is going into creating organs on Earth than in space. Eventually, manufacturing them on Earth could be a lot cheaper and just as effective.

But I don’t see a business case for meat in space unless it’s to support making food for astronauts on ISS. Even then, it might be cheaper to just ship them some steak.

Products hard to make in space

I would think anything that doesn’t require gravity to work should be easier to produce in space.

But that eliminates distillation, e.g., fossil fuel refining, fermentation, and many other chemical distillation processes (see Wikipedia article on Distillation).

But gravity is also used in depositing and holding multiple layers onto one another. So manufacturing paper, magnetic/optical disk platters, magnetic tapes, or any other product built up layer by layer, may not be suitable for space manufacture.

Not sure about semiconductors, as deposition steps can make use of chemical vapors. And that seems to require gravity. But it’s conceivable that in the absence of gravity, chemicals may still adhere to the wafer surface, as it’s an easier location to combine with than other surfaces in the chamber. On the other hand, they may just as likely retain their mixture in the vapor.

Growing extremely pure silicon ingots may be something better done in space. However, it may suffer from the same problems as metal alloy manufacturing. Given the need for extreme purity and the price paid for pure silicon, I would think this would be something to research ahead of metal alloys.

For further research

But in the end, if and when we become a space-faring people, we will need to manufacture everything in space, as well as grow or find raw materials more easily than shipping them from Earth.

So, some research ought to be directed at how to perform distillation and multi-layer product manufacturing in space/micro-gravity. Such processes could potentially be done in a centrifuge, if they truly can’t be done without gravity.

It’s also unclear how to boil any liquid in 0g or micro-g without convection (see the Bizarre Boiling NASA Science article). According to the article, boiling creates one big bubble that stays where it is formed. Finding some way to extract that bubble would seem difficult. Boiling liquids in a centrifuge may work.

In any case, I’m sure the ISS crew would be more than happy to do any research necessary to figure out how to brew beer, let alone, distill vodka in space.

Picture Credit(s):

Polarized laser light speeds up data center networks

binary data flow

Read an article the other day, Polarizing the data center from IEEE Spectrum, on new optical technology that has the potential to boost data center networking speeds by ~7x beyond what they are today. The research was published in a Nature article, Ultrafast spin lasers (paywall), but a previous version of the paper (Ultrafast spin lasers) was freely available on PLOS.

It’s still in the lab demonstration stage at this point, but if it does make it into the data center, it has the potential to remove local networking as a bottleneck for application workloads, at least for the foreseeable future.

The new technology is based on polarizing (right or left circular) laser light and using the polarization to encode ones and zeros. Today’s optical transceivers seem to use on-off or brightness levels to encode data signals, which requires a lot of power (and, by definition, cooling) to work. In contrast, polarizing laser light takes ~7% of the power (and cooling) of the old style of on-and-off laser light.

How it works

Not sure I understand all the physics, but it appears that if you are able to control the carrier spin within a semiconductor Vertical-Cavity Surface-Emitting Laser (VCSEL), the device transmutes carrier spin into photon polarization and, by doing so, emits polarized laser light. And with appropriate sensors, this polarization can be detected and decoded.

In addition, due to some physical constraints, modulating (encoding) laser intensity will never be as fast as modulating (encoding) carrier spin. This has something to do with cycling the laser on and off vs. the polarization process. As such, one should be able to transmit more information with polarized laser light than with intensity-modulated laser light.

Moreover, polarization can be done at room temperature. Apparently, VCSELs operating today typically hit 70C in normal high speed operations, vs. ~21C for VCSELs using polarization.

Lab results

In the lab they are using (I believe) mechanical bending in combination with a pulsed laser to create the spin carriers in the VCSELs that polarize the laser light. This is just for demonstration purposes; it’s unclear whether this approach will be usable in a data center application of the technology.

In their lab experiments they were able to demonstrate VCSEL polarization cycles (how quickly they could change polarization) in the 5 ps (picosecond, trillionths of a second) range. This resulted in something on the order of 214GHz of polarized light cycles. Somewhere in the PLOS article they mention transmitting a random bit string using the technology, not just cycling through 1s and 0s over and over again.
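
As a rough check (simple arithmetic on the numbers above, not anything taken from the paper), a ~5 ps polarization cycle corresponds to an oscillation of around 200 GHz, which lines up with the reported 214 GHz figure:

    # Convert the ~5 ps polarization cycle time into an oscillation frequency.
    cycle_time_s = 5e-12                 # ~5 picoseconds per polarization cycle
    frequency_ghz = 1 / cycle_time_s / 1e9
    print(f"~{frequency_ghz:.0f} GHz")   # ~200 GHz, consistent with the reported 214 GHz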

The researchers believe that moving from mechanical bending to photonic crystal or strained quantum-well based VCSELs will allow them to go from signaling at 214GHz to 1THz, or ~28X what can be done with laser intensity signaling today.

I don’t know whether the technology will get out of the lab anytime soon, but 1THz (~1Tbps) seems like something most IT organizations would want, especially if the price is similar to today’s technology.

The researchers mentioned this would be more suitable for data center networking rather than long-range data transfers. Not sure why, but it could be because 1) it’s still relatively experimental and 2) they have yet to determine distance degradation parameters.

Of course, normal (on-off) signaling technology using VCSELs is not standing still. There’s always the potential for moving beyond current physical constraints to boost a technology’s capabilities. Just witness the superparamagnetic barrier in magnetic disk over the years; that physical barrier has moved multiple times during my career.

However, nearly an order of magnitude in speed and more than an order of magnitude in power/cooling improvements are hard to come by with a mature technology. I see polarized optical fiber networking in the data centers of the future.

~~~~

Comments?

Photo Credit(s):

IT in space

Read an article last week about all the startup activity that’s taking place in space systems and infrastructure (see: As rocket companies proliferate … new tech emerges leading to a new space race). This is a consequence of cheap(er) launch systems from SpaceX, Blue Origin, Rocket Lab and others.

SpaceBelt, storage in space

One startup that caught my eye was SpaceBelt from Cloud Constellation Corporation, which is planning to put PBs (4X the Library of Congress) of data storage in a constellation of LEO satellites.

The LEO storage pool will be populated by multiple nodes (satellites) with a set of geo-synchronous access points to the LEO storage pool. Customers use ground based secure terminals to talk with geosynchronous access satellites which communicate to the LEO storage nodes to access data.

Their main selling points appear to be data security and availability. The only way to access the data is through secured satellite downlinks/uplinks and then you only get to the geo-synchronous satellites. From there, those satellites access the LEO storage cloud directly. Customers can’t access the storage cloud without going through the geo-synchronous layer first and the secured terminals.

The problem with terrestrial data is that it is prone to security threats as well as natural disasters, which can take out a data center or a region. But with all your data residing in a space cloud, such concerns shouldn’t be a problem. (However, gaining access to your ground stations is a whole different story.)

AWS and Lockheed-Martin supply new ground station service

The other company of interest is not a startup but a link up between Amazon and Lockheed Martin (see: Amazon-Lockheed Martin …) that supplies a new cloud based, satellite ground station as a service offering. The new service will use Lockheed Martin ground stations.

Currently, the service is limited to S-Band and antennas located in Denver, but plans are to expand to X-Band and locations throughout the world. The plan is to have ground stations located close to AWS data centers, so data center customers can have high-speed access to satellite data.

There are other startups in the ground station as a service space, but none with the resources of Amazon-Lockheed. All of this competition is just getting off the ground, but a few have been leasing idle ground station resources to customers. The AWS service already has a few big customers, like DigitalGlobe.

One thing we have learned, is that the appeal of cloud services is as much about the ecosystem that surrounds it, as the service offering itself. So having satellite ground stations as a service is good, but having these services, tied directly into other public cloud computing infrastructure, is much much better. Google, Microsoft, IBM are you listening?

Data centers in space

Why stop at storage? Wouldn’t it be better to support both storage and computation in space? That way access latencies wouldn’t be a concern. When terrestrial disasters occur, it’s not just data at risk. Ditto for security threats.

Having whole data centers in orbit would represent a whole new stratum of cloud computing. Also, IT could then implement space-native applications.

If Microsoft can run a data center under the oceans, I see no reason they couldn’t do so in orbit. Especially when human flight returns to NASA/SpaceX. Just imagine admins and service techs as astronauts.

And yet, security and availability aren’t the only threats one has to deal with. What happens to the space cloud when war breaks out and satellite killers are set loose?

Yes, space infrastructure is not subject to terrestrial disasters or internet-based security risks, but there are other problems besides those and war, such as solar storms and space debris clouds.

In the end, it’s important to have multiple, non-overlapping risk profiles for your IT infrastructure. That is, each IT deployment may be subject to one set of risks, but those sets should be disjoint from those of another IT deployment option. IT in space, subject to solar storms, space debris, and satellite killers, is a nice complement to terrestrial cloud data centers, subject to natural disasters, internet security risks, and other earth-based, man-made disasters.

On the other hand, a large solar storm, like the 1859 one, could knock out every data system in the world or in orbit. As for under the sea, it probably depends on how deep it was submerged!

Photo Credit(s): Screen shots from SpaceBelt youtube video (c) SpaceBelt

Screen shots from the AWS Ground Station as a Service sign-up page (c) Amazon-Lockheed

Screen shots from Microsoft’s Under the sea news feature (c) Microsoft

Photonic or Optical FPGAs on the horizon

Read an article this past week (Toward an optical FPGA – programable silicon photonics circuits) on a new technology that could underpin optical FPGAs. The technology is based on implantable waveguides and uses silicon-on-insulator technology, which is compatible with current chip fabrication.

How does the Optical FPGA work

Their Optical FPGA is based on an erasable directional coupler (DC) built using Ge (germanium) ion implantation. A DC is formed when two optical waveguides are placed close enough together that optical energy (photons) in one waveguide is switched over to the other, nearby waveguide.

As can be seen in the figure, the red (erasable, implantable) and blue (conventional) waveguides are fabricated on the FPGA. The red waveguide performs the function of a DC between the two conventional waveguides. The diagram shows both a single-stage and a dual-stage DC.

By using implantable (erasable) DCs, one can change the path of a photonic circuit just by erasing the implantable waveguide(s).

The Ge ion implanted waveguides are erased by passing a laser over them, thus annealing them.

Once erased, the implantable waveguide DC no longer works. The chart on the left of the figure above shows how long the implantable waveguide needs to be to work. As shown, once it’s erased to shorter than 4-5µm, it no longer acts as a DC.
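
Textbook coupled-mode theory gives a feel for why coupler length matters (a generic illustration with an assumed coupling coefficient, not the paper’s measured values): the fraction of power that crosses over goes as sin²(κL), so below some length almost nothing couples.

    import math

    def crossover_fraction(length_um, kappa_per_um=0.15):
        """Textbook directional-coupler power transfer: P_cross/P_in = sin^2(kappa * L).
        kappa (the coupling coefficient) is an assumed, illustrative value."""
        return math.sin(kappa_per_um * length_um) ** 2

    for length in (2, 5, 10):   # coupler lengths in micrometers
        print(f"L = {length:2d} um -> {crossover_fraction(length):.1%} of the power crosses over")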

It’s not clear how one directs the laser to the proper place on the Optical FPGA to anneal the implantable waveguide, but that’s a question of servos and mirrors.

Previous attempts at optical FPGAs required applying a continuous voltage to maintain the switched photonic circuits. Once the voltage was withdrawn, the photonics reverted back to their original configuration.

But once an implantable waveguide is erased (annealed) in their approach, the changes to the Optical FPGA are permanent.

FPGAs today

Electronic FPGAs have never gone out of favor with customers doing hardware innovation. By supplying Optical FPGAs, the techniques in the paper would allow for much more photonics innovation as well.

Optics are primarily used in communications and storage (CD-DVDs) today. But quantum computing could potentially use photonics and there’s been talk of a 100% optical computer for a long time. As more and more photonics circuitry comes online, the need for an optical FPGA grows. The fact that it’s able to be grown on today’s fab lines makes it even more appealing.

But an FPGA is more than just directional control over (electronic or photonic) energy. One needs to have other circuitry in place on the FPGA for it to do work.

For example, if this were an electronic FPGA, gates, adders, muxes, etc. would all be somewhere on the FPGA.

However, once having placed additional optical componentry on the FPGA, photonic directional control would be the glue that makes the Optical FPGA programmable.

Comments?

Photo Credit(s): All photos from Toward an optical FPGA – programable silicon photonics circuits paper

 

A “few exabytes-a-day” from SKA

A number of radio telescopes, positioned close together pointed at a cloudy sky
VLA by C. G. P. Grey (cc) (from Flickr)

ArsTechnica reported today on the proposed Square Kilometer Array (SKA) radio telescope and its data requirements. IBM is collaborating with the Netherlands Institute for Radio Astronomy (ASTRON) to help develop the SKA, in a project called DOME.

When completed in ~2024, the SKA will generate over an exabyte a day (10**18 bytes) of raw data. I reported in a previous post how the world was generating an exabyte-a-day, but that was way back in 2009.

What is the SKA?

The new SKA telescope will be a configuration of “millions of radio telescopes” which, when combined, will create a telescope with an aperture of one square kilometer, which is no small feat. They hope that the telescope will be able to shed some light on galaxy evolution, cosmology and dark energy. But it will go beyond that, to investigating “strong-field tests of gravity”, “origins and evolution of cosmic magnetism” and searching for life on other planets.

But the interesting part from a storage perspective is that the SKA will be generating a “few exabytes a day” of radio telescope data for every full day of operation. Apparently the new radio telescopes will make use of a new, more sensitive detector able to generate data at up to 10GB/second.

How much data, really?

The team projects final storage needs at between 300 to 1500 PB per year. This compares to the LHC at CERN which consumes ~15PB of storage per year.

It would seem that the immediate data download would be the few exabytes, which would then be post- or inline-processed into something more manageable and storable. Unless they have some hellaciously fast processing, I am hard pressed to believe this could all happen inline. But then they would need at least another “few exabytes” of storage to buffer the data feed before processing.
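
Putting the raw rate in perspective (straightforward arithmetic, my own, on the figures quoted above):

    # Sustained data rate implied by "an exabyte a day" of raw SKA data.
    exabyte = 1e18                 # bytes
    seconds_per_day = 86_400

    rate_tb_per_s = exabyte / seconds_per_day / 1e12
    print(f"~{rate_tb_per_s:.1f} TB/s sustained, around the clock")   # ~11.6 TB/s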

I guess that’s why it’s still a research project.  Presumably, this also says that the telescope won’t be in full operation every day of the year, at least at first.

The IBM-ASTRON DOME collaboration project

The joint research project was named for the structure that covers a major telescope and for a famous Swiss mountain.  Focus areas for the IBM-ASTRON DOME project include:

  • Advanced high performance computing utilizing 3D chip stacks for better energy efficiency
  • Optical interconnects with nanophotonics for high-speed data transfer
  • Storage, both for high performance access and for dense/energy-efficient data storage.

In this last focus area, IBM is considering the use of phase change memories (PCM) for high access performance and new generation tape for dense/efficient storage.  We have discussed PCM before in a previous post as an alternative to NAND based storage today (see Graphene Flash Memory).  But IBM has also been investigating MRAM based race track memory as a potential future storage technology.  I would guess the advantage of PCM over MRAM might be access speed.

As for tape, IBM has already demonstrated technologies in their labs for a 35TB tape. However, storing 1500 PB would take over 40K tapes per year, so they may need even higher capacities to support SKA tape data needs.
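
The tape arithmetic checks out (simple division on the numbers above):

    # Number of 35 TB tapes needed to hold 1500 PB of annual SKA data.
    annual_pb = 1500
    tape_tb = 35

    tapes_per_year = annual_pb * 1000 / tape_tb
    print(f"~{tapes_per_year:,.0f} tapes per year")   # ~42,857 -- i.e., over 40K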

Of course, new optical interconnects will be needed to move this much data around, from telescope to data center and beyond. It’s likely that nanophotonics will play some part in an all-optical network of transceivers, amplifiers, and other network switching gear.

The 3D chip stacks have the advantage of decreasing chip IO, and the denser packing of components will make efficient use of board space. But how these help with energy efficiency is another question. The team projects very high energy and cooling requirements for their exascale high performance computing complex.

If this is anything like CERN, datasets gathered onsite will be initially processed and then replicated for finer processing elsewhere (see the 15PB a year created by CERN post). But moving PBs around the way SKA will require is way beyond today’s Internet infrastructure.

~~~~

Big science like this gives a whole new meaning to BIGData. Glad I am in the storage business. Now just what exactly is nanophotonics, MEMS-based photo-electronics?