New era of graphical AI is near #AIFD2 @Intel

I attended AIFD2 (videos of their sessions available here) a couple of weeks back, and for the last session Intel presented information on what they had been working on: new graph-optimized cores and a partner of theirs, Katana Graph, which supplies a highly optimized graph analytics processing tool set using latest generation Xeon compute and Optane PMEM.

What’s so special about graphs

The challenge with graph processing is that it’s nothing like standard 2D tables/images or 3D oriented data sets. A graph is essentially a non-Euclidean data space made up of nodes with edges that connect them.

But graphs are everywhere we look today, for instance, “friend” connection graphs, “terrorist” networks, page rank algorithms, drug impacts on biochemical pathways, cut points (single points of failure in networks or electrical grids), and of course optimized routing.

The challenge is that large graphs aren’t easily processed with standard scale-up or scale-out architectures. Part of this is that graphs are very sparse; one node could point to a single other node or to millions. Due to this sparsity, standard cache prefetch logic (such as fetching everything adjacent to a memory request) and standard vector processing (the same instructions applied to data in sequence) don’t work very well at all. Standard branch prediction logic doesn’t work well either (not sure why, but apparently branching in graph processing depends more on the data at the node or in the edge connecting nodes).
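
To make the sparsity problem concrete, here’s a minimal sketch (my own toy graph, in Python) of the compressed sparse row (CSR) layout most graph engines use. Visiting a node’s neighbors means chasing indices into arbitrary positions of the edge array, so a cache line fetched for one neighbor rarely helps with the next, and there’s no fixed-stride pattern for a vector unit or prefetcher to exploit.

```python
# Toy CSR (compressed sparse row) representation of a sparse graph.
# offsets[i]..offsets[i+1] delimits node i's neighbor list in `edges`.
offsets = [0, 1, 4, 4, 6]          # node 2 has no neighbors, node 1 has three
edges   = [3, 0, 2, 3, 1, 0]       # neighbor node ids, in no particular order

def neighbors(node):
    """Return the neighbor list of `node` from the CSR arrays."""
    return edges[offsets[node]:offsets[node + 1]]

# A breadth-first traversal: each step jumps to wherever the neighbor ids
# point, which is what defeats adjacent-line prefetching and vectorization.
def bfs(start):
    visited, frontier = {start}, [start]
    while frontier:
        nxt = []
        for n in frontier:
            for m in neighbors(n):
                if m not in visited:
                    visited.add(m)
                    nxt.append(m)
        frontier = nxt
    return visited

print(bfs(1))   # {0, 1, 2, 3}
```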

Intel talked about a new compute core they’ve been working on, which was in response to a DARPA funded activity to speed up graph processing and related activities 1000X over current CPU/GPU hardware capabilities.

Intel presented their PIUMA core technology, which was also described in a 2020 research paper (Programmable Integrated and Unified Memory Architecture) and a YouTube video (Programmable Unified Memory Architecture).

Intel’s PIUMA Technology

DARPA’s goals became public in 2017 when it described its Hierarchical Identify Verify Exploit (HIVE) architecture. HIVE is DOD’s description of a graph analytics processor, and it is a multi-institutional initiative to speed up graph processing.

Intel PIUMA cores come with a multitude of 64-bit RISC processor pipelines, a global (shared) address space, memory and network interfaces optimized for 8-byte data transfers, a (globally addressable) scratchpad memory and an offload engine for common operations like scatter/gather memory access.

Each multi-threaded PIUMA core has a set of instruction caches, small data caches and register files to support each thread (pipeline) in execution. A number of these multi-threaded cores are then connected together.

PIUMA cores are optimized for TTEPS (Tera-Traversed Edges Per Second) and attempt to balance IO, memory and compute for graph workloads. PIUMA multi-threaded cores are tied together into a (completely connected) clique to form a tile, multiple tiles are connected within a single node, and multiple nodes are tied together with an 8-byte-transfer-optimized network into a PIUMA system.

P[I]UMA (labeled PUMA in the video) multi-threaded cores apparently eschew extensive data and instruction caching in favor of a large number of relatively simple cores that can each process a multitude of threads at the same time. Most of these threads will be waiting on memory, so the more threads executing, the less likely the whole pipeline will sit idle, and hopefully the more processing speedup will result.

Performance of the P[I]UMA architecture vs. a standard Xeon compute architecture on graph analytics and other graph oriented tasks was simulated, with some results presented below.

Simulated speedups for a single node with P[I]UMA technology vs. Xeon range anywhere from 3.1x to 279x and depend on the amount of computation required at each node (or edge). (Intel saw no speedup going from a single Xeon node to multiple Xeon nodes, whereas the speedup for 16 P[I]UMA nodes was 16X a single P[I]UMA node.)

Having a global address space across all PIUMA nodes in a system is pretty impressive. We guess this is intrinsic to their (large) graph processing performance and is dependent on their use of photonics HyperX networking between nodes for low latency, small (8 byte) data access.

Katana Graph software

Another part of Intel’s session at AIFD2 was on their partnership with Katana Graph, a scale out graph analytics software provider. Katana Graph can take advantage of ubiquitous Xeon compute and Optane PMEM to speed up and scale-out graph processing. Katana Graph uses Intel’s oneAPI.

Katana Graph is architected to support some of the largest graphs around. They tested it with the WDC12 web data commons 2012 page crawl, with 3.5B nodes (pages) and 128B connections (links) between nodes.

Katana runs on the AWS, Azure and GCP hyperscaler environments as well as on-prem, and can scale out to 256 systems.

Katana Graph performance results for Graph Neural Networks (GNNs) are shown below. GNNs are similar to AI/ML/DL CNNs but operate on graph data rather than images. One can take a graph and reduce (convolve) and summarize segments of it in order to classify them. Moreover, GNNs can be used to determine whether two nodes are connected and whether two (sub)graphs are equivalent/similar.
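
To give a feel for what a GNN layer actually computes, here’s a minimal numpy sketch of one round of message passing (the adjacency matrix, feature sizes and weights are all made up for illustration, not anything from Katana Graph): each node averages its neighbors’ feature vectors and pushes the result through a learned linear transform plus nonlinearity, the graph analog of a convolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, adjacency matrix with self-loops added.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = rng.normal(size=(4, 8))        # 8-dimensional feature vector per node
W = rng.normal(size=(8, 4))        # "learned" weights (random here)

def gnn_layer(A, X, W):
    """One graph-convolution step: mean-aggregate neighbors, then transform."""
    deg = A.sum(axis=1, keepdims=True)     # number of neighbors (incl. self)
    H = (A @ X) / deg                      # average of neighboring features
    return np.maximum(H @ W, 0.0)          # linear transform + ReLU

print(gnn_layer(A, X, W).shape)            # (4, 4): new 4-dim embedding per node
```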

In addition to GNNs, Katana Graph supports Graph Transformer Networks (GTNs) which can analyze meta paths within a larger, heterogeneous graph. The challenge with large graphs (say friend/terrorist networks) is that there are a large number of distinct sub-graphs within the graph. GTNs can break heterogeneous graphs into sub- or meta-graphs, which can then be used to understand these relationships at smaller scales.

At AIFD2, Intel also presented an update on their Analytics Zoo, which is Intel’s MLops framework. But that will need to wait for another time.

~~~~

It was something of a revelation to me that graph data is not amenable to normal compute core processing using today’s GPUs or CPUs. DARPA (and Intel) saw this gap as calling for a completely different, brand new compute architecture.

Even so, Intel’s partnership with Katana Graph shows that even today’s compute environments can provide higher performance on graph data with suitable optimizations.

It would be interesting to see what Katana Graph could do using PIUMA technology and appropriate optimizations.

In any case, we shouldn’t need to wait long; Intel indicated in the video that P[I]UMA technology chips could be here within the next year or so.

Comments?

Photo Credit(s):

  • From Intel’s AIFD2 presentations
  • From Intel’s PUMA YouTube video

Is hardware innovation accelerating – hardware vs. software innovation (round 6)

There’s something happening in the IT industry that maybe hasn’t happened in a couple of decades or so: hardware innovation is back. We’ve been covering bits and pieces of it in our hardware vs. software innovation series (see the Open source ASICs – HW vs. SW innovation [round 5] post).

Hardware innovation never really went away; Intel, AMD, Apple and others have always worked on new compute chips, and DRAM and NAND have taken giant leaps over the last two decades. But these were all major hardware suppliers. Special purpose chips, non-CPU compute engines, and hardware accelerators had been relegated to the dustbin of history as the CPU giants kept assimilating their functionality into the next round of CPU chips.

And then something happened. It kind of made sense for GPUs to be their own electronics, as these were SIMD architectures intrinsically different from the SISD, standard von Neumann X86 and ARM CPU architectures.

But for some reason it didn’t stop there. We first started seeing inklings of new hardware innovation in the AI space, with a number of special purpose DL NN accelerators coming online over the last 5 years or so (see Google TPU, SC20-Cerebras, GraphCore GC2 IPU chip, AI at the Edge Mythic and Syntiant IPU chips, and neuromorphic chips from BrainChip, Intel, IBM, and others). Again, one could look at these as taking the SIMD model of GPUs in a slightly different direction. It’s probably one reason that GPUs were so useful for AI-ML-DL, but further acceleration was now possible.

But it hasn’t stopped there either. In the last year or so we have seen SPUs (Nebulon Storage), DPUs (Fungible, NVIDIA Networking, others), and computational storage (NGD Systems, ScaleFlux Storage, others) all come online and become available to the enterprise. And most of these are for more normal workload environments, i.e., not AI-ML-DL workloads.

I thought at first these were just FPGAs implementing different logic but now I understand that many of these include ASICs as well. Most of these incorporate a standard von Neumann CPU (mostly ARM) along with special purpose hardware to speed up certain types of processing (such as low latency data transfer, encryption, compression, etc.).

What happened?

It’s pretty easy to understand why non-von Neumann computing architectures should come about. Witness all those new AI-ML-DL chips that have become available. And why these would be implemented outside the normal X86-ARM CPU environment.

But SPUs, DPUs and computational storage all have typical von Neumann CPUs (mostly ARM) as well as other special purpose logic on them.

Why?

I believe there are a few reasons, but the main two are that Moore’s law (every 2 years, transistor sizes shrink enough to effectively double the transistor count in the same area) is slowing down, and Dennard scaling (as you reduce the size of transistors, their power consumption goes down and their speed goes up) has almost stopped. Both of these have caused major CPU chip manufacturers to focus on adding cores to boost performance rather than adding more transistors to the same core to increase functionality.

This hasn’t stopped adding instruction functionality to each CPU, but it has slowed considerably. And single (core) processor speeds (GHz) have reached a plateau.

But what it has stopped is having the real estate available on a CPU chip to absorb lots of additional hardware functionality, which had been the case since the 1980s.

I was talking with a friend who used to work on math co-processors, like the 8087, 80287, & 80387, that performed floating point arithmetic. But after the 486, floating point logic was completely integrated into the CPU chip itself, killing off the co-processor business.

Hardware design is getting easier & chip fabrication is becoming a commodity

We wrote a post a couple of weeks back about an open foundry (see the HW vs. SW innovation round 5 post noted above) that would take a hardware design and manufacture the ASICs for you for free (or at little cost). This says that the tool chain for chip design is becoming more standardized and much less complex. Does this mean that it takes less than 18 months to create an ASIC? I don’t know, but it seems so.

But the really interesting aspect of this is that world class foundries are now available outside the major CPU developers. And these foundries, for a fair but high price, would be glad to fabricate a thousand or a million chips for you.

Yes, your basic state of the art fab probably costs $12B-plus these days. But all that has meant is that A) they will take any chip design and manufacture it, B) they need to keep factory volume up by manufacturing chips in order to amortize the fab’s high price and C) they have to keep their technology competitive or chip manufacturing will go elsewhere.

So chip fabrication is not quite a commodity. But there are enough state of the art fabs in existence to make it seem so.

But it’s also physics

The extremely low latencies available with NVMe storage and higher speed networking (100GbE & above) demand a lot more processing power to keep up. And just the physics of how long it takes to transfer data across a distance (e.g., across racks) is starting to consume too much overhead and impact other work that could be done.

When we start measuring IO latencies at under 50 microseconds, there’s just not a lot of CPU instruction execution and task switching that can go on anymore. Yes, you could devote a whole core or two to this process and keep up with it. But wouldn’t the data center be better served keeping those cores busy with normal work and offloading that low-latency, (near) realtime work to a hardware accelerator that could be executing on the network rather than behind a NIC?
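
A rough budget makes the point (the 3 GHz clock and ~5 µs context switch cost are round numbers of my own, not measurements):

```latex
50\,\mu\mathrm{s} \times 3\,\mathrm{GHz} = 150{,}000 \text{ cycles per IO},\qquad
2 \times 5\,\mu\mathrm{s} = 10\,\mu\mathrm{s} \approx 20\% \text{ of that budget spent just switching tasks}
```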

So realtime processing requirements have become tighter; or rather, the amount of time available to execute the CPU instructions that switch tasks and process data in realtime, while keeping up with faster line speeds, is becoming shorter.

So that explains DPUs, smart NICs, & SPUs. What about the other hardware accelerator cards?

  • AI-ML-DL is becoming such an important and data AND compute intensive workload that just like GPUs before them, TPUs & IPUs are becoming a necessary evil if we want to service those workloads effectively and expeditiously.
  • Computational storage is becoming more widespread because although data compression can easily be done at the CPU, it can be done faster (less data needs to be transferred back and forth) at the smart drive.

My guess is we haven’t seen the end of this at all. Once you open up the possibility of a long term business model focused on hardware accelerators, there would seem to be a lot of work that needs to be done and could be done faster and more effectively outside the core CPU.

There was a point over the last decade when software was destined to “eat the world”. I get a lot of flack for saying that was BS and that hardware innovation is really eating the world. Now that hardware innovation’s back, it seems to be a little of both.

Comments?

Where should IoT data be processed – part 2

I wrote a post a while back on Where IOT data should be processed – part 1. We will get back to that post in a moment, but recently I read an article (How big data forced the hunt for ET intelligence to evolve) that mentioned that, after 20 years, SETI@home was being shut down.

SETI@home was a crowdsourced computational network that took snippets of radio spectrum and sent them to 1000s of home computers to be analyzed during idle computer time; once processed, the analysis was sent back to SETI@home. It was one of the first projects to use a crowdsourced approach to data processing. The data was collected at a radio telescope, sent to SETI@home and distributed from there.

6 Factors for IOT data processing

In my post I talked about 6 factors that should help determine where data is processed. Those 6 factors included

  • Data size, which is a measure of the amount (GB, TB or PB) of data being generated at an IoT node
  • Data pipe availability, which is all about the networking bandwidth that’s available at the IOT node. If we are talking some sort of low-bandwidth networking access then it probably makes sense to process the data more locally and send only results of processing up the stack.
  • Processing criticality, which indicates how important the processing of the data is. If the processing could save a life then maybe it should be done as close as possible to where the data is generated. If the data processing is less critical, it could perhaps be done at other nodes in an IoT network
  • Processing time and infrastructure cost which is all about what sort of computational resources are required to perform the processing and how much would it cost. If processing of the data is to undergo multiple passes or requires multi-core CPUs or GPUs, moving data off the IoT node and onto a more comprehensive server to process it, could make sense.
  • Compliance, governance and archive requirements, which discussed the potential need for all data to be available for regulatory audits and as such may need to be available at a central location anyway so why not perform processing there.
  • Data information funnel, which talked about the fact that an IoT network should be configured in layers and that each layer in the stack should probably be responsible for some portion of the data processing needed by the overall system, if nothing more than compressing the information before it is sent elsewhere.

Now that I review the list, the last factor, the Data information funnel, really should be a function of the other factors rather than a separate factor.

In that blog post I promised to follow it up with some examples of this logic applied to real world problems. SETI is the first one I’ve seen in the literature.

SETI’s IoT processing problem

Closeup front view of one antenna of the Allen Telescope Array, a radio telescope for combined radio astronomy and SETI (Search for Extraterrestrial Intelligence) research built by the University of California at Berkeley. The first phase, consisting of 42 six-meter offset Gregorian dish antennas like the one shown here, was completed in 2007.

The SETI researchers found that “The telescopes are now capable of producing so much data that it’s not possible to get that volume of data out to volunteers,” and that “The discovery space is in these massive, massive data streams. And it’s just not efficient to distribute many terabits per second out to volunteers all over the world. It’s more efficient for that data processing to happen at the actual observatory.”

So they moved the data processing for the SETI IoT network from being distributed out to home computers throughout the world to being done at the (telescope) source where the data was originally generated.

This decision seems to rely on a couple of the factors above, namely the data pipe availability and data size factors. They had to move processing because no pipes existed to send terabits per second of data to 1000s of home computers. And finally, processing time and infrastructure cost have come down so much that it was just easier to do the processing onsite.

It doesn’t seem like processing criticality or compliance-governance-archive had any bearing on the decision.

So there’s the first example that seems to fit well into our data processing framework.

~~~~

We ought to be able to come up with a formula that uses all these factors and comes up with a yes or no as to whether to process the data on the node or not.
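
Just as a strawman (the weights, threshold and numbers below are entirely invented), such a formula might be a weighted score where a positive result means "process at the node" and a negative one means "ship the data upstream":

```python
def process_at_node(data_gb_per_day, uplink_mbps, criticality,
                    compute_cost_ratio, compliance_central):
    """Toy scoring function for the edge-vs-central decision.

    criticality:         0..1, 1 = life-safety, favors local processing
    compute_cost_ratio:  local cost / central cost, >1 favors central
    compliance_central:  True if regulations force data to a central archive
    """
    # Can the pipe even carry the raw data? (~86,400 seconds per day)
    pipe_capacity_gb = uplink_mbps / 8 / 1000 * 86_400
    score = 0.0
    score += 2.0 if data_gb_per_day > pipe_capacity_gb else -1.0   # data size vs pipe
    score += 2.0 * criticality                                     # processing criticality
    score -= 1.0 * (compute_cost_ratio - 1.0)                      # processing cost
    score -= 1.5 if compliance_central else 0.0                    # compliance pull
    return score > 0

# SETI-like case: huge data volume, decent observatory uplink, low criticality.
print(process_at_node(data_gb_per_day=50_000, uplink_mbps=1_000,
                      criticality=0.1, compute_cost_ratio=1.2,
                      compliance_central=False))
```

Run against a SETI-like case, this toy score comes out in favor of processing at the source, which matches the call SETI eventually made.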

Breaking optical data transmission speed records

Read an article this week about records being set in optical transmission speeds (see IEEE Spectrum, Optical labs set terabit records). Although these are all lab based records, the (data center) single mode optical transmission record described below is not far ahead of the single mode fibre speeds commercially available today. But the multi-mode long haul (undersea transmission) speed record will probably take a while longer until it’s ready for prime time.

First up, data center optical transmission speeds

Not sure what your data center transmission rates are, but it seems pretty typical to see 100Gbps these days, and 200Gbps inter-switch links are commercially available. Last year at the annual Optical Fiber Communications (OFC) conference, the industry was announcing commercial availability of 400Gbps and pushing to achieve 800Gbps soon.

Since then, researchers at Nokia Bell Labs have been able to transmit 1.52Tbps through a single mode fiber over an 80 km distance. (Unclear why a data center needs an 80km single mode fibre link, but maybe this is more for a metro area than just a data center.)

Diagram of a single mode (SM) optical fiber: 1. Core 8-10 µm; 2. Cladding 125 µm; 3. Buffer 250 µm; 4. Jacket 400 µm

The key to transmitting data faster across single mode fibre, is how quickly one can encode/decode data (symbols) both on the digital to analog encoding (transmitting) end and the analog to digital decoding (receiving) end.

The team at Nokia used a new generation silicon-germanium chip (55nm CMOS process) able to generate 128 gigabaud symbol transmission (encoding/decoding) with 6.2 bits per symbol across single mode fiber.
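
A rough sanity check on those numbers (my assumption: the usual two polarizations, with the gap between raw and net rate going to FEC and framing overhead):

```latex
128\ \mathrm{Gbaud} \times 6.2\ \tfrac{\mathrm{bits}}{\mathrm{symbol}} \times 2\ \mathrm{polarizations} \approx 1.59\ \mathrm{Tbps\ raw} \;\Rightarrow\; \sim\!1.52\ \mathrm{Tbps\ net}
```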

Using optical erbium amplifiers, the team at Nokia was able to achieve 1.4Tbps over 240km of single mode fibre.

A wall-mount cabinet containing optical fiber interconnects. The yellow cables are single mode fibers; the orange and aqua cables are multi-mode fibers: 50/125 µm OM2 and 50/125 µm OM3 fibers respectively.

Used to be that transmitting data across single mode fibre was all about how quickly one could turn laser/light on and off. These days, with coherent transmission, data is being encoded/decoded in amplitude modulation, phase modulation and polarization (see Coherent data transmission defined article).

Nokia Bell Labs is attempting to double the current 800Gbps data transmission speed, or reach 1.6Tbps. At 1.52Tbps, they’re not far off that mark.

It’s somewhat surprising that optical single mode fibre technology is advancing so rapidly and yet, at the same time, commercially available technology is not that far behind.

Long haul optical transmission speed

Undersea or long haul optical transmission uses multi-core/multi-mode fibre to transmit data across continents or oceans. Using multi-core/multi-mode fibre, researchers at Japan’s National Institute of Information and Communications Technology (NICT) have demonstrated a 3-core, 125 micrometer wide long haul optical fibre transmission system that is able to transmit 172Tbps.

The new technology utilizes close-coupled multi-core fibre where signals in each individual core end up intentionally coupled with one another creating a sort of optical MIMO (Multi-input/Multi-output) transmission mechanism which can be disentangled with less complex electronics.

Although the technology is not ready for prime time, the closest competing technology is a 6-core fiber transmission cable which can transmit 144Tbps. Deployments of that cable are said to be starting soon.

Shouldn’t there be a Moore’s law for optical transmission speeds

Ran across this chart in a LightTalk Blog discussing how Moore’s law and optical transmission speeds are tracking one another. It seems to me that there’s a need for a Moore’s law for optical cable bandwidth. The blog post suggests that there’s a high correlation between Moore’s law and optical fiber bandwidth.

Indeed, any digital to analog optical encoding/decoding involves transistor (TTL-like) logic, so by definition there’s at least a high correlation between the speed of electronic switching/processing and optical bandwidth. But transistor counts (as the chart shows) and optical bandwidth don’t have as obvious a connection, except that processing speed is highly correlated with transistor counts these days.

But the chart above shows that optical bandwidth and transistor counts are tracking each other very closely.

~~~~

So, we all thought 100Gbps was great, 200Gbps was extraordinary and anything over that was wishful thinking. With 400Gbps, 800Gbps and 1.6Tbps all rolling out soon, data center transmission bottlenecks will become a thing of the past.

Anti-Gresham’s Law: Good information drives out bad

(Good information is in blue, bad information is in Red)

Read an article the other day in ScienceDaily (Faster way to replace bad info in networks) which discusses research published in a recent IEEE/ACM Transactions on Network journal (behind paywall). Luckily there was a pre-print available (Modeling and analysis of conflicting information propagation in a finite time horizon).

The article discusses information epidemics using the analogy of a virus and its antidote. This is where bad information (the virus) and good information (the antidote) circulate within a network of individuals (systems, friend networks, IOT networks, etc). Such bad information could be malware and its good information counterpart could be a system patch to fix the vulnerability. Another example would be an outright lie about some event, and its counterpart could be the truth about the event.

The analysis in the paper makes some simplifying assumptions: that in any single individual (network node), the virus and the antidote cannot co-exist. That is, an individual (node) is either infected by the virus, cured by the antidote, or yet to be infected or cured.

The network is fully connected and complex. That is, once an individual in a network is infected, unless an antidote is developed the infection proceeds to infect all individuals in the network. And once an antidote is created, it will cure all individuals in a network over time. Some individuals in the network have more connections to other nodes, while others have fewer.

The network functions in a bi-directional manner. That is, any node, let’s say RAY, can infect/cure any node it is connected to, and conversely any node it is connected to can infect/cure the RAY node.

Gresham’s law, (see Wikipedia article) is a monetary principle which states bad money in circulation drives out good. Where bad money is money that is worth less than the commodity it is backed with and good money is money that’s worth more than the commodity it is backed with. In essence, good money is hoarded and people will preferentially use bad money.

My anti-Gresham’s law is that good information drives out bad, where good information is the truth about an event, security patches, antidotes to infections, etc., and bad information is falsehoods, malware, biological viruses, etc.

The Susceptible-Infected-Cured (SIC) model

The paper describes a SIC model that simulates the epidemic propagation process, i.e., the process whereby a virus and its antidote propagate throughout a network. It assumes that once a network node is infected (at time0), during the next interval (time0+1) it infects its nearest neighbors (nodes that are directly connected to it), and they in turn infect their nearest neighbors during the following interval (time0+2), etc., until all nodes are infected. Similarly, once a network node is cured, it will cure all its neighbor nodes during the next interval, and those nodes will cure all of their neighbor nodes during the following interval, etc., until all nodes are cured.
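
A minimal simulation of that propagation process might look like the sketch below (the toy graph and starting nodes are my own; the paper’s treatment is analytical, this is just the discrete-step intuition): in each interval the virus spreads to susceptible neighbors while the cure overwrites both susceptible and infected neighbors.

```python
# Toy undirected network as an adjacency dict (nodes 0..5), my own example.
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1, 5}, 4: {2, 5}, 5: {3, 4}}

def sic_step(graph, state):
    """One interval of the SIC process: 'I' (infected) spreads to 'S' (susceptible)
    neighbors; 'C' (cured) spreads to any neighbor, and the cure wins any conflict."""
    new = dict(state)
    for node, status in state.items():
        for nbr in graph[node]:
            if status == 'I' and state[nbr] == 'S' and new[nbr] != 'C':
                new[nbr] = 'I'
            elif status == 'C':
                new[nbr] = 'C'
    return new

# Start: node 0 spreads the bad info, node 5 holds the cure.
state = {n: 'S' for n in graph}
state[0], state[5] = 'I', 'C'
for t in range(1, 6):
    state = sic_step(graph, state)
    print(t, state)   # within a few intervals every node ends up cured
```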

What can the SIC model tell us

The model provides calculations for a number of statistics, such as the half-life time of bad information and the extinction time of bad information. The paper discusses the SIC model across complex (irregular) network topologies as well as completely connected and star topologies, and derives formulas for each type of network.

In the discussion portion of the paper, the authors indicate that if you are interested in curing a population with bad information, it’s best to map out the network’s topology and focus your curing efforts on those node(s) that lie along the (most) shortest path(s) within the network.

I wrongly thought that the best way to cure a population of nodes would be to cure the nodes with the highest connectivity. While this may work, and such nodes are no doubt along at least one, if not many, shortest paths, it may not be the optimum way to reduce extinction time. If there are other nodes that sit on more shortest paths in a network, target those nodes with the cure instead.
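
The distinction between “most connected” and “on the most shortest paths” is exactly degree centrality vs. betweenness centrality. A quick way to see that they can disagree, sketched with the networkx package on a toy graph of my own choosing:

```python
import networkx as nx

# A barbell graph: two dense cliques joined by a short path.
# The path nodes have low degree but sit on nearly every shortest path.
G = nx.barbell_graph(5, 2)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

print("highest degree node:     ", max(degree, key=degree.get))
print("highest betweenness node:", max(betweenness, key=betweenness.get))
# Curing the bridge nodes (highest betweenness) shortens extinction time more
# than curing a well-connected node buried inside one of the cliques.
```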

Applying the SIC model to COVID-19

It seems to me that if we were to model the physical social connectivity of individuals in a population (city, town, state, etc.), and we wanted to infect the highest portion of people in the shortest time, we would target shortest path individuals to be infected first.

Conversely, if we wanted to slow down the infection rate of COVID-19, it would be extremely important to reduce the physical connectivity of individuals on the shortest paths in a population. Which is why social distancing, at least when broadly applied, works. It’s also why, when infected, self quarantining is the best policy. But if you wished to not apply social distancing broadly, perhaps targeting those individuals on the shortest paths to practice social distancing could suffice.

However, there are at least two other approaches to using the SIC model to eradicate (extinguish the disease) the fastest:

  1. Suppose we were able to produce an antidote, say a vaccine, but one which had the property of being infectious (say a less potent strain of the COVID-19 virus). Then targeting this vaccine to those people on the shortest paths in a network would extinguish the pandemic in the shortest time. Please note that, to my knowledge, any vaccine (course), if successful, will eliminate a disease and provide antibodies against future infections of that disease. So the time when a person is infected with a vaccine strain is limited, and would likely be much shorter than the time someone is infected with the original disease. And most vaccines, being a weakened version of the original disease, may not be as infectious. So in the wild, the vaccine and the original disease would compete to infect people.
  2. Another approach using the SIC model is to produce a normal (non-transmissible) vaccine and target vaccination to individuals on the shortest paths in a population network. Once vaccinated, these people would no longer be able to infect others and would block infections from reaching individuals down-network from them. One problem with this approach: if everyone is already infected, vaccinating anyone will not slow down future infection rates.

There may be other approaches to using SIC to combat COVID-19 than the above but these seem most reasonable to me.

So, health organizations of the world, figure out your population’s physical-social connectivity network (perhaps using mobile phone GPS information) and target any cure/vaccination to those individuals on the highest number of shortest paths through your network.

Comments?

Photo Credit(s):

  1. Figure 2 from the Modeling and analysis of conflicting information propagation in a finite time horizon article pre-print
  2. Figure 3 from the Modeling and analysis of conflicting information propagation in a finite time horizon article pre-print
  3. COVID-19 virus micrograph, from USA CDC.

Using jell-o (hydrogel) for new form of photonics computing

Read an article the other day which blew me away, Researchers Create “Intelligent” Interaction Between Light and Material – New Form of Computing, which discussed the use of a hydrogel (like raspberry jell-o) that can be used both as a photonic switch for optical communications and as a modifiable material to create photonic circuits. The research paper on the topic is also available on PNAS, Opto-chemo-mechanical transduction in photoresponsive gels elicits switchable self-trapped beams with remote interactions.

Apparently researchers have created this gel (see B in the graphic above) which, when exposed to laser light, interacts to a) trap the beam within a narrow cylinder and/or b) when exposed to parallel beams, boost the intensity of one of the beams. They still have some work to do to show more interactions on laser beam(s), but the trapping of laser beams is well documented in the PNAS paper.

Jell-o optical fibres

Most laser beams broaden as they travel through space, but when a laser beam is sent through the new gel it becomes trapped in a narrow volume, almost as if sent through a pipe.

The beam trapping experiment used a hydrogel cube of ~4mm per side. They sent a focused laser beam with a ~20um diameter through a 4mm empty volume and measured the beam’s spread to be ~130um in diameter. Then they did the same experiment, only this time shining the laser beam through the hydrogel cube, and over time (>50 seconds) the beam diameter narrows to ~22um. In effect, the gel over time constructs (drills) a self-made optical fibre, or cylindrical microscopic waveguide, for the laser beam.

A similar process works with multiple laser beams going through the gel. More below on what happens with 2 parallel laser beams.

The PNAS article has a couple of movies showing the effect from the side of the hydrogel, with a single laser beam and with multiple laser beams.

Apparently, as the beam propagates through the hydrogel, it alters the opto-mechanical properties of the material such that the refractive index within the beam diameter becomes higher than outside it. Over time, as this material change takes place, the beam diameter narrows back down to almost the size of the incoming beam. The light-responsive molecules in such materials are called chromophores.

It appears that the self-trapping effectiveness is a function of the beam intensity. That is, higher intensity incoming laser beams (6.0W in C above) cause the exit beam to narrow, while lower intensity (0.37W) incoming laser beams don’t narrow as much.

This self-created optical wave-guide (fibre) through the gel can be reset or reversed (> 45 times) by turning off the laser and leaving the gel in darkness for a time (200 seconds or so). This allows the material to be re-used multiple times to create other optical channels or to create the same one over and over again.

Jell-o optical circuits

It turns out that when two laser beams are shone through the gel in parallel, the distance between them changes how they interact, even though they don’t cross.

When the two beams are around 200um apart, the two beams self channel to about ~40um (incoming beams at ~20um). But the intensities of the two beams are not the same at the exit as they were at the entrance to the gel. One beam’s intensity is boosted by a factor of 12 or so and the other is boosted by a factor of 9, providing an asymmetric intensity boost. It’s unclear how the higher intensity beam is selected, but if I read the charts right, the more intensely boosted beam is turned on after the less intensely boosted beam (so the 2nd one in gets the higher boost).

When one of the beams is disabled (turned off/blocked), the intensity of the remaining beam is boosted on the order of 20X. This boosting effect can be reversed by illuminating (turning back on/unblocking) the blocked laser. But, oddly, the asymmetric boosting is no longer present after this point. The process can seemingly revert back to the 20X intensity boost just by disabling the other laser beam again.

When the two beams are within 25um of each other, the two beams emerge with the same (or close to similar) intensity (symmetric boosting), and as you block one beam the other increases in intensity, but not by as much as with the farther apart beams (only 9X).

How to use this effect to create an optical circuit is beyond me but they haven’t documented any experiments where the beams collide or are close together but at 90-180 degrees from one another. And what happens when a 3rd beam is introduced? So there’s much room for more discovery.

~~~~

Just in case you want to try this at home, here is the description of how to make the gel from the PNAS article: “The polymerizable hydrogel matrix was prepared by dissolving acrylamide:acrylic acid or acrylamide:2-hydroxyethyl methacrylate (HEMA) in a mixture of dimethyl sulfoxide (DMSO):deionized water before addition of the cross-linker. Acrylated SP (for tethered samples) or hydroxyl-substituted SP was then added to the unpolymerized hydrogel matrix followed by an addition of a catalyst. Hydrogel samples were cured in a circular plastic mold (d = 10 mm, h = 4 mm thick).”

How long it will take to get the gel from the lab to your computer is anyone’s guess. It seems to me they have quite a way to go before they can simulate the “nor” or “nand” universal logic gates widely used to create electronic circuits today.

On the other hand, using the gel in optical communications may come earlier. Having a self trapping optical channel seems useful for a number of applications. And the intensity boosting effect would seem to provide an all optical amplifier.

I see two problems:

  1. The time it takes to establish a self trapping channel, ~50sec, is long, and it will probably take longer as the size of the material increases.
  2. The size of the material seems large for optical (or electronic) circuitry. 4mm may not seem like much, but it’s astronomical compared to the nm scale used in electronic circuitry.

The size may not be a real concern, as the movies don’t seem to show the beam, once trapped, changing across the material, so maybe a 1mm, or even 1um, cube of material could be used instead. The time is a more significant problem. Then again, there may be another gel recipe that acts quicker. But going from 50sec down to something like 50nsec is nine orders of magnitude, so there’s a lot of work to do here.

Comments?

Photo Credit(s): all charts are from the PNAS article, Opto-chemo-mechanical transduction in photoresponsive gels…

Breaking IoT security

Read an article the other day (Researchers exploit low entropy of IoT devices to break RSA certificates) about researchers cracking IoT device security and breaking their public key encryption keys. The report focused on PKI and RSA certificates and IoT devices. The article mentioned the research paper describing the attack in more detail.

safe ‘n green by Robert S. Donovan (cc) (from flickr)

RSA certificates publish a public key and the digital signature of the certificate and identify the device that owns the certificate.

What the researchers were able to show was that ~250K keys in IoT device RSA certificates were insecure. They were able to compromise the 250K RSA certificates using a single Microsoft Azure VM and about $3K of computer time.

It turns out that if two RSA certificate public keys share the same prime factor, it’s much easier to determine the greatest common divisor (GCD) of the two public keys than it is to factor either one of them. And once you have the GCD of the two keys, it’s relatively trivial to determine the other factor in each public key. And that’s just what they did.
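
Here’s that trick in miniature (tiny primes for readability; real RSA moduli are 2048 bits, but Euclid’s algorithm stays cheap): factoring either modulus alone is hard, yet one gcd call exposes the shared prime and a division yields the rest.

```python
from math import gcd

# Two "RSA moduli" that accidentally share the prime factor p
# (because, say, both devices seeded their RNG the same way).
p, q1, q2 = 1000003, 1000033, 1000037
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)          # Euclid's algorithm: cheap even for huge keys
print(shared == p)            # True: the common factor falls right out
print(n1 // shared == q1)     # ...and the other factor is just a division away
# With both factors of n1 known, its private key can be reconstructed directly.
```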

Public key infrastructure (PKI) encryption depends on asymmetric cryptography, using a “public” key to encrypt messages (or to encrypt a one time key to be used in later encryption of messages) and a “private” key to decrypt the message (or keys) and sign digital certificates. There are certificate authorities and a number of other elements used in PKI, but the asymmetric cryptography at its heart rests on the difficulty of factoring large numbers, whose prime factors need to be chosen at random.

True randomness is hard

Just some of the recently donated seeds that are being added to the Reading Food Growing Network seed swap boxes, including some Polish gherkin seeds.

The problem starts with generating truly random numbers in a digital computer. Digital algorithms typically depend on a computer performing the same set of instructions, in the same way and sequence, so as to get the same answer every time we run the algorithm.

But if you want random numbers, this predictability of always coming up with the same answer results in non-random numbers (or rather, random numbers that are the same each time you run the algorithm). So to get around this, most random number generators make use of a (random) seed, which is used as an input to the algorithm to generate random numbers.

However, this seed needs to be a random number. And to create a random number, it needs to be generated not with instructions but using something outside the digital computer. One approach, noted below, is to use the timing of a human typing keys to generate a random number to be used as a seed.

The researchers exploited the fact that most IoT devices don’t use a random (enough) seed for their PKI key generation. And they were able to use the GCD trick to figure out the factors of the PKI keys.

But the lack of true randomness (or entropy) is the real problem. Somehow, these devices need to have a cheap and effective way to generate a random seed. Until this can be found, they will be subject to these sorts of attacks.

… but not impossible to obtain

I remember in times past, when tasked to create a public key-private key pair, I had to type some random characters. The public key encryption algorithm used the inter-character time intervals of my typing to generate a random seed that was then used to generate the key pair. I believe the two factors used to build the public key also need to be prime numbers.

Perhaps a better approach would be to assign IoT devices their keys from a centralized key distributor. That way the randomness could be controlled by the (key) distributor.

There are other approaches that depend on the sensors available to an IoT device. If the device has a camera or mic, taking raw data from the camera or sound sensor and doing a numerical transform on it may suffice. Strain gauges, liquid level, temperature, humidity, wind speed, etc.: all of these devices have something which senses the world around them, and many of these are, at their base, analog sensors. Reading and converting some portion of these analog signals from raw analog into a digital random seed could be a very effective way to generate true(r) randomness.
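
As a sketch of that idea (read_adc_sample() is a hypothetical stand-in for whatever raw sensor interface a given device exposes): hash the noisy low-order bits of many analog readings into a seed and hand that to the key generation routine.

```python
import hashlib

def read_adc_sample():
    """Hypothetical raw analog-to-digital read from a mic, strain gauge, etc.
    Stub here: a real device would return a noisy integer sample."""
    raise NotImplementedError("replace with the device's sensor read")

def gather_seed(num_samples=4096):
    """Hash the low-order (noisiest) bits of many analog readings into 32 bytes."""
    h = hashlib.sha256()
    for _ in range(num_samples):
        sample = read_adc_sample()
        h.update((sample & 0xFF).to_bytes(1, "big"))   # keep only the noisy LSBs
    return h.digest()                                   # 256-bit seed

# seed = gather_seed()
# ...feed `seed` into the device's CSPRNG / RSA key generation routine.
```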

~~~~

The paper has much more information about the attack and their results if you’re interested. They said that ~50% of the compromised devices were from a large network supplier. Such suppliers probably also have a vast majority of devices deployed. Still, it’s troubling nonetheless.

Until changes are made to IoT devices, they will continue to be insecure. That’s not as much of a problem when they are read-only sensors, but when the information they sense is used by robots or other automation to make decisions about actions, insecure IoT becomes a safety issue.

This is not the first time such an attack was attempted and each time, it’s been very successful. That alone should be cause for alarm. But IoT and similar devices are hard to patch in the field and their continuing insecurity may be more of a result of the difficulty of updating a large install base than anything else.

Internet of Tires

Read an article a couple of weeks back (An internet of tires?… IEEE Spectrum) and can’t seem to get it out of my head. Pirelli, a European tire manufacturer, was demonstrating a smart tire, or as they call it, their new Cyber Tyre.

The Cyber Tyre includes accelerometer(s) in its rubber that can be used to sense pavement/road surface conditions. The Cyber Tyre can communicate surface conditions to the car and, using the car’s 5G, to other cars (of the same make) to tell them of problems with surface adhesion (hydroplaning, ice, other traction issues).

Presumably the accelerometers in the Cyber Tyre measure the acceleration changes of individual tires as they rotate. Any rapid acceleration change could potentially be used to determine whether the car has lost traction and why.
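
Purely as an illustration of the kind of signal processing that implies (this is not Pirelli’s actual algorithm, and the window and threshold below are invented): compare each new accelerometer sample against a short rolling baseline and flag any jump well outside the recent variation.

```python
from collections import deque
from statistics import mean, pstdev

def traction_alerts(samples, window=32, sigma=4.0):
    """Yield indices where a tire-accelerometer sample jumps far outside
    the recent norm, a crude stand-in for a loss-of-traction detector."""
    recent = deque(maxlen=window)
    for i, a in enumerate(samples):
        if len(recent) == window:
            mu, sd = mean(recent), pstdev(recent)
            if sd > 0 and abs(a - mu) > sigma * sd:
                yield i
        recent.append(a)

# Synthetic data: steady rolling vibration, then a sudden slip-like spike.
readings = [1.0 + 0.01 * (i % 5) for i in range(100)] + [3.0, 3.2, 1.0]
print(list(traction_alerts(readings)))   # flags the spike around index 100
```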

They tested the new tires out at a (1/3rd mile) test track on top of a Fiat factory, using Audi A8 automobiles and 5G. Unclear why this had to wait for 5G, but it’s possible that with 5G, the Cyber Tyre and the car could log and transmit such information back to the manufacturer of the car or tire.

Accelerometers have become dirt cheap over the last decade as smart phones have taken off. So, it was only a matter of time before they found use in new and interesting applications and the Cyber Tyre is just the latest.

Internet of Vehicles

Presumably the car, with Cyber Tyres on it, communicates road hazard information to other cars using 5G and vehicle to vehicle (V2V) communication protocols or perhaps to municipal or state authorities. This way highway signage could display hazardous conditions ahead.

Audi has a website devoted to Car-to-X communications, which describes how certain Audi vehicles (A4, A5 & Q7) have been embedded with cellular communications, cameras and other sensors used to identify (recognize) signage, hazards, and other information and communicate this data to other Audi vehicles. This way, owning an Audi would plug you into this information flow.

Pirelli’s Cyber Car Concept

Prior to the Cyber Tyre, Pirelli introduced a Cyber Car concept that is supposedly rolling out this year. This version has tyres that report real time pressure, temperature and (static) vertical load, plus a Tyre ID. Pirelli has been working with car manufacturers to roll out Cyber Car functionality.

The Tyre ID seems to be a file that can include anything the tyre or automobile manufacturer wants. It sort of reminds me of blockchain data blocks that could be used to validate tyre manufacturing provenance.

The vertical load sensor seems more important to car and tire manufacturers than consumers. But for electric car owners, knowing car weight could help determine current battery load and thereby more precisely know how much charge is left in a battery.

Pirelli uses a proprietary algorithm to determine tread wear. This makes use of the other tyre sensors to predict wear and perhaps uses an AI DL algorithm to do this.

~~~~

ABS has been around for decades now and tire pressure sensors for over 10 years or so. My latest car has enough sensors to pretty much drive itself on the highway but not quite park itself as of yet. So it was only a matter of time before something like smart tires would show up.

But given their integration with car electronics systems, it would seem that this would only make sense for new cars that included a full set of Cyber Tyres. That is, until all tire AND car manufacturers agree on a standard protocol to communicate such information. When that happens, consumers could choose any tire manufacturer and obtain similar if not the same functionality.

I suppose someone had to be first to identify just what could be done with the electronics available today. Pirelli just happens to be it for now in the tire industry.

I just don’t want to have to upgrade tires every 24 months. And, if I have to wait a long time for my car to boot up and establish communications with my tires, I may just take a (dumb) bike.
