A new way to compute

I read an article the other day on using random pulses, rather than digital numbers, to compute with: Computing with random pulses promises to simplify circuitry and save power, in IEEE Spectrum. Essentially, they encode a number as a probability in a random string of bits and then use simple logic to compute with it. This approach was invented in the early days of digital logic and was called stochastic computing.

Stochastic numbers?

It’s pretty easy to understand how such logic can work for fractions. For example, to represent 1/4 you would construct a random bit stream in which, on average, one out of every four bits is a 1 and the rest are 0’s.
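As a minimal sketch of the encoding (the encode/decode helpers here are my own illustration, not anything from the article), you can generate such a stream by flipping a biased coin for each bit:

```python
import random

def encode(p, n):
    """Encode probability p as a random bit stream of n bits."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def decode(stream):
    """Recover the encoded value as the fraction of 1 bits."""
    return sum(stream) / len(stream)

stream = encode(0.25, 1024)
print(decode(stream))  # close to 0.25, e.g. 0.248
```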

A nice result of such a numerical representation is that you get more precision simply by increasing the length of the bit stream: the longer the stream, the closer its average approaches the encoded value. The paper calls this progressive precision.

Progressive precision helps stochastic computing be more fault tolerant than standard digital logic. If one bit in a stream is flipped, the stream differs only slightly from the original, and computing with such a slightly erroneous number will probably produce results similar to those from the correct one. Getting anything like this in digital computation requires parity bits, ECC, CRC and other error correction mechanisms, and the logic required to implement these is extensive.
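Both properties are easy to check empirically. A quick sketch, reusing the hypothetical encode/decode helpers from above:

```python
# Progressive precision: the estimate tightens as the stream grows.
for n in (16, 256, 4096, 65536):
    print(n, abs(decode(encode(0.25, n)) - 0.25))

# Fault tolerance: flipping a single bit perturbs the value by only 1/n.
stream = encode(0.25, 4096)
stream[0] ^= 1
print(abs(decode(stream) - 0.25))  # a one-bit error shifts the value by just 1/4096
```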

Stochastic computing

(Figure: 2 bit by 2 bit multiplier)

Another advantage of stochastic computation, using a probability rather than a binary (or decimal) digital representation, is that most arithmetic functions are much simpler to implement.


They discuss two examples in the original paper (both demonstrated in the sketch after this list):

  • AND gate

    Multiplication – Multiplying two probabilistic bit streams together is as simple as ANDing the two streams. The output bit is 1 only when both input bits are 1, so for independent streams the output probability is the product of the input probabilities.

  • 2 input stream multiplexer

    Addition – Adding two probabilistic bit streams just requires a multiplexer, but you end up with a bit stream that encodes the sum of the two divided by two.
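Here’s how both gates behave on simulated streams, again reusing the hypothetical encode/decode helpers from earlier (a sketch of the idea, not the paper’s circuits):

```python
def multiply(a, b):
    """AND gate: for independent streams, P(a AND b) = P(a) * P(b)."""
    return [x & y for x, y in zip(a, b)]

def scaled_add(a, b, select):
    """Multiplexer with a p=0.5 select stream: output encodes (P(a)+P(b))/2."""
    return [x if s else y for x, y, s in zip(a, b, select)]

n = 65536
a, b = encode(0.5, n), encode(0.25, n)
print(decode(multiply(a, b)))                    # ~0.125 = 0.5 * 0.25
print(decode(scaled_add(a, b, encode(0.5, n))))  # ~0.375 = (0.5 + 0.25) / 2
```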

What about other numbers?

I see a few problems with stochastic computing:

  • How do you represent an irrational number, such as the square root of 2;
  • How do you represent integers, or for that matter any value greater than 1.0, in a probabilistic bit stream; and
  • How do you represent negative values in a bit stream.

I suppose irrational numbers could be represented by taking a nearby rational approximation of the irrational number. For instance, using 1.4 for the square root of two, or 1.41, or 1.414, …. This way you could get whatever (progressive) precision was needed.

As for integers or, for that matter, any value greater than 1.0, perhaps they could use a floating point representation, with two defined bit streams: one representing the mantissa (fractional part) and the other an exponent. The exponent, rather than being a probability from 0 to 1.0, would be inverted so as to represent a scale from 1.0 to ∞.
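One possible reading of that scheme (entirely my own illustration; the paper defines no such format) would decode the value as mantissa divided by exponent, so an exponent probability near zero scales the value toward infinity:

```python
def decode_extended(mantissa_stream, exponent_stream):
    """Hypothetical two-stream format: value = mantissa / exponent."""
    return decode(mantissa_stream) / decode(exponent_stream)

# e.g. represent 3.0 as mantissa 0.75 and exponent 0.25: 0.75 / 0.25 = 3.0
print(decode_extended(encode(0.75, 65536), encode(0.25, 65536)))  # ~3.0
```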

Negative numbers are a different problem. One way to supply negative numbers is to use something akin to a complementary (offset) representation. For example, rather than the probabilistic bit stream representing 0.0 to 1.0, have it represent -0.5 to 0.5. Then progressive precision would work for negative numbers as well as positive numbers.
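A sketch of that offset idea (again my own illustration), shifting the representable range down by 0.5:

```python
def encode_signed(v, n):
    """Offset encoding: map v in [-0.5, 0.5] to the probability v + 0.5."""
    return encode(v + 0.5, n)

def decode_signed(stream):
    """Shift the decoded probability back down into [-0.5, 0.5]."""
    return decode(stream) - 0.5

print(decode_signed(encode_signed(-0.3, 65536)))  # ~-0.3
```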

One major downside to stochastic numbers and computation is that high precision arithmetic is very difficult to achieve. To perform 32 bit precision arithmetic would require bit streams that were 2³² bits long; 64 bit precision would require streams that were 2⁶⁴ bits long.

Good uses for stochastic computing

One advantage of the simplified logic used in stochastic computing is that it needs a lot less power to compute. One example used in the paper for stochastic computing is a retinal sensor for in-body visual augmentation. They developed a neural net that did edge detection, with a stochastic front end to simplify the logic and cut down on power requirements.

Another area where stochastic computing might help is IoT applications. There’s been a lot of interest in IoT sensors being embedded in streets, parking lots, buildings, bridges, trucks, cars, etc. Most need to perform a modest amount of edge computing and then send information up to the cloud or to some intermediate edge consolidator.

Many of these embedded devices lack access to power, so they will need to make do with whatever they can find. One approach is to siphon power from ambient radio (see this Electricity harvesting… article), temperature differences (see this MIT … power from daily temperature swings article), footsteps (see Pavegen) or other mechanisms.

The other use for stochastic computing is to mimic the brain. It appears that the brain encodes information in pulses of electric potential. Computation in the brain happens across excitatory and inhibitory circuits that all seem to interact together. Stochastic computing might be an effective, low power way to simulate the brain at a much finer granularity than what’s available today using standard digital computation.

~~~~

Not sure it’s all there yet, but there are definitely some advantages to stochastic computing. I could see it being especially useful for in-body sensors and many IoT devices.

Comments?

Photo Credit(s): The logic of random pulses

2 bit by 2 bit multiplier, By Sodaboy1138 (talk) (Uploads) – Own work, CC BY-SA 3.0, wikimedia

AND ANSI Labelled, By Inductiveload – Own work, Public Domain, wikimedia

2 Input multiplexor

A battery free implantable neural sensor, MIT Technology Review article

Integrating neural signal and embedded system for controlling a small motor, an IntechOpen article

Extremely low power transistors open up new IoT applications

We have written before about the computational power efficiency law known as Koomey’s Law, which states that the number of computations one can do with the same amount of energy has been doubling every 1.57 years (for more info, please see my No power sensors surface … post).
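To put that doubling rate in perspective, a quick back-of-the-envelope calculation using the 1.57 year figure:

```python
# Koomey's law: computations per joule double roughly every 1.57 years.
def koomey_gain(years, doubling_period=1.57):
    """Multiplier on computations-per-joule after the given number of years."""
    return 2 ** (years / doubling_period)

print(f"{koomey_gain(1):.2f}x")   # ~1.56x per year
print(f"{koomey_gain(10):.0f}x")  # ~83x after a decade
```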

The dawn of sub-threshold electronics

But just this week there was another article, this time about electronics that use much less power than normal transistors. Achieving this in Internet of Things (IoT) type sensors would take the computations/joule up by orders of magnitude, not just the ~1.6X per year of Koomey’s law, although how long it will take to come out commercially is another issue.

This new technology is called sub-threshold transistors, and they use much less power than normal transistors. The article in MIT Technology Review, A batteryless sensor chip for the IoT, discusses the phenomenon that sub-threshold transistors exploit: normal transistors, even when they are technically in the “off” state, leak some amount of current. Until recently, this CMOS parasitic leakage had been considered a current drain that couldn’t be eliminated and, as such, wasted energy.

Not so any longer. With the new sub-threshold transistor design paradigm, electronics can now take advantage of this “leakage” current to perform actual computations. And that opens up a whole new level of IoT sensors that could be deployed.

Prototype sub-threshold circuits coming out

One company, PsiKick, is using this phenomenon to design ASICs/chips that, depending on the application, use only 0.1 to 1% of the energy of similar functioning chips, thanks to sub-threshold transistors plus extensive power reduction design techniques. Their first prototype was a portable EKG that uses body heat to power itself with a thermo-electric generator rather than a battery. The prototype was just a proof of concept, but they seem to be at work trying to open the technology to broader applications.

One serious consideration limiting the types of sensors that could be deployed in IoT applications has been how to get power to them. The other is how to get information out of the sensor and out to the real world. There are a few ways to attack the power issue for IoT sensors: creating more efficient electronics, more effective/longer lasting batteries, and smaller electronic generators. Sub-threshold transistor electronics is a major leap toward more efficient electronics.

In my previous post we discussed ways to construct smaller electronic generators used by low-power systems/chips. One approach highlighted in that paper used small antennas to extract power from ambient radio waves. But that’s not the only way to generate small amounts of power. I have also heard of piezoelectric generators that use force and movement (such as foot falls) to generate energy. And of course, small solar panels could do the same trick.

Any of these micro energy generators could be made to work, and together with the ability to design circuits that use 0.1 to 1% of the electricity of normal circuits, this should just about eliminate any computational/power limits on the sorts of IoT sensors that could be deployed.

What about non-sensor/non-IoT electronics?

Not sure, if this works for IoT sensors, why it couldn’t be used for something more substantial like mobile/smart phones, desktop computers, enterprise servers, etc. To that end, it seems that ARM Holdings and IMEC are also looking at the technology.

Only a couple of years ago, everybody was up in arms about the energy consumption of server farms, especially on the west coast of the USA. But with this sort of sub-threshold transistor electronics coming online, maybe servers could run on ambient radio wave energy, data centers could run desktop computers and LED lighting off of thermo-electric generators inside their heat exchangers, and iPhones could run off of piezoelectric accelerometer generators, using the motion a phone undergoes while sitting in the pocket of a moving person.

Almost gives the impression of perpetual motion machines but rather than motion we are talking electronics, sort of like perpetual electronics…

So could a no-battery iPhone be in our future? I wouldn’t bet against it. Remember, the compute engine inside all iPhones is based on ARM technology.

Comments?

Photo credit(s): Intel Free Press: Joshua R. Smith holding a sensor