Anti-Gresham’s Law: Good information drives out bad

(Good information is in blue, bad information is in red)

Read an article the other day in ScienceDaily (Faster way to replace bad info in networks) that discusses research published in a recent issue of the IEEE/ACM Transactions on Networking journal (behind a paywall). Luckily, there was a pre-print available (Modeling and analysis of conflicting information propagation in a finite time horizon).

The article discusses information epidemics using the analogy of a virus and its antidote, where bad information (the virus) and good information (the antidote) circulate within a network of individuals (systems, friend networks, IoT networks, etc.). The bad information could be malware and its good information counterpart a system patch that fixes the vulnerability. Another example would be an outright lie about some event, whose counterpart would be the truth about that event.

The analysis in the paper makes some simplifying assumptions. In any single individual (network node), the virus and the antidote cannot co-exist. That is, an individual (node) is either infected by the virus, cured by the antidote, or yet to be infected or cured.

The network is connected and complex. That is, once an individual in a network is infected, unless an antidote is introduced, the infection proceeds to infect all individuals in the network. And once an antidote is introduced, it will cure all individuals in the network over time. Some individuals in the network have more connections to other nodes, while others have fewer.

The network functions in a bi-directional manner. That is, any node, let's say RAY, can infect/cure any node it is connected to and, conversely, any node it is connected to can infect/cure the RAY node.

Gresham's law (see the Wikipedia article) is a monetary principle which states that bad money in circulation drives out good. Bad money is money whose commodity (intrinsic) value is less than its face value, while good money is money whose commodity value is greater than its face value. In essence, good money is hoarded and people preferentially spend bad money.

My anti-Gresham's law is that good information drives out bad, where good information is the truth about an event, security patches, antidotes to infections, etc., and bad information is falsehoods, malware, biological viruses, etc.

The Susceptible-Infected-Cured (SIC) model

The paper describes a SIC model that simulates the (virus and antidote) epidemic propagation process, or the process whereby a virus and its antidote propagate throughout a network. This assumes that once a network node is infected (at time 0), during the next interval (time 0+1) it infects its nearest neighbors (nodes that are directly connected to it), and they in turn infect their nearest neighbors during the following interval (time 0+2), and so on, until all nodes are infected. Similarly, once a network node is cured it will cure all its neighbor nodes during the next interval, and these nodes will cure all of their neighbor nodes during the following interval, and so on, until all nodes are cured.
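
To make this propagation process concrete, here is a minimal discrete-time sketch of the spread on a toy graph. It is not the paper's model (which is a continuous-time stochastic process with infection and cure rates on each edge); the graph, node states, and synchronous update rule below are my own illustrative assumptions.

```python
# States: 'S' = susceptible, 'I' = infected (bad info), 'C' = cured (good info)
def sic_step(graph, state):
    """One synchronous time step: infected nodes infect susceptible neighbors,
    cured nodes cure susceptible and infected neighbors (good info overwrites bad)."""
    new_state = dict(state)
    for node, neighbors in graph.items():
        for nbr in neighbors:
            if state[node] == 'C' and new_state[nbr] in ('S', 'I'):
                new_state[nbr] = 'C'      # the antidote overwrites the virus
            elif state[node] == 'I' and new_state[nbr] == 'S':
                new_state[nbr] = 'I'      # the virus only takes susceptible nodes
    return new_state

def simulate(graph, infected0, cured0, max_steps=100):
    state = {n: 'S' for n in graph}
    for n in infected0: state[n] = 'I'
    for n in cured0:    state[n] = 'C'
    for t in range(max_steps):
        if 'I' not in state.values():     # extinction time of bad information
            return t, state
        state = sic_step(graph, state)
    return max_steps, state

# Toy 6-node network (adjacency lists); node 0 starts infected, node 5 starts cured
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
t_ext, final = simulate(graph, infected0=[0], cured0=[5])
print(f"bad information extinct after {t_ext} steps; final states: {final}")
```

The synchronous update just captures the wave-like spread and the fact that the cure eventually overwrites the infection everywhere; the paper works out the statistics of this race analytically.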

What can the SIC model tell us

The model provides calculations to generate a number of statistics, such as the half-life of bad information and the extinction time of bad information. The paper discusses the SIC model across complex (irregular) network topologies as well as completely connected and star topologies, and derives formulas for each type of network.

In the discussion portion of the paper, the authors indicate that if you are interested in curing a population of bad information, it's best to map out the network's topology and focus your curing efforts on those node(s) that lie along the most shortest paths within the network.

I wrongly thought that the best way to cure a population of nodes would be to cure the nodes with the highest connectivity. While this may work, and such nodes are no doubt on at least one if not many shortest paths, it may not be the optimal way to reduce extinction time. If there are other nodes that lie on more shortest paths in the network, target those nodes with the cure.
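
Counting how many shortest paths run through each node is what betweenness centrality measures, so one quick way to pick cure targets is to rank nodes by betweenness rather than by degree (connectivity). A minimal sketch with networkx, on a made-up network rather than anything from the paper:

```python
import networkx as nx

# Toy network: a "bridge" node (4) connects two tight clusters but has low degree
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),   # cluster A
                  (3, 4), (4, 5),                                   # the bridge
                  (5, 6), (5, 7), (5, 8), (6, 7), (6, 8), (7, 8)])  # cluster B

degree = dict(G.degree())                    # raw connectivity
betweenness = nx.betweenness_centrality(G)   # share of shortest paths through each node

print("top by degree:     ", sorted(degree, key=degree.get, reverse=True)[:3])
print("top by betweenness:", sorted(betweenness, key=betweenness.get, reverse=True)[:3])
# Node 4 has only two connections, but every cross-cluster shortest path runs
# through it, so it tops the betweenness ranking -- the better cure target.
```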

Applying the SIC model to COVID-19

It seems to me that if we were to model the physical social connectivity of individuals in a population (city, town, state, etc.), and we wanted to infect the highest portion of people in the shortest time, we would target shortest-path individuals to be infected first.

Conversely, if we wanted to slow down the infection rate of COVID-19, it would be extremely important to reduce the physical connectivity of individuals on the shortest paths in a population. This is why social distancing, at least when broadly applied, works. It's also why, when infected, self-quarantining is the best policy. But if you did not wish to apply social distancing broadly, perhaps targeting those individuals on the shortest paths to practice social distancing could suffice.

However, there are at least two other approaches to using the SIC model to eradicate (extinguish) the disease the fastest:

  1. Now, if we were able to produce an antidote, say a vaccine, but one which had the property of being infectious (say, a less potent strain of the COVID-19 virus), then targeting this vaccine at those people on the shortest paths in a network would extinguish the pandemic in the shortest time. Please note that, to my knowledge, any vaccine (course), if successful, will eliminate a disease and provide antibodies against any future infection by that disease. So the time a person is infected with a vaccine strain is limited and would likely be much shorter than the time someone is infected with the original disease. Also, most vaccines are likely to be a weakened version of the original disease and may not be as infectious. So, in the wild, the vaccine and the original disease would compete to infect people.
  2. Another approach to using the SIC model is to produce a normal (non-transmissible) vaccine and target vaccination at individuals on the shortest paths in a population network. Once vaccinated, these people would no longer be able to infect others and would block infections from reaching individuals down-network from them. One problem with this approach is if everyone is already infected; vaccinating anyone then will not slow down future infection rates.

There may be other approaches to using the SIC model to combat COVID-19 besides the above, but these seem the most reasonable to me.

So, health organizations of the world, figure out your population's physical-social connectivity network (perhaps using mobile phone GPS information) and target any cure/vaccination at those individuals on the highest number of shortest paths through the network.

Comments?

Photo Credit(s):

  1. Figure 2 from the Modeling and analysis of conflicting information propagation in a finite time horizon article pre-print
  2. Figure 3 from the Modeling and analysis of conflicting information propagation in a finite time horizon article pre-print
  3. COVID-19 virus micrograph, from USA CDC.

Breaking IoT security

Read an article the other day (Researchers exploit low entropy of IoT devices to break RSA certificates) about researchers cracking IoT device security and breaking their public key encryption keys. The report focused on PKI and RSA certificates in IoT devices, and it also mentioned the research paper that describes the attack in more detail.

safe 'n green by Robert S. Donovan (cc) (from flickr)

An RSA certificate publishes a public key and the digital signature of the certificate, and identifies the device that owns the certificate.

What the researchers were able to show was that ~250K keys in IoT device RSA certificates were insecure. They were able to compromise those 250K RSA certificates using a single Microsoft Azure VM and about $3K of compute time.

It turns out that if two RSA public keys (moduli) share a common factor, it's much easier to compute the greatest common divisor (GCD) of the two keys than it is to factor either one of them. And once you have the GCD of the two keys, it's trivial to determine the other factor of each public key. And that's just what they did.
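
Here is a toy sketch of that shared-factor attack using tiny primes. Real RSA moduli are 2048 bits or more, and the researchers ran GCDs pairwise across millions of collected certificates, but the arithmetic is the same:

```python
from math import gcd

# Two devices with poor entropy happen to pick the same prime p for their keys
p, q1, q2 = 101, 103, 107        # toy primes; real ones are ~1024 bits each
n1, n2 = p * q1, p * q2          # the two published public moduli

shared = gcd(n1, n2)             # cheap to compute, even for huge numbers
assert shared == p               # the common factor falls right out
print("factors of n1:", shared, n1 // shared)
print("factors of n2:", shared, n2 // shared)
# With both factors of a modulus known, the private key can be reconstructed
# and that certificate's key pair is completely broken.
```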

Public key infrastructure (PKI) encryption depends on asymmetric cryptography, using a "public" key to encrypt messages (or to encrypt a one-time key used for later encryption of messages) and a "private" key to decrypt the messages (or keys) and sign digital certificates. There are certificate authorities and a number of other elements in PKI, but the asymmetric cryptography at its heart rests on the difficulty of factoring large numbers, and those large numbers need to be products of randomly chosen primes.

True randomness is hard

Just some of the recently donated seeds that are being added to the Reading Food Growing Network seed swap boxes, including some Polish gherkin seeds.

The problem starts with generating truly random numbers on a digital computer. Digital algorithms depend on a computer performing the same set of instructions, in the same way and sequence, so as to get the same answer every time the algorithm runs.

But if you want random numbers, this predictability of always coming up with the same answer results in non-random numbers (or rather, random-looking numbers that are the same each time you run the algorithm). To get around this, most random number generators make use of a (random) seed, which is used as an input to the algorithm that generates the random numbers.

However, this seed itself needs to be random. And to create a truly random number, it needs to be generated not with instructions but from something outside the digital computer. One approach, noted below, is to use the timing of a human typing keys to generate a random number to be used as a seed.
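
A minimal sketch of why the seed matters: the same seed always reproduces the same "random" key material, while an OS entropy source practically never repeats. (The key-generation function here is a made-up stand-in, not a real RSA routine.)

```python
import random
import secrets

def toy_keygen(seed):
    """Stand-in for key generation: deterministically derives 'key material' from a seed."""
    rng = random.Random(seed)
    return rng.getrandbits(64)

# Low-entropy seed (e.g., a constant, or boot time rounded to the second):
# every device that starts the same way derives the *same* key material.
print(toy_keygen(seed=1234) == toy_keygen(seed=1234))    # True -- identical keys

# High-entropy seed drawn from the operating system's entropy pool:
print(toy_keygen(secrets.randbits(128)) == toy_keygen(secrets.randbits(128)))  # False
```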

The researchers exploited the fact that most IoT devices don't use a random (enough) seed for their PKI key generation. And so they were able to use the GCD trick to figure out the factors of those devices' keys.

But the lack of true randomness (or entropy) is the real problem. Somehow, these devices need to have a cheap and effective way to generate a random seed. Until this can be found, they will be subject to these sorts of attacks.

… but not impossible to obtain

I remember, in times past, when tasked with creating a public key-private key pair, having to type some random characters. The key-generation software used the inter-character time intervals of my typing to generate a random seed that was then used to generate the key pair. I believe the two factors behind the public key also need to be prime numbers.

Earth globe within a locked cage

Perhaps a better approach would be to assign IoT devices their keys from a centralized key distributor. That way, the randomness could be controlled by the (key) distributor.

There are other approaches that depend on the sensors available to an IoT device. If the device has a camera or microphone, taking raw data from the camera or sound sensor and applying a numerical transform to it may suffice. Strain gauges, liquid-level sensors, temperature, humidity, wind speed, etc.: all of these devices have something which senses the world around them, and many of these are, at their base, analog sensors. Reading some portion of these analog signals and converting them from raw analog into a digital random seed could be a very effective way to generate true(r) randomness.
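
A sketch of that idea: condition the noisy low-order bits of many analog sensor readings through a cryptographic hash to derive a seed. The read_adc() function is a made-up stand-in for whatever analog-to-digital interface a device actually has, and a real design would also test the harvested entropy before trusting it.

```python
import hashlib
import random

def read_adc(channel):
    """Stand-in for reading a raw analog-to-digital converter value (0-4095).
    On real hardware this would come from a temperature, light, or mic sensor."""
    return random.randint(0, 4095)   # placeholder only

def harvest_seed(samples=256):
    """Hash together the noisy low-order bits of many sensor readings, so even
    partially predictable readings still yield a usable random seed."""
    h = hashlib.sha256()
    for _ in range(samples):
        reading = read_adc(channel=0)
        h.update((reading & 0x0F).to_bytes(1, "big"))   # keep only the noisy low 4 bits
    return int.from_bytes(h.digest(), "big")

seed = harvest_seed()
print(f"derived 256-bit seed: {seed:064x}")
```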

~~~~

The paper has much more information about the attack and their results if you're interested. They said that ~50% of the compromised devices were from a large network supplier. Such suppliers probably also have the vast majority of devices deployed. Still, it's troubling nonetheless.

Until changes are made to IoT devices, they will continue to be insecure. That is not as much of a problem when they are read-only sensors, but when the information they sense is used by robots or other automation to make decisions about actions, insecure IoT becomes a safety issue.

This is not the first time such an attack has been attempted, and each time it's been very successful. That alone should be cause for alarm. But IoT and similar devices are hard to patch in the field, and their continuing insecurity may be more a result of the difficulty of updating a large installed base than anything else.

Photo Credit(s):

Where should IoT data be processed – part 1

I was at FlashMemorySummit 2019 (FMS2019) this week and there was a lot of talk about computational storage (see our GBoS podcast with Scott Shadley, NGD Systems). There was also a lot of discussion about IoT and the need for data processing done at the edge (or in near-edge computing centers/edge clouds).

At the show, I was talking with Tom Leyden of Excelero and he mentioned there was a real need for some insight on how to determine where IoT data should be processed.

For our discussion let’s assume a multi-layered IoT architecture, with 1000s of sensors at the edge, 100s of near-edge processing/multiplexing stations, and 1 to 3 core data center or cloud regions. Data comes in from the sensors, is sent to near-edge processing/multiplexing and then to the core data center/cloud.

Data size

Dans la nuit des images (Grand Palais) by dalbera (cc) (from flickr)

When deciding where to process data, one key aspect is the size of the data, typically in GB or TB, though in today's world it can be PB as well. This lone parameter has multiple impacts and affects many other considerations, such as the cost and time to transfer the data, the cost of data storage, the amount of time to process the data, etc. All of these sub-factors depend on the size of the data to be processed.

Data size can be the largest single determinant of where to process the data. If we are talking about GB of data, it could probably be processed anywhere from the sensor edge, to the near-edge station, to the core. But if we are talking about TB, the processing requirements and time go up substantially; such processing is unlikely to be available at the sensor edge and may not be available at the near-edge station. And PB takes this up to a whole other level and may require processing only at the core due to the infrastructure requirements.
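
For a rough feel of why size dominates, here is a back-of-the-envelope calculation of transfer times alone. The link speeds are assumptions for illustration; real uplinks from sensors and near-edge stations vary widely.

```python
def transfer_hours(data_bytes, link_bits_per_sec):
    """Hours needed to move data_bytes over a link, ignoring protocol overhead."""
    return data_bytes * 8 / link_bits_per_sec / 3600

GB, TB, PB = 1e9, 1e12, 1e15
links = {"100 Mb/s sensor uplink": 100e6,
         "1 Gb/s near-edge uplink": 1e9,
         "10 Gb/s core connection": 10e9}

for name, speed in links.items():
    for label, size in [("1 GB", GB), ("1 TB", TB), ("1 PB", PB)]:
        print(f"{label} over {name}: {transfer_hours(size, speed):,.1f} hours")
# 1 GB moves in seconds to minutes anywhere; 1 TB takes ~2.2 hours even at 1 Gb/s;
# 1 PB takes ~93 days at 1 Gb/s -- which is why PB-scale data tends to be
# processed (or at least reduced) close to where it is generated.
```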

Processing criticality

Human or machine safety may depend on quick processing of sensor data, e.g., in a self-driving car, on a factory floor, for flood gauges, etc. In these cases, some amount of data processing (sufficient to ensure human/machine safety) needs to be done at the lowest point in the hierarchy that has the processing power to perform this activity.

This could be in the self-driving car or the factory automation that controls a mechanism. Similar situations would probably apply to any robots and autopilots. Anywhere an IoT sensor array is used to control an entity that could jeopardize the life of humans or the safety of machines, safety-level processing needs to be done at the lowest level in the hierarchy.

If processing doesn't involve safety, then it could potentially be done at the near-edge stations or at the core.

Processing time and infrastructure requirements

Although we talked about this under data size above, infrastructure requirements must also play a part in where data is processed. Yes, sensors are getting more intelligent, and the same goes for near-edge stations. But if you're processing the data multiple times, say for deep learning, it's probably better to do this where there's a bunch of GPUs and some way of keeping the data pipeline running efficiently. The same applies to any data analytics that distributes workloads and data across a gaggle of CPU cores, storage devices, network nodes, etc.

There's also an efficiency component to this. Computational storage is all about how some workloads can be better accomplished at the storage layer. But the concept applies throughout the hierarchy. Given the infrastructure requirements to process the data, there's probably one place where it makes the most sense to do so. If it takes 100 CPU cores to process the data in a timely fashion, it's probably not going to be done at the sensor level.

Data information funnel

We make the assumption that raw data comes in through sensors and that more processed data is sent to higher layers. This would mean that, at a minimum, some sort of data compression/compaction would need to be done at each layer below the core.

We were at a conference a while back where they talked about updating deep learning neural networks. It's possible that each near-edge station could perform a mini deep-learning training cycle and share its learning with the core periodically, which could then send this information back down to the lowest level to be used (see our Swarm Intelligence @ #HPEDiscover post).

All this means that there’s a minimal level of processing of the data that needs to go on throughout the hierarchy between access point connections.

Pipe availability

binary data flow

The availability of a networking access point may also have some bearing on where data is processed. For example, a self-driving car could generate TB of data a day, but access to a high-speed, inexpensive data pipe to send that data may be limited to a service bay and/or a garage connection.

So some processing may need to be done between access point connections. This will need to take place at lower levels. That way, there would be no need to send the data while the car is out on the road but rather it could be sent whenever it’s attached to an access point.

Compliance/archive requirements

Any sensor data probably needs to be stored for a long time and, as such, will need access to a long-term archive. Depending on the extent of this data, it may help dictate where processing is done. That is, if all the raw data needs to be held, then maybe the processing of that data can be deferred until it's already at the core and on its way to archive.

However, any safety-oriented data processing needs to be done at the lowest level and may need to be reprocessed higher up in the hierarchy. This would be done to ensure proper safety decisions were made. And needless to say, all this data would need to be held.

~~~~

I started this post with 40 or more factors but that was overkill. In the above, I tried to summarize the 6 critical factors which I would use to determine where IoT data should be processed.

My intent is, in part 2 of this post, to work through some examples. If there's any one example that you feel may be instructive, please let me know.

Also, if there are other factors that you would use to determine where to process IoT data, let me know.

Information flows everywhere – part 1

Read an article today from Scientific American (Sewage is helping cities flush out the opioid crisis) about how chemical analysis of wastewater can be used to assess the extent of the opioid crisis in a city.

Wastewater information highway

There’s a lab at ASU (Arizona State University) that chemically analyzes samples of wastewater to determine the amount of drugs that a city’s population excretes. They can provide a near real-time assessment of the proportion of drugs in city sewage and thereby, in a city’s population.

The problem with public drug use surveys and hospital data gathering is that they take time. Moreover, surveys and hospital data gathering typically come long after drugs have become a serious problem in a city's population.

Wastewater sample drug analysis can be done in a matter of days and can be redone as often as needed. Such data could be used to track intervention activities and see if they have a real impact (positive or negative) on drug use in a population.

Neighborhood health

In addition, by sampling sewage at a neighborhood level, one can gain an assessment of drug problems at any sub-division of a city that’s needed.

The above article talks about an MIT program with Cary, NC (from Biobot.io)  that is designing robots to traverse sewer pipes and analyze wastewater chemical makeup in real time, reporting this back to ground stations around the city.

With such an approach, one could almost zero in (depending on sewer pipe networks) on any neighborhood in a city, target specific interventions at that level, and measure impact in (digestion-delayed) real time. Doing so, cities, or states for that matter, could experiment with different interventions on a neighborhood-by-neighborhood basis and gain statistical evidence on drug problem intervention effectiveness.

But you can analyze wastewater for any number of variables, such as viruses, bacteria, enzymes, etc., any of which can lead to a better understanding of a population's health.

~~~~

Two things I want to leave you with:

First, public health has had a major impact on human health and has doubled our lifespan in 200 years. All modern cities have water treatment plants today to ensure water quality and have thereby reduced the incidence of cholera and other waterborne epidemics in their cities. Wastewater analysis has the potential for significant improvements in population health monitoring. Just like water treatment, wastewater analysis will someday become common public health practice in modern cities throughout the world.

Second, I was at a conference this week which presented a slide saying that there is no cold data anymore (Pure//Accelerate 2018). This was in reference to the fact that re-analyzing old, cold data can often lead to insights and process improvements that were not obvious at first glance.

But it’s not just data anymore. Any activity done by man needs to be analyzed for (inherent & invisible) information flows that could be extracted to make the world a better place.

Photo Credit(s):

The fragility of public cloud IT

I have been reading AntiFragile again (by Nassim Taleb). And although he would probably disagree with my use of his concepts, it appears to me that IT is becoming more fragile, not less.

For example, recent outages at major public cloud providers display increased fragility for IT. Yet these problems, although almost national in scope, seldom deter individual organizations from their migration to the cloud.

Tragedy of the cloud commons

The issues are somewhat similar to the tragedy of the commons. When more and more entities use a common pool of resources, occasionally that common pool can become degraded. But because no one really owns the common resources, no one has any incentive to improve the situation.

Now the public cloud, although certainly a common pool of resources, is also most assuredly owned by corporations. So it’s not a true tragedy of the commons problem. Public cloud corporations have a real incentive to improve their services.

However, the fragility of IT in general, the web, and other electronic/data services all increases as they become more and more reliant on public cloud, common infrastructure. And I would propose this general IT fragility is really not owned by any one person, corporation or organization, let alone the public cloud providers.

Pre-cloud was less fragile, post-cloud more so

In the old days of last century, pre-cloud, if a human screwed up a CLI command, the worst that could happen was to take out a corporation's data services. Nowadays, post-cloud, if a similar human screws up a CLI command, the worst that can happen is that major portions of a nation's internet services go down.

Strange Clouds by michaelroper (cc) (from Flickr)

Yes, over time, public cloud services have become better at not causing outages, but outages aren't going away. And if anything, better public cloud services just encourage more corporations to use them for more data services, causing any subsequent cloud outage to be more impactful, not less.

The Internet was originally designed by DARPA to be more resilient to failures, outages and nuclear attack. But by centralizing IT infrastructure onto public cloud common infrastructure, we are reversing the web’s inherent fault tolerance and causing IT to be more susceptible to failures.

What can be done?

There are certainly things that can be done to improve the situation and make IT less fragile in the short and long run:

  1. Use the cloud for non-essential or temporary data services that don't hurt a corporation, organization or nation when outages occur.
  2. Build in fault-tolerance, automatic switchover for public cloud data services to other regions/clouds.
  3. Physically partition public cloud infrastructure into more regions and physically separate infrastructure segments within regions, such that any one admin has limited control over an amount of public cloud infrastructure.
  4. Divide an organization's or nation's data services across public cloud infrastructures, across as many regions and segments as possible.
  5. Create a National Public IT Safety Board, not unlike the one for transportation, that does a formal post-mortem of every public cloud outage, proposes fixes, and enforces fix compliance.

The National Public IT Safety Board

The National Transportation Safety Board (NTSB) has worked well for air transportation. It relies on the cooperation of multiple equipment vendors, airlines, countries and other parties. It performs formal post mortems on any air transportation failure. It also enforces any fixes in processes, procedures, training and any other activities on equipment vendors, maintenance services, pilots, airlines and other entities that can impact public air transport safety. At the moment, air transport is probably the safest form of transportation available, and much of this is due to the NTSB.

We need something similar for public (cloud) IT services. Yes most public cloud companies are doing this sort of work themselves in isolation, but we have a pressing need to accelerate this process across cloud vendors to improve public IT reliability even faster.

The public cloud is here to stay and, if anything, will become more encompassing, running more and more of the world's IT. And as IoT, AI and automation become more pervasive, the data processes that support these services, which will no doubt run in the cloud, can impact public safety. Just think of what would happen in the future if an outage occurred in a major cloud provider running the backend for self-guided car algorithms during rush hour.

If the public cloud is to remain (at this point almost inevitable) then the safety and continuous functioning of this infrastructure becomes a public concern. As such, having a National Public IT Safety Board seems like the only way to have some entity own IT’s increased fragility due to  public cloud infrastructure consolidation.

~~~~

In the meantime, as corporations, government and other entities contemplate migrating data services to the cloud, they should consider the broader impact they are having on the reliability of public IT. When public cloud outages occur, all organizations suffer from the reduced public perception of IT service reliability.

Photo Credits: Fragile by Bart Everson; Fragile Planet by Dave Ginsberg; Strange Clouds by Michael Roper

Flash’s only at 5% of data storage

We have been hearing for years that NAND flash is at price parity with disk. But at this week's Flash Memory Summit, Darren Thomas, VP Storage BU, Micron, said in his keynote that NAND stores only 5% of the bits in a data center.

Darren’s session was all about how to get flash to become more than 5% of data storage and called this “crossing the chasm”. I assume the 5% is against yearly data storage shipped.

Flash’s adoption rate

Darren said that last year flash climbed from 4% to 5% of data center storage, but he made no mention of whether flash's adoption was accelerating. According to another of Darren's charts, flash is expected to ship ~77B Gb of storage in 2015 and should grow to about 240B Gb by 2019.

If the ratio of flash bits shipped to data centers (vs. all flash bits shipped) holds constant, then flash should be ~15% of data storage by 2019. But this assumes data storage doesn't grow. If we assume a 10% Y/Y CAGR for data storage, then flash would represent about ~9% of overall data storage.

Data growth at 10% could be conservative. A 2012 EE Times article said 2010-2015 data growth CAGR would be 32%, and IDC's 2012 digital universe report said that between 2012 and 2020, data will double every two years, a ~41% CAGR. But both numbers could be talking about the world's data growth, not just data center growth.
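
To make the arithmetic explicit, here is the back-of-the-envelope math behind those share estimates. The flash CAGR is derived from the 77B-to-240B Gb chart above; the total-storage growth rates are the assumptions just discussed, and the exact result shifts a bit depending on how many years you compound.

```python
# Flash bits shipped (keynote chart): ~77B Gb in 2015 -> ~240B Gb in 2019
flash_cagr = (240 / 77) ** (1 / 4) - 1           # ~33% per year
start_share = 0.05                               # flash at ~5% of data storage today

for total_cagr in (0.00, 0.10, 0.32, 0.41):      # flat, 10%, 32% (EE Times), ~41% (IDC)
    share_2019 = start_share * ((1 + flash_cagr) / (1 + total_cagr)) ** 4
    print(f"total storage CAGR {total_cagr:>4.0%}: flash share in 2019 ~ {share_2019:.1%}")
# Flat total storage gives ~16%, 10%/year growth gives ~10-11%, and at the
# higher data-growth estimates flash's share barely moves, or even shrinks.
```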

How to cross this chasm?

Geoffrey Moore, author of Crossing the Chasm, came up on stage as Darren discussed what he thought it would take to go beyond early adopters (visionaries) to early majority (pragmatists) and reach wider flash adoption in data center storage. (See Wikipedia article for a summary on Crossing the Chasm.)

As one example of crossing the chasm, Darren talked about the electric light bulb. At introduction it competed against candles, oil lamps, gas lamps, etc. But it was the most expensive lighting system at the time.

But when people realized that electric lights allowed you to do stuff at night and not just go to sleep, adoption took off. At that time, competitors to the electric bulb did provide lighting, it just wasn't that good; in fact, most people went to bed at night because the light then available was so poor.

However, the electric bulb's higher-performing lighting opened up the night to other activities.

What needs to change in NAND flash marketing?

From Darren's perspective, the problem with flash today is that marketing and sales of flash storage are all about speeds, feeds and relative pricing against disk storage. But what's needed is to discuss the disruptive benefits of flash/NAND storage that are impossible to achieve with disk today.

What are the disruptive benefits of NAND/flash storage, unrealizable with disk today?

  1. Real time analytics and other RT applications;
  2. More responsive mobile and data center applications;
  3. Greener, quieter, and potentially denser data center;
  4. Storage for mobile, IoT and other ruggedized application environments.

Only the first three above apply  to data centers. And none seem as significant  as opening up the night, but maybe I am missing a few.

Also, the Wikipedia article cited above states that a Crossing the Chasm approach works best for disruptive or discontinuous innovations, and that more continuous innovations (ones that don't cause significant behavioral change) do better with Everett Rogers' standard diffusion of innovations approach (see the Wikipedia article for more).

So is NAND flash a disruptive or continuous innovation?  Darren seems firmly in the disruptive camp today.

Comments?

Photo Credit(s): 20-nanometer NAND flash chip, IntelFreePress’ photostream

Another Y2K-like problem, this time Internet routers are the problem

Read an article today in Wired (The Internet has grown too big for its aging infrastructure) about a problem that's showing up now and is soon to be more widespread.

This Y2K-like problem is associated with Border Gateway Protocol (BGP) routing table entries, which represent IP address prefixes. Internet routers keep BGP tables in Ternary Content Addressable Memory (TCAM, sort of like a virtual memory page table, only for router addresses), and there are physical limits on how many BGP entries will fit into any specific Internet router. Some routers crash when they exceed their TCAM limit and others just ignore the BGP entries that exceed their limits; neither approach seems workable long term.

Apparently we are approaching one of those hard and fast limits, at least for older routers, as the BGP routing tables reach over 512K entries.  As of May 2014, there were in excess of 500,000 BGP prefixes (table entries).

Smoking gun points to …

It appears that this time Verizon was the perpetrator. Yesterday they added 15K BGP entries to the Internet BGP table, kicking some routers over their 512K limit. This was no doubt in anticipation of some growth in Internet addresses on their networks.

The result was that LiquidWeb’s network went down. Supposedly they have an older Cisco 7600 router and the latest addition to BGP entries exceeded its TCAM capacity, crashing their router. Oops!

Verizon quickly withdrew the offending 15K BGP entry addition and things seem back to normal for the moment. But we are once again close to some arbitrary computerized limit. Only this problem won’t happen at midnight December 31st. It won’t take that long to exceed the current BGP entry limits again and next time it might not be that easy to back out.

But it’s almost like there’s no stopping it…

Just guessing here, but these types of routers probably have similar limits at 1024K, 2048K, 4096K, etc. BGP entries. With the number of Internet-connected devices growing exponentially, especially with the Internet of Things, I predict similar problems over the coming years. Indeed, we went from ~400K to ~500K BGP entries in just under two years, and the rate of growth seems to be accelerating.
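
A rough extrapolation of when those next limits get hit, using the ~400K-to-~500K-in-two-years growth as a constant rate. Since the real rate appears to be accelerating, these are optimistic upper bounds.

```python
import math

current = 512_000                        # BGP prefixes, roughly where the table is now
growth = (500 / 400) ** (1 / 2) - 1      # ~11.8%/year, from ~400K to ~500K in two years

for limit in (1_024_000, 2_048_000, 4_096_000):
    years = math.log(limit / current) / math.log(1 + growth)
    print(f"{limit:,}-entry TCAM limit reached in ~{years:.0f} years")
# At a constant ~12%/year, 1024K is ~6 years away and 2048K ~12 years away --
# and accelerating growth (IoT, ever more specific prefixes) shortens both.
```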

It's really just a matter of time before even today's routers run out of TCAM slots. Y2K-like, only this time there's no way to stop it from happening again and again in the future. I suppose it would be better if the routers just ignored the new BGP entries rather than crashing, but that would seem to put some segment of the Internet out of those routers' reach. There's got to be a way to intelligently ignore or summarize prefix updates when a router runs out of TCAM entries.

Welcome to the new 512K problem.

~~~~

Comments?

Photo Credit(s): Cisco 7609 @ itb for INHERENT by Affan Basalamah


Extremely low power transistors open up new IoT applications

We have written before about the computational power efficiency law known as Koomey's Law, which states that the number of computations one can do with the same amount of energy has been doubling every 1.57 years (for more info, please see my No power sensors surface … post).

The dawn of sub-threshold electronics

But just this week there was another article, this time about electronics that use much less power than normal transistors. Achieving this in Internet of Things (IoT) type sensors would take the computations/joule up by orders of magnitude, not just the ~1.6X/year of Koomey's law, although how long it will take to come out commercially is another issue.
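
For a sense of scale, here is the arithmetic comparing that claim with Koomey's law. The 100x-1000x range corresponds to the 0.1-1% energy-use figure mentioned below; everything else follows from the doubling period.

```python
import math

doubling_years = 1.57                 # Koomey's law: computations/joule double every 1.57 years
per_year = 2 ** (1 / doubling_years)  # ~1.55x improvement per year

for gain in (100, 1000):              # sub-threshold claim: 0.1% to 1% of normal energy use
    years_equivalent = math.log2(gain) * doubling_years
    print(f"a {gain}x efficiency jump equals ~{years_equivalent:.0f} years of Koomey's law")
print(f"(normal trend: ~{per_year:.2f}x per year)")
# A 100x jump is worth roughly a decade of the normal efficiency trend, and a
# 1000x jump roughly a decade and a half -- arriving all at once.
```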

This new technology is called sub-threshold transistors, and they use much less power than normal transistors. The article in MIT Technology Review, A batteryless sensor chip for the IoT, discusses the phenomenon that sub-threshold designs exploit: normal transistors, even when they are technically in the "off" state, leak some amount of current. Until recently, this CMOS parasitic leakage had been considered a current drain that couldn't be eliminated and, as such, wasted energy.

Not so any longer. With the new sub-threshold transistor design paradigm, electronics can now take advantage of this "leakage" current to perform actual computations. And that opens up a whole new level of IoT sensors that could be deployed.

Prototype sub-threshold circuits coming out

One company, PsiKick, is using this phenomenon to design ASICs/chips that, depending on the application, use only 0.1 to 1% of the energy of similarly functioning chips, through sub-threshold transistors plus extensive power-reduction design techniques. Their first prototype was a portable EKG that powers itself from body heat with a thermo-electric generator rather than a battery. The prototype was just a proof of concept, but they seem to be at work opening the technology to broader applications.

One serious consideration limiting the types of sensors that could be deployed in IoT applications has been how to get power to these sensors. The other is how to get information out of the sensor and out to the real world. There are a few ways to attack the power issue for IoT sensors: creating more efficient electronics, more effective/longer-lasting batteries, and smaller electronic generators. Sub-threshold transistor electronics takes a major leap toward more efficient electronics.

In my previous post we discussed ways to construct smaller electronic generators used by low-power systems/chips. One approach highlighted in that paper used small antennas to extract power from ambient radio waves. But that’s not the only way to generate small amounts of power. I have also heard of piezoelectric generators that use force and movement (such as foot falls) to generate energy. And of course, small solar panels could do the same trick.

Any of these micro energy generators could be made to work, and together with the ability to design circuits that use 0.1 to 1% of the electricity used by normal circuits, this  should just about eliminate any computational/power limits to the sorts of IoT sensors that could be deployed.

What about non-sensor/non-IoT electronics?

I'm not sure, if this works for IoT sensors, why it couldn't be used for something more substantial like mobile/smart phones, desktop computers, enterprise servers, etc. To that end, it seems that ARM Holdings and IMEC are also looking at the technology.

Only a couple of years ago, everybody was up in arms about the energy consumption of server farms, especially on the west coast of the USA. But with this sort of sub-threshold transistor electronics coming online, maybe servers could run on ambient radio wave energy, data centers could run desktop computers and LED lighting off of thermo-electric generators inside their heat exchangers, and iPhones could run off of piezoelectric accelerometer generators using the motion a phone undergoes while sitting in the pocket of a moving person.

Almost gives the impression of perpetual motion machines but rather than motion we are talking electronics, sort of like perpetual electronics…

So could a no-battery iPhone be in our future? I wouldn't bet against it. Remember, the compute engine inside every iPhone is based on ARM technology.

Comments?

Photo credit(s): Intel Free Press: Joshua R. Smith holding a sensor