Steam Locomotive lessons for disk vs. SSD

Read a PHYS ORG article, Extinction of Steam Locomotives derails assumption about biological evolution…, which reported on a Royal Society research paper, The end of the line: competitive exclusion & the extinction…, that looked at the historical record of steam locomotives from their inception in the early 19th century until their demise in the mid 20th century. Reading the article, it seemed to me to have wider applicability than just evolutionary extinction dynamics; in fact, a similar analysis could reveal some secrets of technological extinction.

Steam locomotives

During the steam locomotive's roughly 150 years of production, many competing technologies emerged, starting with electric locomotives, followed by automobiles & trucks and finally, the diesel locomotive.

The researchers selected a single metric to track the evolution (or fitness) of the steam locomotive, called tractive effort (TE), essentially the weight a steam locomotive could move. Early on, steam locomotives hauled both passengers and freight. The researchers included automobiles and trucks as competitive technologies because they also offer a way to move people and freight. The diesel locomotive was a more obvious competitor.

The dark line is a linear regression trend line on the wavy mean TE line, the boxes are the interquartile (25%-75%) range, the line within the boxes is the median TE value, and the shaded areas are the 95% confidence interval for the trend line of the TE of steam locomotives produced that year. Raw data are from Locobase, a steam locomotive database.

One can see three phases in the graph. In the red phase, from 1829-1881, there was unencumbered growth of steam locomotive TE. But in 1881, electric locomotives were introduced, corresponding to the blue phase, and after WWII the black phase led to the demise of steam.

Here (in the blue phase) we see a phenomenon often seen with the introduction of competitive technologies: there seems to be an increase in innovation as the multiple technologies duke it out in the ecosystem.

Automobiles and trucks were introduced in 1901 but they don't seem to impact steam locomotive TE. Possibly this is because the passenger and freight volume hauled by cars and trucks wasn't that significant. Or maybe their impact was more on the distances hauled.

In 1925 diesel locomotives were introduced. Again, we don't see an immediate change in the trend, but over time this proved to be the death knell of the steam locomotive.

The researchers identified four aspects to the tracking of inter-species competition:

  • A functional trait within the competing species can be identified and tracked. For the steam locomotive this was TE.
  • Direct competitors for the species can be identified that coexist within spatial, temporal and resource requirements. For the steam locomotive, these were autos/trucks and electric/diesel locomotives.
  • A complete time series for the species/clade (group of related organisms) can be identified. This was supplied by Locobase.
  • Non-competitive factors don't apply or are irrelevant. There's plenty here, including most of the items listed on their chart.

From locomotives to storage

I'm not saying that disk is akin to steam locomotives while flash is akin to diesel, but maybe. For example, one could consider storage capacity as similar to locomotive TE. There's a plethora of other factors that one could track over time, but this one factor was relevant at the start and is still relevant today. What we in the industry lack is any true tracking of the capacities produced between the birth of the disk drive in 1956 (according to Wikipedia's History of hard disk drives article) and today.

But I'd venture to say the mean capacity has been trending up and the variance in that capacity has been static for years (driven more by platter counts than anything else).

There are plenty of other factors that could be tracked, for example areal density or $/GB.

Here's a chart comparing areal (2D) density growth of flash, disk and tape media between 2008 and 2018. Note both this chart and the following charts are log-scale charts.

Over the last 5 years NAND has gone 3D. Current NAND chips in production have 300+ layers. Disks went 3D back in the 1960s or earlier. And of course tape has always been 3D, as it’s a ribbon wrapped around reels within a cartridge.

So areal density plays a critical role, but it's only 2 of the 3 dimensions that determine capacity. The areal density crossover point between HDD and NAND in 2013 seems significant to me, and perhaps to the history of disk.

Here’s another chart showing the history of $/GB of these technologies

In this chart they are comparing price/GB of the various technologies (presumably the most economical available during that year). HDD $/GB between 2008-2010 was on a 40%/year reduction trend, then flatlined, and now appears to be on a 20%/year reduction trend. Flash from 2008-2017 was on a 25%/year reduction in $/GB, which flatlined in 2018. LTO tape had been on a 25%/year reduction from 2008 through 2014 and since then has been on an 11%/year reduction.

If these $/GB trends continue, a big if, flash will overcome disk in $/GB and, over time, tape as well.
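
To make that "big if" concrete, here's a minimal Python sketch of how one could project constant-rate $/GB declines forward and find a crossover year. The starting prices and reduction rates are illustrative placeholders, not figures read off the charts.

```python
# Project $/GB forward under assumed constant annual reduction rates and find
# the year flash would cross below disk. All starting prices and rates here
# are illustrative assumptions, not figures from the charts.
def project(price, annual_reduction, years):
    return [price * (1 - annual_reduction) ** y for y in range(years + 1)]

start_year = 2021
hdd = project(price=0.02, annual_reduction=0.20, years=30)   # assumed $0.02/GB, -20%/yr
nand = project(price=0.10, annual_reduction=0.25, years=30)  # assumed $0.10/GB, -25%/yr

crossover = next(
    (start_year + y for y, (h, n) in enumerate(zip(hdd, nand)) if n <= h),
    None,  # None means no crossover within the projection window
)
print(f"Projected flash/HDD $/GB crossover: {crossover}")
```

With these made-up numbers the crossover lands decades out, which is exactly why the "big if" matters: small changes in either reduction rate move the answer by many years.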

But here’s something on just capacity which seems closer to the TE chart for steam locomotives.

HDD capacity 1980-2020.

There's some dispute regarding this chart as it only reflects drives available for retail, and drives with higher capacities were not always available there. Nonetheless, it shows a couple of interesting items. Early on, up to ~1990, drive capacities were relatively stagnant. From 1995-2010 there was a significant increase in drive capacity, and since 2010, drive capacities seem to have stopped increasing as much. We presume the number of x's for a typical year shows the different drive capacities available for retail sale, somewhat similar to the box plots on the TE chart above.

SSDs were first created in the early 90's, but the first 1TB SSD came out around 2010. Since then, the number of disk drives offered for retail (as depicted by Xs on the chart each year) seems to have declined and their range in capacity (other than ~2016) seems to have narrowed significantly.

If I take the lessons from the steam locomotive to heart here, one would have to say that the HDD has been forced to adapt to a smaller market than it had prior to 2010. And if areal density trends are any indication, it would seem that R&D efforts to increase capacity have declined or we have reached some physical barrier with today's media-head technologies. Although such physical barriers have always been surpassed after new technologies emerged.

What we really need is something akin to Locobase for disk drives. It would track all disk drives sold each year, and that way we could truly see something similar to the chart tracking TE for steam locomotives. This would let us see whether the end of the HDD is nigh or not.

Final thoughts on technology extinction dynamics

The Royal Society research had a lot to say about the dynamics of technology competition. And they had other charts in their report but I found this one very interesting.

This shows an abstract analysis of the steam locomotive data. They identify 3 zones of technology life: the safe zone, where the technology has no direct competition; the danger zone, where competition has emerged but has not conquered all of the technology's niches; and the extinction zone, where competing technology has entered every niche in which the original technology existed.

In the late 90s, enterprise disk supported high performance/low capacity, medium performance/medium capacity and low performance/high capacity drives. Since then, SSDs have pretty much conquered the high performance/low capacity disk segment. And with the advent of QLC and PLC (4 and 5 bits per cell) using multi-layer NAND chips, SSDs seem poised to conquer the low performance/high capacity niche. And there are plenty of SSDs using MLC/TLC (2 or 3 bits per cell) with multi-layer NAND to attack the medium performance/medium capacity disk market.

There were also very small disk drives at one point which seem to have been overtaken by M.2 flash.

On the other hand, just over 95% of all disk and flash storage capacity being produced today is disk capacity. So even though disk is clearly in the extinction zone with respect to flash storage, it seems to still be doing well.

It would be wonderful to have a similar analysis done on transistors vs vacuum tubes, jet vs propeller propulsion, CRT vs. LED screens, etc. Maybe at some point with enough studies we could have a theory of technological extinction that can better explain the dynamics impacting the storage and other industries today.

Comments?

Photo Credit(s):

BEHAVIOR, an in-home robot benchmark

As my readers probably already know, I’m a long time benchmark geek. So when I recently read an article out of Stanford (AI Experts Establish the “North Star” for Domestic Robotics Field) where a research team there developed a new robotic benchmark, I was interested. The new robotics benchmark is called BEHAVIOR which was documented in an ARXIV.org article (see: BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments). It essentially uses real world data to identify domestic work activities that any robot would need to perform in a home.

The problems with robot benchmarks

The problems with benchmarks are multi-faceted:

  • How realistic are the workloads used to evaluate the systems being measured?
  • How accurate are the metrics used to rank and judge benchmark submissions?
  • How costly/complex is it to run a benchmark?
  • How are submissions audited and are they reproducible?
  • Where are benchmark results reported and are they public?

And of course robotics brings its own issues that make benchmarking more difficult:

  • What sensors does the robot have to understand how to complete tasks?
  • What manipulators does the robot have to perform the tasks required of it?
  • Do the robots move in the environment and if so, how do the robots move?
  • Does the robot perform the task in the real world or in a simulated environment?

And of course, when using a simulated environment, how realistic is it?

BEHAVIOR with iGibson (see below) seems to answer many of these concerns for in-home robot benchmarking.

What is BEHAVIOR?

First, BEHAVIOR's home-making tasks were selected from the American Time Use Survey maintained by the US Bureau of Labor Statistics, which identifies tasks Americans perform in their homes. With BEHAVIOR 1.0 there are 100 tasks ranging from building a fruit basket to cleaning a toilet, and just about everything in between. I didn't see any cooking or drink-mixing tasks but maybe those will be added.

Second, BEHAVIOR uses a predicate logic, called BDDL (BEHAVIOR Domain Definition Language), to define initial conditions for tasks, such as the tables, chairs, books, etc. located in the room and where objects need to be placed, and successful completion goals, or what task completion should look like.

BEHAVIOR uses 15 different rooms or scenes in their benchmark, such as a kitchen, garage, study, etc. Each of the 100 tasks are performed in a specific room.

BEHAVIOR incorporates 1217 different objects in 391 categories. Once initial conditions are defined for a task, BEHAVIOR essentially randomly selects different objects for the task and randomly locates them throughout the room.

In order to run the benchmark, one could conceivably create a real room with all the objects, place them according to BDDL's randomly assigned locations, put a robot physically in the room and have it perform the assigned task, OR one could use a simulation engine and have the robot run the task in a simulated environment, with a simulated room, objects and robot.

It appears as if BEHAVIOR could operate in any robotics simulation environment but has been currently implemented in Stanford’s open source robotics simulation engine called iGibson 2.0 (see: iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks and iGibson 2.0 website). iGibson uses the Bullet real time physics engine for realistic physical environment simulation.

A robot operating within iGibson is provided a 3D rendering of the room and objects in images or LIDAR sensor scans. It can then identify the objects that it needs to manipulate to perform the tasks. One can define the robot's simulated sensors and manipulators in iGibson 2.0. It's written in Python, is open source (GitHub Repo) and can be installed to run on (Ubuntu 16.04) Linux, Windows (10) or Mac (10.15) systems.

Finally, BEHAVIOR uses a set of metrics to determine how well a robot has performed its assigned task. Their first metric is a success score, defined as the fraction of goal conditions satisfied by the robot performing the task, such as the number of dishes properly cleaned and placed in the drying rack divided by the total number of dishes for a "washing dishes" task. Their second metric is a set of efficiency metrics, like time to complete a task, the sum total of object distance moved during the task, how well objects are arranged at task completion (is the toilet seat down…), etc.
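
As a concrete illustration of those metrics, here's a minimal Python sketch of a success score and one efficiency metric computed the way the text above describes; the function names and the toy "washing dishes" numbers are my own, not BEHAVIOR's actual implementation.

```python
# Hypothetical success-score and one efficiency metric, following the BEHAVIOR
# definitions described above (fraction of goal conditions met; distance moved).
def success_score(goal_conditions_met: list[bool]) -> float:
    """Fraction of goal conditions satisfied at the end of the task."""
    return sum(goal_conditions_met) / len(goal_conditions_met)

def total_object_distance(object_displacements_m: list[float]) -> float:
    """Sum of distances (in meters) that task objects were moved."""
    return sum(object_displacements_m)

# Toy "washing dishes" run: 3 of 4 dishes cleaned and placed in the rack.
print(success_score([True, True, True, False]))      # 0.75
print(total_object_distance([1.2, 0.8, 1.5, 0.0]))   # 3.5 meters moved
```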

Another feature of iGibson 2.0 is that it offers the ability to record a human (in VR) doing a task in its simulated environment. So if your robotic system is able to learn by example, then iGibson could be used to provide training data for an activity.

~~~~

A couple of additions to the BEHAVIOR benchmark/iGibson simulation environment that I would like to see:

  • There ought to be a way to construct a house/apartment where multiple rooms are arranged in a hierarchy, i.e., rooms associated with floors with connections using hallways, doors, stairs, etc. between them. This way one could conceivably define a set of homes/apartments (let's say 5) that a robot would perform its tasks in.
  • They need a task list to drive robot activities. Assume that there’s some amount of time let’s say 8-12 hours that a robot is active and construct a series of tasks that need to be accomplished during that period.
  • Robots should be placed in the rooms/apartments/homes at random with random orientation and then they would have to navigate through rooms/passageways to the rooms to perform the tasks.
  • They need to add pet/human avatars in the rooms throughout a home. These would represent real time obstacles to task completion/navigation as well as add more tasks associated with caring for pets/humans.
  • They need the ability to add non-home rooms that could encompass factory floors, emergency response debris fields, grocery stores, etc. and their own unique set of tasks for each of these so that it could be used as a benchmark for more than just domestic robots.

Aside from the above additions to BEHAVIOR/iGibson 2.0, there's the question of the organization that manages the benchmark and submissions. There needs to be a website/place to publish benchmark results for a robot AND a mechanism to audit results for accuracy to ensure fair play.

Typically this would be associated with an organization responsible for publishing and auditing submissions as well as guide further development of BEHAVIOR/iGibson 2.0. BEHAVIOR 1.0 is not the end but it’s a great start at providing realistic tasks that any domestic robot would need to perform. 

Benchmarks have always aided the development and assessment of new technologies. Having an in-home robot benchmark like BEHAVIOR makes getting domestic robots that do what we want them to do a more likely possibility someday.

There’s a new benchmark in town and it signals the dawning of the domestic robot age.

Photo Credit(s):

Dell EMC PowerStore X and the Edge – TFDxDell

This past summer I attended a virtual TFDxDell event where there were a number of sessions discussing Dell EMC technologies for the enterprise. One session sort of struck a nerve, the Dell EMC PowerStore session, and I have finally figured out what interested me most in their talk: their PowerStore X appliances and AppsON technologies.

What are AppsON and the PowerStore X appliance?

Essentially PowerStore X with AppsON has an onboard ESXi hypervisor which allows customers to run vSphere VMs inside the storage system with direct vVol (I assume) access to PowerStore data storage without having to go out over a (storage) network.

PowerStore X ESXi is a little behind the most recent VMware vSphere releases (at least 30 days) but it’s current enough for most shops. In non-PowerStore X appliances, PowerStoreOS runs as containers but in PowerStore X, PowerStoreOS storage functionality runs as VMs, just like any other VMs running on its ESXi hypervisor.

Moreover, PowerStore X can still service IOs from other, non-PowerStore X resident VMs or bare metal applications running in the environment. In this way you get all the data services of an enterprise class storage system that also runs VMs.

With PowerStore OS 2.0 they have added scale out to AppsON. That is any PowerStore X (1000X, 3000X, 5000X or 7000X) appliance, in a PowerStore X cluster, can have their VMs move from one appliance to another using vSphere vMotion. This means that as your PowerStore X storage clusters grow, you can rebalance VM application workloads across the cluster. A PowerStore X cluster can contain up to 4 PowerStore X appliances.

PowerStore’s heritage goes back quite a ways at Dell and EMC. Prior versions of EMC Unity storage and some of its progenitors had the ability to run applications on the storage itself. But by running an ESXi hypervisor on PowerStore X appliances, it takes all this to a whole new level.

Why would anyone want AppsON?

It's taken me some time to understand why anyone would want to use AppsON, and I have concluded that the edge might be the best environment to deploy it.

Recent VMware enhancements have reduced minimum node configurations for edge environments to 2 servers. It's unclear to me whether a single PowerStore X appliance with AppsON counts as one server or two but, for the moment, let's assume it's just one. This means that a minimum VMware vSphere edge deployment could use 1 PowerStore X and 1 standalone ESXi server.

In such an environment, customers could run their data intensive VMs directly on the PowerStore X and some of their non-data intensive VMs on the standalone server. But the flexibility exists to vMotion VMs from one to the other as demand dictates.

But does the edge need storage?

Yes, some do. For instance, take 5G. It enables a whole new class of mobile services and many of them can be quite data intensive. 5G is being deployed around the world as mini-data centers in cell towers. It's unclear whether these data centers run vSphere, but I'm sure VMware is trying their hardest to make that happen. With vSphere running your 5G mini-datacenter, PowerStore X could make a smart addition.

Then there’s all the smart cars, which are creating TBs of sensor data every time they take to the road. You’re probably not going to have a PowerStore appliance in your smart car (at least anytime soon) but they just might have one at the local service station.

And maybe given all the smart devices in your home, smart cars, smart appliances, smart robots, etc., there’s going to be a whole lot of data generated from your smart home. Having something like PowerStore X in your smart home’s mini-data center would offer a place to hold all that data and to do some processing (compressing maybe) before sending it up to the cloud.

~~~~

We have just two more questions for Dell EMC,

  1. Shouldn’t the base PowerStore appliance be called PowerStore K?
  2. Shouldn’t customers be allowed to run their own K8s container apps on their PowerStore K just as easily as running VMs in their PowerStore X?

Legal Disclosure: TechFieldDay and Dell provided gifts to all participants (including me) for the TFDxDell event.

Photo credit(s):

  • From Dell EMC slides presented at TFDxDell event
  • From Dell EMC slides presented at TFDxDell event
  • From Dell EMC slides presented at TFDxDell event

Math’s war on Gerrymandering

Read an article the other day in MIT Technology Review, Mathematicians are deploying algorithms to stop gerrymandering, that discussed how a bunch of mathematicians had created tools (Python and R applications, links below) which can be used to test whether state redistricting maps are fair or not.

The best introduction to what these applications can do is in a 2021 I. E. Block Community Lecture: Jonathan Christopher Mattingly captured on YouTube video.

For those outside the States, US House of Representatives members are elected via state districts. The term gerrymandering was coined in 1812 over a district map created for Massachusetts that took the form of a salamander, which all but assured that the district would go Democratic-Republican vs. Federalist in the next election (source: Wikipedia entry on Gerrymandering). This re-districting process is done every 10 years in every state, after the US Census Bureau releases their decennial census results, which determine the number of representatives that each state elects to the US House of Representatives.

Funny thing about gerrymandering, both the Democrats and the Republicans have done this in the past and will most likely do so in the future. Once district maps are approved they typically stay that way until the next census.

Essentially, what the mathematicians have done is create a way to generate a vast number of district maps under a specific series of constraints/guidelines, and then use this ensemble of maps to characterize the Democrat-Republican split of a trial election based on some recent election result.

The chart above shows the # of Democrats that would have been elected, using the data from the 2012 and 2016 election results, under four distinct maps vs. a histogram representing the distribution across all the maps the mathematicians' systems created. The four specific maps indicated on the histograms are:

  • NC2012 – thrown out by the state courts as being unfair
  • NC2016 – one that the NC legislature came out with, also thrown out as being unfair as it's equivalent to the NC2012 map
  • Bipartisan Judges – one that a group of independent judges came out with, and
  • Remedial=NC2020 – one submitted by the mathematicians which they deemed "fairer"

These North Carolina congressional district maps illustrate how geometry is not a fail-safe indicator of gerrymandering. The NC 2012 map, with its bizarre district boundaries, was deemed by the courts to be a racial gerrymander. The replacement, the NC 2016 map, looks quite different and tame by comparison, but was deemed to be an unconstitutional political gerrymander. Analysis by Duke's Jonathan Mattingly and his team showed that the 2012 and 2016 maps were politically equivalent in their partisan outcomes. A court-appointed expert drew the NC 2020 map.

The algorithms (in Python, GitHub repo for GerryChain, and R, GitHub repo for redist) take as input an election map, which combines US census blocks (physical groupings of population defined by the US Census Bureau) with some prior election results (which provide the Democrat-Republican votes for a particular election within those census blocks). Using this data and a list of districting constraints, such as compactness requirements, minimal breaks of counties (state political units), population equivalence, etc., the apps generate a multitude, or ensemble, of district maps for a state. FYI, districting constraints differ from state to state.

Once they have this ensemble of district maps that adhere to the state's specified districting constraints, one can compare any sample districting map to the histogram and see if that specific map is fair or not. "Fairness" means that it would result in the same Democrat-Republican split that occurred in the largest number of ensemble maps.
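
For the curious, here's a minimal sketch of what generating such an ensemble looks like using GerryChain's documented Graph/Partition/ReCom workflow; the input file, the district assignment column and the population/vote column names are hypothetical placeholders, not real data.

```python
from functools import partial
from gerrychain import Graph, Partition, MarkovChain, constraints, accept
from gerrychain.updaters import Tally, cut_edges
from gerrychain.proposals import recom

# Hypothetical input: a graph of NC census blocks/precincts with population and
# 2012 US House vote columns. File and column names are placeholders.
graph = Graph.from_json("nc_blocks.json")

initial = Partition(
    graph,
    assignment="CD",  # column holding the current district assignment (assumed)
    updaters={
        "population": Tally("TOTPOP", alias="population"),
        "cut_edges": cut_edges,
        "dem": Tally("USH12_DEM", alias="dem"),
        "rep": Tally("USH12_REP", alias="rep"),
    },
)

ideal_pop = sum(initial["population"].values()) / len(initial.parts)
proposal = partial(recom, pop_col="TOTPOP", pop_target=ideal_pop,
                   epsilon=0.02, node_repeats=2)

chain = MarkovChain(
    proposal=proposal,
    constraints=[constraints.within_percent_of_ideal_population(initial, 0.02)],
    accept=accept.always_accept,
    initial_state=initial,
    total_steps=10_000,
)

# Tally Democratic seats for each map in the ensemble; a histogram of these
# values is the baseline any proposed map gets compared against.
dem_seats = [
    sum(part["dem"][d] > part["rep"][d] for d in part.parts) for part in chain
]
```

A real run would add more constraints (compactness, county splits) to match the state's rules, but the shape of the analysis is just this: build the ensemble, then ask where a proposed map falls in the resulting seat-count histogram.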

The latest process by which districting maps can be created is documented in a research article, Recombination: A Family of Markov Chains for Redistricting. But most of the prior generations all seem to use a tree structure together with a Markov chain approach. At the leaves of the tree are the census blocks, and the tree hierarchy algorithmically represents the different districting hierarchies.

Presumably, the Markov chain encodes a method to represent the state's districting constraints. What the algorithm does is traverse the tree of census blocks, using the Markov chain and randomness, to create district maps by splitting a branch of the tree (= a district) off somewhere in the hierarchy, above the census blocks.

Doing this randomly, over a number of iterations, provides a group or ensemble of proper districting maps that can be used to build the histogram for a specific election result. (Suggest reading the above research report for more information on how this works).

One can see the effect of different election results on the distribution of election results that would have occurred with the current ensemble of maps. For example, USH12 uses the census block voting results from the North Carolina US (Congressional) House election of 2012 and GOV16 uses the results from the North Carolina Governor election of 2016.

It's somewhat surprising that the US Supreme Court has ruled that districting is a state issue and not subject to constitutional oversight. Not sure I agree, but I'm no constitutional scholar/lawyer. So, all of the legal disputes surrounding states' re-districting maps have been fought in state courts.

But what the mathematicians have done is provide the tools needed to create a multitude of districting maps and when one uses prior election results at the census block level, one can see whether any new re-districting map is representative of what one would see if one drew 100s or 1000s of proper redistricting maps.

Let’s hope this all leads to fairer state and federal elections in the future.

Comments?

Photo Credits:

Towards a better AGI – part 3(ish)

Read an article this past week in Nature about the need for Cooperative AI (Cooperative AI: machines must learn to find common ground) which supplies the best view I’ve seen as to a direction research needs to go to develop a more beneficial and benign AI-AGI.

Not sure why, but this past month or so, I've been on an AGI fueled frenzy (at least here). I didn't realize this was going to be a multi-part journey, otherwise I would have labeled them AGI part-1 & -2 (please see: Existential event risks [part-0], NVIDIA Triton GMI, a step to far [part-1] and The Myth of AGI [part-2] to learn more).

The Nature article puts into perspective what we all want from future AI (or AGI). That is,

  • AI-AI cooperation: AI systems that cooperate with one another while at the same time understanding that not all activities are zero sum competitions (like chess, go, Atari games); rather, most activities within the human sphere are cooperative activities where one agent has a set of goals and a different agent has another set of goals, some of which overlap while others are in conflict. Sports like soccer or lacrosse come to mind. But there are also card games and board games (Risk & Diplomacy) that use cooperating parties, with diverse goals, to achieve common ends.
  • AI-Human cooperation: AI systems that cooperate with humans to achieve common goals. Here too, most humans have their own sets of goals, some of which may be in conflict with the AI system's goals. However, all humans have a shared set of goals; preservation of life comes to mind. It's in this arena where the challenges are most acute for AI systems. Divining humans' underlying goals and motivations, as well as their own, is not simple. And of course giving priority to the "right" goals when they compete or are in conflict will be an increasingly difficult task to accomplish, given today's human diversity.
  • Human-Human cooperation: Here it gets pretty interesting, but the paper seems to say that any future AI system should be designed to enhance human-human interaction, not deter or interfere with it. One can see the challenge of disinformation today and how wonderful it would be to have some AI agent that could filter all this and present a proper picture of our world. But humans have different goals, and trying to figure out what they are and which are common, and thereby something to be enhanced, will be an ongoing challenge.

The problem with today's AI research is that it's all about improving specific activities (image recognition, language understanding, recommendation engines, etc.), but these are all point solutions and few (if any) are focused on cooperation.

Tit for tat wins the award

To that end, the authors of the paper call for a new direction, one that attempts to imbue AI systems with social intelligence and cooperative intelligence to work well in the broader, human dominated world that lies ahead.

In the Nature article they mentioned a 1984 book by Robert Axelrod, The Evolution of Cooperation. Perhaps the last great research on cooperation ever produced.

The book described a world full of simulated prisoner's dilemma agents that interacted, one with another, at random.

The experimenters programmed some agents to always do the proper thing for their current partner, some to always do the wrong thing to their partner, others to do right once then wrong from that point forward, etc. The experimenters tried every sort of cooperation policy they could think of.

Each agent would get some number of points for an interaction. For example, if both did the right thing they would each get 3 points; if one did wrong, the sucker would get 0 and the bad actor would get 5; if both did wrong, each got 1 point (the classic prisoner's dilemma payoffs).

The agents that had the best scores during a run (of 1000s of random pairings/interactions) would multiply in the next run, and the agents that did worse would disappear over time from the population of agents in these simulated worlds.

The optimal strategy that emerged from these experiments was:

  1. Do the right thing once with every new partner, and
  2. From that point forward, play tit for tat (if the other party did the right thing the last time, then you do the right thing the next time you interact with them; if they did wrong the last time, then you do wrong the next time you interact with them).

It was mind boggling at the time to realize that such a simple strategy could be so effective/sustainable in simulation and perhaps in the real world. It turns out that in a (simulated) world of bad agents, a group of tit-for-tat agents would build up, defend itself and expand over time to succeed.
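
For readers who like to see that dynamic in code, here's a minimal sketch of an Axelrod-style evolutionary tournament in Python: classic payoffs, random pairings, and score-proportional reproduction. The population size, round counts and three-strategy mix are my own illustrative choices, not Axelrod's actual tournament setup.

```python
# Evolutionary iterated prisoner's dilemma, in the spirit of Axelrod (1984):
# classic payoffs, random pairings, and strategies that reproduce in
# proportion to their scores.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(their_hist):      return "C" if not their_hist else their_hist[-1]
def always_cooperate(their_hist): return "C"
def always_defect(their_hist):    return "D"

STRATEGIES = {"tit_for_tat": tit_for_tat,
              "always_cooperate": always_cooperate,
              "always_defect": always_defect}

def play(name_a, name_b, rounds=50):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = STRATEGIES[name_a](hist_b)   # each strategy sees only the opponent's history
        b = STRATEGIES[name_b](hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

population = ["tit_for_tat"] * 30 + ["always_cooperate"] * 30 + ["always_defect"] * 30
for generation in range(20):
    scores = {name: 0.0 for name in STRATEGIES}
    random.shuffle(population)
    for a, b in zip(population[0::2], population[1::2]):   # random pairings
        sa, sb = play(a, b)
        scores[a] += sa; scores[b] += sb
    # High scorers multiply, low scorers fade from the population.
    total = sum(scores.values())
    population = [name for name in STRATEGIES
                  for _ in range(round(90 * scores[name] / total))]

print({name: population.count(name) for name in STRATEGIES})
# Expect tit_for_tat to dominate once always_cooperate has been exploited away.
```

Defection does well early by feeding on the unconditional cooperators, but once those are gone the tit-for-tat cluster keeps earning mutual-cooperation payoffs while defectors only earn mutual-defection payoffs, and the tit-for-tat share of the population grows.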

That was the state of the art in cooperation research back then (1984). I’ve not seen anything similar to this since.

I haven’t seen anything like this that discusses how to implement algorithms in support of social intelligence.

~~~~

The authors of the Nature article believe it's once again time to start researching cooperation techniques and social intelligence so we can instill proper cooperation and social intelligence technology into future AI (AGI) systems.

Perhaps if we can do this, we may create a better AI (or AGI) so that both it and we can live better in our world, galaxy and universe.

Comments?

The birth of biocomputing (on paper)

Read an article this past week discussing how researchers in Barcelona Spain have constructed a biological computing device on paper (see Biocomputer built with cells printed on paper). Their research was written up in a Nature Article (see 2D printed multi-cellular devices performing digital or analog computations).

We’ve written about DNA computing and storage before (see DNA IT …, DNA Computing… posts and our GBoS podcast on DNA storage…). But this technology takes all that to another level.

2-bit_ALU (from wikimedia.org)

The challenges with biological computing had previously been how to perform input, processing and output within a single cell or, when using multiple cells for computations, how to wire the cells together to provide the combinational logic required for the circuit.

The researchers in Spain seem to have solved the wiring problem by using diffusion across a porous surface (like paper) to create a carrier signal (the wire equivalent) and having cell groups at different locations along this diffusion path either enhance or block that diffusion, amplify/reduce it, or transform it into something different.

Analog (combinatorial-circuitry-like) computation in this biocomputer is performed based on the location of sets of cells along this carrier signal. So spatial positioning is central to the device and the computation it performs. Not unlike digital or combinatorial circuitry, different computations can be performed just by altering the position along the wire (carrier signal) at which the gates (cells) are placed.

Their process seems to start with designing multiple cell groups to provide the processing desired, i.e., enhancing, blocking or transforming the diffusion along the carrier signal. Once they have the cells required, they determine the spatial layout of the cells used in the logical circuit to perform the computation desired. They then create a stamp with wells (or indentations) that can be filled with the cells required for the computation, fill these wells with cells and nutrients for their operation, and stamp the circuit onto a porous surface.

The carrier signal the research team uses is a small molecule, the bacterial 3OC6HSL acyl homoserine lactone (AHL), which seems to be naturally used in a sort of biological quorum sensing. The computational cells produce an enzyme that enhances or degrades the AHL flow along the carrier signal. The AHL diffuses across the paper, encounters these computational cells along the way, and they compute whatever it is that's required to be computed. At some point a cell transforms the AHL level into something externally observable.

They created:

  • Source cells (Sn) that take a substance as input (say mercury) and convert it into AHL
  • Gate cells (M) that act as a switch on the AHL diffusing across the substrate
  • Carrier reporter cells (CR) which can be used to report on concentrations of AHL

The CR cells produce green fluorescent reporter proteins (GFP). Moreover, each gate cell expresses red fluorescent reporter proteins (RFP) as well, for a sort of diagnostic tap into its individual activity.

Mapping of a general transistor architecture on a cellular printed pattern obtained using a stamping template. Similar to the transistor architecture, the cellular pattern is composed of three main components: source (S1 cells), gate (M cells) that responds to external inputs and a drain (CR cells) as the final output responding to the presence of the carrying signal (CS). b Stamping template used to create the circuit made of PLA with a layer of synthetic fibre (green). Cellular inks (yellow) are in their corresponding containers. Before stamping, the synthetic fibre is soaked with the different cell types. Finally, the stamping template is pressed against the paper surface, depositing all cells. c Circuit response. In the absence of external input, i.e. arabinose, the CS encoded in the production of AHL molecules by S1 cells diffuses along the surface, inducing GFP expression in reporter cells CR. In the presence of 10−3 M arabinose (Ara), the modulatory element Mara produces the AHL cleaving enzyme Aiia, which degrades the CS. Error bars are the standard deviation (SD) of three independent experiments. Data are presented as mean values ± SD. Experiments are performed on paper strips. The average fold change is 5.6x. d Photography of the device. Source data are provided as a Source Data file.

Using S, M and CR cells they are able to create any type of gate needed. This includes OR, AND, NOR and XNOR gates and just about any truth table needed. With this level of logic they could potentially implement any analog circuit on a piece of paper (if it was big enough).

a Schematic representation of the multi-branch implementation of a truth table. bImplementation of different logic gates. A schematic representation of the cells used in each paper strip and their corresponding distance points is given (Left). Gates with two sources of S1 (OR and XNOR gates) are circuits carrying two branches, while the other gates (NOR and AND gates) can be implemented with just one branch. Input concentrations are Ara = 10−3 M and aTc = 10−6 M. M+aTc and MaTc are, respectively, positive and negative modulatory cells responding to aTc. M+ara and Mara are, respectively, positive and negative modulatory cells responding to arabinose. S1 cells produce AHL constitutively and CR are the reporter cells. Error bars are the standard deviation (SD) of three independent experiments. The average fold change has been obtained from the mean of ON and OFF states from each circuit. OR gate 14.31x, AND gate 6.21x, NOR gate 6.58x, XNOR gate 5.6x. Source data are provided as a Source Data file.

As we learn in circuits class, any digital logic can be reduced to one of a few gates, such as NAND or NOR.
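
As a quick refresher on that reduction, here's a tiny Python sketch building the other common gates out of nothing but NAND.

```python
# Every Boolean gate can be composed from NAND alone (the same holds for NOR).
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)                  # NOT from one NAND
def and_(a, b): return not_(nand(a, b))            # AND = NOT(NAND)
def or_(a, b):  return nand(not_(a), not_(b))      # OR via De Morgan
def nor(a, b):  return not_(or_(a, b))
def xnor(a, b): return or_(and_(a, b), and_(not_(a), not_(b)))

# Print the truth tables for the composed gates.
for a in (False, True):
    for b in (False, True):
        print(a, b, "AND:", and_(a, b), "OR:", or_(a, b),
              "NOR:", nor(a, b), "XNOR:", xnor(a, b))
```

The same composition idea is what lets the researchers claim that their OR/AND/NOR/XNOR cell gates are enough to implement just about any truth table.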

As an example of a use of this biocomputing, they implemented a mercury level sensing device. Once the device is dipped in a solution containing mercury, it will display a number of green fluorescent dots indicating the mercury level of the solution.

The bio-logical computer can be stamped onto any surface that supports agent diffusion, even flexible surfaces such as paper. The process can create a single use bio-logic computer, sort of smart litmus paper that could be used once and then recycled.

The computational cells stay "alive" during operation by metabolizing the nutrients they were stamped with. As the biocomputer uses biological cells and paper (or any flexible, diffusible substrate), and cells can be reproduced ad infinitum for almost no cost, biocomputers like this can be made very inexpensively. Once designed (and the input cells and stamp created), they can be manufactured like a printing press churns out magazines.

~~~~

Now I’d like to see some sort of biological clock capability that could be used to transform this combinatorial logic into digital logic. And then combine all this with DNA based storage and I think we have all the parts needed for a biological, ARM/RISC V/POWER/X86 based server.

And a capacitor would be a nice addition, then maybe they could design a DRAM device.

Its one-off nature, or single use, will be a problem. But maybe we can figure out a way to feed all the S, M, and CR cells that make up all the gates (and storage) for the device, sort of supplying biological power (food) to the device so that it could perform computations continuously.

Ok, maybe it will be glacially slow (as diffusion takes time). We could potentially speed it up by optimizing the diffusion/enzymatic processes. But it will never be the speed of modern computers.

However, it can be made very cheaply, and packed very densely in height. Just imagine a stack of these devices 40in tall that could potentially consist of 4000-8000 or more processing elements with immense amounts of storage. And slowness may not be as much of a problem.

Now if we could just figure out how to plug it into an ethernet network, then we’d have something.

Photo credit(s):

  • 2 Bit alu from Wikipedia
  • Figures 1 & 3 from Nature article 2D printed multi-cellular devices performing digital and analog computation

Ok, maybe neuromorphic chips aren’t a deadend

Those of you who follow my blog will no doubt recall that I pronounced neuromorphic chips dead (see our Are neuromorphic chips a deadend blog post). Not because the hardware technology wasn't improving or good enough, but because software support for the technology was sorely lacking and it was extremely complex or nigh impossible to program and use.

And in the meantime, GPUs, TPUs and other more "normal" neural network hardware and accelerators were all able to utilize standard, easy to use, mostly open source AI DL frameworks. And all this hardware was steadily improving, coming out regularly with more power and performance, with no end in sight.

But then I attended AIFD1 (AI Field Day 1), and at one of the sessions, Anil Mankar, COO & Co-Founder of a company named BrainChip Inc (see video of their talk), presented yet another neuromorphic chip, called the AKIDA Neural Processor. Their current generation of the technology is available in their AKD 1000 SoC chip, focused on IoT solutions. But they had created a software development environment that allowed one to use standard TensorFlow neural network trained models and deploy them on their hardware. And that got my interest.

BrainChip’s AKIDA AKD 1000 hardware AND software

Their AI DL neuromorphic chip is made up of Event Domain Neural Processing Units (NPUs). AKIDA technology is focused on low power, sensor-like applications. They claim to save power by only consuming power (or running) when an event takes place. They are also able to save on memory requirements by using 1, 2 or 4 bits (vs. 8, 16, 32 or more bits) for model weights/activations.

Their hardware seems to run spiking neural networks (SNNs; see our blog post on another chip technology using SNNs). In their SDK, they have a CNN2SNN tool that can take any (TensorFlow) trained CNN model and convert it to an SNN that can then run on their AKIDA technology.

They also have an AKIDA Model Zoo with a handful of pre-trained CNN type models that have already been converted to run on their technology. They also provide a tutorial on their technology. Mankar said that if you understand how to use TensorFlow Keras today to construct and train your models, it shouldn't be too hard to understand how to use their tools to do what you want.
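
To give a feel for the workflow Mankar described, here's a minimal sketch assuming BrainChip's cnn2snn Python tooling; the import path, function names and keyword arguments are written from memory and may differ in the current SDK, so treat them as assumptions and check BrainChip's documentation.

```python
# A minimal sketch of the Keras -> quantize -> convert workflow described above.
# The cnn2snn calls and their parameters are assumptions about BrainChip's SDK,
# not verified API; real models must also stick to the layer types it supports.
import tensorflow as tf
from cnn2snn import quantize, convert  # assumed import path for BrainChip's tools

# Any ordinary Keras CNN, e.g. a small image classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# ... train with model.fit() as usual ...

# Quantize weights/activations down to 4 bits (AKIDA supports 1/2/4-bit).
quantized = quantize(model, weight_quantization=4, activ_quantization=4)

# Convert the quantized CNN into a spiking model deployable on AKIDA hardware.
akida_model = convert(quantized)
```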

Their chip hardware is available today as a separate PCIe card, as an M.2 form factor card, or as a chip. Finally, they also license their AKIDA IP to other chip designers.

AKIDA AKD 1000 performance

At the AIFD1 Mankar showed statistics on the performance and accuracy attained using their chip vs. using standard 32 bit floating point CNN implementations.

As discussed above, their processor uses 1-4 bits for weight quantization and as such loses some accuracy, but as you can see it's a matter of one to a few percent vs. the same models using a 32-bit floating point CNN implementation.

Because of their smaller weights, AKIDA uses less memory and less bandwidth to update models vs. models using larger weights.

As shown in the chart, the memory required for the 8-bit deep learning algorithms (DLAs) was significantly larger than the memory required by the AKIDA solution. For one algorithm, the AKIDA version still required ~1/2 the memory of the 8-bit DLA version of the model.
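
The back-of-the-envelope arithmetic behind that saving is simple; the parameter count below is an assumed, illustrative figure, not one from BrainChip's chart.

```python
# Weight-memory footprint at different quantization levels for a model with an
# assumed 4.2 million parameters (illustrative figure, not BrainChip data).
params = 4_200_000
for bits in (32, 8, 4, 2, 1):
    megabytes = params * bits / 8 / 1_000_000
    print(f"{bits:>2}-bit weights: {megabytes:5.2f} MB")
# 4-bit weights need half the memory of an 8-bit DLA and 1/8th of 32-bit float.
```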

Mankar also provided information on the amount of calculations required per inference using AKIDA vs. 8-bit DLAs.

Just to set the stage, MMACs/inference is the millions of multiply-accumulate operations required to perform a single inference with the selected CNN model. ImageNet (1000), ImageNette (20) and Visual Wake Word models are all standard CNN models that have been pre-trained on vast repositories of data and can run in many hardware environments. The non-AKIDA solutions above were all running an 8-bit DLA CNN model. Activity regularization is a training technique that adds a penalty on layer activations to the loss, encouraging sparser activations and reducing model overfit.

He also showed some comparisons of their technology vs. Intel's Loihi hardware. Loihi is another neuromorphic chip, whose original introduction prompted me to write the "Are neuromorphic chips a deadend" post (link above). Unfortunately, I didn't capture any of these charts, but from my recollection, they showed that AKIDA technology used slightly less power than Loihi technology in all their comparisons.

AKIDA technology demo

In their live, on camera demo, they used a previously downloaded VGG16 (if I recall correctly) trained CNN model. Offline, they had replaced the last classification layer with a (blank, untrained) dense network, converted this to an SNN and downloaded it onto one of their boards. They had developed an application that used this board with a camera to perform more CNN training or CNN image inferencing (classification).

They first (one-shot) trained their board's model to recognize the background of what the camera was seeing and then proceeded to perform (one-shot) trainings to classify toys of tigers, elephants and cars. All of this was completed in real time during the demo. They were able to verify the training took, using pictures of tigers, elephants and cars, as well as classify all the toys in different orientations and a different toy car.

The AIFD1 crowd (a tough one) said they had seen all this before but would be really interested to see if the chip could distinguish between different cars (one a toy race car and the other a toy police car). On camera, they were able to re-train their CNN to distinguish between (toy) car 1 and car 2 and classify properly between the two of them. They had one or two instances where their CNN model was confused, but they were able to re-train it to recognize the toy car and place it into the correct classification (using two-shot[?] learning).

At AIFD1, Mankar also presented detailed, real world data on how they were able to perform Keyword spotting, person detection, E-nose classification, E-tongue classification, and auditory (E-ear?) classification in embedded sensor systems.

AKIDA technology limitations

At the moment, their chip doesn't support neural networks that use memory, such as LSTMs or RNNs, but it seems to work fine for any CNN, which was shown multiple times in the data they presented and in their demo.

We were really impressed with their software stack, liked what we saw of their hardware/IP, and enjoyed their demo and its one-shot learning. Check out their videos (link above) for more information on them.

Photo Credit(s): all charts are from BrainChip Inc’s website or were presented at their AIFD1 session

Undersea datacenter in our future?

Read about Microsoft's Project Natick Phase 2 this past week. Microsoft submerged a steel-encased tube filled with servers, storage and compute for 2 years in the sea off the UK and just took it out of the water this past July. We've written before about underwater and in-space data centers (see our IT in space post).

Project Natick's Phase 2 underwater data center had 12 racks with 864 servers and 27.5PB of disk storage, and was connected to the nearby Orkney Islands' power grid (250KW) and networking infrastructure. The Orkney Islands are located off the NE coast of Scotland and their power grid is 100% renewable, using tidal, solar and wind power. During the data center test, Orkney was able to power the data center and the islands and still provide power back to the Scottish power grid.

More reliable underwater

According to early reports, the servers in the underwater data center had 1/8th the failures that a control data center, on land, had. Microsoft attributes the enhanced server reliability to the use of a 100% Nitrogen (at 1 atmosphere pressure) rather than normal air and the lack of any humans to jostle the equipment/disturb the environment.

It’s also likely that the temperature variability present in a normal, on the surface of the earth, data center was measurably less than for a data center on the sea floor. If this were true, that could also help explain its better reliability.

Why underwater?

It's all about cooling modern servers (and storage). According to NREL (the US National Renewable Energy Lab), most data centers operate at a PUE (power usage effectiveness) of 1.8, that is, using 180% of the power required for the servers, storage and networking equipment. The other 80% is used mainly for cooling the electronics, but also covers lighting, HVAC, and other essential services for humans. NREL says that high efficiency data centers can achieve a PUE of 1.2.

PUE for Project Natick Phase 2 data center was reported to be 1.07. The only additional electricity needed would probably be power for cooling.
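
For a sense of scale, here's the simple arithmetic those PUE figures imply, assuming (my assumption, not Microsoft's figure) that the full 250KW grid connection represents the IT load.

```python
# Annual overhead energy (cooling, etc.) implied by different PUE values,
# assuming a 250 kW IT load (an illustrative assumption).
it_load_kw = 250
hours_per_year = 24 * 365

for label, pue in (("typical data center", 1.8),
                   ("high-efficiency", 1.2),
                   ("Natick Phase 2", 1.07)):
    overhead_kwh = it_load_kw * (pue - 1) * hours_per_year
    print(f"{label:>20}: PUE {pue} -> {overhead_kwh:,.0f} kWh/yr of overhead")
```

At PUE 1.07, the overhead shrinks to well under a tenth of what a typical 1.8 PUE facility of the same size would burn on non-IT loads.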

Cooling for the Project Natick Phase 2 data center used seawater pumped through the back of server racks. The data center was placed on the seafloor at 35m (117ft) deep.

It kind of looked like a submarine. According to Microsoft, the data center was contracted for, built and deployed in under 90 days. The intent was to have the data center be smaller than a standard ISO shipping container. The data center was driven on top of an 18 wheeler from where it was built to the Orkney Islands, including ferry crossings. It was placed on a triangular support, towed out to sea and deposited on the seafloor.

While 864 servers and 27.5PB of storage seem like a lot to most of us, for Microsoft Azure it's too small to be used as a regional zone. But for (large) edge deployments, something this size or (10X) smaller might be just the thing.

Microsoft notes that 1/2 the world’s population lives within 200km (120mi) of the ocean. So there’s a ready supply of people and businesses that could take advantage of any underwater data center.

And of course, such a structure, when laid on the bottom of the ocean, could create an artificial reef (if left in place long enough). Artificial reefs have been made out of ocean oil rigs, sunken warships and large chunks of steel/concrete. So an underwater data center could serve just as well. And maybe the heat coming from the data center cooling pumps would foster even more coral life.

Microsoft plans for Project Natick Phase 3 to be a full Azure AZ deployed underwater, which would include about 12 Phase 2 pressurized data center units.

Photo Credits: