The birth of biocomputing (on paper)

Read an article this past week discussing how researchers in Barcelona, Spain have constructed a biological computing device on paper (see Biocomputer built with cells printed on paper). Their research was written up in a Nature article (see 2D printed multi-cellular devices performing digital and analog computation).

We’ve written about DNA computing and storage before (see DNA IT …, DNA Computing… posts and our GBoS podcast on DNA storage…). But this technology takes all that to another level.

2-bit ALU (from wikimedia.org)

Previously, the challenges with biological computing had been how to perform input, processing and output within a single cell or, when using multiple cells for a computation, how to wire the cells together to provide the combinational logic required for the circuit.

The researchers in Spain seem to have solved the wiring problem by using diffusion across a porous surface (like paper) as a carrier signal (the wire equivalent), with cell groups at different locations along the diffusion path that enhance or block that diffusion, amplify or reduce it, or transform it into something different.

Analog (and combinational-circuit-style) computation in this biocomputer is performed based on the location of sets of cells along this carrier signal. So spatial positioning is central to the device and the computation it performs. Not unlike digital or combinational circuitry, different computations can be performed just by altering where along the wire (carrier signal) the gates (cells) are placed.

Their process starts with engineering the cell groups that provide the processing desired, i.e., enhancing, blocking or transforming the diffusion along the carrier signal. Once they have the cells required, they determine the spatial layout of the cells that implements the logic circuit for the desired computation. They then create a stamp with wells (indentations) that can be filled with the required cells, fill those wells with cells plus nutrients for their operation, and finally stamp the circuit onto a porous surface.

The carrier signal the research team uses is a small molecule, the bacterial 3OC6HSL acyl homoserine lactone (AHL), which seems to be used naturally in a sort of biological quorum sensing. The computational cells produce an enzyme that enhances or degrades the AHL flowing along the carrier signal. The AHL diffuses across the paper, encounters these computational cells along the way, and they compute whatever is required to be computed. At some point a cell transforms the AHL level into something externally observable.
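
To make the "diffusion as wiring" idea concrete, here's a toy sketch in Python of a 1D carrier signal diffusing along a strip, with a gate cell part-way along that degrades AHL when its input is present and a reporter position at the far end. The positions, rates and threshold-free comparison are all made up for illustration; this is not the researchers' model.

```python
import numpy as np

# Toy 1D diffusion of a carrier signal (AHL) along a paper strip.
# Positions and rate constants are invented for illustration only;
# this is NOT the model from the Nature paper.
N_POINTS, N_STEPS = 40, 5000
D = 0.4                                  # diffusion coefficient (<= 0.5 for stability)
SOURCE, GATE, REPORTER = 0, 20, 39       # locations of S, M and CR cells on the strip

def run_strip(gate_input_present: bool) -> float:
    ahl = np.zeros(N_POINTS)
    for _ in range(N_STEPS):
        ahl[SOURCE] += 1.0               # source cells keep emitting AHL
        if gate_input_present:
            ahl[GATE] *= 0.5             # gate cells degrade AHL (e.g., via an AiiA-like enzyme)
        # explicit 1D diffusion update (discrete Laplacian)
        ahl[1:-1] += D * (ahl[:-2] - 2 * ahl[1:-1] + ahl[2:])
    return ahl[REPORTER]                 # AHL concentration seen by the reporter cells

for present in (False, True):
    level = run_strip(present)
    print(f"gate input present={present}: AHL at reporter = {level:.1f}")
# With the gate input present, far less AHL reaches the reporter, so GFP stays
# off -- the gate cells act like a NOT gate on the carrier signal.
```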

They created:

  • Source cells (Sn) that take a substance as input (say mercury) and convert it into AHL.
  • Gate cells (M) that act as a switch on the AHL diffusing across the substrate.
  • Carrier reporter cells (CR) which can be used to report on the concentration of AHL.

The CR cells produce green fluorescent protein (GFP). Moreover, each gate cell also expresses red fluorescent protein (RFP), providing a sort of diagnostic tap into its individual activity.

From the Nature article (figure caption): Mapping of a general transistor architecture on a cellular printed pattern obtained using a stamping template. Similar to the transistor architecture, the cellular pattern is composed of three main components: source (S1 cells), gate (M cells) that responds to external inputs and a drain (CR cells) as the final output responding to the presence of the carrying signal (CS). b Stamping template used to create the circuit made of PLA with a layer of synthetic fibre (green). Cellular inks (yellow) are in their corresponding containers. Before stamping, the synthetic fibre is soaked with the different cell types. Finally, the stamping template is pressed against the paper surface, depositing all cells. c Circuit response. In the absence of external input, i.e. arabinose, the CS encoded in the production of AHL molecules by S1 cells diffuses along the surface, inducing GFP expression in reporter cells CR. In the presence of 10^−3 M arabinose (Ara), the modulatory element Mara produces the AHL cleaving enzyme Aiia, which degrades the CS. Error bars are the standard deviation (SD) of three independent experiments. Data are presented as mean values ± SD. Experiments are performed on paper strips. The average fold change is 5.6x. d Photograph of the device. Source data are provided as a Source Data file.

Using S, M and CR cells they are able to create any type of gate needed, including OR, AND, NOR and XNOR gates, and just about any truth table needed. With this level of logic they could potentially implement any combinational logic circuit on a piece of paper (if it were big enough).

From the Nature article (figure caption): a Schematic representation of the multi-branch implementation of a truth table. b Implementation of different logic gates. A schematic representation of the cells used in each paper strip and their corresponding distance points is given (Left). Gates with two sources of S1 (OR and XNOR gates) are circuits carrying two branches, while the other gates (NOR and AND gates) can be implemented with just one branch. Input concentrations are Ara = 10^−3 M and aTc = 10^−6 M. M+aTc and M−aTc are, respectively, positive and negative modulatory cells responding to aTc. M+ara and M−ara are, respectively, positive and negative modulatory cells responding to arabinose. S1 cells produce AHL constitutively and CR are the reporter cells. Error bars are the standard deviation (SD) of three independent experiments. The average fold change has been obtained from the mean of ON and OFF states from each circuit. OR gate 14.31x, AND gate 6.21x, NOR gate 6.58x, XNOR gate 5.6x. Source data are provided as a Source Data file.

As we learn in circuits class, any digital logic can be reduced to one of a few gates, such as NAND or NOR.
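
And since the team demonstrated a NOR gate, which is functionally complete, every other gate falls out of it. A quick sketch in plain Boolean logic (nothing biological about it):

```python
# NOR is functionally complete: NOT, OR and AND can all be built from it.
def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def NOT(a: bool) -> bool:
    return NOR(a, a)

def OR(a: bool, b: bool) -> bool:
    return NOT(NOR(a, b))                 # invert a NOR to get OR

def AND(a: bool, b: bool) -> bool:
    return NOR(NOT(a), NOT(b))            # De Morgan: a AND b = NOT(NOT a OR NOT b)

# print the truth tables as a check
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "NOR:", NOR(a, b), "OR:", OR(a, b), "AND:", AND(a, b))
```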

As an example use of this biocomputing, they implemented a mercury level sensing device. Once the device is dipped in a solution containing mercury, it displays a number of green fluorescent dots indicating the mercury level of the solution.

The bio-logical computer can be stamped onto any surface that supports agent diffusion, even flexible surfaces such as paper. The process can create a single-use bio-logic computer, a sort of smart litmus paper that could be used once and then recycled.

The computational cells stay “alive” during operation by metabolizing the nutrients they were stamped with. Because the biocomputer uses biological cells and paper (or any flexible, diffusible substrate) as its raw materials, and cells can be reproduced ad infinitum for almost no cost, biocomputers like this can be made very inexpensively. Once designed (and the input cells and stamp created), they can be manufactured like a printing press churns out magazines.

~~~~

Now I’d like to see some sort of biological clock capability that could be used to transform this combinational logic into sequential (clocked) digital logic. Combine all that with DNA based storage and I think we have all the parts needed for a biological ARM/RISC-V/POWER/X86 based server.

And a capacitor would be a nice addition, then maybe they could design a DRAM device.

Its one-off nature, or single use, will be a problem. But maybe we can figure out a way to feed all the S, M, and CR cells that make up all the gates (and storage) of the device, sort of supplying biological power (food) to the device so that it could perform computations continuously.

Ok, maybe it will be glacially slow (as diffusion takes time). We could potentially speed it up by optimizing the diffusion/enzymatic processes. But it will never be the speed of modern computers.

However, it can be made very cheaply and stacked very densely. Just imagine a stack of these devices 40 inches tall; it could potentially contain 4,000-8,000 or more processing elements with immense amounts of storage. At that scale, slowness may not be as much of a problem.

Now if we could just figure out how to plug it into an ethernet network, then we’d have something.

Photo credit(s):

  • 2-bit ALU from Wikipedia
  • Figures 1 & 3 from Nature article 2D printed multi-cellular devices performing digital and analog computation

Ok, maybe neuromorphic chips aren’t a dead end

Those of you who follow my blog will no doubt recall that I pronounced neuromorphic chips dead (see our Are neuromorphic chips a deadend? blog post). Not because the hardware technology wasn’t improving or good enough, but because software support for the technology was sorely lacking and it was extremely complex or nigh impossible to program and use.

Meanwhile, GPUs, TPUs and other more “normal” neural network hardware and accelerators were all able to utilize standard, easy to use, mostly open source AI DL frameworks. And all this hardware was steadily improving, coming out regularly with more power and performance, with no end in sight.

But then I attended AIFD1 (AI Field Day 1) and at one of the sessions, Anil Mankar, COO & Co-Founder of a company named BrainChip Inc. (see video of their talk), presented yet another neuromorphic chip, called the AKIDA Neural Processor. Their current generation of the technology is available in their AKD 1000 SoC chip, focused on IoT solutions. But they had created a software development environment that allows one to take standard TensorFlow-trained neural network models and deploy them on their hardware. And that got my interest.

BrainChip’s AKIDA AKD 1000 hardware AND software

Their AI DL neuromorphic chip is made up of Event Domain Neural Processing Units (NPUs). AKIDA technology is focused on low power, sensor-like applications. They claim to save power by only consuming power (or running) when an event takes place. They are also able to save on memory requirements by using 1, 2 or 4 bits (vs. 8, 16, 32 or more bits) for model weights/activations.
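
To give a feel for what few-bit weights mean, here's a generic uniform-quantization sketch (not BrainChip's actual scheme) showing how much precision is lost when float weights are squeezed into 8, 4 or 2 bits:

```python
import numpy as np

def quantize(weights: np.ndarray, n_bits: int) -> np.ndarray:
    """Uniform symmetric quantization of float weights to n_bits.
    A generic sketch, not BrainChip's actual algorithm."""
    levels = 2 ** (n_bits - 1) - 1           # e.g., 7 positive levels for 4 bits
    scale = np.abs(weights).max() / levels
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q * scale                         # de-quantized values the model would use

w = np.random.randn(10_000).astype(np.float32)   # stand-in for a layer's weights
for bits in (8, 4, 2):
    err = np.abs(w - quantize(w, bits)).mean()
    print(f"{bits}-bit weights: mean absolute quantization error = {err:.4f}")
```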

Their hardware runs spiking neural networks (SNNs, see our blog post on another chip technology using SNNs). In their SDK, they have a CNN2SNN tool that can take any (TensorFlow) trained CNN model and convert it to an SNN, which can then run on their AKIDA technology.
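
One common way to map a trained CNN's activations onto spikes/events is simple rate coding; the toy sketch below illustrates the idea and why sparse activations mean fewer events to process. It is a generic illustration, not necessarily what BrainChip's CNN2SNN tool actually does.

```python
import numpy as np

# Toy rate-coding of ReLU activations into integer event counts.
# Generic illustration only -- not BrainChip's actual CNN2SNN conversion.
def relu_to_events(activations: np.ndarray, max_events: int = 15) -> np.ndarray:
    """Scale non-negative ReLU activations into integer event counts."""
    peak = activations.max()
    if peak == 0:
        return np.zeros_like(activations, dtype=int)
    return np.round(activations / peak * max_events).astype(int)

acts = np.maximum(np.random.randn(8, 8), 0)      # a pretend ReLU feature map (roughly half zeros)
events = relu_to_events(acts)
# Zero activations produce zero events, so there is nothing to compute for them --
# which is where event-domain NPUs claim their power savings.
print("non-zero activations:", np.count_nonzero(acts), "of", acts.size)
print("total events to process:", int(events.sum()))
```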

They also have an AKIDA Model Zoo with a handful of pre-trained CNN-type models that have already been converted to run on their technology, and they provide a tutorial on the technology. Mankar said that if you understand how to use TensorFlow Keras today to construct and train your models, it shouldn’t be too hard to understand how to use their tools to do what you want.

Their chip hardware is available today as a separate PCIe card, an M.2 form factor card, or as a chip. Finally, they also license their AKIDA IP to other chip designers.

AKIDA AKD 1000 performance

At AIFD1, Mankar showed statistics on the performance and accuracy attained using their chip vs. standard 32-bit floating point CNN implementations.

As discussed above, their processor uses 1-4 bits for weight quantization and as such loses some accuracy, but as you can see it’s a matter of one to a few percent vs. the same models using a 32-bit floating point CNN implementation.

Because of their smaller weights, AKIDA uses less memory and less bandwidth to update models vs. models using larger weights.
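
The memory savings are easy to reason about with back-of-the-envelope arithmetic; for a hypothetical model with one million weights:

```python
# Back-of-the-envelope weight-memory comparison for a hypothetical
# 1M-parameter model (ignoring activations, metadata and compression).
n_weights = 1_000_000
for bits in (32, 8, 4, 2, 1):
    megabytes = n_weights * bits / 8 / 1e6
    print(f"{bits:>2}-bit weights: {megabytes:5.3f} MB")
# 4-bit weights need 1/2 the memory of 8-bit weights and 1/8th that of 32-bit floats.
```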

As shown in the chart, the memory required for the 8-bit deep learning algorithms (DLAs) was significantly larger than the memory required for the AKIDA solution. For one algorithm, AKIDA required ~1/2 the memory of the 8-bit DLA version of the model.

Mankar also provided information on the amount of calculations required per inference using AKIDA vs. 8-bit DLAs.

Just to set the stage, MMACs/Inference is the (millions of) multiply-accumulate operations required to perform a single inference with the selected CNN model. ImageNet (1000), ImageNette (20) and Visual Wake Word models are all standard CNN models that have been pre-trained on vast repositories of data and can run in many hardware environments. The non-AKIDA solutions above were all running an 8-bit DLA CNN model. Activity regularization is a training technique that adds a penalty on layer activations, encouraging smaller/sparser activations and reducing model overfit.

He also showed some comparisons of their technology vs. Intel’s Loihi hardware. Loihi is another neuromorphic chip, whose original introduction prompted me to write the “Are neuromorphic chips a deadend” post (link above). Unfortunately, I didn’t capture any of these charts, but from my recollection, they showed that AKIDA technology used slightly less power than Loihi technology in all their comparisons.

AKIDA technology demo

In their live, on camera demo, they used a previously downloaded VGG16 (if I recall correctly) trained CNN model. Offline they had replaced the last classification layer with a (blank, untrained) dense network, converted this to an SNN and downloaded it onto one of their boards. They had developed an application that used this board with a camera to perform further CNN training or CNN image inferencing (classification).

They first (one-shot) trained their board’s model to recognize the background of what the camera was seeing and then proceeded to perform (one-shot) trainings to classify toy tigers, elephants and cars. All of this was completed in real time during the demo. They were able to verify the training took using pictures of tigers, elephants and cars, and to classify all the toys in different orientations as well as a different toy car.

The AIFD1 (a tough) crowd said they had seen all this before but would be really interested to see if the chip could distinguish between different cars (one a toy race car and the other a toy police car). On camera, they were able to re-train their CNN to distinguish between (toy) car 1 and car 2 and classify properly between the two of them. They had one or two instances where their CNN model was confused, but they were able to re-train it to recognize the toy car and place it into the correct classification (using two-shot[?] learning).

At AIFD1, Mankar also presented detailed, real world data on how they were able to perform Keyword spotting, person detection, E-nose classification, E-tongue classification, and auditory (E-ear?) classification in embedded sensor systems.

AKIDA technology limitations

At the moment, their chip doesn’t support neural networks that use memory, such as LSTMs or RNNs, but it seems to work fine for any CNN, as was shown multiple times in the data they presented and in their demo.

We were really impressed with their software stack, liked what we saw of their hardware/IP, and enjoyed their demo and its one-shot learning. Check out their videos (link above) for more information on them.

Photo Credit(s): all charts are from BrainChip Inc’s website or were presented at their AIFD1 session

Undersea datacenter in our future?

Read about Microsoft’s Project Natick Phase 2 this past week. Microsoft submerged a steel-encased tube filled with servers and storage for 2 years off the UK’s coast and just took it out of the water this past July. We’ve written before about underwater and in-space data centers (see our IT in space post).

Project Natick’s Phase 2 underwater data center had 12 racks with 864 servers and 27.5PB of disk storage, and was connected to the nearby Orkney Islands’ power grid (250kW) and networking infrastructure. The Orkney Islands are located off the NE coast of Scotland and their power grid is 100% renewable, using tidal, solar and wind power. During the data center test, Orkney was able to power the data center and the islands and still provide power back to the Scottish power grid.

More reliable underwater

According to early reports, the servers in the underwater data center had 1/8th the failures of a control data center on land. Microsoft attributes the enhanced server reliability to the use of a 100% nitrogen atmosphere (at 1 atmosphere of pressure) rather than normal air, and to the lack of any humans to jostle the equipment or disturb the environment.

It’s also likely that the temperature variability on the sea floor was measurably less than in a normal data center on the surface of the earth. If so, that could also help explain the better reliability.

Why underwater?

It’s all about cooling modern servers (and storage). According to NREL (the USA’s National Renewable Energy Lab), most data centers operate at a PUE (power usage effectiveness) of 1.8, that is, using 180% of the power actually required by the servers, storage and networking equipment. The extra 80% goes mainly to cooling the electronics, but also to lighting, HVAC, and other essential services for humans. NREL says that high efficiency data centers can achieve a PUE of 1.2.

PUE for the Project Natick Phase 2 data center was reported to be 1.07. The only additional electricity needed (beyond the IT load) was probably the power for cooling.
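
For a sense of scale, here's the overhead arithmetic at the three PUE figures mentioned above, per 100kW of IT load:

```python
# Non-IT overhead implied by the PUE figures above, per 100 kW of IT load.
# PUE = total facility power / IT equipment power.
it_load_kw = 100
for label, pue in [("typical data center", 1.8),
                   ("high efficiency data center", 1.2),
                   ("Project Natick Phase 2", 1.07)]:
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw
    print(f"{label:>28}: total {total_kw:6.1f} kW, non-IT overhead {overhead_kw:5.1f} kW")
```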

Cooling for the Project Natick Phase 2 data center used seawater pumped through the back of server racks. The data center was placed on the seafloor at 35m (117ft) deep.

It kind of looked like a submarine. According to Microsoft, the data center was contracted for, built and deployed in under 90 days. The intent was to have the data center be smaller than a standard ISO shipping container. It was driven on top of an 18 wheeler from where it was built to the Orkney Islands, including ferry crossings. It was then placed on a triangular support, towed out to sea and deposited on the seafloor.

While 864 servers and 27.5PB of storage seem like a lot to most of us, for Microsoft Azure it’s too small to be used as a regional zone. But for (large) edge deployments, something this size or (10X) smaller might be just the thing.

Microsoft notes that 1/2 the world’s population lives within 200km (120mi) of the ocean. So there’s a ready supply of people and businesses that could take advantage of any underwater data center.

And of course, such a structure, when laid on the ocean floor, could create an artificial reef (if left in place long enough). Artificial reefs have been made out of ocean oil rigs, sunken warships and large chunks of steel/concrete, so an underwater data center could serve just as well. And maybe the heat coming from the data center’s cooling outflow would foster even more coral life.

Microsoft plans for Project Natick Phase 3 to be a full Azure availability zone (AZ) deployed underwater, made up of about 12 Phase 2 pressurized data center units.

Storage that provides 100% performance at 99% full

A couple of weeks back we were talking with Qumulo at Storage Field Day 20 (SFD20) and they mentioned that they were able to provide 100% performance at 99% full. Please see their video session from SFD20 (which can be seen here). I was a bit incredulous of this, seeing as how every other modern storage system’s performance degrades long before it gets to 99% of capacity.

So I asked them to explain how this was possible. But before we get to that, a little background on modern storage systems is warranted.

The perils of log structured file systems

Most modern storage systems use a log structured file system: when they write data, they append it to a sequential log and use a virtual addressing scheme to track where the data for each address is located, creating a (data) log of written blocks.

However, when data is overwritten, it leaves gaps in these data logs. These gaps need to be recycled (squeezed out) somehow in order to be reused as storage capacity. This recycling process is commonly called “garbage collection”.

Garbage collection does its work by reading heavily gapped log files and re-writing the old, but still current, data into a new log. This frees up those gaps to be reused. But garbage collection like this requires reading and re-writing logs just to free up space.

Now as log structured file systems get (70-80-90%) full, they need to spend more and more system time and effort (= performance) garbage collecting. This takes system (IO) performance away from normal host IO activity. Which is why I didn’t believe that Qumulo could offer 100% IO performance at 99% full.
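
A toy sketch of why that happens, using a generic log-structured store (no particular vendor's design):

```python
# Toy log-structured store: overwrites append to the log and leave dead
# entries (gaps) behind; garbage collection must re-read and re-write the
# live data to reclaim space. Generic illustration, not any vendor's design.
class LogStore:
    def __init__(self):
        self.log = []                  # list of (virtual_block, data); None marks a gap
        self.vmap = {}                 # virtual block -> index into the log

    def write(self, vblock, data):
        if vblock in self.vmap:
            self.log[self.vmap[vblock]] = None        # old copy becomes a gap
        self.vmap[vblock] = len(self.log)
        self.log.append((vblock, data))

    def garbage_collect(self):
        """Re-write every live entry into a fresh log (costs read + write IO)."""
        live = [entry for entry in self.log if entry is not None]
        self.log, self.vmap = [], {}
        for vblock, data in live:                      # every live block gets re-written
            self.write(vblock, data)
        return len(live)                               # IO spent purely on reclaiming space

store = LogStore()
for i in range(1000):
    store.write(i % 100, f"data{i}")                   # heavy overwrites -> lots of gaps
print("log entries before GC:", len(store.log))        # 1000 entries, only 100 live
print("live blocks re-written by GC:", store.garbage_collect())
```

And the fuller the system gets, the more live data has to be moved per unit of space reclaimed, which is why performance falls off as capacity fills.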

But there was always another way to supply storage virtualization (read: snapshotting) besides log files. Yes, it might involve more metadata (table) management, but what it costs in extra metadata, it gives back by requiring no garbage collection.

How Qumulo does without garbage collection

Qumulo has a scalable block store as the back end of their file and object cluster store. And yes, it’s still a virtualized block store BUT it’s not a log structured file store.

It seems that there’s a virtual-to-physical mapping table that Qumulo uses to determine the physical address of any virtual block in the file system. Files are allocated to virtual blocks directly through the use of B-tree metadata. These B-trees indicate which virtual blocks are in use by a file and its snapshots.

If a host overwrites a data block, the old block can be freed (if it’s not being used by a snapshot) and placed on a free block list, and a new block is allocated in its place. The file’s allocated-blocks B-tree is updated to reflect the new block, and that’s it.

For snapshots, Qumulo uses a process they call “write-out-of-place” when data that a snapshot points to is overwritten. Again, it appears as if snapshots are some extra metadata associated with a file’s B-tree that defines the data in the snapshot.

The problem comes in when a file is deleted. If it’s a big enough file (TB-PB?), there could be millions to billions of blocks that have to be freed up. This would take entirely too long for a delete command, so it is done in the background. Qumulo calls this “reclaim delete”. So a delete of a big file unlinks the block B-tree from the directory and puts it on this reclaim delete work queue to free up its blocks later. Similarly, when a big snapshot is deleted, Qumulo performs a background process called “reclaim snapshot” for the snapshot’s unique blocks.
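
Here's a minimal sketch of that overwrite and delete path as I understand it, using a plain free list and a per-file block map as stand-ins for Qumulo's B-trees (illustrative only, not their actual implementation):

```python
# Minimal free-list / block-map sketch of overwrite and background delete.
# Stand-in for Qumulo's B-tree metadata -- illustrative only.
class BlockStore:
    def __init__(self, n_blocks: int):
        self.free_list = list(range(n_blocks))     # physical blocks available
        self.block_map = {}                        # (file_id, virtual_block) -> physical block

    def write(self, file_id, vblock, snapshot_holds_old=False):
        old = self.block_map.get((file_id, vblock))
        if old is not None and not snapshot_holds_old:
            self.free_list.append(old)             # old block freed immediately, no GC pass
        new = self.free_list.pop()                 # allocate a replacement block
        self.block_map[(file_id, vblock)] = new    # update the file's block metadata
        return new

    def reclaim_delete(self, file_id):
        """Background free of all blocks belonging to a deleted file."""
        victims = [key for key in self.block_map if key[0] == file_id]
        for key in victims:
            self.free_list.append(self.block_map.pop(key))
        return len(victims)

store = BlockStore(n_blocks=8)
store.write("fileA", 0)
store.write("fileA", 0)        # overwrite: the old block goes straight back on the free list
print("free blocks after overwrite:", len(store.free_list))
print("blocks reclaimed by background delete:", store.reclaim_delete("fileA"))
```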

As can be seen (it’s very hard to see given the coloration of the chart) from this screen shot of Qumulo’s session at SFD20, reclaim delete and reclaim snapshot are being done concurrently (in the background) with normal system IO. What’s interesting to note here is that reclaim IO (deletes and snapshots) is going on all the time during the customer’s actual work. Why the write throughput drops significantly during the 27th-29th of July is hard to understand. But in the one case where it’s most serious (middle of July 28th), reclaim IO also drops significantly. If reclaim IO were impacting write performance, I would have expected it to have gone higher when write throughput went lower. But that’s not the case. From what I can see in the chart above, reclaim IO has no impact on read or write throughput at this customer.

So essentially, by using a backing block store that does no garbage collection (not using a log structured file system), Qumulo is able to offer 100% system IO performance at 99% full – woah.

Open source ASICs – Hardware vs. Software innovation (round 5)

A good friend of mine sent me an article yesterday (Produce your own physical chips for free, in the open) that announced a collaboration between Google, Skywater Technology Foundry and the FOSSi (Free and Open Source Silicon) Foundation that ultimately supplies a completely open source set of tools to create ASICs at the 130nm process node. The last piece of this toolkit was open source PDK (Process Design Kit) data produced by Google and Skywater, along with their offer of free fab services to manufacture chips designed with the tool set.

Layout snapshots of 2D and 3D ICs designed in 130-nm process technology: (a) 2D IC (2D-130); (b) the top and bottom tiers of a 3D IC using macro-level partitioning (3D-MP-130); and (c) the top and bottom tiers of a 3D IC using pipeline-level partitioning (3D-PP-130). 

The industry and I have had a long-term discussion in this blog and elsewhere about the superiority of hardware innovation vs. software innovation using commodity hardware (e.g., see TPU and hardware vs. software innovation (Round 3) and Hardware vs. software innovation – Round 4). Most of the tech industry believes that software innovation on commodity hardware is better than hardware innovation. We beg to differ; in our mind, it’s the combination of hardware AND software innovation that is remaking the world.

Much of this can be seen with smart phone technology. The smart phone would not be possible without significant hardware innovation, and it has supplied ubiquitous computing to the world. That is, it has connected billions to the internet who had no connection before.

But historically, hardware innovation has been hard to do, took a long time, and costs a lot vs. software innovation with commodity hardware, which by definition, is easier to do, takes almost no time (with continuous innovation even less) and costs almost nothing, especially when using open source.

The one innovation that emerged over the last few decades to make new hardware creation easier has been the FPGA. FPGAs allow for “programming” hardware logic in the lab (sometimes in the field) rather than having it be set in silicon in the fab. The toolchains to support FPGA programming can be proprietary but some are also available in open source. For example, SymbiFlow (open source) takes in Verilog (the IEEE standard hardware description language) and converts it into a binary bit stream used to program most (Xilinx 7-series and Lattice) FPGAs.

But this recent announcement makes the process to create ASICs completely open source and much easier and cheaper to do.

ASIC design flow

Prior to this announcement, most PDKs were expensive and specific to a particular fab and process node. With Google’s and Skywater’s release of open source PDK data (on GitHub), designers and engineers now have a completely open source tool kit: that is, they have RTL (register-transfer level, hardware description logic) design tools, EDA tools and PDK data to create their own ASICs. And with this toolkit, Skywater together with Google will manufacture ASICs for you, at no cost.

The FOSSi Dial-Up talk (embedded in the announcement above) goes into much detail about the FPGA and ASIC tool chains. But prior to this announcement, the PDK data, which is used to help the RTL and EDA tools simulate, verify and determine the optimum layout for a hardware design, was always proprietary.

Open source RTL tools have been available for years now, starting with OpenCores, OpenRISC, RISC-V and now OpenPower. RISC-V and OpenPower include RTL to implement sophisticated instruction set CPUs. OpenRISC is RTL for a precursor to RISC-V and OpenCores supplies the RTL for a number of other (CPU) cores. But this is just a sample of the RTL that’s available in open source.

EDA tools are also available in open source. The most recent incarnation would be the DARPA-funded OpenROAD project. OpenROAD will ultimately provide a completely open source EDA tool set for electronic design. The first component of this is a set of EDA tools that convert RTL to GDSII (the industry standard graphical stream description of an IC chip’s componentry and layout). GDSII streams are used to create masks for IC fabrication.

And now with the open source Google-Skywater PDK data, one has a complete open source tool chain to create ASICs at the 130nm node level for the Skywater Fab in Minnesota.

A PDK contains a lot of data about the ASIC fabrication process including process design rules, analog and digital design cells and models, behavioral models for analog and digital design, extracted data for simulation and other supporting functionality.

The Google-Skywater Technologies open source PDK is Apache 2.0 Licensed. The PDK is used in the SKY130 process node, which includes 130nm technologies, high voltage support, 5 metal layers and one interconnect layer.

At the moment the PDK includes standard digital cell support (“nor” gates, “and” gates, flip flops, etc.) but over time they are planning to add analog cells, IO & periphery cells, analog RF, as well as fully automated design rule checking, with SRAM/flash build spaces.

The PDK does include standard SRAM bit cells, and in combination with the OpenRAM project one can use these cells to create SRAM memory for the ASIC.

Google and Skywater are going to fabricate, for free, up to 40 ASIC designs starting in the Fall of 2020, and then six months later, they will start fabricating ~40 ASICs every 3 months.

However, to qualify for free fabrication, your design has to be completely open source (located on GitHub). To submit your ASIC you send your public GitHub repository URL to efabless and they will perform verification processes on it. If it passes, they will respond with an email that it was accepted. If more than 40 designs are submitted for a run, the Google-Skywater team will decide which 40 will be manufactured.

The 16mm² ASIC automatically comes with a RISC-V CPU, RAM and power plus ~40 IOs. There is a 10mm² space for all of your ASIC-specific logic. If successful, you will get back ~100 to 400 packaged chips.

~~~~

ASICs were always lengthy and costly to design, and then fabrication took more money and time before you got anything back to test. With open source tool kits, design should no longer cost anything but engineering time, and with the sophistication available in today’s toolchain, it should not be that lengthy. And if you’re one of the lucky 40 designs, ASIC fabrication is free. Then, starting next year, fabrication runs will occur every 3 months. So you could potentially get your design back in an ASIC in as little as 3 months.

And while the 130nm technology node dates back to 2001-2003, there were plenty of sophisticated ASICs made during those years (at a previous job, we did a couple ourselves). And of course, with your very own RISC-V CPU inside, you could pretty much do anything you want with your ASIC. Yeah, RAM, SRAM and other constraints may limit you, but that’s what hardware innovation is all about: dealing with the physical constraints while opening up a whole new architectural world.

Welcome to a new era of ASIC (hardware) innovation.

Software defined power grid

Read an article this past week in IEEE Spectrum (The Software Defined Power Grid is here) about a company that has been implementing software defined power grids throughout the USA and the world to better integrate and utilize renewable energy alongside conventional power generation equipment.

Moreover, within the last year or so, Tesla has installed a Virtual Power Plant (VPP) using residential solar and grid-scale batteries to better manage the electrical grid of South Australia (see Tesla’s Australian VPP propped up grid during coal outage). Using a VPP to offset power outages would necessitate something like a software defined power grid.

Software defined power grid

Not sure if there’s a real definition somewhere, but from our perspective, a software defined power grid is one where power generation and control is all done through programmatic automation. The human operator still exists to monitor and override when something goes wrong, but they are not involved in the moment-to-moment control of which power is saved vs. fed into the grid.

About a decade ago, we wrote a post about smart power meters (Smart metering’s data storage appetite) discussing the implementation of smart meters for home owners that had some capabilities to help monitor and control power use. But although that technology still exists, the software defined power grid has moved on.

The IEEE Spectrum article talks about phasor measurement units (PMUs) that are already installed throughout most power grids. It turns out that most PMUs are capable of transmitting phasor power status 60 times a second, and each status report is time stamped with high accuracy, GPS-synchronized time.

On the other hand, most power grids today use SCADA (supervisory control and data acquisition) systems to monitor and manage the grid, and SCADA systems only send data every 2-4 seconds. So although PMUs are installed in most power grids, their information plays a smaller role than SCADA data in the monitoring, management and control of most (non-software defined) power grids.
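
The difference in data volume between the two is stark; quick arithmetic for a single measurement point, assuming one report per sampling interval:

```python
# Reports per day from one measurement point, PMU vs SCADA,
# assuming one status report per sampling interval.
seconds_per_day = 24 * 60 * 60
pmu_reports = seconds_per_day * 60        # 60 phasor reports per second
scada_reports = seconds_per_day / 3       # one report every ~3 seconds
print(f"PMU:   {pmu_reports:>12,.0f} reports/day")
print(f"SCADA: {scada_reports:>12,.0f} reports/day")
print(f"ratio: ~{pmu_reports / scada_reports:,.0f}x more data from the PMU")
```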

One software defined power grid

PXiSE, the company in the IEEE Spectrum article, implemented their first demonstration project in Hawaii. That power grid had reached the limit of wind and solar power it could support with human management. The company took their time and implemented a digital simulation of the power grid. With the simulation in hand, battery storage and an off-the-shelf PC, the company was able to manage the grid’s power generation mix in real time with complete automation.

After that success, the company next turned to a micro-grid (building-level power) with electric vehicles, battery and solar power. Their software defined power grid reduced peak electricity demand within the building, saving significant money. With that success, the company took their software defined power grid on the road to South Korea, Chile, Mexico and a number of other locations around the world.

Tesla’s VPP

The Tesla VPP in South Australia is planned to consist of up to 50K houses, each with solar PV panels and 13.5kWh of battery storage, able to deliver up to 250MW of power generation and 650MWh of energy storage.
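
Those figures hang together; quick sanity-check arithmetic on the per-house contribution:

```python
# Sanity-check arithmetic on the quoted South Australia VPP figures.
houses = 50_000
battery_kwh_per_house = 13.5
total_storage_mwh = houses * battery_kwh_per_house / 1000   # 675 MWh, close to the ~650 MWh quoted
power_per_house_kw = 250_000 / houses                       # 250 MW spread over 50K houses
print(f"total storage: {total_storage_mwh:.0f} MWh")
print(f"implied output per house: {power_per_house_kw:.1f} kW")
```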

At the present time, the system has ~1000 house systems installed, but even with that limited generation and storage capability it has already been called upon at least twice to compensate for coal-fired generation outages. To manage each and every household, they’d need something akin to the smart meters mentioned above in conjunction with a plethora of PMUs.

Puerto Rico’s power grid problems and solutions

There was an article not so long ago in IEEE Spectrum about the disruption to Puerto Rico’s power grid caused by Hurricanes Irma and Maria (Rebuilding Puerto Rico’s Power Grid: The Inside Story) and a subsequent article on making Puerto Rico’s power grid more resilient to hurricanes and other natural disasters (How to harden Puerto Rico’s power grid). The latter article talked about creating micro-grids, community PV and battery storage that could be disconnected from the main grid in times of disaster but also used to distribute power generation throughout the island.

Although the researchers didn’t call for the software defined power grid, it is our understanding that something similar would be an outstanding addition to their work there.

~~~~

As the use of renewables goes up and the price of batteries decreases while their capabilities improve over time, more and more power grids will need to become software defined. In the end, more software defined power grids, with increasing renewable power generation and storage, will make any power grid more resilient and more fault tolerant.

Silq and QUA vie for Quantum computing language

Read a couple of articles this past week on new quantum computing programming languages. Specifically, one in ScienceDaily on Silq (The 1st intuitive programming language for quantum computers) and another in TechCrunch (Quantum Machines announces QUA, its universal lang. for quantum computing). The Silq discussion is based on an ACM SIGPLAN paper (Silq: A High-Level Quantum Language with Safe Uncomputation and Intuitive Semantics).

Up until this point there have been a couple of SDKs for various quantum computers, most notably QASM for IBM’s, Q# for Microsoft’s and ? for Google’s quantum computers. We have discussed QASM in a prior post (see: Quantum Computer Programming post).

But both QUA and Silq are steps up the stack from QASM and Q#, both of which are more realistically likened to machine microcode than assembly code. For example, with QASM you are talking directly to the mechanisms that cohere qubits, the electronics needed to connect qubits, the electronics to excite qubit states, etc.
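
To see just how low-level that layer is, here's a minimal gate-level example in Python using IBM's Qiskit SDK, which compiles down to OpenQASM; every gate on every qubit has to be spelled out by hand. (This isn't Silq or QUA code, just an illustration of the layer both are trying to rise above.)

```python
# Minimal gate-level (QASM-style) quantum program using IBM's Qiskit SDK:
# each gate on each qubit is specified explicitly, with no higher abstraction.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)       # 2 qubits, 2 classical bits
qc.h(0)                         # Hadamard: put qubit 0 into superposition
qc.cx(0, 1)                     # CNOT: entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])      # measure both qubits into the classical bits
print(qc)                       # prints the circuit; it compiles to OpenQASM underneath
```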


QUA and Silq seem to take different tacks in providing their services.

Silq control flow
  • Silq is trying to abstract itself above the hardware layer and to provide some underlying logical constructs and services that any quantum programmer would want to use. Most notably, Silq provides automatic erasure of intermediate calculation results, which can impact future quantum calculations if they are not erased; the paper calls this safe (automatic) uncomputation. Silq also offers types, loops, conditionals, superposition (the adding together of two quantum states) and diffusion (the spreading out of quantum states).
  • QUA, on the other hand, is Quantum Machines’ full stack implementation for quantum computer orchestration. QUA is only one component of this stack (the highest level); underneath it is a compiler and a Quantum Machines OPX box, a hardware appliance that interfaces with quantum computers of various types. There’s not much detail about QUA other than that it offers conditionals, looping constructs and internal error detection.

From what I see, we are a long way away from having a true programming language for quantum computers. Quantum Machines sees the problem with today’s quantum computers as a lack-of-a-stack problem.

The Silq group sees the problem with today’s quantum computers as a lack of useful abstraction. Silq is trying to provide simpler semantics and control structures that maybe someday could become the foundation of a true quantum computing programming language.

Silq has compared itself to Q#, used in Microsoft’s Quantum Computing solution. So our guess is it works only with Microsoft’s quantum computer.

In contrast, QUA offers an orchestration solution for many different quantum computers but you have to buy into their orchestration hardware and stack.

Who will win out in the end is anyone’s guess. There’s a great need for something that can abstract the quantum hardware from the quantum algorithms being implemented. At the moment I like what I see in Silq; I just wish it were applied more generically.

At press time there were not many details available on Quantum Machines’ QUA language. Their stack approach may be better in the long run, but having to use their hardware appliance to run it seems counterproductive.

~~~~

However, if the programming gods were to ask my opinion as to where a new programming language was really needed, I’d have to say neuromorphic computing (see our Are neuromorphic chips a deadend? post). Neuromorphic computing really needs abstraction help. Without some form of suitable abstraction layer, neuromorphic computing seems dead as it stands.

Comments.

Thoughts on my first virtual conference

I attended a virtual event this week. It was scheduled to last 3 hours, but I only stayed for 2.5 hours. Below I describe the event from my perspective and, after that, some notes on how it could be made better.

The virtual event experience

The event home page had a welcome video that you could start when you got there. I didn’t have any idea what to expect, so this was nice. It could have spent time discussing the mechanics of the site and how to attend the event, but it was just a welcome video, welcoming me to the event and letting me know they appreciated me being able to attend.

Navigation on the site wasn’t that easy to figure out at first. It was at the bottom of the page, not at the top or the side. The navigation home button brought up a list of videos that you could watch (or attend), and that page was in front of the conference page.

I launched the 1st video (actually the 2nd, after the welcome video), which was the CEO keynote session. I thought this was good, and the occasional interruption by executives ringing the CEO’s doorbell asking for toilet paper was entertaining. Again, he welcomed us to the event and discussed how the pandemic has changed their world and ours. He thanked the customers in attendance and made brief mention of the video (tracks) that one could follow. As I recall, the CEO keynote didn’t have any (or many) slides during the session; it was more like an informal, but scripted, talk.

It took me a while to figure out how to get back to the main agenda page but once there I proceeded on my chosen track to watch the next video. When I was finished with that I watched the other 3 track videos. The video tracks were not as good as the CEO keynote session and some of them had many more slides than they needed.

They also had a customer interview with an exec which was great and well done. Especially given it seemed to have been recorded over the prior 48 hours.

Somewhere in all of this, I happened to reach the Expo floor. It had a series of technical break out sessions and then the exhibitor buttons which had their own videos, reports, webinars that one could watch/read.

I watched most of the technical breakouts (at least part way through). The tech breakouts were ok, but also of mixed quality as I remember it. That is, some had more or fewer slides and were more or less webinar-like.

I also watched a few of the exhibitor videos. Some of these auto started when you clicked on their expo buttons, some did not. Some videos were very loud while others were fine.

I’d say the mixed quality of the exhibits was similar to what one might see at any conference, with bigger vendors having more polished content while smaller vendors had less polished content.

The conference had a public chat channel, but there was just one channel for the whole conference and it didn’t appear until much later (maybe when I entered the first breakout session or the expo “hall”).

How to make our next virtual conference better

Below are my thoughts on ways to improve the virtual conference experience.

• Have real scheduled times to watch the videos/webinars/tech sessions. Yes, they’re all online and can truly be watched at any time you want. But I expected a scheduled agenda with breaks between sessions and to have to pick which one I wanted to go to, meaning that some would have to be unattended. I would suggest that the videos only be available during the event at scheduled time slots and that the event organizers build in breaks between each session. They could always be made available at a later date under a conference media page for further viewing, but having them scheduled to run in a conference room would make it more conference-like. The tracks could be scheduled in other side rooms of the conference.

• Also, would it be too much to ask that they have some sort of video roll call of participants with headshots and maybe a title. Something akin to a conference badge. Perhaps they could show this during the breaks between sessions. Even if you rolled through the virtual badge shots quickly, during breaks, it would act as sort of an analog of walking from one session to another.

• I don’t know whether there was any interest in social media, but having a Twitter, Facebook or other social media event hashtag prominently displayed on the bottom 1/3rd of the screen or on some early slide deck would have been useful to generate some social buzz.

• Also, at conferences, one can typically see a screen which tracks the social media hash tag. I saw none of this at the event. Having some small panel running social media activity might have led to more social media interaction. It could be along the side of the main page, viewable during all videos, breaks and other sessions.

• As for the public chat, I think it would have been better to have separate chat channels for each video, breakout, exhibit, etc., rather than a single chat room for the whole conference. It would have been great if the separate chat window popped up when you started viewing a video or breakout or entered an exhibit.

• Have lots more technical breakouts. I didn’t see a great quantity of these, maybe 5-7 tech breakouts plus the 4 original tech track videos. Again, separate chat channels so one could ask questions pertaining to each session would have been great.

• The exhibits were all other vendors (sponsors) showing their stuff. I didn’t see any show and tell from the conference event organizers of the sort one would see at any conference if you walked out on the show floor. Would it have been too much to ask to have a virtual walk-through tour of each of the conference organizer’s products and a couple of demos of their products/services, just like one could see at any conference?

• The expo floor exhibitor sessions could be left available to view anytime the event was “open”, but the tech breakout sessions should be available multiple times a day and scheduled just like any other event sessions. And it would be nice to have a separate chat channel for each expo exhibitor and tech breakout session, so we could ask questions of their staff.

• Another thing available at most conference events is a social media booth where bloggers, podcasters, and vloggers could sit around and talk about the event and their products and whatever else came to mind. I didn’t see anything like this and having a separate chat window for these booths would be useful.

• Also, it would be nice if one could obtain vendor certifications or detailed tutorials on some product/service.

• On a personal note, I am an industry analyst, so it would be nice to have a separate analyst track. I come to these events to have face time with execs and get a download on what their upcoming strategy is and how they did over the last year or so. Yes, these could all be done offline, but they could also be accomplished during the event with its own secure chat channel.

• I’m also an influencer, so having a separate press track would have been great as well. Often the analyst and press tracks overlap for a couple of sessions and then go their separate (NDA) ways.

• For both the analysts and the press/influencers, having a live Q&A session with the execs, technical team, and select customers would have been great. But alas, there was nothing like this; with a separate secure chat room this could also have been done.

• I can’t stress enough that the conference event navigation needs to be better and more intuitive.

I know that there’s a lot here and there’s probably a whole bunch more that could be done better. Other people will no doubt have their own opinions. But these are mine.

It was the first virtual conference (I attended) and the vendor sort of played it by ear, designing it almost in real time. Given all that, they did a great job. Now it’s time to do better.

I’m a conference geek. I go to an average of 10 or more vendor conferences a year so this is a major part of what I do.

IMHO, nothing besides ubiquitous, true virtual reality will ever replace the effectiveness of in real life conferences. That being said, there are ways to make current virtual events come closer to real conferences.

~~~~

I thought about sending this to the conference organizers but their conference is over, and hopefully next year it will be back IRL. But there’s plenty more virtual conferences left on my schedule for this year.

I would prefer all of them to be done better, for me, analysts, press/influencers and ultimately customers.

We’re all in this together.

Comments.