Internet of Tires

Read an article a couple of weeks back (An internet of tires?… IEEE Spectrum) and can’t seem to get it out of my head. Pirelli, a European tire manufacturer, was demonstrating a smart tire, or as they call it, their new Cyber Tyre.

The Cyber Tyre embeds accelerometer(s) in its rubber that can be used to sense pavement/road surface conditions. The Cyber Tyre communicates surface conditions to the car and, using the car’s 5G connection, to other cars (of the same make) to warn them of problems with surface adhesion (hydroplaning, ice, other traction issues).

Presumably the accelerometers in the Cyber Tyre measure acceleration changes of individual tires as they rotate. Any rapid acceleration change could potentially be used to determine whether the car has lost traction and why.
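Pirelli hasn’t published how its detection actually works, but a minimal sketch of the idea might be to watch for sudden changes (jerk) in a tire’s tangential acceleration. Everything below (sample rate, threshold, synthetic data) is my assumption, not anything from the article.

```python
# Hypothetical sketch of slip detection from in-tire accelerometer samples.
# The sample rate, threshold and test data are assumptions, not Pirelli's algorithm.
import numpy as np

SAMPLE_RATE_HZ = 1000          # assumed accelerometer sample rate
JERK_THRESHOLD = 50.0          # assumed m/s^3 threshold suggesting loss of traction

def detect_traction_loss(accel: np.ndarray) -> bool:
    """Flag a possible loss of traction from one rotation's worth of
    tangential acceleration samples (m/s^2)."""
    dt = 1.0 / SAMPLE_RATE_HZ
    jerk = np.diff(accel) / dt         # rate of change of acceleration
    return bool(np.max(np.abs(jerk)) > JERK_THRESHOLD)

# a smooth rotation vs. one with a sudden acceleration spike
smooth = np.sin(np.linspace(0, 2 * np.pi, SAMPLE_RATE_HZ))
slipping = smooth.copy()
slipping[500:510] += 5.0
print(detect_traction_loss(smooth), detect_traction_loss(slipping))   # False True
```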

They tested the new tires out at a (1/3rd mile) test track on top of a Fiat factory, using Audi A8 automobiles and 5G. It’s unclear why this had to wait for 5G, but with 5G the Cyber Tyre and the car could also log and transmit such information back to the manufacturer of the car or tire.

Accelerometers have become dirt cheap over the last decade as smart phones have taken off. So, it was only a matter of time before they found use in new and interesting applications and the Cyber Tyre is just the latest.

Internet of Vehicles

Presumably the car, with Cyber Tyres on it, communicates road hazard information to other cars using 5G and vehicle to vehicle (V2V) communication protocols or perhaps to municipal or state authorities. This way highway signage could display hazardous conditions ahead.

Audi has a website devoted to Car-to-X communications. Audi has equipped certain vehicles (A4, A5 & Q7) with cellular communications, cameras and other sensors used to identify (recognize) signage, hazards and other information, and to communicate this data to other Audi vehicles. This way, owning an Audi would plug you into this information flow.

Pirelli’s Cyber Car Concept

Prior to the Cyber Tyre, Pirelli introduced a Cyber Car concept that is supposedly rolling out this year. This version has tyres with real-time pressure, temperature and (static) vertical load sensing, plus a Tyre ID. Pirelli has been working with car manufacturers to roll out Cyber Car functionality.

The Tyre ID seems to be a file that can include anything the tyre or automobile manufacturer wants. It sort of reminds me of a blockchain data block that could be used to validate tyre manufacturing provenance.

The vertical load sensor seems more important to car and tire manufacturers than to consumers. But for electric car owners, knowing the car’s weight could help determine the current battery load and thereby more precisely estimate how much charge, and hence range, is left in the battery.

Pirelli uses a proprietary algorithm to determine tread wear. It makes use of the other tyre sensors to predict wear, perhaps with an AI DL model.
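Since the algorithm is proprietary, the sketch below is only a guess at the general shape of such a model: a simple regression from cumulative usage features (distance, load, temperature) to remaining tread depth, trained entirely on synthetic data.

```python
# Not Pirelli's model -- a toy sketch of predicting remaining tread depth (mm)
# from usage features; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
km_driven   = rng.uniform(0, 60_000, n)      # distance since tyre install
avg_load_kg = rng.uniform(300, 600, n)       # average per-tyre vertical load
avg_temp_c  = rng.uniform(10, 90, n)         # average tyre temperature
X = np.column_stack([km_driven, avg_load_kg, avg_temp_c])

# synthetic "ground truth": wear grows with distance, load and temperature
tread_mm = (8.0 - 1e-4 * km_driven - 2e-3 * avg_load_kg - 5e-3 * avg_temp_c
            + rng.normal(0, 0.1, n))

model = LinearRegression().fit(X, tread_mm)
print(model.predict([[30_000, 450, 60]]))    # predicted tread depth at 30,000 km
```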

~~~

ABS has been around for decades now and tire pressure sensors for a decade or so. My latest car has enough sensors to pretty much drive itself on the highway, but not quite park itself as of yet. So it was only a matter of time before something like smart tires showed up.

But given their integration with car electronics systems, it would seem that this would only make sense for new cars that included a full set of Cyber Tyres. That is, until tire AND car manufacturers agree on a standard protocol to communicate such information. When that happens, consumers could choose any tire manufacturer and obtain similar, if not the same, functionality.

I suppose someone had to be first to identify just what could be done with the electronics available today. Pirelli just happens to be it for now in the tire industry.

I just don’t want to have to upgrade tires every 24 months. And, if I have to wait a long time for my car to boot up and establish communications with my tires, I may just take a (dumb) bike.


Made in space

Read an article in IEEE Spectrum recently titled, 4 Products it makes sense to manufacture in space. The 4 products identified in the article include:

1) Metal alloys – because of micro-gravity, the metals that go into an alloy should mix much more evenly and, as a result, should yield a purer, more uniform alloy at the end of the process.

2) Fibre optic cables – the article says ZBLAN, a heavy-metal fluoride glass fibre, could have 1/10th the signal loss of current cable but is hard to manufacture on earth due to micro-crystal formation. Apparently, when manufactured (mixed and drawn) in micro-gravity, there’s less of this defect in the glass.

3) Printed human organs – the problem with printing biological organs (hearts, lungs, livers, etc.) is that they require scaffolding for the cells to adhere to, which needs to be bio-degradable and in the shape of whatever organ is needed. However, in micro-gravity there should be less need for any scaffolding.

4) Artificial meat – similar to human organs above, being able to build (3D print) biological products means one could create a steak or other cuts of meat through biological 3D printing.

Problems with space manufacture

One problem with manufacturing metal alloys and fibre optic cable in space is the immense heat required. Glass melts at 1400C, metals anywhere from 650C to 3400C. Getting rid of all that heat in space could present a significant problem. Not to mention that the vessels required to hold molten materials weigh a lot.

And metal and glass manufacturing processes also create waste, such as hot metal/glass particulates that settle on the floor on earth, but who knows where in space. To manufacture metal or glass on the ISS would require a very heat-tolerant, protected environment or capsule, lots of power to provide heat, and radiator surfaces to release said heat.
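To put a rough number on the heat-rejection problem, here’s a back-of-the-envelope radiator sizing using the Stefan-Boltzmann law. The furnace power, radiator temperature and emissivity are my assumptions, not figures from the article.

```python
# Back-of-the-envelope radiator sizing for a space furnace (Stefan-Boltzmann law).
# Furnace power, radiator temperature and emissivity are assumed values.
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/m^2/K^4
P_FURNACE_W = 50_000     # assumed heat to reject (a 50kW furnace)
T_RADIATOR_K = 600.0     # assumed radiator surface temperature
T_SPACE_K = 4.0          # deep-space background, negligible here
EMISSIVITY = 0.9         # assumed radiator emissivity

area_m2 = P_FURNACE_W / (EMISSIVITY * SIGMA * (T_RADIATOR_K**4 - T_SPACE_K**4))
print(f"~{area_m2:.1f} m^2 of radiator needed")   # roughly 7-8 m^2 at these values
```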

And of course, delivering raw materials for metals and glass to space (LEO) would cost a lot (SpaceX $2.7K/kg, Atlas V $13.2K/kg). As such, the business case for metal alloy manufacturing in space doesn’t appear positive.

But given the reduced product weight and the potentially higher prices one can charge for the product, fibre optic glass may make business sense. Especially if you could get by with 1/10th the glass because it has 1/10th the signal loss.
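Here’s a toy version of that business-case arithmetic. The launch cost comes from the article; the return cost, fibre yield per kg and price premium for low-loss fibre are purely hypothetical placeholders.

```python
# Toy business-case check for space-made ZBLAN fibre. Launch cost is from the
# article; everything else is a made-up placeholder to show the comparison.
LAUNCH_COST_PER_KG = 2_700     # SpaceX to LEO, $/kg (from the article)
RETURN_COST_PER_KG = 2_700     # assume bringing product back costs about the same
KM_FIBRE_PER_KG    = 5         # assumed km of drawn fibre per kg of preform
PREMIUM_PER_KM     = 5_000     # assumed extra $ buyers pay per km of low-loss fibre

extra_revenue_per_kg = KM_FIBRE_PER_KG * PREMIUM_PER_KM
round_trip_cost_per_kg = LAUNCH_COST_PER_KG + RETURN_COST_PER_KG
print(extra_revenue_per_kg, round_trip_cost_per_kg,
      "worth it" if extra_revenue_per_kg > round_trip_cost_per_kg else "not worth it")
```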

And if you don’t have to ship raw materials from earth (using the moon or asteroids instead), it would improve both business cases. That is, assuming raw material discovery and shipping costs are 1/6th or less of shipping from earth.

As for organs, since they can’t be manufactured on earth (yet), they could be the “killer app” for made-in-space. But it’s sort of a race against time. Doing this in space may be a lot easier today, but more research is going on to create organs on earth than in space. Eventually, manufacturing these on earth could be a lot cheaper and just as effective.

But I don’t see a business case for meat in space unless it’s to support making food for astronauts on ISS. Even then, it might be cheaper to just ship them some steak.

Products hard to make in space

I would think anything that doesn’t require gravity to work should be easier to produce in space.

But that eliminates distillation, e.g., fossil fuel refining, fermentation, and many other chemical distillation processes (see Wikipedia article on Distillation).

But gravity is also used in depositing and holding multiple layers onto one another. So manufacturing paper, magnetic/optical disk platters, magnetic tapes, or any other product built up layer by layer, may not be suitable for space manufacture.

Not sure about semiconductors, as deposition steps make use of chemical vapors, and that seems to require gravity. But it’s conceivable that in the absence of gravity, chemicals may still adhere to the wafer surface, as it’s an easier place to combine with than other surfaces in the chamber. On the other hand, they may just as likely stay mixed in the vapor.

Growing extremely pure silicon ingots may be something better done in space. However, it may suffer from the same problems as metal alloy manufacturing. Given the need for extreme purity and the price paid for pure silicon, I would think this would be something to research ahead of metal alloys.

For further research

But in the end, if and when we become a space-faring people, we will need to manufacture everything in space, as well as grow or find raw materials more easily than shipping them from earth.

So, some research ought to be directed at how to perform distillation and multi-layer product manufacturing in space/micro-gravity. Such processes could potentially be done in a centrifuge, if they truly can’t be done without gravity.

It’s also unclear how to boil any liquid in 0g or micro-g without convection (see the Bizarre Boiling NASA Science article). According to the article, boiling creates one big bubble that stays where it is formed. Extracting that bubble in place would seem difficult. Boiling liquids in a centrifuge may work.

In any case, I’m sure the ISS crew would be more than happy to do any research necessary to figure out how to brew beer, let alone, distill vodka in space.


Supercomputing 2019 (SC19) conference

I was at SC19 last week and as always there was lots to see on the expo floor and at the show in general. Two expo booths that I thought were especially interesting were:

  • Zapata Computing – a quantum computing programming-for-hire outfit, and
  • Cerebras – a new AI wafer scale accelerator chip that sported 400K+ cores in a single package.

Zapata Computing, quantum coding for hire

We’ve been on a sort of quantum thread this past month or so (e.g., see our Quantum computing – part 2 and part 1, The race for quantum supremacy posts). Zapata Computing was at the edge of the exhibit floor in a small booth: pretty much just one guy (Michael Warren) and some handouts. The booth must have had something about quantum computing on it, because I stopped by.

Warren said they have ~20 PhDs from around the world working for them and provide quantum coding for hire. Zapata works with organizations either to get them up to speed on quantum programming, or to write quantum programs under contract for clients and help run them on quantum computers.

Zapata’s quantum algorithms are designed to run on any type of quantum computer, such as ion trap, superconducting qubit, quantum annealer, etc. They work with Microsoft Azure Quantum, IBM Q, Rigetti, and Honeywell systems to run quantum programs for customers. Notably missing from this list was Google. Honeywell is new to me, but they seem active in quantum computing.

Zapata has their own Orquestra quantum toolkit. We have discussed quantum software development kits like IBM Q’s Qiskit previously; Microsoft has their own QDK and Rigetti has the Forest SDK. So, presumably, Orquestra front-ends these other development kits. I couldn’t find anything on Honeywell, but it’s likely they have their own development kit as well or make use of others.

In talking to Warren at the show, Zapata is working to come up with a quantum computing cloud, which could be used to run quantum code on any of these quantum computers with the click of a button. It sounded like this was coming out soon.

Some of the quantum programs Zapata has developed for clients include logistics simulations, materials design, chemistry simulations, etc.

Warren didn’t mention the cost of running on quantum computers but he said that some companies are more forthright with pricing than others. It seemed Rigetti had a published price list to use their systems but others seemed to want to negotiate price on a per use basis.

It seems only a matter of time before quantum computing becomes just like GPUs. Just another computational accelerator that works well for some workloads but not others. Zapata Computing and Orquestra are just steps along this path.

Cerebras

AI accelerator chips have also been a hot topic for us (see our posts on Google TPU, GraphCore’s system, and the Mythic and Syntiant AI accelerators). But none, with the possible exception of GraphCore, has taken this to quite the same level as Cerebras.

Cerebras offers a wafer scale chip that is embedded into their CS-1 system. The chip has 400K cores, 18GB of (very fast) SRAM (memory), 100Pb/sec (peta-bits or 10**15 bits per second) of bandwidth and draws ~20kW. Their CS-1 system fits in a standard rack taking up 15U of space.

The on-chip fabric is called SWARM which supports a 2D mesh. The SWARM mesh is entirely configurable, to support optimal neural network connectivity. I assume this means that any core can talk directly (with 0 hops) to any other core on the chip through a configuration setup.

The high-speed on-chip SRAM supports up to 9PB/sec of memory bandwidth and can be accessed in a single clock cycle. They call the cores Sparse Linear Algebra Compute (SLAC) cores and say that they are optimized to support ML-DL computations, which we assume means floating point arithmetic.

Although you can’t really see the (wafer scale) chip in the picture above, it’s located in the section between the copper plate and the copper heat sink, starting at the copper line between the two. The CS-1 consumes a lot of power and much of its design goes into providing proper cooling. One can see some of that on the left side of the picture above.

As for software, the Cerebras CS-1 supports TensorFlow and PyTorch as well as standard C++. Their Cerebras Software Platform stack consists of two layers: the Cerebras Intermediate Representation and the Cerebras Graph Compiler (CGC), which feeds their Cerebras Wafer Scale Engine (WSE). The CGC maps neural network nodes to cores on the WSE and probably configures SWARM to provide NN core-to-core connectivity.
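Cerebras didn’t show any code, but since the CS-1 takes standard TensorFlow or PyTorch models as input, presumably something like the ordinary PyTorch network below is the kind of thing the CGC would compile, mapping its layers onto WSE cores (this is plain PyTorch, nothing Cerebras-specific).

```python
# A generic PyTorch model -- presumably the kind of network the Cerebras Graph
# Compiler would map onto WSE cores. Ordinary PyTorch, nothing Cerebras-specific.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, 10),
        )

    def forward(self, x):
        return self.layers(x)

model = SmallNet()
print(model(torch.randn(32, 784)).shape)    # torch.Size([32, 10])
```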

It’s great to see hardware innovation again. There was a time when everyone thought that software alone was going to kill off hardware innovation. But the fact is that both need to innovate to take computing forward. Cerebras didn’t tell me a PetaFlop rate for their system, but my guess is it would beat out the 2PFlop GraphCore GC2 system; then again, it’s only a matter of time before a GC3 comes out. That being said, what could be beyond wafer scale integration?

~~~~

I enjoy going to SC conferences for all the leading-edge technology on display. They have some very interesting cooling solutions that I don’t see anywhere else. And the student competition is fun: teams of students running HPC workloads around the clock, on donated equipment, from Monday evening until Wednesday evening, with spurious faults injected (by SC19) to see how they and their systems react and continue to perform the needed work.

For every SC conference, they create an SCinet to support the show. This year it supported Tb/sec of bandwidth and the WiFi for the floor and conference. All the equipment and time that goes into creating SCinet is donated.

Unfortunately, I didn’t get a chance to go to keynotes or plenary sessions. I did attend one workshop on container use in HPC and it was completely beyond me. Next year, SC20 will be in Atlanta.


Data analysis of history

Read an article the other day in The Guardian (History as a giant data set: how analyzing the past could save the future), which talks about this new discipline called cliodynamics (see wikipedia cliodynamics article). There was a Nature article (in 2012), Human Cycles: History as Science, which described cliodynamics in a bit more detail.

Cliodynamics uses mathematical systems theory on historical data to predict what will happen in the future for society. According to The Guardian and Nature articles, the originator of cliodynamics, Peter Turchin, predicted in 2010 that the world would change dramatically for the worse over the coming decade, with violence peaking in 2020.

What is cliodynamics

Cliodynamics depends on vast databases of historical data that have been amassed over the last decade or so. For instance:

  • Seshat Global History Databank – started in 2011, has 3 datasets: moralizing gods, axial age history (8th to 3rd cent. BCE), & social complexity;
  • International Institute of Social History – est. 1935, re-organized their collection in 2013 to focus on data, has 33 dataverses ranging from apprenticeships, price and wage history, to strike history of various countries and time periods, etc.; and
  • Google NGRAM viewer – started in 2010, provides keyword statistics on Google Books.

Cliodynamics uses the information from databases like the above to devise a mathematical model of the history of the world. From their mathematical model, cliodynamics researchers have discerned patterns or cycles in human endeavors that have persisted over centuries.

Cliodynamic cycles

Two cycles of interest come to mind:

  • Secular cycle – this plays out over 2-3 centuries and starts with a new egalitarian society that has low levels of inequality, where the supply of and demand for labor are roughly equal. Over time, as population grows, the supply of labor outstrips demand and inequality increases. Elites then start to battle one another; war and political instability result in a new, more equal society, re-starting the cycle.
  • Fathers and sons cycle – this plays out over 50 years and starts when the (fathers) generation responds violently to social injustice, the next (sons) generation resigns itself to injustice (or hopefully resolves it), and then the following (fathers) generation sees injustice again and erupts violently, re-starting the cycle over again.

It’s this last cycle that Turchin predicted to peak again in 2020, the last one peaking in 1970 and the ones before that peaking in 1920 and 1870.
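To make the idea concrete, here’s a toy version of the kind of analysis cliodynamics does: look for a dominant periodicity in a time series of unrest events. The data below is synthetic, built to peak every 50 years just to show the method; real cliodynamics works from databases like Seshat.

```python
# Toy periodicity analysis: find the dominant cycle length in a synthetic
# time series of unrest "intensity" that peaks every 50 years (1870, 1920, 1970...).
import numpy as np

years = np.arange(1820, 2020)
unrest = (np.cos(2 * np.pi * (years - 1870) / 50)
          + np.random.default_rng(1).normal(0, 0.3, years.size))

# periodogram: which cycle length explains the most variance?
spectrum = np.abs(np.fft.rfft(unrest - unrest.mean())) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)              # cycles per year
dominant_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]
print(f"dominant cycle ~{dominant_period:.0f} years")   # ~50 for this synthetic data
```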

We’ve seen such theories before. In the 19th and 20th centuries there were plenty of historical theorists. Probably the most prominent was Marx, but there were others as well.

The problem with cliodynamics: good data

Data sparsity and accuracy have always been problems in historical study. Much information is lost through natural or man-made disasters, and much of what’s left is biased. Nonetheless, more and more historical data is being amassed every day, much of it quantitative and suitable for analysis.

Historical data, where available, can be assessed scientifically and analyzed using current tools such as data analytics, machine learning & deep learning to ascertain trends and make predictions. And the more data available, the more accurate these analyses and predictions can become. Cliodynamics pre-dates many of these tools, but that’s no excuse for not taking advantage of them.

~~~~

As for 2020, AI, automation and globalization have led and will lead to more job disruption. Inequality is also on the rise, at least throughout much of the west. And then there’s Brexit, the USA elections and general mid-east turmoil, all of which seem to be on the horizon.

Stay tuned, 2020 seems only months away.

Photo Credits:

From the Key Historic Figures of WW1 article, Mansell/Getty Images, (c) ThoughtCo

Anti War March (1968, Chicago), by David Wilson, CC BY 2.0

Eleven times Americans have marched on Washington, (1920, Washington DC) (c) Smithsonian Magazine

Cambrian Explosion of AI DL apps in industry and the world

I was at the NetApp Insight conference last week and recorded a podcast (see: GreyBeards Podcast) on what NetApp is doing in the AI DL (Deep Learning) space. On the podcast, we talked about a number of verticals that were deploying AI DL right now and using it to improve outcomes.

It was only in 2012 that AI DL broke out and pretty much conquered the speech recognition contest by improving recognition accuracy by leaps and bounds. Prior to that, improvements had been small and incremental at best. Here we are, just 7 years later, and AI DL models are proliferating across industry and every other sector of the world economy.

DL applications in the real world

At the show, we talked about AI DL models being used in healthcare (radiological image analysis, cell counts for infection assessments), automotive (self-driving cars), financial services (fraud detection), and retail (predicting how makeup would look on someone).

And earlier this year, at HPE Discover, they discussed a new technique to share the benefits of training data while keeping the data itself private. In this case, they use blockchain technology to publish and share DL neural network model weights and other hyper-parameters trained for some real-world purpose.

Customers download and use the model in their day-to-day activities, recording the data their model analyzes and its predictions. They use this data to update (re-train) their DL neural net and then publish their new neural net model weights and other parameters to all the other customers. Each customer of the model does the same, updating (re-training) their own DL neural net.

At some point an owner or global model arbitrator takes all these individual model updates, aggregates the neural net weights into a new neural net model, and publishes the new model. Then the process starts over again. In this way, training data is never revealed and stays secure and private, but the DL model updates that result from re-training on that private data are available to every customer.
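A minimal sketch of that aggregation step is below: plain federated weight averaging in numpy, with made-up weights and sample counts. It’s the generic technique, not HPE’s actual blockchain-based protocol.

```python
# Generic federated averaging: each customer re-trains locally and shares only
# model weights; an arbitrator averages them, weighted by how much data each saw.
import numpy as np

def aggregate(weight_sets, sample_counts):
    """Weighted average of each layer's weights across customers."""
    total = sum(sample_counts)
    return [
        sum(w[layer] * n for w, n in zip(weight_sets, sample_counts)) / total
        for layer in range(len(weight_sets[0]))
    ]

# three customers, each with a (toy) 2-layer model's weights
customer_models = [
    [np.full((3, 3), 1.0), np.full(3, 0.1)],
    [np.full((3, 3), 2.0), np.full(3, 0.2)],
    [np.full((3, 3), 3.0), np.full(3, 0.3)],
]
samples_seen = [100, 200, 700]                # private data volumes, never shared
global_model = aggregate(customer_models, samples_seen)
print(global_model[0][0, 0])                  # 0.1*1 + 0.2*2 + 0.7*3 = 2.6
```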

Recently, there’s been a slew of articles across many different organizations that show how AI DL is being adopted to work in different areas:

And that’s just a sample of the last few weeks’ worth of AI DL activity.

Next Steps

All it takes is data that can be quantified and classified. With data and classifications in hand, anyone can train a DL model that performs that classification. It doesn’t require GPU farms; decent CPUs are up to the task for TBs of data.

But if you want better prediction/classification accuracy, you will need more data, which means longer AI DL training runs. So at some point, maybe at >100TB of data, or if you run AI DL training a lot, you may want that GPU farm.

The Deep Learning with Python book (my favorite) has a number of examples, such as sentiment analysis of text, median real estate price prediction, and generating text that looks like an author’s work, with maybe a dozen more that one can use to understand AI DL technology. It’s not rocket science; I believe any qualified programmer could do it with some serious study.
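For a taste of how little code this takes, here’s a condensed version of the book’s IMDB sentiment-analysis example in Keras. Hyper-parameters roughly follow the book; results will vary.

```python
# Condensed IMDB sentiment classifier, after the Deep Learning with Python example.
import numpy as np
from tensorflow import keras

(train_data, train_labels), _ = keras.datasets.imdb.load_data(num_words=10000)

def multi_hot(sequences, dim=10000):
    out = np.zeros((len(sequences), dim), dtype="float32")
    for i, seq in enumerate(sequences):
        out[i, seq] = 1.0                      # mark which words appear in the review
    return out

x_train = multi_hot(train_data)
y_train = np.asarray(train_labels, dtype="float32")

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=4, batch_size=512, validation_split=0.2)
```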

So the real question is: what are you doing with your data to make use of AI DL models now?

I suppose the other question ought to be: how can you collect more data and classification information to train more AI DL models?

~~~~

It’s great to be in the storage business.


Quantum computing – part 2

Held a briefing recently with IBM to discuss their recent Q Systems One announcements and their Quantum Computing Cloud. IBM Q Systems One currently offers 10 quantum computers usable by any IBM Q premier partner. It’s not clear what it costs, but there aren’t many quantum computers available for use anywhere else.

On the call they announced the coming availability of their 53-qubit QC, to be installed at their new IBM Quantum Computing Center in Poughkeepsie, New York. The previous IBM Q systems were all located at their Yorktown Heights center, and their prior largest systems were 20 qubits.

At this point in the industry, quantum computing algorithms generate probabilistic results and as such need to be run many times to converge on a reliable answer. IBM calls these runs quantum shots. It’s not unusual to run 1000 quantum shots to generate a solution to a quantum algorithm. They call running any number of quantum shots to generate an answer a quantum experiment or computation.
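Here’s what running 1000 quantum shots looks like in practice, using Qiskit’s local simulator (Qiskit API roughly as of this writing; the cloud backends are used much the same way):

```python
# 1000 "quantum shots" of a simple 2-qubit circuit on Qiskit's local simulator.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                  # put qubit 0 into superposition
qc.cx(0, 1)              # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1000).result().get_counts()
print(counts)            # e.g. {'00': ~500, '11': ~500} -- the answer only emerges
                         # from the distribution over many shots
```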

On the call they mentioned that, besides the 160K registered users they have for IBM Q Systems One, there are also over 80 commercial partners. IBM is moving to a cloud version of their Qiskit development kit, which should start slowing their download counts down.

Last week, in our race for quantum supremacy post, we discussed Google’s latest quantum computer with 54 qubits. In that system each qubit had 4 connections (one to each of its nearest neighbors). IBM decried the lack of a peer-reviewed research paper on Google’s technology, which made it difficult to understand.

NISQ era

The industry describes the current quantum computing environment as Noisy Intermediate-Scale Quantum (NISQ) technology (for more information see Quantum computing in the NISQ era and beyond paper). The hope is that someday we’ll figure out how to design a quantum computer that has sufficient error correction, noise suppression, and qubit connectivity to perform quantum computing without having to do 1000 quantum shots. Although, quantum mechanics being a probabilistic environment, I don’t foresee how this could ever be the case.

IBM appears to be experimenting with other arrangements for their qubit connections. They say the more qubit connections the noisier the system is, so having fewer but enough to work, seems to be what they are striving for. Their belief is that smart (quantum computing) compilers can work with any qubit connectivity topology and make it work for a problem. It may generate more gates, or a deeper quantum circuit but it should result in equivalent answers. Kind of like running a program on X86 vs. ARM, they both should generate the same answers, but one may be longer than the other.

At the moment IBM has 10 QCs that are going to be split between Poughkeepsie and Yorktown Heights (IBM Research): in Yorktown, three 20-qubit QCs, one 14-qubit QC and one 5-qubit QC; in Poughkeepsie, two 20-qubit QCs and three 5-qubit QCs. As I understand it, all of these are available for any IBM Q registered user to run (not sure about the cost though).

The new IBM Q System

The new IBM Q System One 53-qubit system is arrayed with qubits that mostly have 2 connections each, with only a select few (11) having 3 connections. This follows the path of their earlier Q systems, which had similar levels of connectivity between qubits.

Quantum Supremacy vs. Quantum Volume

Rather than talking about Quantum Supremacy, IBM prefers to discuss Quantum Advantage, and for them it’s all about Quantum Volume (QV). QV provides a measure of how well quantum computers perform real work. There are many factors that go into QV, and IBM offers a benchmark in their toolkit that can be run on any gate-based quantum computer to measure it.

Current IBM Q Systems One QCs have a QV of 8 or 16. IBM claims that if they can double QV every year, they should reach Quantum Advantage in a decade (2029)!
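IBM didn’t say what QV level would count as Quantum Advantage, but the doubling claim is easy to play out, starting from today’s QV of 16:

```python
# If QV doubles every year starting from 16 (per IBM's claim):
qv = 16
for year in range(2019, 2030):
    print(year, qv)
    qv *= 2
# by 2029 that's 16 * 2**10 = 16,384
```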

They have yet to measure QV on their new, yet-to-be-built 53-qubit quantum computer. But they say, given all the factors involved, that over time the same quantum computer can have different QV. And even though IBM’s 53-qubit quantum computer will be built shortly, some of its components will likely change over time (e.g. microwave drivers) as they find better technologies to use, which will impact QV.

Qiskit, IBM’s open source, quantum computing stack

There was lots of talk about Qiskit being open source (Qiskit GitHub). This means anyone can take advantage of it to generate solutions for their own quantum computer.

It’s written in Python3 and consists of 4 components:

  • Ignis – which provides characterization tools and error mitigation tools
  • Aqua – which provides a framework for users to leverage QC to perform algorithms, and provides current QC algorithms that have been proved out to work
  • Aer – which provides special purpose (classical computing) simulators for QC algorithms to run on
  • Terra – which seems to be the low-level programming layer for IBM’s QC; presumably this would need to be customized for every quantum computer.

IBM seems to be trying to make Qiskit easy to use. They have created a YouTube video series on it, along with the Qiskit API and other educational resources, to make quantum computing on IBM Q Systems One more widely usable.

IBM also announced they will add another Quantum Computing Center at a yet-to-be-announced IBM facility in Germany. It seems that IBM wants to make quantum computing something everyone, across the world, can use.

~~~

Some would say we are very early in the use of quantum computing technology. On the other hand, Google, IBM and other companies are using it to perform real (if not theoretical) work. The cloud seems like a great vehicle for IBM to introduce this sort of technology to a wider audience.

Luckily, I have 10 more years to change out all my data encryption to a “quantum proof” encryption algorithm. Hopefully, one can be found by then.

Photo Credit(s):

Screen shots from IBM’s Q Systems One briefing...

The race for quantum supremacy

Last week, there were a number of reports of Google having achieved quantum supremacy (e.g., see Rumors hint of Google achieving quantum supremacy, ScienceNews). The technical article behind the reports was taken down shortly after it appeared. It’s unclear why, but reactions from Google’s quantum computing rivals have ranged from calling this a true breakthrough to dismissing it as just a publicity stunt.

Using BING, I was able to find an (almost readable) copy of the paper on FileBin and cached copy on SpaceRef.

What Google seems to have done is implement, in a new quantum processor, a (pseudo) random number generator, a random sampling algorithm and an algorithm that verifies randomness based on the sample. The pseudo-random number generator was a quantum circuit, and the sampling another quantum circuit. Randomness verification was done by computing the probability of a bit string (random number) appearing in a random number sequence sample and then verifying that that bit string did appear that often. This is a simple problem for classical computers to solve for a few random numbers, but it gets increasingly hard the more random numbers you throw at it.

What is quantum supremacy?

Essentially, quantum supremacy states that a quantum computer can perform a function much faster than a standard digital computer. By much faster we are talking many orders of magnitude faster. The articles noted above state that the Google quantum computer ran the algorithm in minutes, where it would take 1M cores 10,000 years to perform on digital processors.

Quantum supremacy applies to one algorithm at a time. There’s a class of problems considered NP (non-deterministic polynomial [time]) complete, which can only be solved on classical digital computers using brute-force algorithms (e.g., algorithms that check every possible path), and that can take forever for large problem spaces (see NP-completeness, Wikipedia).

Quantum computing, because of its inherent quantum characteristics (superposition and entanglement), should be able to test every path of an NP complete problem in parallel. As a result, a quantum processor should be able to solve any NP complete problem in a single pass.

Quantum supremacy simply states that such a quantum computer and algorithm have been implemented to solve an NP complete problem. The articles say that Google has created a quantum computer and algorithm to do just that.

Google’s Sycamore quantum computer

Google’s research was based on a new quantum computing chip, called the Sycamore processor, which supports a two-dimensional array of 54 (transmon) qubits, where each qubit is “tunably” coupled to its 4 nearest neighbors. In the figure, the “A” qubit can be coupled to its 4 nearest-neighbor “a” qubits via tuning. One qubit failed, leaving 53 qubits for the computation.

The system is cooled to reduce noise and to enable superconductivity. The qubits are written by two mechanisms: a microwave drive to excite the qubit and a magnetic flux inducer to control its frequency. Qubits are read via a linear resonator connected to each qubit. All qubits can be read simultaneously via a frequency multiplexing approach and are digitized at 8 bits @1GS/s (giga-samples/second, so 1GB/sec).

The Sycamore circuits used up to 1,113 single-qubit gates and 430 two-qubit gates, which inherently constrains the algorithmic complexity or problem state space it can handle.

Qubit state, tunable couplers and quantum processor controls are written via 277 digital-to-analog converters with 14-bit words @1GS/s (or ~1.75GB/s each). A single-qubit gate can be driven by a 25ns microwave pulse (with no qubit-to-qubit coupling active).

A key engineering advance of the Sycamore system was achieving high-fidelity single- and two-qubit operations, which means the gates complete correctly with very high probability.

What the Sycamore quantum computer accomplished

They generated 30M random numbers (with m=20 bits per random number), then randomly sampled 1M of them, computed the probability of their occurrence and verified that those numbers were seen about that often (within some error threshold). The random number generation, sampling and probability computation/verification took 200 seconds on the Sycamore system. They said that most of that activity was input and output, and that the Sycamore quantum computer was only busy for 30 seconds working on the problem.

In comparison, the Google team benchmarked GCP processors and the Oak Ridge National Labs Summit supercomputer complex doing a portion of the algorithm using standard digital functionality, and concluded it would have taken 10K years on 1M processing cores to perform the same function using digital computers.

Although the problem of generating vast numbers of pseudo-random numbers, randomly sampling them and verifying they meet proper probability distributions may not be that useful, it does represent an NP-complete problem. Many NP-complete problems can be mapped onto one another, so there’s a good chance that similar NP complete problems may also be solvable with the Sycamore quantum computer.

1M cores for 10K years seems much too long

However, it doesn’t seem to me that it should take that long to generate pseudo-random numbers, sample them and validate that their probabilities are correct.

It took me ~1 sec to generate 10,000 (probably 32 bit but could be 64 bit) =RAND() numbers in Microsoft Excel, so 30M would take about 3K seconds, or roughly 50 min, on my desktop. Generating a random sample of 1M should take another 100 seconds (to generate another 1M random numbers), creating indexes out of these another 100 seconds or so, then accessing these 1M random numbers in the 30M random number list should take another 100 seconds (maybe), so add another 5 min to that. Verifying that the sampled numbers met the random number probabilities they should is another question, but here we are only dealing with 1M pseudo-random numbers, so it shouldn’t take that long to compute their frequency of occurrence and probability to validate randomness. Yes, Excel’s =RAND() function is probably not the best pseudo-random number generator, but it’s typical of what exists for digital computers, and even if it took twice as long to generate better pseudo-random numbers, it’s still not anywhere near 1M cores for 10K years.
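Out of curiosity, here’s the same back-of-the-envelope in numpy rather than Excel. This is not the benchmark the Google team ran against, just my own sanity check of the classical side; times will vary by machine.

```python
# Rough numpy version of the estimate above: generate 30M 20-bit pseudo-random
# numbers, sample 1M of them and count frequencies. Not Google's benchmark.
import time
import numpy as np

rng = np.random.default_rng()
start = time.time()
numbers = rng.integers(0, 2**20, size=30_000_000)           # 30M 20-bit values
idx = rng.choice(numbers.size, size=1_000_000, replace=False)
sample = numbers[idx]
values, counts = np.unique(sample, return_counts=True)      # observed frequencies
print(f"generate + sample + count: {time.time() - start:.1f}s")
# each 20-bit value should appear ~1_000_000 / 2**20 ≈ 0.95 times on average;
# a chi-square test of `counts` against that expectation would complete the check
```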

But then again, I’m no numerical specialist. Even at 30 seconds vs. ~3.3K seconds plus some unknown time for the probability calculation and frequency verification (call it 5X, or ~16.5K seconds), that makes the Sycamore quantum computer ~550X faster, or ~2.5 orders of magnitude faster.

In my mind, Sycamore and its quantum algorithm are more of a proof of concept. The Sycamore processor proves that a) a quantum computer with 53 qubits can solve some real-world NP-complete problems and b) the more qubits the quantum computing industry provides, the more sophisticated the NP complete problems it will be able to solve. I’m just not sure this is actually a 16 orders of magnitude speedup.

[My math was off but I tried to fix it in these last three paragraphs, Ed.]

~~~~

Let’s see: if a 20 bit NP complete problem can be solved in a quantum computer with 53 qubits, how many qubits would it take to solve a 256 bit NP complete problem (e.g., cracking AES-256 bit encryption)? Scaling linearly (256 × 53/20), maybe ~680 qubits. Phew, my AES encrypted data are safe, at least for the moment.

Photo Credit(s): From ScienceNews article about achieving quantum supremacy

From SpaceRef cached copy of report

Quantum computing NNs

As many who have been following our blog know, AI, Machine Learning (ML) and Deep Learning (DL) have become much more mainstream (e.g. see our Learning machine learning – part 3, & Industrial revolution deep learning & NVIDIA’s 3U supercomputer, AI reaches a crossroads posts), and the AI community has anointed DL as the best approach for pattern recognition, classification, and prediction, though it has applicability beyond that.

One problem with DL has been its energy cost. There have been some approaches to address this, but none have been entirely successful just yet (e.g. see our Intel new DL Boost, New GraphCore GC2 chips, and AI processing at the edge posts). At one time neuromorphic hardware seemed to be the answer, but I’ve become disillusioned with that technology over time (see our Are neuromorphic chips a dead end post).

This past week we learned of a whole new approach, something called a Quantum Convolutional NN or QCNN (see PhysOrg Introducing QCNN, pre-print of Quantum CNNs, presentation deck on QCNNs, Nature QCNN paper paywall).

Some of you may not know that convolutional neural networks (ConvNets) are the latest in a long line of DL architectures focused on recognizing patterns and classifying data. DL ConvNets can be used to recognize speech, classify photo segments, analyze ticker tapes, etc.

But why quantum computing

First off, quantum computing (QC) is a new, leading-edge technology targeted at solving very hard (NP complete, Wikipedia) problems, like cracking public key encryption, solving the traveling salesperson problem and assembling an optimal Bitcoin block (see List of NP complete problems, Wikipedia).

QC utilizes quantum mechanical properties of the universe to solve these problems without resorting to brute-force searches, such as going down every path in the traveling salesperson problem (see our QC programming and QC at our doorsteps posts).

At the moment, IBM, Google, Intel and others are all working on QC and trying to scale it up by increasing the number of qubits (quantum bits) their systems support. The more qubits, the more quantum storage you have, and the more sophisticated the NP complete problems one can solve. Current qubit counts include 72 qubits for Google, 42 for Intel, and 50 for IBM. Apparently not all qubits are alike, and they don’t last very long, ~100 microseconds (see Timeline of QC, Wikipedia).

What’s a QCNN?

What’s new is the use of quantum computing circuits to create ConvNets. Essentially the researchers have created a way to apply AI DL (ConvNet) techniques to quantum computing data (qubits).

Apparently there are QC [qubit] phases that need to be recognized, and what better way to do that than to use DL ConvNets. The only problem is that performing DL on QC data with today’s tools would require reading out the qubit phases (itself a pattern recognition problem), converting them to digital data, and then processing that via CPU/GPU DL ConvNets, a classic chicken-or-egg problem. But with QCNNs, one has a DL ConvNet entirely implemented in QC.

DL ConvNets are typically optimized for a specific problem, varying layer counts, nodes per layer, node connectivity, etc. QCNNs match this and also come in various sizes. Above is a QCNN circuit optimized to recognize a symmetry-protected topological (SPT) phase (see the pre-print article).

I won’t go into the QC technology used in any detail (as I barely understand it), but the researchers have come up with a way to map DL ConvNets into QC circuitry. Assuming this all works, one can then use QC to perform DL pattern recognition on qubit data.

~~~~

Comments?
