Phonons, the next big technology underpinning integrated circuits

Often science and industry seem to advance by investigating phenomena that are side effects of something else we are trying to accomplish. Optical fibers have been in use for decades now and have always had a problem called Brillouin scattering, where light photons interact with the surrounding cladding and generate small vibrations, or packets of sound. This scattering disperses light along the length of the fibre and creates sound packets called phonons, aka hypersound.

As a recent article I read in Science Daily (Wired for sound a third wave emerges in integrated circuits) describes it, the first wave of ICs was based on electronics and was developed after WW II, the second wave was based on photons and came about largely at the start of this century, and now the third wave is emerging based on sound, phonons.

The research team at the University of Sydney Nano Institute has published over 70 papers on Brillouin scattering. Prof. Benjamin J. Eggleton recently published a summary of their research in a Nature Photonics paper (Brillouin integrated photonics, behind a paywall), but one can download the deck he presented as a summary of the paper at an OSA Optoelectronics Technical Group webinar last year.

It appears as if Brillouin scattering technology is particularly useful for (microwave) photonics computing. In the Science Daily article, the professor says that the big advance here is in the control of light and sound over small distances. In the same article, the professor goes on to say that “Brillouin scattering of light helps us measure material properties, transform how light and sound move through materials, cool down small objects, measure space, time and inertia, and even transport optical information.”

I believe that, from a photonics IC perspective, transforming how light, other electromagnetic radiation, and sound move through materials is the exciting part. New technology for measuring material properties, cooling down small objects, and measuring space, time and inertia is also of interest, but not as important in our view.

What’s a phonon

As discussed earlier, phonons are packets of sound vibration above 100MHz that come about from optical photons interacting with the cladding. As photons bounce off the cladding, they generate vibrations within the material, so the bouncing creates coupled optical and acoustic waves; the acoustic packets are the phonons.
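
For a rough sense of the frequencies involved, the textbook back-scattered Brillouin frequency shift in a fiber (standard background, not something from the article) is:

$$ \nu_B = \frac{2\, n\, v_a}{\lambda} $$

where n is the refractive index, v_a the acoustic velocity in the material and λ the pump wavelength. Plugging in typical silica-fiber values (n ≈ 1.45, v_a ≈ 5960 m/s, λ = 1550 nm) gives ν_B ≈ 11 GHz, well into the hypersound range.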

There’s been a lot of research, which still goes on, into how to create stimulated Brillouin scattering (SBS) on silicon CMOS devices, but lately researchers have found an effective hybrid (silicon, SiO2 & As2S3) material combination that can generate SBS at will, at chip scale.

What can you do with SBS phonons

Essentially, SBS phonons can be used to measure, monitor, alter and increase the flow of electromagnetic (EM) waves in a substance or waveguide. I believe this can be light, microwaves, or just about anything on the EM spectrum. Nothing was mentioned about X-rays, but they're just another band of EM radiation.

With SBS, one can supply microwave filters, phase shifters and sources, recover the carrier signal in coherent optical communications, store (or delay) light, create lasers, and measure optical material characteristics at the sub-mm scale. Although the article discusses cooling down materials, I didn't see anything in the deck describing this.

As SBS technologies are optical-acoustic devices, they are immune to EMI (electromagnetic interference) and EMPs (electromagnetic pulses), and they consume less energy than electronic circuits performing similar functions.

We’ve talked about photonic computing before (see our Photonic computing, seeing the light of day post). But to make photonics a real alternative to electronic computing, it needs a lot of optical management devices. We discussed a couple in the blog post mentioned above, but SBS opens up another dimension of ways to control photonic data flow and processing.

It's unclear why so much of the research into SBS comes out of Australian universities. However, their research is being (at least partially) funded by a number of US DoD entities.

It’s unclear whether, in the long run, SBS will ultimately be one of those innovations that enables a new generation of (photonic) IC technologies. But the team has shown that with SBS they can do a lot of useful work in optical/microwave transmission, storage and measurement.

It seems to me that to construct full photonic computing, we need an optical DRAM device. Storing light (with SBS) is a good first step, but any optical store/memory device needs to be randomly accessible, store Kb, Mb or Gb of optical data in chip-sized areas, and persist (dynamic refreshing is ok).

The continued use of electronic DRAM for this would make the devices susceptible to EMI and EMP, and consume more energy. Maybe something could be done with an all optical 3DX that would suffice as a photonic memory device. Then it could be called Optical DC PM.

So, ICs with electronics, photonics and now phononics are in our future.

Comments?

Photo Credits:

Shedding light on all optical neural networks

Read a couple of articles in the past week or so on all optical neural networks (see All optical neural network (NN) closes performance gap with electronic NN and New design advances optical neural networks that compute at the speed of light using engineered matter).

All optical NN solutions operate faster and use less energy for inferencing than standard all electronic ones. However, in reality they are more of a hybrid solution, as they depend on standard ML/DL to train a NN. They then use 3D printing and other lithographic processes to create a series of diffraction layers for an all optical NN that matches the trained NN.

The latest paper (see: Class-specific Differential Detection in Diffractive Optical Neural Networks Improves Inference Accuracy) describes a significant advance beyond the original solution (see: All-Optical Machine Learning Using Diffractive Deep Neural Networks, Ozcan’s original paper).

How (all optical) Diffractive Deep NNs (DDNNs) work for inferencing

In the original Ozcan discussion, a DDNN consists of a coherent light source (laser), an image, a bunch of refractive and reflective diffraction layers and photo detectors. Each neural network node is represented by a point (pixel?) on a diffractive layer. Node-to-node connections are represented by the light's path moving through the diffractive layer(s).

In Ozcan’s paper, the light flowing through each diffraction layer is modified and passed on to the next diffraction layer. This passing of the light through a point on the diffraction layer is equivalent to multiplying by that node's trained weight (the NN node's FP multiplier) in the trained NN.

The challenge previously was how to fabricate diffraction layers, which took a lot of hand work. But with the advent of 3D printing and other lithographic techniques, creating a diffraction layer is nowadays relatively easy to do.

In DDNN inferencing, one exposes (via a coherent beam of light) the first diffraction layer to the input image data; that image is transformed into a different light pattern, which is sent down to the next layer. At some point the last diffraction layer converts the light hitting it into classification patterns, which are then detected by photo detectors. Alternatively, the classification pattern can be sent down an all optical computational path (see our Photonic computing sees the light of day post and Photonic FPGAs on the horizon post) to perform some function.
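
To make the optical data path concrete, below is a hedged NumPy sketch of a diffractive forward pass: light propagates between layers (here via an FFT-based angular spectrum method), each layer applies a phase mask playing the role of its trained weights, and the detector patch catching the most light gives the class. Every numeric value here (grid, wavelength, spacing, random masks, detector layout) is a placeholder of my own, not anything from the papers.

```python
# Minimal sketch of a DDNN forward pass (inference only).
import numpy as np

def propagate(field, z, wavelength, pixel):
    """Free-space propagation between layers via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(kz2, 0.0))    # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def ddnn_infer(image, phase_masks, detector_masks, wavelength, pixel, z):
    field = image.astype(complex)                     # coherent illumination of the input
    for phi in phase_masks:                           # each diffraction layer ...
        field = propagate(field, z, wavelength, pixel)
        field = field * np.exp(1j * phi)              # ... multiplies by its trained "weights"
    field = propagate(field, z, wavelength, pixel)    # final hop to the detector plane
    intensity = np.abs(field) ** 2                    # what the photo detectors measure
    scores = [intensity[m].sum() for m in detector_masks]
    return int(np.argmax(scores))                     # class = brightest detector patch

# Toy usage: 5 random layers standing in for trained ones, 10 detector patches.
rng = np.random.default_rng(0)
masks = [rng.uniform(0, 2 * np.pi, (128, 128)) for _ in range(5)]
detectors = []
for k in range(10):
    d = np.zeros((128, 128), dtype=bool)
    d[60:68, 8 + 12 * k: 16 + 12 * k] = True
    detectors.append(d)
image = np.zeros((128, 128)); image[40:90, 40:90] = 1.0
print(ddnn_infer(image, masks, detectors, wavelength=7.5e-4, pixel=4e-4, z=0.03))
```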

In the original paper, they showed results for a DDNN implementing a completely connected, 5 layer NN, with 0.2M neurons and 8B connections in total. They also showed results from a sparsely connected, 5 layer NN, with 0.45M neurons and <0.1B connections.

Note that there are significant power advantages in exposing an image to a series of diffraction gratings and detecting the classification with a photo detector vs. an all electronic NN, which takes an image, uses photo detectors to convert it into an electrical (pixel series) signal and then processes it through NN layers, performing FP arithmetic at each layer node until one reaches the classification layer.

Furthermore, the DDNN operates at the speed of light. The all electronic network operates at FP arithmetic speed times the number of layers, and that's only if it can all be done in parallel (with GPUs and 1000s of computational engines). If it can't be done in parallel, one would need to multiply by the number of nodes in each layer as well. Let's just say this is much slower than the speed of light.
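
As a rough back-of-envelope (my numbers, not the papers'), compare light crossing a ~5 cm diffractive stack with an electronic NN evaluating the 8B connections of the fully connected example at a sustained 10 TFLOP/s:

$$ t_{optical} \approx \frac{0.05\ \text{m}}{3\times 10^{8}\ \text{m/s}} \approx 0.17\ \text{ns}, \qquad t_{electronic} \approx \frac{2\times 8\times 10^{9}\ \text{FLOP}}{10^{13}\ \text{FLOP/s}} \approx 1.6\ \text{ms} $$

That's roughly seven orders of magnitude, before even counting the electronic side's memory traffic.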

Improving DDNN accuracy

The team at UCLA and elsewhere took on the task of improving DDNN accuracy by using more of the optical technology and techniques available to them.

In the new approach, they split the image's optical data path to create positive and negative classifiers, and use a differential detection step at the end to determine the image's classification.
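
A minimal sketch of the differential detection idea, in the style of the ddnn_infer sketch above; the split into two optical paths and the per-class pairing of "positive" and "negative" detectors is illustrative, not the paper's actual optical layout:

```python
import numpy as np

def differential_classify(intensity_pos, intensity_neg, pos_masks, neg_masks):
    """Per-class score = (positive-path detector reading) - (negative-path reading)."""
    scores = []
    for p_mask, n_mask in zip(pos_masks, neg_masks):
        s_pos = intensity_pos[p_mask].sum()   # light the positive path routes to this class
        s_neg = intensity_neg[n_mask].sum()   # light the negative path routes to this class
        scores.append(s_pos - s_neg)          # the differential signal
    return int(np.argmax(scores))
```

Here intensity_pos and intensity_neg would be the detector-plane intensities produced by the two split optical paths.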

It turns out that the new DDNN performed much better than the original DDNN on standard MNIST, Fashion MNIST and another standard AI benchmark.

DDNN inferencing advantages, disadvantages and use cases

Besides the obvious power efficiencies and speed efficiencies of optical DDNN vs. electronic NNs for inferencing, there are a few other advantages:

  • All optical data paths are less noisy – In an electronic inferencing path, each transformation of an image to a pixel file will add some signal loss. In an all optical inferencing engine, this would be eliminated.
  • Smaller inferencing engine – In an electronic inferencing engine one needs CPUs, memory, GPUs, PCIe busses, networking and all the power and cooling to make it work. For an all optical DDNN, one needs a laser, diffraction layers and a set of photo detectors. Yes, there's some electronics involved, but not nearly as much as in an all electronic NN. And an all electronic NN with 0.5M nodes and 5 layers with 0.1B connections would take a lot of memory and compute to support. Their DDNN to perform this task took up about 9 cm (3.6″) squared by ~3 to 5 cm (1.2″-2.0″) deep.

But there are some problems with the technology.

  • No re-training or training support – there's almost no way to re-train the optical DDNN without re-fabricating the DDNN diffraction layers. I suppose additional layers could be added on top of or below the existing layers, sort of like a corrective lens. And if there were some way to (chemically) develop diffraction layers during training steps, that could provide an all optical DL data flow.
  • No support for non-optical classifications – there’s much more to ML DL NN functionality than optical classification. Perhaps if there were some way to transform non-optical data into optical images then DDNNs could have a broader applicability.

The technology could be very useful in camera, lidar, sighting scope, telescope and satellite image classification activities. It could also potentially be used in heads-up displays to identify items of interest in the optical field.

It would also seem easy to adapt DDNN technology to classify analog sensor data as well. It might also lend itself to use in space, at depth and in other extreme environments where all electronic NN gear might not survive for very long.

Comments?

Photo Credit(s):

Figure 1 from All-Optical Machine Learning Using Diffractive Deep Neural Networks

Figure 2 from All-Optical Machine Learning Using Diffractive Deep Neural Networks

Figure 2 from Class-specific Differential Detection in Diffractive Optical Neural Networks Improves Inference Accuracy

Figure 3 from Class-specific Differential Detection in Diffractive Optical Neural Networks Improves Inference Accuracy

Where should IoT data be processed – part 1

I was at FlashMemorySummit 2019 (FMS2019) this week and there was a lot of talk about computational storage (see our GBoS podcast with Scott Shadley, NGD Systems). There was also a lot of discussion about IoT and the need for data processing done at the edge (or in near-edge computing centers/edge clouds).

At the show, I was talking with Tom Leyden of Excelero and he mentioned there was a real need for some insight on how to determine where IoT data should be processed.

For our discussion let’s assume a multi-layered IoT architecture, with 1000s of sensors at the edge, 100s of near-edge processing/multiplexing stations, and 1 to 3 core data center or cloud regions. Data comes in from the sensors, is sent to near-edge processing/multiplexing and then to the core data center/cloud.

Data size

Dans la nuit des images (Grand Palais) by dalbera (cc) (from flickr)

When deciding where to process data, one key aspect is the size of the data. This is typically in GB or TB but, given today's world, can be PB as well. This lone parameter has multiple impacts and affects many other considerations, such as the cost and time to transfer the data, the cost of data storage, the amount of time to process the data, etc. All of these sub-factors depend on the size of the data to be processed.

Data size can be the largest single determinant of where to process the data. If we are talking about GB of data, it could probably be processed anywhere from the sensor edge, to near-edge station, to core. But if we are talking about TB the processing requirements and time go up substantially and are unlikely to be available at the sensor edge, and may not be available at the near-edge station. And PB take this up to a whole other level and may require processing only at the core due to the infrastructure requirements.
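
As a toy illustration of using data size as the first-cut placement factor, here's a hedged sketch; the tier names and thresholds are invented for the example, not validated rules:

```python
from enum import Enum

class Tier(Enum):
    SENSOR_EDGE = "sensor edge"
    NEAR_EDGE = "near-edge station"
    CORE = "core data center/cloud"

GB, TB = 10**9, 10**12

def placement_by_size(data_bytes: int) -> Tier:
    """First-cut placement: the bigger the data, the higher up the hierarchy it goes."""
    if data_bytes < 100 * GB:
        return Tier.SENSOR_EDGE    # small enough for edge devices to handle
    if data_bytes < 10 * TB:
        return Tier.NEAR_EDGE      # too big for sensors, workable at a station
    return Tier.CORE               # PB-scale needs core/cloud infrastructure

print(placement_by_size(5 * GB))   # Tier.SENSOR_EDGE
print(placement_by_size(2 * TB))   # Tier.NEAR_EDGE
```

The factors below (criticality, infrastructure, pipes, compliance) would then override or refine this first cut.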

Processing criticality

Human or machine safety may depend on quick processing of sensor data, e.g., in a self-driving car, on a factory floor, for flood gauges, etc. In these cases, some amount of data processing (sufficient to ensure human/machine safety) needs to be done at the lowest point in the hierarchy that has the processing power to perform this activity.

This could be in the self-driving car or the factory automation that controls a mechanism. Similar situations would probably apply to robots and auto pilots. Anywhere an IoT sensor array is used to control an entity that could jeopardize human life or the safety of machines, safety-level processing needs to be done at the lowest level in the hierarchy.

If processing doesn't involve safety, then it could potentially be done at the near-edge stations or at the core.

Processing time and infrastructure requirements

Although we talked about this in data size above, infrastructure requirements must also play a part in where data is processed. Yes sensors are getting more intelligent and the same goes for near-edge stations. But if you’re processing the data multiple times, say for deep learning, it’s probably better to do this where there’s a bunch of GPUs and some way of keeping the data pipeline running efficiently. The same applies to any data analytics that distributes workloads and data across a gaggle of CPU cores, storage devices, network nodes, etc.

There's also an efficiency component to this. Computational storage is all about how some workloads can better be accomplished at the storage layer. But the concept applies throughout the hierarchy. Given the infrastructure requirements to process the data, there's probably one place where it makes the most sense to do this. If it takes 100 CPU cores to process the data in a timely fashion, it's probably not going to be done at the sensor level.

Data information funnel

We make the assumption that raw data comes in through sensors, and more processed data is sent to higher layers. This would mean at a minimum, some sort of data compression/compaction would need to be done at each layer below the core.

We were at a conference a while back where they talked about updating deep learning neural networks. It's possible that each near-edge station could perform a mini deep learning training cycle and share its learning with the core periodically, which could then send this information back down to the lowest level to be used (see our Swarm Intelligence @ #HPEDiscover post).

All this means that there's a minimal level of data processing that needs to go on throughout the hierarchy, between access point connections.

Pipe availability

binary data flow

The availability of a networking access point may also have some bearing on where data is processed. For example, a self driving car could generate TB of data a day, but access to a high speed, inexpensive data pipe to send that data may be limited to a service bay and/or a garage connection.

So some processing may need to be done between access point connections. This will need to take place at lower levels. That way, there would be no need to send the data while the car is out on the road but rather it could be sent whenever it’s attached to an access point.

Compliance/archive requirements

Any sensor data probably needs to be stored for a long time and as such will need access to a long term archive. Depending on the extent of this data, it may help dictate where processing is done. That is, if all the raw data needs to be held, then maybe the processing of that data can be deferred until it's already at the core and on its way to archive.

However, any safety oriented data processing needs to be done at the lowest level, and may need to be reprocessed higher up in the hierarchy. This would be done to ensure proper safety decisions were made. And needless to say, all this data would need to be held.

~~~~

I started this post with 40 or more factors but that was overkill. In the above, I tried to summarize the 6 critical factors which I would use to determine where IoT data should be processed.

My intent is, in a part 2 to this post, to work through some examples. If there's any one example that you feel may be instructive, please let me know.

Also, if there’s other factors that you would use to determine where to process IoT data let me know.

Smart proteins, the dawn of new intra-cell therapeutics

Saw an article the other day about Smart Cells (Incredible artificial proteins opens up potential for smart cells). There’s another paper that goes into a bit more depth than the original article (‘Limitless potential’ of artificial proteins ushers in new era of ‘smart’ cell therapies).

The two freely available articles talk about two papers in Nature (De novo design of bioactive protein switches & Modular and tunable biological feedback using a de novo protein switch, behind a paywall) that explain the technology in more detail.

Smart proteins act as switches

What the researchers have created is an artificial protein that builds a cage around some bioactive (protein based) mechanism, which can be unlocked by another protein while residing in a cell. This new protein is called LOCKR (Latching Orthogonal Cage-Key pRotein). LOCKR proteins act as switch-activated therapeutics within a cell.

In the picture above, the blue coils are the cage proteins and the yellow coil is the bioactive device (protein). Bioactive devices can be designed to degrade other proteins, modify biological processes within the cell, initiate the cell's self-destruct mechanism, etc., just about anything a protein can do within a cell.

In the second Nature paper, they discuss one example of a LOCKR protein, called degron-LOCKR which once inside a cell is used to degrade (destroy) a specific protein. The degron-LOCKR protein only activates when the other protein is active (found) within the cell and it operates only as long as that protein is in sufficient concentration.

The nice thing about the degron-LOCKR protein is that it is completely self-regulating. It only operates when the protein to be degraded exists in the cell. That protein acts as the switch in this LOCKR. Until then it remains benign, waiting for a time when the targeted protein starts to be present in the cell.

How LOCKR works

In the picture above, the cage is shown by the grey structure, the bio-active therapy is shown by the yellow structure, and the protein key is shown by the black structure. When the key is introduced into the LOCKR protein, the yellow structure is unfolded (enabled) and can then impact whatever intra-cellular process/protein it's been designed to impact.

One key attribute of LOCKR is that the bioactive device within the cage can be just about anything that works inside the cell. It could be used to create more proteins, create fewer proteins, disable proteins, and perhaps enhance the activity of other proteins.

And, both the LOCKR and the bioactive device can be designed from scratch and fabricated outside or inside the cell. Of course the protein key is the other aspect of the LOCKR mechanism that is fully determined by the designer of the LOCKR protein.

Sort of reminds me of the transistor. Both are essentially switches. For a transistor, as long as a voltage is applied, current is allowed to flow across the switch. LOCKR does something very similar, but uses a key protein and a bioactive protein, and only allows the bioactive protein to activate when the key protein is present.

We've talked extensively in the past about using DNA/cells as rudimentary computers and storage, but this takes that technology to a whole other level (please see our DNA computing series here & here, as well as our posts on DNA as storage here & here). And all that work was done without LOCKR. With LOCKR, many of these systems would be even easier to design and construct.

The articles go on to say that LOCKR unleashes the dawn of a new age of intra-cell therapeutics, with fine-grained control over when and where a particular bio-active therapy is activated within the cell.

Some questions

Some of these may be answered in the Nature papers (behind paywall), so sorry in advance, if you have access to those.

How the LOCKR protein(s) are introduced into cells was not discussed in the freely available articles. We presume that DNA designed to create the LOCKR protein could be injected into cells via a virus, added to the cell's DNA via CRISPR, or the LOCKR protein could just be injected into the cell.

Moreover, how LOCKR proteins are scaled up within the cell to be more or less active and scaled up throughout an organ to “fix” multiple cells is yet another question.

Adding artificial DNA or LOCKR proteins to cells may be easy in the lab, but putting such therapy into medical practice will take much time and effort. And any side effects of introducing artificial DNA or LOCKR proteins (not found in nature) to cells will need to be investigated. And finally how such protein technology impacts germ lines would need to be fully understood.

But the fact that the therapeutic process is only active when unlocked by another key protein makes for an intriguing possibility. You would need both the LOCKR protein and the key (unlock-er) protein to be present in a cell for the therapy to be active.

But they present one example, the degron-LOCKR, where the key seems to be a naturally active protein in a cell that needs to be degraded, not a different, artificial protein introduced into the cell. So the key doesn’t have to be an artificial protein and probably would not be for most LOCKR designed proteins.

~~~~

Not a bad start for a new therapy. It has much potential, especially if it can be scaled easily and targeted specifically. Both of which seem doable (given our limited understanding of biological processes).

Comments?

Picture Credit(s): From De novo design of bioactive protein switches article

From Limitless potential… article

Clouds, an existential threat – part 2

Recall that in part 1, we discussed most of the threats posed by clouds to both hardware and software IT vendors. In that post we talked about some of the more common ways that vendors are trying to head off this threat (for now).

In this post we want to talk about some uncommon ways to deal with the coming cloud apocalypse.

But first, just to put the cloud threat in perspective, the IT TAM is estimated, by one major consulting firm, to be ~$3.8T in 2019, with a growth rate of 3.7% Y/Y. The same number for public cloud spending is ~$214B in 2019, growing by 17.5% Y/Y. If both growth rates continue (a BIG if), public cloud services spend will constitute essentially all (~98.7%) of IT TAM ~24 years from now. Now, nobody would predict those growth rates will continue, but it's pretty evident the growth trends are going the wrong way for (non-public cloud) IT vendors.
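
For the curious, the ~24 year figure follows from finding the crossover year at those two growth rates (my arithmetic, rounded, not the consulting firm's):

$$ 214 \times 1.175^{\,n} \;\geq\; 3800 \times 1.037^{\,n} \;\Rightarrow\; n \;\geq\; \frac{\ln(3800/214)}{\ln(1.175/1.037)} \approx \frac{2.88}{0.125} \approx 23\ \text{to}\ 24\ \text{years} $$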

There are probably an infinite number of ways to deal with the cloud. But outside of the common ones we discussed in part 1, only a dozen or so seem feasible to me, and even fewer are fairly viable for present IT vendors.

  • Move to the edge and IoT.
  • Make data center as easy and cheap to use as the cloud
  • Focus on low-latency, high data throughput, and high performing work and applications
  • Move 100% into services
  • Move into robotics

The edge has legs

Probably the first one we should point out would be to start selling hardware and software to support the edge. Speaking in financial terms, the IoT/edge market is estimated to be $754B in 2019, growing at over a 15.4% CAGR.

So we are talking about serious money. At the moment the edge is a very diverse environment, from cameras and sensors to moveable devices. And everybody seems to be in on the act: big industrial firms, small startups and everyone in between. Given this diversity, it's hard to see how IT vendors could make a decent return here. But given its great diversity, one could say it's ripe for consolidation.

And the edge could use some reference architectures, where there are devices at the extreme edge, concentrators at the edge, higher-level concentrators at nodes, more at the core, etc. So there's a look and feel to it that seems like RO/BO (remote office/branch office) central core, hub-and-spoke architectures, only on steroids, with leaf proliferation that can't be stopped. And all that data coming in has to be classified, acted upon and understood.

There are a few vendors going into the edge/IoT in a small way, but no one vendor personifies this approach more than Hitachi Vantara. The Hitachi family of companies has a long and varied history in OT (operational technology), or industrial technology. And over the last many years, HDS, and now Hitachi Vantara, has been pivoting the organization to focus more on IoT and edge solutions, and seems to have made IoT, OT and the edge a central part of its overall strategy.

Hitachi Vantara has the advantage of being into industrial technology in a big way, so the products they create already operate in factories, rail yards, ship yards and other industrial sites around the world. So adding IoT and edge capabilities to their portfolio is a natural extension of this expertise.

However, Hitachi Vantara seems to be focusing on the software side of the edge. This may be an artifact of Hitachi family-of-companies dynamics, but it seems to be leaving some potential sales on the table.

There are plenty of other big industrial suppliers in this IoT/edge field, but none can claim the IT end of the market the way Hitachi Vantara can. Some sort of combination of a large IT vendor and a large industrial firm could potentially do the same.

So there's plenty of money to be made in IoT/edge hardware and software; one just has to go after it in a big way, and there's lots of competition. But all the competition seems to be on the same playing field (unlike the public cloud playing field).

Getting to “data center as a cloud”

There are a number of reasons why customers migrate work to the cloud: ease of use, ease of storage, ease of scale, access to myriad applications, access to multi-regional data centers, and the OPex (pay as you go, rather than CAPex) financial model, to name just a few.

There's nothing that says much of this couldn't be provided in the data center. It's mostly just a lot of open source software and a lot of common hardware. IT vendors can do this sort of work if they put their vast resources behind it.

From the pure software side, there are a couple of companies trying to do this, namely VMware and Nutanix, but (IBM) Red Hat, (Dell) Pivotal, HPE SimpliVity and others are also going after this approach.

Hardware-wise, CI and HCI seem to be rudimentary steps towards common hardware that's easy to deploy, operate and support. But these baby steps aren't enough. And delivery to deployment in weeks is never going to get them there. If Amazon can deliver books, mattresses, bicycles, etc. in a couple of days, IT vendors should be able to do the same with some select set of common hardware and have it automatically deployable in seconds to minutes once powered on.

And operating these systems has to be drastically simplified. On any public cloud there's really no tuning required, only minimal configuration, and then it's just load your data and go. Yes, there's a marketplace to select (virtual) hardware, (virtual) storage hardware, (virtual) networking hardware, a (virtual server) O/S and (virtual?) open source applications.

Yes, there's lots of software behind all that virtualization. And it's fundamentally different from today's virtualized systems. It's made to operate only on commodity hardware and only with open source software.

The financial model is less of a problem. Today, I find many vendors offering their hardware (and some software) on an OPex, pay as you go model. More of this needs to be made available, but the IT vendors see this and are already aggressively moving in this direction.

The clouds are not standing still, what with Azure Stack, AWS and GCP all starting to provide versions of their stack on prem in the enterprise. This looks to be a strategic battleground between the clouds and the IT vendors.

Making everything IT can do in the cloud available in the data center, with common hardware and software, and with the cloud's speed and ease of deployment, operations and support (maintenance), should be on every IT vendor's to-do list.

Unfortunately, this is not going to stop the public cloud completely, but it has the potential to slow the growth rate. But time is short, momentum has moved to the public cloud and I don’t (yet) see the urgency of the IT vendors to make this transition happen today.

Focus on low-latency, high data throughput and high performance work

This is somewhat unfair, as all the IT vendors are already involved in these markets in a big way. But there are some trends here that indicate this low-latency market will become even more important over time.

For example, more and more of commercial IT is starting to take advantage of big data and AI to profit from all their data. And big science is starting to migrate to IT, where massive data flows and data analysis tools are becoming important to the data center. If anything, the emergence of IoT and the edge will increase data flows that need to be analyzed, understood, and ultimately dealt with.

DNA genomics may be relegated to big pharma/medical, but 3D visualization is becoming so mainstream that I can do it on my desktop. These sorts of things were relegated to HPC/big science just a decade or so ago. What tools exist in HPC today that the IT data center of the future will deem a necessary part of its application workload?

Is this a sizable TAM? Probably not today. In all honesty, it's buried somewhere in the IT TAM above. But it can be a growing niche, where IT vendors can stake out a defensive position that the cloud may have a tough time dislodging.

I say the cloud "may have trouble dislodging" because nothing says that the entire data flow/workflow couldn't migrate to the cloud, if the responsiveness were available there. But, if anything, (guaranteed) responsiveness is one of the few Achilles heels of the public cloud. Security may be the other one.

We see IBM, Intel, and a few others taking this space seriously. But all IT vendors need to see where they can do better here.

Focus on services

This is not really out-of-the-box thinking. Some (old) IT vendors have been moving into services for over 50 years now; others are just seeing there's money to be made here. Just about every IT vendor has deployment & support services, and most hardware vendors have break-fix services.

But standalone IT services are more specialized, and in the coming cloud apocalypse, services will revolve around implementing cloud applications and functionality, migrating work to the cloud or (rarely, in the future) moving it back on prem.

TAM for services is buried in the total IT spend, but industry analysts estimate that total worldwide TAM for IT services will be about $1.0T in 2019, growing at a 2.6% CAGR.

So services are already a significant portion of IT spend today, and will probably not be much impacted by the move to the cloud. I'd say that because implementing applications and services will still be needed as long as the cloud exists. Yes, it may get simpler (better frameworks, containerization, systemization), but it won't ever go away completely.

Robots, the endgame

Ok, laugh now. I understand it's a big ask to think that robot spending could supplement and maybe someday surpass IT spending. But we all have to think long term. What is a self-driving car but a robotic data center on wheels, generating TB of data every day it's driven?

Robots over the next century will invade every space, becoming ever-present and ever necessary to the functioning of the modern world. They will have sophisticated onboard computing, motors, servos and sensors, plus onboard and backend processing requirements. The real low-latency workload of the future will be in the (computing) minds of robots.

Even if the data center moves entirely to the cloud, all robotic computation will never reside there because A) it’s too real time and B) it needs to operate well even disconnected from the Internet.

Is all this going to happen in the next 10 or 20 years? Maybe not, but 30 to 50 years out this world will have a multitude of robots operating within it.

Who’s going to develop, manufacture, support and sustain these mobile computing data centers on wheels, legs, slithering and flying bodies?

I would say the IT vendors of today are uniquely positioned to dominate this market. Here too, the industry is very fragmented today. There are a few industrial robotics companies, and just about every major auto manufacturer is going after self-driving cars. And there are many bit players today. So it's ripe for disruption and consolidation.

Yet, none of the major IT vendors seem to be going after this. Ok Amazon (hardware & software) and Microsoft (software) have done work in this arena. If anything this should tell IT vendors that they need to start working here as well.

But alas, none have taken up the mantle. In the meantime, robot startups are biting the dust left and right, trying to gain market traction.

~~~~

That seems to be about it for the major viable out of the box approaches to the public cloud threat. I have a few other ideas but none seem as useful as the above.

Let me know what you think.

Picture credit(s):

Are neuromorphic chips a dead end?

Read a recent article in IEEE Spectrum about Intel's Pohoiki Beach neuromorphic system and their Loihi chips, which have scaled up to 8M neurons (Intel's neuromorphic system hits 8M neurons). In the last month or so I wrote about two startups, one of which seemed (?) to be working on neuromorphic chip development (see my Photonics computing sees the light of day post).

I've been writing about neuromorphic chips since 2011, 8 long years (see my IBM SyNAPSE chip post from 2011, or search my site for "neuromorphic"), and none have successfully reached the market. The problems with neuromorphic architectures have always been twofold: scaling AND software.

Scaling up neurons

The human brain has ~86B neurons (see the Wikipedia human brain article). So, 8 million neuromorphic neurons is great, but it's about 10,000X too few. And that doesn't count the connections between neurons. Some human neurons have over 1000 connections to other nerve cells (can't seem to find this reference anymore?).

Wikimedia commons (481px-Chemical_synapse_schema_cropped)

To get from a single chip with 125K neurons to their 8M neuron system, Intel took 64 chips and put them on a couple of boards. To scale that to 86B or so would take ~690,000 of their neuromorphic chips. Now, no one can say whether some level well below 86B neuromorphic neurons could support a useful AI solution, but the scaling problem still exists.

Then there's the problem of synapse connections between neuromorphic neurons. The article says that Loihi chips are connected in a hierarchical routing network, which implies to me that there are switches and master switches (and maybe a really big master switch) in their 8M neuromorphic neuron system. Adding another 4 orders of magnitude more neuromorphic neurons to this may be impossible, or at least may require another 4 sets of progressively larger switches to be added to their interconnect network. There's a question of how many hops and how much resultant latency are involved in connecting two neuromorphic neurons together, but that seems to be the least of the problems with neuromorphic architectures.

Missing software abstractions

The first time I heard about neuromorphic chips I asked what the software looks like and the only thing I heard was that it was complex and not very user friendly and they didn’t want to talk about it.

I keep asking about software for neuromorphic chips and still haven't gotten a decent answer. So, what's the problem? In today's day and age, software is easy to do, relatively inexpensive to produce and can range from spaghetti code to hierarchical masterpieces, so there's plenty of room to innovate here.

But whenever I talk to engineers about what the software looks like, it almost seems like a software version of an early plugboard unit-record computer (essentially card sorters). Only instead of wires, you have software neuromorphic network connections, and instead of electro-magnetic devices, one has spiking neuromorphic neurons.
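
To make the plugboard comparison concrete, here's a hedged sketch of a leaky integrate-and-fire neuron, roughly the level of primitive today's neuromorphic toolchains expose; the constants, and the idea of hand-wiring many of these together, are illustrative only, not Loihi's actual programming model:

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential leaks each step,
    integrates the input, and emits a spike (then resets) on crossing threshold."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v = leak * v + i_t        # leak, then integrate this time step's input
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset
        else:
            spikes.append(0)
    return spikes

# "Programming" a network today means hand-wiring many of these together with
# per-synapse weights and delays, the neuromorphic analog of plugboard wires.
rng = np.random.default_rng(1)
print(lif_neuron(rng.uniform(0.0, 0.5, 50)))
```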

The way we left plugboards behind was by building up hardware abstractions such as adders, shifters, multipliers, etc. and moving away from punch cards as a storage medium. Somewhere along this transition, we created programming languages like (macro) assemblers, COBOL, FORTRAN, LISP, etc. It's the software languages that brought computing out of the labs and into the market.

It’s been at least 8 years now, and yet, no-one has built a spiking neuromorphic computer language yet. Why not?

I think the problem is that there's no level of abstraction above a neuron. Where are the arithmetic logic unit (ALU) or register equivalents in neuromorphic computers? They don't exist as far as I can see.

Until we can come up with some higher levels of abstraction, coding neuromorphic chips is going to be an engineering problem not a commercial endeavor.

But neuromorphism has advantages

The IEEE article states a couple of advantages for neuromorphic computing: less energy to perform inferencing (and possibly training) and the ability to train on incremental data rather than having to train across whole datasets again.

Yes these are great, but there’s a gaggle of startups (e.g., see New GraphCore GC2 chip…, AI processing at the edge, TPU and HW-SW innovation) going after the energy problem in AI DL using Von Neumann architectures.

And the incremental training issue doesn't seem any easier when you have ~86B neurons, some with 1000s of connections between them, to adjust correctly. From my perspective, its training advantage seems illusory at best.

Another advantage of neuromorphism is that it simulates the real analog logic of a human brain. Again, that's great, but a brain takes ~22 years to train (to college level). Maybe, because neuromorphic chips are electronic, training can be done 100 times faster. But there's still the software issue.

~~~~

I hate to be the bearer of bad news. There’s been some major R&D spend on neuromorphism and it continues today with no abatement.

I just think we'd all be better served figuring out how to program the beast than by spending more to develop more chip hardware.

This is hard for me to say, as I have always been a proponent of hardware innovation. It’s just that neuromorphic software tools don’t exist yet. And I’m afraid, I don’t see any easy way forward to make any progress on this.

Comments?

Picture credit(s):

Improving floating point

Read a post on Reddit this week pointing to an article from The Next Platform (New approach could sink floating point computation). It was all about changing the IEEE floating point format to something better called posits, designed by noted computer architect John Gustafson, et al. (see their paper, Beating floating point at its own game: Posit arithmetic, for more info).

The problems with standard floating point have been known since it was first defined by the IEEE in 1985. As you may recall, an IEEE 754 floating point number has three parts: a sign, an exponent and a mantissa (fraction or significand part). The exponent can be negative, and the sign bit determines whether the number itself is negative.

IEEE defined floating point numbers

The IEEE 754 standard defines the following formats (see Floating-point arithmetic, for more info); a short bit-layout sketch in Python follows the list:

  • Half precision floating point (added in 2008) has 1 sign bit, 5 exponent bits (representing exponents from 2**-14 to 2**+15) and 10 significand bits, for a total of 16 bits.
  • Single precision floating point has 1 sign bit, 8 exponent bits (2**-126 to 2**+127) and 23 significand bits, for a total of 32 bits.
  • Double precision floating point has 1 sign bit, 11 exponent bits (2**-1022 to 2**+1023) and 52 significand bits, for a total of 64 bits.
  • Quadruple precision floating point has 1 sign bit, 15 exponent bits (2**-16,382 to 2**+16,383) and 112 significand bits, for a total of 128 bits.
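
Here's the promised sketch (my own, not from the article or the standard text) that unpacks a single precision float into its three fields:

```python
import struct

def decompose_float32(x: float):
    """Split a Python float (rounded to 32 bits) into sign, unbiased exponent and fraction bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]    # raw 32-bit pattern
    sign        = bits >> 31                               # 1 bit
    exponent    = (bits >> 23) & 0xFF                      # 8 bits, stored with a bias of 127
    significand = bits & 0x7FFFFF                          # 23 fraction bits
    return sign, exponent - 127, significand

print(decompose_float32(-6.5))   # (1, 2, 5242880): -1.625 * 2**2 == -6.5
```

The same trick works for the other widths by changing the struct format and the bit offsets.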

I believe Half precision was introduced to help speed up AI deep learning training and inferencing.

Some problems with the IEEE standard include: it supports -0 and +0, which have different bit representations (though they compare as equal); it supports -∞ and +∞; and it can represent a number of distinct Not-a-Number values, or NaNs, which are not legal floating point numbers. So when performing IEEE standard floating point arithmetic, one needs to check whether a result is a NaN, which would make it an invalid result, and must be wary when comparing such values because, sigh, a NaN is not even equal to itself.
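
A small demo (mine, not the article's) of those comparison quirks:

```python
import math, struct

nan = float("nan")
print(0.0 == -0.0)                                        # True: +0 and -0 compare as equal ...
print(struct.pack(">d", 0.0) == struct.pack(">d", -0.0))  # False: ... but their bit patterns differ
print(math.copysign(1.0, -0.0))                           # -1.0: and the sign of -0 still leaks out
print(nan == nan)                                         # False: a NaN never equals anything, even itself
print(math.isnan(nan))                                    # True: so you must test for NaN explicitly
print(float("inf") == -float("inf"))                      # False: +inf and -inf are distinct values
```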

Posits to the rescue

It's all a bit technical (read the paper to find out), but posits don't support -0 and +0, just 0, and there's no -∞ or +∞ in posits either, just ∞. Posits also allow for a variable number of exponent bits (which are encoded into regime scale factor bits [whose value is determined by a useed factor] and exponent scale factor bits), which means that the number of significand bits can also vary.
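
For reference, the value of a (non-zero, non-∞) posit works out to (my paraphrase of the paper's definition):

$$ x = (-1)^{s}\times useed^{\,k}\times 2^{e}\times (1+f), \qquad useed = 2^{\,2^{es}} $$

where s is the sign bit, k the run-length-encoded regime value, e the exponent field, f the fraction and es the number of exponent bits chosen for the format. Because the regime's run length varies, precision tapers: more fraction bits are available near 1.0 and fewer at the extremes.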

So, with a 32 bit, single precision posit, the number range represented can be quite a bit larger than with single precision floating point. Indeed, with the approach put forward by Gustafson, a single 32 bit posit has more numeric range than a single precision IEEE 754 float and about half as much range as a double precision IEEE floating point number, while only using 32 bits.

Presently, there are no commercial hardware implementations of posits, but there's a lot of interest, mostly because the same number of bits can represent a lot more numeric range than equivalently sized IEEE 754 floats. And for HPC environments, AI deep learning applications, scientific computing, etc., having more numeric range (or precision) in less space means they can jam more data into the same storage, transfer more data over the same networking bandwidth and save more numbers in limited amounts of DRAM.

Although commercial implementations do not exist, there have been some FPGA simulations of posit arithmetic. Those simulations have shown it to be more energy efficient than IEEE 754 floating point arithmetic for the same number of bits. So, you can add better energy efficiency to the advantages of posit arithmetic.

Is it any wonder that HPC/big science (weather prediction, the Square Kilometer Array, energy simulations, etc.) and many AI hardware accelerator chip designers are examining posits as a potential way to boost precision, reduce storage/memory footprint and reduce energy consumption?

~~~~

Yet, standards have a way of persisting. Just look at how long the QWERTY keyboard has lasted. It was originally designed in the 1870s to slow down typing and reduce jamming, when typewriters were mechanical devices. But ever since 1934, when the Dvorak keyboard was patented, there have been much better layouts for keyboards. And there's no arguing that the Dvorak keyboard is better for typing on non-mechanical typewriters. Yet today, I know of no computer vendor that ships Dvorak-labeled keyboards. Once a standard becomes set, it's very hard to dislodge.

Comments?

Photo Credit(s):

Clouds, an existential threat to vendors – part 1

Was at a conference last month where there was discussion of the "cloudless" future. This is so wrong; clouds are a threat to every IT hardware and software vendor out there, and that threat is not going away.

The hardware side is easy to see.

The clouds' threat to IT hardware vendors

On the storage side, the big hyperscalers have adopted software defined storage from the get-go. Smaller ones are migrating that way as well, and it's even impacting data centers, as the big virtualization software vendors release more and more functionality in SwDefStorage.

And on the networking side, the clouds were early adopters of OpenFlow, software defined networking. OpenFlow gear still requires specialized hardware, but mostly it's just a server with PCIe accelerator cards that perform high speed switching. Ditto the prior paragraph here, as the virtualization vendors are also moving their networking to SwDefNetworking.

Luckily for servers, there's no such thing as a SwDefServer, yet. But server offerings are under just as big a threat from the cloud. Hyperscalers are sophisticated enough to design their own server hardware and have it manufactured to spec. The smaller ones can make use of whitebox servers. Both of them, at the volumes at which they consume servers, can force a race to the bottom on pricing.

So server vendors are being relegated to the data center for the most part. And as data center servers become more powerful, virtualized environments need less of them.

The threat to IT software vendors

Make no mistake about it, software is under just as much threat as hardware. AWS and Oracle are probably the best example of how this works. Oracle was once a profitable niche market on AWS. Today, Oracle is not even available on the AWS marketplace anymore.

This sort of dynamic can happen to any solution where acceptable open source alternatives exist. With the cloud’s sophistication and volumes they can easily take the sting out of using open source by providing ease of deployment, use and maintenance along with performance scalability. That makes running open source on clouds as easy as any packaged solution.

Internet Splat Map by jurvetson (cc) (from flickr)

Granted, the cloud may not offer the support or hand-holding one obtains with packaged software. But open source can be very responsive to bugs/security exposures. Cloud providers can take the time to make their open source solutions bulletproof. And with 1000s to 10,000s of users running them at scale, it should be easy enough to find any high profile bugs.

Even those software vendors whose software executes only on the cloud, to make it run better, more securely, or to add some unique functionality, are at risk. All these vendors will ultimately suffer "death by marketplace success". As they become successful, and cloud vendors inherently know how successful they are, they become more interesting to the cloud. Over time, more successful solutions will attract functionally-equivalent, open source alternatives from the cloud providers, which will push them out of the cloud's marketplace.

Dealing with the threat to hardware vendors

Hardware vendors have four grand strategies to address the cloud threat.

  1. Stick head in sand, hope it goes away (or at least takes a long time to kill them off). There are still some major vendors with this mindset. Yes, slowly but surely they are coming around to see the light but they think they have a long enough runway to hold on until something better comes along.
  2. Co-opt the cloud by providing unique, hardware capabilities in their cloud environment. There are a few hardware vendors that have adopted this strategy. This buys them more time as they can depend on current data center revenues and over time augment this with cloud revenues.
  3. Join the race to the bottom to become a supplier to clouds. Most hardware vendors started out in a highly competitive environment, but over time they have lost their way (found a higher profitability niche). But lurking in their past somewhere, there’s a competitiveness streak that’s dying to come out. The race to the bottom may never be as profitable as data centers but there’s significant revenue to be had here.
  4. Co-opt the cloud by providing services that span multiple clouds. Not exactly creating a hybrid cloud, but rather providing a multi-cloud hardware service. Hardware functionality that can be accessed from multiple clouds can enjoy some advantages of the cloud but at the same time generate data center-like revenues.

I may be missing some grand hardware vendor strategies but as I’ve talked with hardware vendors over time these seem to be the main ones moving ahead.

I’ve tried a couple of times to talk to vendors in the #1 mindset above about the futility of their approach. Mostly, I get ignored or at best politely brushed off as being alarmist. Their main hope is that the data center continues on in the present environment and that they can retain their dominance there.

Maybe they have a point. The 1960s mainframe environment still exists today. And IBM still remains dominant there, and generates profits there. But it just doesn't matter that much to IT anymore. IT has moved on.

Richard (Dick) Nafzger with Apollo data tape by Goddard Photo and Video (cc) (from flickr)

Something similar will happen to IT’s data center. Yes it will still exist forever, and perhaps some vendors can continue to profit there.

But the vast majority of IT workloads will be moving to the cloud over time, relegating this to a smaller (proportionally) niche market. They’ve been saying tape is dead since 1967, but it’s still alive, it’s just moved from being a large market to a smaller one (proportionally).

The #2 mindset vendors have a clearer view of what's happening with the cloud. They are moving select hardware functionality out to the cloud as soon as they are able. Some are even placing their hardware in cloud provider availability zones (data centers) to support this. We all hope they enjoy lasting success doing this.

But ultimately they, too, shall suffer the same fate as the software vendors above: the cloud's death by marketplace success. The more successful they become, the higher the likelihood that the cloud providers will go after them with their own functionally-equivalent, software defined solution.

I'm not privy to the contracts between hardware vendors and cloud providers, but perhaps this later transition, to outright competition, can be forestalled enough to make the cloud providers reluctant to compete with them. But hardware success can only lead to more cloud interest, and no contract can protect against every contingency.

Those vendors adopting the #3 mindset have to return to their competitive roots. Doing this will never be as profitable as today's data center. So the transition will be painful, but they need to do this soon, while they still have some profits coming from data center sales. The sooner they can deploy those $s to fix supply chains and manufacturing quality/production, and drastically slim down marketing and sales, the faster they can start supplying the clouds with appropriate hardware. Profitability will suffer early on, and it may never fully recover.

The #4 mindset applies equally well to software vendors as well as hardware vendors but the hardware group seems to be doing this already. Many storage vendors have multi-cloud solutions with hardware positioned in cloud-adjacent facilities that can be accessed from multiple clouds. Such services have to be consumable like any cloud service. But once in place they have a unique value proposition, the ability to move work and data from one cloud to another.

But the only thing stopping cloud providers from doing something similar is that they don't want to help any current user use a different cloud. Again, depending on how successful this multi-cloud approach becomes, there's nothing prohibiting the cloud providers from providing similar functionality.

Dealing with the threat to software vendors

Software vendors see 4 grand strategies to deal with the cloud threat:

  1. If you can't beat them, join them, and create your own cloud. IBM exemplifies this best, but one can see this with Microsoft, Oracle, SAP and others. If they can create their own cloud, they can start to compete with cloud providers on an equal footing. Yes, they will be smaller, but they can enjoy many of the same benefits of bigger clouds, just not as much.
  2. Offer their software services/stack on the cloud providers. This is similar to the hardware vendors' #2 mindset. Yet this has suffered from death by marketplace success since its inception.
  3. Co-opt the cloud by providing services that fuse the data center and the cloud environments. Thus creating hybrid cloud solutions that span the data center-cloud environment which seem to have a longer runway. But this lasts only as long as the data center is a significant market.
  4. Co-opt the cloud by providing services that span cloud provider vendors. Multi-cloud solutions are more apparent for hardware, but nothing prohibits a software vendor from offering services that spans clouds.

I may be missing a few grand strategies here, but these seem to be the major ones software vendors are using to deal with the cloud. And just like the hardware vendors above, much of the success of these strategies (at least #2, 3 & 4) depends on flying under the radar of cloud providers. Limiting your success may give you some time to eke out a decent revenue/profitability stream, while the cloud provider kills off the more successful solutions ahead of you. But you're all living on borrowed time.

The most interesting one is #1. Yes, economies of scale will matter, which will make their long term viability a concern. But at least you can be on the same playing field. Most of these companies have sizable treasure chests, and if they invest serious money in their own clouds, they may have a chance to survive.

Cloud providers are taking their time

The other thing that's prolonging the data center, and correspondingly vendors' existence, is the cloud providers' expenses. With all their hardware volumes, use of white box or own-designed hardware, and open source software, does it make any sense that IT organizations could provide matching services in their own data centers?

But something is chewing up their revenues. Maybe it's marketing, customer acquisition, software/hardware development or support expenses. I tend to think it's trying to keep pace with customer growth. They end up having to anticipate this growth ahead of time and position hardware, software and services before the customers exist to use them.

I don't think there's anything more mysterious to their lack of profitability than that. They all want all the customers they can get. They all have significant growth and they are all charging a premium for their service. However, I may be wrong.

But how long can such hyper-growth last? At some point, as more and more IT organizations move to the cloud, this growth will slow, prices will start to come down, and that will set off a vicious cycle (for IT vendors): more cloud success brings more volume, less overhead and lower prices, which brings more cloud success.

More cloud success brings lower volumes for hardware and software vendors, more overhead and, ultimately, higher prices.

None of the above solutions seem that attractive to hardware or software vendors but I see only a few ways forward for all of them.

In part 2, I'll discuss some out-of-the-box strategies that move beyond the data center and the cloud, which may be just the way forward that hardware and software vendors need to take the cloud on.

Comments?