Is hardware innovation accelerating – hardware vs. software innovation (round 6)

There’s something happening in the IT industry that hasn’t happened in a couple of decades or so: hardware innovation is back. We’ve been covering bits and pieces of it in our hardware vs. software innovation series (see the Open source ASICs – HW vs. SW innovation [round 5] post).


Hardware innovation never really went away; Intel, AMD, Apple and others have always worked on new compute chips, and DRAM and NAND have taken giant leaps over the last two decades. But these were all major hardware suppliers. Special purpose chips, non-CPU compute engines, and hardware accelerators had been relegated to the dustbin of history as the CPU giants kept assimilating their functionality into the next round of CPU chips.

And then something happened. It made sense for GPUs to be their own silicon, as their SIMD architecture is intrinsically different from the SISD, standard von Neumann architecture of X86 and ARM CPUs.

But for some reason it didn’t stop there. We first started seeing inklings of new hardware innovation in the AI space, with a number of special purpose DL NN accelerators coming online over the last 5 years or so (see Google TPU, SC20-Cerebras, GraphCore GC2 IPU chip, AI at the Edge Mythic and Syntiant IPU chips, and neuromorphic chips from BrainChip, Intel, IBM, and others). Again, one could look at these as taking the SIMD model of GPUs in a slightly different direction. That model is probably one reason GPUs were so useful for AI-ML-DL, but further acceleration is now possible.

But it hasn’t stopped there either. In the last year or so we have seen SPUs (Nebulon Storage), DPUs (Fungible, NVIDIA Networking, others), and computational storage (NGD Systems, ScaleFlux Storage, others) all come online and become available to the enterprise. And most of these are for more normal workload environments, i.e., not AI-ML-DL workloads.

I thought at first these were just FPGAs implementing different logic, but now I understand that many of these include ASICs as well. Most of them incorporate a standard von Neumann CPU (mostly ARM) along with special purpose hardware to speed up certain types of processing (such as low latency data transfer, encryption, compression, etc.).

What happened?

It’s pretty easy to understand why non-von Neumann computing architectures have come about (witness all those new AI-ML-DL chips that have become available) and why they would be implemented outside the normal X86-ARM CPU environment.

But SPUs, DPUs and computational storage all have typical von Neumann CPUs (mostly ARM) as well as other special purpose logic on them.

Why?

I believe there are a few reasons, but the main two are that Moore’s law (transistor sizes halve roughly every 2 years, effectively doubling transistor counts in the same area) is slowing down and Dennard scaling (as you shrink transistors, their power consumption goes down and their speed goes up, so power density stays roughly constant) has almost stopped. Both of these have caused major CPU chip manufacturers to focus on adding cores to boost performance rather than adding more transistors to the same core to increase functionality.
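To make the Dennard scaling argument concrete, here’s a minimal back-of-the-envelope sketch in Python. The 0.7x shrink per node and the idealized scaling rules are textbook assumptions of mine, not figures from this post:

```python
# A minimal sketch of classic (idealized) Dennard scaling arithmetic.
k = 0.7                # assumed linear shrink per process node (~every 2 years)

area = k * k           # transistor area scales with the square of the shrink -> ~0.49x
voltage = k            # supply voltage scales down with the shrink
capacitance = k        # gate capacitance scales down with the shrink
frequency = 1 / k      # switching speed goes up as gates get smaller

# Dynamic power per transistor ~ C * V^2 * f
power_per_transistor = capacitance * voltage**2 * frequency   # ~0.49x

# Power density = power per transistor / transistor area
power_density = power_per_transistor / area                   # ~1.0x, i.e. constant

print(f"area {area:.2f}x, power/transistor {power_per_transistor:.2f}x, "
      f"power density {power_density:.2f}x")

# While this held, a chip could double its transistors each node at the same
# power budget. Once voltage stopped scaling (the end of Dennard scaling),
# power density rose with every shrink, pushing designers toward more cores
# at lower clocks and toward offloading work to specialized accelerators.
```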

This hasn’t stopped vendors from adding instruction functionality to each CPU, but it has slowed considerably. And single-core processor speeds (GHz) have reached a plateau.

But what it has stopped is having the real estate available on a CPU chip to absorb lots of additional hardware functionality, which had been the case since the 1980s.

I was talking with a friend who used to work on math co-processors, like the 8087, 80287, and 80387, that performed floating point arithmetic. But starting with the 486, floating point logic was integrated into the CPU chip itself, killing off the co-processor business.

Hardware design is getting easier & chip fabrication is becoming a commodity

We wrote a post a couple of weeks back about an open foundry (see the HW vs. SW innovation round 5 post noted above) that will take a hardware design and manufacture the ASICs for you for free (or at little cost). This says that the tool chain for chip design is becoming more standardized and much less complex. Does this mean it takes less than 18 months to create an ASIC? I don’t know, but it seems so.

But the really interesting aspect of this is that world class foundries are now available outside the major CPU developers. And these foundries, for a fair but high price, would be glad to fabricate a thousand or a million chips for you.

Yes, your basic state-of-the-art fab probably costs $12B-plus these days. But all that means is that A) they will take any chip design and manufacture it, B) they need to keep factory volume up by manufacturing chips in order to amortize the fab’s high price, and C) they have to keep their technology competitive or chip manufacturing will go elsewhere.

So chip fabrication is not quite a commodity. But there are enough state-of-the-art fabs in existence to make it seem so.

But it’s also physics

The extremely low latencies available with NVMe storage and higher speed networking (100GbE and above) demand a lot more processing power to keep up. And just the physics of how long it takes to transfer data across a distance (e.g., across racks) is starting to consume too much overhead and impact other work that could be done.

When we start measuring IO latencies at under 50 microseconds, there just aren’t many CPU instructions and task switches that can happen anymore. Yes, you could devote a whole core or two to this process and keep up with it. But wouldn’t the data center be better served keeping that core busy with normal work and offloading that low-latency, (near) realtime work to a hardware accelerator that executes on the network rather than behind a NIC?
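To give a feel for how tight that budget is, here’s a rough back-of-the-envelope sketch in Python. The clock rate, context switch cost and packet size are assumptions I’m making for illustration, not measurements from any particular system:

```python
# Rough latency-budget arithmetic: how many CPU cycles fit inside a 50us IO?
clock_ghz = 3.0                      # assumed core clock
io_latency_us = 50                   # NVMe-class IO latency
cycles_per_io = io_latency_us * 1e-6 * clock_ghz * 1e9
print(f"{cycles_per_io:,.0f} cycles per IO")           # ~150,000 cycles

# A context switch plus interrupt handling can easily burn a few microseconds.
context_switch_us = 3                # assumed cost of one switch in/out
switch_overhead = 2 * context_switch_us / io_latency_us
print(f"task switching alone eats ~{switch_overhead:.0%} of the budget")

# At 100GbE line rate (~12.5 GB/s), a 4KB chunk arrives roughly every 330ns,
# leaving on the order of only a thousand cycles per chunk for a single core.
packet_interval_ns = 4096 / 12.5e9 * 1e9
print(f"4KB arrives every ~{packet_interval_ns:.0f} ns at 100GbE line rate")
```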

So realtime processing has become faster; or rather, the amount of time available to execute the CPU instructions that switch tasks and process data in realtime, in order to keep up with faster line speeds, is becoming shorter.

So that explains DPUs, smart NICs, and SPUs. What about the other hardware accelerator cards?

  • AI-ML-DL is becoming such an important, data- AND compute-intensive workload that, just like GPUs before them, TPUs and IPUs are becoming a necessary evil if we want to service those workloads effectively and expeditiously.
  • Computational storage is becoming more widespread because, although data compression can easily be done at the CPU, it can be done faster at the smart drive, where less data needs to be transferred back and forth (see the sketch below).
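One way to make that data-movement argument concrete is a simple reduction-at-the-drive comparison. This is a hedged illustration with numbers I’m assuming, not vendor figures, and it applies to any in-drive data reduction (compression, filtering, etc.):

```python
# Illustrative sketch: how much data crosses the storage interconnect when
# reduction happens at the host vs. at the drive. All numbers are assumptions.
dataset_gb = 100        # data the application needs processed
reduction_ratio = 3.0   # assumed compression/reduction ratio

# Host-side reduction: the full dataset must be read across the bus first,
# then the host spends its own CPU cycles reducing it.
host_side_transfer_gb = dataset_gb

# Drive-side (computational storage): the drive reduces data in place and
# only the reduced result travels back to the host.
drive_side_transfer_gb = dataset_gb / reduction_ratio

print(f"host-side reduction moves  {host_side_transfer_gb:.0f} GB over the interconnect")
print(f"drive-side reduction moves {drive_side_transfer_gb:.1f} GB over the interconnect")
# With a ~3:1 ratio the drive-side path moves roughly a third of the data,
# and the host cores it would have burned on compression stay free for other work.
```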

My guess is we haven’t seen the end of this at all. When you open up the possibility of a long term business model focused on hardware accelerators, there would seem to be a lot of work that needs to be done and could be done faster and more effectively outside the core CPU.

There was a point over the last decade when software was destined to “eat the world”. I got a lot of flak for saying that was BS and that hardware innovation was really eating the world. Now that hardware innovation’s back, it seems to be a little of both.

Comments?


Moore’s law is still working with new 2D-electronics, just 1nm thin

This week scientists at Oak Ridge National Laboratory created two-dimensional nano-electronic circuits just 1nm tall (see the Nature Communications article). Apparently they were able to grow one crystal layer on top of another and then infused the top layer with sulfur. With that as a base, they used standard, scalable photolithographic and electron beam lithographic processing techniques to pattern electronic junctions in the crystal layer, and then used pulsed laser evaporation to burn sulfur atoms off a target (selective sulfurization of the material), converting MoSe2 to MoS2. At the end of this process was a 2D electronic circuit just 3 atoms thick, with heterojunctions, molecularly similar to pristine MOS available today, but much thinner (~1nm) and at smaller scale (~5nm).

In other news this month, IBM also announced that it had produced working prototypes of ~7nm transistors in a processor chip (see the NY Times article). IBM sold off its chip foundry a while ago to GlobalFoundries, but continues working on semiconductor research with SEMATECH, an Albany, NY semiconductor research consortium. Recently Samsung and Intel left SEMATECH, maybe a bit too early.

On the other hand, Intel announced it was having some problems getting to the next node in the semiconductor roadmap after its current 14nm transistor chips (see the Fortune article). Intel stated that the last two generations took 2.5 years instead of 2 years, and that pace is likely to continue for the foreseeable future. Intel seems to be spending more research dollars creating low-power or new types of processing (GPUs) than on a mad rush to double transistors every 2 years.

So taking it all in, Moore’s law is still being fueled by billion-dollar R&D budgets and the ever-increasing demand for more transistors per unit area. It may take a little longer to double the transistors on a chip, but we can see at least another two generations down the ITRS semiconductor roadmap. That is, if the Oak Ridge research proves manufacturable, as it seems to be.

So Moore’s law has at least another generation or two to run. Whether there’s a need for more processing power is anyone’s guess, but the need for cheaper flash, non-volatile memory and DRAM is a certainty for as far as I can see.

Comments?

Photo Credits: 

  1. From “Patterned arrays of lateral heterojunctions within monolayer two-dimensional semiconductors”, by Masoud Mahjouri-Samani, Ming-Wei Lin, Kai Wang, Andrew R. Lupini, Jaekwang Lee, Leonardo Basile, Abdelaziz Boulesbaa, Christopher M. Rouleau, Alexander A. Puretzky, Ilia N. Ivanov, Kai Xiao, Mina Yoon & David B. Geohegan
  2. From “Comparison semiconductor process nodes” by Cmglee – Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Comparison_semiconductor_process_nodes.svg#/media/File:Comparison_semiconductor_process_nodes.svg

Insecure SHA-1 imperils Internet security, PKI, and most password systems

safe ‘n green by Robert S. Donovan (cc) (from flickr)

I suppose it’s inevitable, but surprising nonetheless. A recent article in MIT Technology Review, “Faster computation will damage the Internet’s integrity”, indicates that by 2018 SHA-1 will be crackable by any determined large organization. Similarly, just a few years later, perhaps by 2021, a much smaller organization will have the computational power to crack SHA-1 hash codes.

What’s a hash?

Cryptographic hash functions like SHA-1 are designed such that, when a string of characters is hashed, they generate a binary value with a couple of great properties (illustrated in the short sketch after this list):

  • Irreversibility – given a “hash_value” generated by hashing “text_string”, there is no way to determine what “text_string” was from its hash_value.
  • Uniqueness – given two different text strings, “text_string1” and “text_string2”, they should generate two unique hash values, “hash_value1” and “hash_value2”.
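Here’s a minimal sketch of those two properties using Python’s standard hashlib module; the example strings are just placeholders of mine:

```python
import hashlib

def sha1_hex(text_string: str) -> str:
    """Return the SHA-1 hash of a text string as 40 hex digits (160 bits)."""
    return hashlib.sha1(text_string.encode("utf-8")).hexdigest()

hash_value1 = sha1_hex("text_string1")
hash_value2 = sha1_hex("text_string2")

# Uniqueness: two different (even nearly identical) inputs should produce
# completely different hash values.
print(hash_value1)
print(hash_value2)
assert hash_value1 != hash_value2

# Irreversibility: nothing in hash_value1 lets you compute "text_string1" back;
# the only generic way to invert it is to guess inputs and compare their hashes.
```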

Although hash functions are designed to be irreversible, that doesn’t mean they can’t be broken via a brute force attack. For example, if one were to try every possible text string, sooner or later one would come up with a “text_string1” that hashes to “hash_value1”.
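A toy version of that brute force search, in Python, might look like the sketch below. The tiny 4-lowercase-letter search space is an assumption purely to keep the example fast; real attacks enumerate vastly larger spaces on specialized hardware:

```python
import hashlib
import itertools
import string

def sha1_hex(text: str) -> str:
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

# Pretend all we know is the hash value of some unknown 4-letter string.
target_hash = sha1_hex("abcd")

# Brute force: hash every candidate until one matches the target.
for candidate in itertools.product(string.ascii_lowercase, repeat=4):
    text = "".join(candidate)
    if sha1_hex(text) == target_hash:
        print(f"found a preimage: {text!r}")   # prints 'abcd'
        break
```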

But perhaps even more serious, the SHA-1 algorithm is prone to hash collisions, which means it fails the uniqueness property above. That is, there are multiple “text_string1”s that hash to the same “hash_value1”.

All this wouldn’t be much of a problem except that, with Moore’s law in force and continuing for the next 6 years or so, we will have processing power in chips capable of mounting a brute force attack against SHA-1 to find text_strings that match any specific hash value.

So what’s the big deal?

Well, it turns out that the SHA-1 algorithm underpins almost all secure data transmission today. That is, most public-key infrastructure (PKI) depends on SHA-1 to sign digital certificates. And although that’s pretty bad, what’s even worse is that Secure Sockets Layer/Transport Layer Security (SSL/TLS), used by “https://” websites the world over, also depends on SHA-1 to send key information used to encrypt/decrypt secure Internet transactions.

On top of all that, many of today’s password-protected systems use SHA-1 to hash passwords: instead of storing actual passwords in plain text in their password files, they store only the SHA-1 hash of each password. As such, by 2021, anyone who can read the hashed password file could recover any password in plain text.
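To show both sides of that, here’s a toy sketch in Python of a SHA-1 password file and a dictionary attack against it. The usernames, passwords and wordlist are all made up for illustration; real systems should use salted, deliberately slow hashes (bcrypt, scrypt, etc.) rather than bare SHA-1:

```python
import hashlib

def sha1_hex(text: str) -> str:
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

# The system never stores the plain-text password, only its hash.
password_file = {
    "alice": sha1_hex("sunshine"),
    "bob":   sha1_hex("letmein"),
}

# Normal login check: hash what the user typed and compare to the stored hash.
def check_login(user: str, typed_password: str) -> bool:
    return password_file.get(user) == sha1_hex(typed_password)

print(check_login("alice", "sunshine"))   # True

# The attack: anyone who can read password_file just hashes candidate words
# until something matches; unsalted SHA-1 makes this cheap and parallelizable.
wordlist = ["password", "letmein", "sunshine", "qwerty"]
for user, stored_hash in password_file.items():
    for guess in wordlist:
        if sha1_hex(guess) == stored_hash:
            print(f"{user}'s password is {guess!r}")
            break
```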

What all this means is that by 2018 for some, and 2021 or thereabouts for just about anybody else, today’s secure Internet traffic, PKI and most system passwords will no longer be secure.

What needs to be done

It turns out that the NSA knew about the failings of SHA-1 quite a while ago, and as such NIST released SHA-2 as a new hash algorithm and its functional replacement. Probably just in time, this month NIST announced a winner of the competition for a new SHA-3 algorithm as a functional replacement for SHA-2.

This may take a while. What needs to be done is to invalidate all digital certificates that use SHA-1 and generate new ones using SHA-2 or SHA-3. And of course, TLS and SSL Internet functionality all has to be re-coded to recognize and use SHA-2 or SHA-3 instead of SHA-1.

Finally, for most of those password systems, users will need to re-login and have their password hashes changed over from SHA-1 to SHA-2 or SHA-3.

Naturally, in order to use SHA-2 or SHA-3, many systems may need to be upgraded to later levels of code. Seems like Y2K all over again, only this time it’s security that’s going to crash. It’s good to be in the consulting business again.

~~~~

But the real problem, IMHO, is Moore’s law. If it continues to double processing power/transistor density every two years or so, how long before SHA-2 or SHA-3 succumb to the same sorts of brute force attacks? Given that, we appear destined to change hashing, encryption and other security algorithms every decade or so until Moore’s law slows down or, god forbid, stops altogether.

Comments?

 

No-power sensors surface due to computational energy efficiency trends

Koomey’s law graph, made by Koomey (cc) (from wikipedia.org)

Read an article in MIT’s Technology Review today, “The computing trend that will change everything”, about the trend in energy consumption per unit of computation.

Along with Moore’s law, which dictates that transistor density doubles every 18 to 24 months, there is Koomey’s law, which states that computational power efficiency, or computations per watt, will double every 1.57 years.

Koomey’s law has made today’s smart phones and tablets possible. If your current laptop were computing at the power efficiency of 1991 computers, its battery would last ~2.5 seconds.
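That claim is easy to sanity-check with a back-of-the-envelope sketch in Python; the battery life, dates and the ~2034 horizon below are assumptions of mine for illustration, not figures from the article:

```python
doubling_years = 1.57            # Koomey's law: computations per watt double this often
years_elapsed = 2012 - 1991      # assumed span from 1991 hardware to a current laptop
doublings = years_elapsed / doubling_years
efficiency_gain = 2 ** doublings

battery_hours_today = 6          # assumed battery life of a current laptop
battery_seconds_at_1991 = battery_hours_today * 3600 / efficiency_gain

print(f"~{doublings:.1f} doublings, ~{efficiency_gain:,.0f}x more computations per watt")
print(f"battery would last ~{battery_seconds_at_1991:.1f} seconds at 1991 efficiency")

# Running the same arithmetic forward another ~22 years suggests why the
# no-power sensors of ~2034 discussed below could do laptop-class work on
# harvested microwatts.
future_gain = 2 ** ((2034 - 2012) / doubling_years)
print(f"another ~{future_gain:,.0f}x improvement by ~2034")
```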

No-power sensors?!

But this computing efficiency trend is giving rise to no-power sensors/devices, or computational sensors without batteries. These new sensors gather electrical energy from “ambient radio waves” in the air, and by doing so harvest enough electricity to power their computations; as such, they don’t need batteries.

Such devices can gather ~50μwatts of power from a TV transmitter just 2.5 miles away. Most calculators use only ~5μwatts and digital thermometers around 1μwatt, so 50μwatts is enough to do a reasonable amount of sensing work.

But the exciting part is that as Koomey’s law continues, the amount of work that 50μwatts or even 5μwatts supports doubles again every 1.6 years. For example, the computational power of today’s laptops will consume only infinitesimal amounts of power in ~two decades’ time. Thus, the no-power sensors of 2034 will be very smart indeed.

“Any sufficiently advanced technology is indistinguishable from magic”, Arthur C. Clarke

Data transmission efficiency not keeping up

Nonetheless, the fact that computational efficiency is doubling every 1.6 years doesn’t mean that data transmission efficiency is doing the same. Which means that, for the foreseeable future, data transmission may remain a crucial bottleneck for no-power sensors.

However, computational increases can somewhat compensate for data transmission limitations through more efficient encoding, compression, etc. But there are limits to what can be accomplished within any data transmission technology.

Nanodata

Thus, for the foreseeable future, although sensors will be able to do lots more computation, what they transmit to the outside world may remain limited, giving rise to smart, no-power sensors that provide very minuscule data packages.

One term coined to describe such limited external data transmission from no-power, computationally intensive sensors is nanodata. Because of their ability to exist outside the power grid, it is very likely that the future sensor cloud or internet-of-things will be primarily composed of such nanodata devices.

~~~~
I was at SNW last week and there was some discussion of “little data”, or data in corporate databases, in contrast with big data. But nanodata is something I had never heard of before today.

So now we have big data, little data, and nanodata. Seems like we are missing a few steps here…