Another Y2K-like problem, this time it’s Internet routers

Read an article today in Wired, The Internet has grown too big for its aging infrastructure, about a serious problem that’s soon to become more widespread.

This Y2K-like problem is associated with Border Gateway Protocol (BGP) routing table entries, which represent IP address prefixes. Internet routers keep BGP tables in Ternary Content Addressable Memory (TCAM, sort of like a virtual memory page table, only for router addresses), and there are physical limits to how many BGP entries will fit into any specific Internet router. Some routers crash when they exceed their TCAM limit and others just ignore the BGP entries that exceed their limits – neither approach seems workable long term.
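To make those two failure modes concrete, here’s a toy Python sketch; the class, its names, and its tiny capacity are made up for illustration and don’t model any vendor’s actual behavior:

```python
# Toy model of a fixed-capacity routing table (think: TCAM with 512K slots).
# Once full, it can either fail hard ("crash") or silently ignore new prefixes.

class TcamFullError(Exception):
    pass

class RoutingTable:
    def __init__(self, capacity, drop_on_full=False):
        self.capacity = capacity          # e.g. 512 * 1024 slots on older routers
        self.drop_on_full = drop_on_full  # True: ignore new prefixes; False: fail hard
        self.prefixes = set()

    def add_prefix(self, prefix):
        if prefix in self.prefixes:
            return True
        if len(self.prefixes) >= self.capacity:
            if self.drop_on_full:
                return False              # new route silently black-holed
            raise TcamFullError("TCAM exhausted at %d entries" % self.capacity)
        self.prefixes.add(prefix)
        return True

# Dropping quietly keeps the router up but makes new routes unreachable through it;
# raising takes the whole router down -- neither seems workable long term.
table = RoutingTable(capacity=3, drop_on_full=True)
for p in ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "203.0.113.0/24"]:
    print(p, "installed" if table.add_prefix(p) else "ignored (table full)")
```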

Apparently we are approaching one of those hard and fast limits, at least for older routers, as the BGP routing tables reach over 512K entries.  As of May 2014, there were in excess of 500,000 BGP prefixes (table entries).

Smoking gun points to …

It appears that this time Verizon was the perpetrator. Yesterday they added 15K BGP entries to the Internet BGP table, kicking some routers over their 512K limit. This was no doubt in anticipation of some growth in Internet addresses on their networks.

The result was that LiquidWeb’s network went down. Supposedly they have an older Cisco 7600 router, and the latest addition of BGP entries exceeded its TCAM capacity, crashing their router. Oops!

Verizon quickly withdrew the offending 15K BGP entry addition and things seem back to normal for the moment. But we are once again close to some arbitrary computerized limit. Only this problem won’t happen at midnight December 31st. It won’t take that long to exceed the current BGP entry limits again and next time it might not be that easy to back out.

But it’s almost like there’s no stopping it…

Just guessing here, but these types of routers probably have similar limits at 1024K entries, 2048K, 4096K, etc. With the number of Internet-connected devices growing exponentially, especially with the Internet of Things, I predict similar problems over the coming years. Indeed, we went from ~400K to ~500K BGP entries in just under two years, and the rate of growth seems to be accelerating.
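A rough back-of-the-envelope extrapolation from those figures, assuming simple linear growth of about 50K entries per year (which is probably optimistic), looks like this:

```python
# Back-of-the-envelope estimate from the numbers quoted above:
# ~400K prefixes roughly two years before the ~500K count of May 2014.
entries_then, entries_now = 400000, 500000
years_elapsed = 2.0
growth_per_year = (entries_now - entries_then) / years_elapsed   # ~50K/year

for limit in (512 * 1024, 1024 * 1024, 2048 * 1024):
    years_to_limit = (limit - entries_now) / growth_per_year
    print("%dK limit reached in ~%.1f years" % (limit // 1024, years_to_limit))
# Linear growth says ~0.5 years to 512K and ~11 years to 1024K; if growth is
# accelerating, as it seems to be, those dates arrive sooner.
```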

It’s really just a matter of time before even today’s routers run out of TCAM slots. Y2K-like, only this time there’s no way to stop it from happening again and again in the future. I suppose it would be better if the routers just ignored new BGP entries rather than crashing, but that would seem to put some segment of the Internet out of those routers’ reach. There’s got to be a way to intelligently ignore some updates, or summarize prefix updates, when a router runs out of TCAM entries.
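Python’s standard ipaddress module can at least show the basic idea behind summarizing prefixes: adjacent prefixes can often be collapsed into one covering route, trading some routing precision for fewer TCAM entries. Real BGP aggregation is policy-driven and far more involved, so treat this only as a sketch of the concept:

```python
import ipaddress

prefixes = [
    ipaddress.ip_network("192.0.2.0/25"),
    ipaddress.ip_network("192.0.2.128/25"),   # adjacent half of the same /24
    ipaddress.ip_network("198.51.100.0/24"),
]

# collapse_addresses() merges adjacent/overlapping networks:
# 3 entries become 2 (192.0.2.0/24 and 198.51.100.0/24).
summarized = list(ipaddress.collapse_addresses(prefixes))
print(summarized)
```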

Welcome to the new 512K problem.

~~~~

Comments?

Photo Credit(s): Cisco 7609 @ itb for INHERENT by Affan Basalamah

Insecure SHA-1 imperils Internet security, PKI, and most password systems

safe ‘n green by Robert S. Donovan (cc) (from flickr)

I suppose it’s inevitable but surprising nonetheless. A recent article, Faster computation will damage the Internet’s integrity, in MIT Technology Review indicates that by 2018 SHA-1 will be crackable by any determined large organization. Similarly, just a few years later, perhaps by 2021, a much smaller organization will have the computational power to crack SHA-1 hash codes.

What’s a hash?

Cryptographic hash functions like SHA-1 are designed such that, when a string of characters is hashed, they generate a binary value with a couple of great properties:

  • Irreversibility – given a text string and a “hash_value” generated by hashing “text_string”, there is no way to determine what the “text_string” was from its hash_value.
  • Uniqueness – given two or more text strings, “text_string1” and “text_string2” they should generate two unique hash values, “hash_value1” and “hash_value2”.
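A quick illustration of both properties using Python’s hashlib (the strings are just placeholders): each input maps to a fixed-size value, similar inputs give wildly different values, and nothing about the output reveals the input.

```python
import hashlib

for text_string in ("text_string1", "text_string2"):
    hash_value = hashlib.sha1(text_string.encode()).hexdigest()
    print(text_string, "->", hash_value)
```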

Although hash functions are designed to be irreversible, that doesn’t mean they couldn’t be broken via a brute force attack. For example, if one were to try every known text string, sooner or later one would come up with a “text_string1” that hashes to “hash_value1”.
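A minimal sketch of such a brute force (dictionary-style) attack follows; the target and candidate list here are toys, and a real attack enumerates a vastly larger keyspace, which is exactly what cheaper computation makes practical:

```python
import hashlib

# The attacker only ever sees the hash value, never the original string.
target_hash = hashlib.sha1(b"letmein").hexdigest()

candidates = ["password", "123456", "qwerty", "letmein", "dragon"]
for guess in candidates:
    if hashlib.sha1(guess.encode()).hexdigest() == target_hash:
        print("recovered text_string:", guess)
        break
```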

But perhaps even more serious, the SHA-1 algorithm is prone to hash collisions, which breaks the uniqueness property above. That is, there can be multiple different “text_string1”s that hash to the same “hash_value1”.

All this wouldn’t be much of a problem except that, with Moore’s law in force and continuing for the next 6 years or so, we will have processing power in chips capable of doing a brute force attack against SHA-1 to find text strings that match any specific hash value.

So what’s the big deal?

Well it turns out that the SHA-1 algorithm underpins almost all secure data transmissions today. That is, most Public-key infrastructure (PKI) depends on SHA-1 to sign digital certificates. And although that’s pretty bad, what’s even worse is that Secure Socket Layer/Transport Layer Security (SSL/TLS), used by “https://” websites the world over, also depends on SHA-1 to exchange the key information used to encrypt/decrypt secure Internet transactions.
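If you’re curious whether a site’s certificate is still signed with SHA-1, something like the following works, assuming the third-party cryptography package is installed (the hostname is just an example; on newer versions of the package the backend argument can be omitted):

```python
import ssl
from cryptography import x509
from cryptography.hazmat.backends import default_backend

# Fetch the server's certificate in PEM form and inspect its signature algorithm.
pem = ssl.get_server_certificate(("www.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
print("signature hash algorithm:", cert.signature_hash_algorithm.name)  # e.g. 'sha256'
```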

On top of all that, many of today’s password-protected systems use SHA-1 to hash passwords: instead of storing actual passwords in plain text in their password files, they only store the SHA-1 hash of each password. As such, by 2021, anyone who can read the hashed password file could recover any password in plain text.
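In its simplest form that scheme looks something like the sketch below; real systems should also salt each password and use a deliberately slow hash (bcrypt, scrypt, PBKDF2), so this bare version is only meant to show where the exposure lies:

```python
import hashlib

password_file = {}   # username -> stored SHA-1 hash (stand-in for a real password store)

def register(user, password):
    password_file[user] = hashlib.sha1(password.encode()).hexdigest()

def verify(user, password):
    # Re-hash the submitted password and compare; the plain text is never stored.
    return password_file.get(user) == hashlib.sha1(password.encode()).hexdigest()

register("alice", "correct horse battery staple")
print(verify("alice", "correct horse battery staple"))   # True
print(verify("alice", "wrong guess"))                    # False
```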

What all this means is that by 2018 for some, and 2021 or thereabouts for just about anybody else, today’s secure Internet traffic, PKI, and most system passwords will no longer be secure.

What needs to be done

It turns out that the NSA knew about the failings of SHA-1 quite a while ago, and as such, NIST released SHA-2 as a new hash algorithm and its functional replacement. Probably just in time, this month NIST announced a winner for a new SHA-3 algorithm as a functional replacement for SHA-2.

This may take a while: all digital certificates that use SHA-1 need to be invalidated, with new ones generated using SHA-2 or SHA-3. And of course, TLS and SSL Internet functionality all has to be re-coded to recognize and use SHA-2 or SHA-3 instead of SHA-1.

Finally, for most of those password systems, users will need to log in again and have their password hashes changed over from SHA-1 to SHA-2 or SHA-3.
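Since the server never keeps the plain-text password, one common approach, sketched here with illustrative names, is to upgrade the stored hash opportunistically at the user’s next successful login, when the plain text is briefly in hand:

```python
import hashlib

def login_and_upgrade(record, submitted_password):
    """record is a dict like {'algo': 'sha1', 'hash': '...'} for one user."""
    digest = hashlib.new(record["algo"], submitted_password.encode()).hexdigest()
    if digest != record["hash"]:
        return False                               # wrong password, nothing changes
    if record["algo"] == "sha1":                   # legacy hash: upgrade in place
        record["algo"] = "sha256"                  # or "sha3_256" on Python 3.6+
        record["hash"] = hashlib.sha256(submitted_password.encode()).hexdigest()
    return True

user = {"algo": "sha1", "hash": hashlib.sha1(b"hunter2").hexdigest()}
print(login_and_upgrade(user, "hunter2"), user["algo"])   # True sha256
```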

Naturally, in order to use SHA-2 or SHA-3 many systems may need to be upgraded to later levels of code.  Seems like Y2K all over again, only this time it’s security that’s going to crash.  It’s good to be in the consulting business, again.

~~~~

But the real problem, IMHO, is Moore’s law. If it continues to double processing power/transistor density every two years or so, how long before SHA-2 or SHA-3 succumb to the same sorts of brute force attacks? Given that, we appear destined to change hashing, encryption and other security algorithms every decade or so until Moore’s law slows down or, god forbid, stops altogether.

Comments?