On Tuesday 12th August 2014, major websites around the world became unavailable, or slowed down just enough to be unusable. The cause was not a cyber attack, but a US-based ISP, Verizon, splitting some groups of IP addresses into smaller ones, combined with an arbitrary routing table size limitation in older Cisco routers.
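The failure mode can be sketched in a few lines. This is an illustrative model, not vendor code: a forwarding table with a fixed hardware slot limit (older routers reserved roughly 512,000 slots for IPv4 routes by default) that simply stops installing routes once it is full, so traffic for those prefixes is degraded or dropped.

```python
class ForwardingTable:
    """Toy model of a hardware forwarding table with a fixed slot limit."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.routes = {}  # prefix -> next hop

    def install(self, prefix, next_hop):
        # Once the table is full, new prefixes are silently not programmed
        # into hardware; updates to existing prefixes still succeed.
        if len(self.routes) >= self.capacity and prefix not in self.routes:
            return False
        self.routes[prefix] = next_hop
        return True


fib = ForwardingTable(capacity=3)  # tiny capacity just for demonstration
assert fib.install("10.0.0.0/8", "A")
assert fib.install("172.16.0.0/12", "B")
assert fib.install("192.168.0.0/16", "C")
# A burst of more-specific prefixes (de-aggregation) overflows the table:
assert not fib.install("10.1.0.0/16", "A")
```

The key point is that the limit is invisible until the day the global table crosses it, at which point behaviour degrades for reasons unrelated to any single operator's change.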
The scale of the problems caused by such a routine administrative task underscores the fact that the Internet was not planned; it simply grew from an ad-hoc set of fixes to immediate problems. Some of these fixes do not cooperate nicely, or have unforeseen consequences. So why is the existing Internet so hard to fix?
- The Internet is essentially a gigantic, flat layer (every device with a public IP address is part of that layer). It runs on old technology, yet it is very hard to upgrade because it is simply too big and too many people would have to make changes for an upgrade to happen. As a consequence, the limitations of the “old technology” cannot be fixed, and more and more patches are required to work around the problems those limitations cause. This makes the Internet ever more complex and more brittle, as the article summarises:
“The internet – you have no idea. It’s held together with chewing gum and string”.
- Since everything is on the same layer, scalability can only be achieved by brute force: routers need more memory and faster CPUs to cope with the growing routing table size and its maintenance. Indeed, IPv6 also uses a single addressing layer, so it only temporarily fixes the IPv4 address exhaustion problem without “fixing” any routing table size issues. In fact, it makes them worse in the long run, since routers will have to store more entries in the routing table, and each entry will consume more memory because IPv6 addresses are longer.
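A back-of-the-envelope calculation makes the IPv6 point concrete. The per-entry overhead below is an illustrative assumption, not a measurement from any specific router; the only claim is that per-entry cost grows with the address length.

```python
def table_bytes(num_routes, address_bytes, overhead_bytes=24):
    """Rough memory estimate: address plus assumed per-entry bookkeeping
    (next hop, flags, pointers). The 24-byte overhead is illustrative."""
    return num_routes * (address_bytes + overhead_bytes)


# Same route count, different address sizes:
ipv4 = table_bytes(512_000, address_bytes=4)   # 4-byte IPv4 addresses
ipv6 = table_bytes(512_000, address_bytes=16)  # 16-byte IPv6 addresses
assert ipv6 > ipv4
print(f"IPv4 table: {ipv4 / 1e6:.1f} MB, IPv6 table: {ipv6 / 1e6:.1f} MB")
```

Even with identical route counts, the IPv6 table is larger simply because each entry carries four times the address bytes, and nothing in the single-layer design caps the route count itself.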
- The single layer also causes a lack of isolation between the different “networks” that make up the Internet. Everyone operates in a single global address space, and there are examples of where this has gone wrong. In 2008, YouTube became globally unreachable after a Pakistani Internet service provider (ISP) altered a route in an attempt to block YouTube access within Pakistan. In 2013, Renesys reported that some connections between two points in Denver, USA, were being routed via Iceland. This lack of isolation also opens the door to less legitimate “mistakes”, such as deliberate traffic hijacking.
RINA is a network architecture that tries to capture the general principles of computer networking that apply to everything. As such, it does not try to “fix” the Internet with a superficial layer of patches, but rather re-addresses some of the design limitations to make inter-networking more reliable, predictable and cheaper (which is key, since computer networks have become critical infrastructure). So why could this particular issue not happen with RINA?
- There is no single “global” address space in RINA; instead, there is an address space per “layer”, or DIF in RINA parlance. An administrative misconfiguration is treated as a layer (DIF) outage and routed around: if you muck up the routing in one layer, the layers above it will simply find an alternative route, as fast as the signalling allows.
- Added to this, a service on a “node” can be multi-homed, i.e. have multiple points of attachment to the network, so it would take a series of misconfigurations (across several layers/DIFs) to make the service unavailable, which is very unlikely to happen accidentally in practice.
- There is nothing preventing an arbitrary size limit from being “coded” into the routing table by a specific equipment vendor. However, because there is a routing table per layer (per DIF), the limit is far less likely to be exceeded, and if it is, the first point above kicks in and the problem is routed around.
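The points above can be sketched as a toy model. The term “DIF” follows RINA terminology, but the routing logic here is a deliberate simplification for illustration, not the RINA specification: each layer/DIF keeps its own small routing table, and a multi-homed node in the layer above simply tries its next point of attachment when one underlying DIF cannot deliver.

```python
class DIF:
    """One layer with its own, independent routing table."""

    def __init__(self, name, routes):
        self.name = name
        self.routes = routes  # destination -> reachable (bool)

    def deliver(self, dest):
        return self.routes.get(dest, False)


def send_via(attachments, dest):
    """Multi-homed node: try each point of attachment in turn and
    return the name of the DIF that carried the traffic, or None."""
    for dif in attachments:
        if dif.deliver(dest):
            return dif.name
    return None


healthy = DIF("provider-A", {"server": True})
broken = DIF("provider-B", {})  # misconfigured: lost its route to "server"
# A misconfiguration in one DIF is routed around by the layer above:
assert send_via([broken, healthy], "server") == "provider-A"
```

Because each table only covers one DIF, a vendor's size limit would have to be hit in every DIF on every path simultaneously before the service became unreachable.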
Address assignment (and other aspects of network operations) should be automated to minimise human error and to allow a higher degree of checking and validation. Sometimes we simply have to address the fundamental cause of these problems, rather than invent overly complex ways of avoiding their consequences. There is no guarantee that adding another workaround will not cause further problems, and predicting when those problems will occur is down to pure chance.
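One example of the kind of automated validation argued for here: checking a proposed address assignment against existing allocations before it is committed. This sketch uses Python's standard ipaddress module; the prefixes themselves are made up for illustration.

```python
import ipaddress


def conflicts(new_prefix, allocated):
    """Return the already-allocated prefixes that overlap new_prefix."""
    new = ipaddress.ip_network(new_prefix)
    return [p for p in allocated
            if ipaddress.ip_network(p).overlaps(new)]


allocated = ["10.0.0.0/16", "10.1.0.0/16"]

# A proposed assignment inside an existing block is rejected:
assert conflicts("10.0.128.0/24", allocated) == ["10.0.0.0/16"]
# A non-overlapping block is clean:
assert conflicts("10.2.0.0/16", allocated) == []
```

A check like this, run as a gate before any change reaches production equipment, catches the class of fat-finger errors that caused the YouTube and Denver-via-Iceland incidents above.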
Another consequence of the availability of multiple layers is that there is no need to “stop the Internet” or have a “flag day” in order to start adopting RINA once the technology is mature enough. The current Internet is just seen as another layer, so RINA can be deployed over, under and next to the Internet without having to change it.
Reference: Accountability in the Future Internet, http://dl.acm.org/citation.cfm?doid=2663191.2644146