

  It was just getting started.

  4

  An Ocean of Suckers

  HAVING MUTANT POWERS DOESN’T GIVE US

  THE RIGHT TO DOMINATE OTHERS.

  —The X-Men Chronicles

  The idea of an infectious computer “worm” is lifted from the pages of science fiction. More than a decade before the Internet was born, the British sci-fi writer John Brunner imagined a viral code that could invade and sabotage a worldwide computer network in his 1975 novel The Shockwave Rider.

  With startling foresight, at a time when Bill Gates was taking a leave of absence from Harvard to cofound “Micro-soft,” Brunner imagined a dystopian twenty-first-century world wired into a global “data-net,” controlled by a malicious state. His hero, a gifted hacker named Nick Haflinger, creates a program he calls a “tapeworm” that can infiltrate the data-net, spread on its own, and ultimately subvert the government. “My newest—my masterpiece—breeds by itself,” he boasts. In Haflinger’s case, much as with the creators of WikiLeaks, the tapeworm is directed to break into government files and spill state secrets. Brunner chose to call his techno-weapon a “tapeworm” because the code, like the creature, consisted of a head attached to a string of segments that were each capable of regenerating the whole.

  “What I turned loose in the net yesterday was the father and mother of all tapeworms . . . it can’t be killed,” he says. “It’s indefinitely self-perpetuating so long as the net exists. . . . Incidentally, though, it won’t expand to indefinite size and clog the net for other use. It has built-in limits. . . . Though I say so myself, it’s a neat bit of work.”

  Brunner’s ideas about the coming digital world were clever, but as a prophet he was strictly derivative. His vision was of a piece with those of George Orwell, Aldous Huxley, Philip K. Dick, and others who foresaw the totalitarian movements of the twentieth century as portents of a dark future, where all power would be concentrated in the hands of an oppressive state. Each of these writers predicted that technology would be an important tool of state oppression—for Orwell it was TV, for Huxley it was psychotropic drugs, for Dick it was both of the above combined with bioengineering. For Brunner it was the computer, or, more correctly, computer networks. The ideas in The Shockwave Rider, particularly those about the coming age of digital interconnection, were largely based on futurist Alvin Toffler’s book Future Shock. They were so prescient that computer programmers recalled the “tapeworm” a few years later when they began devising the first real worms in research labs.

  The fears Orwell, Huxley, Dick, and Brunner vividly articulated in their fiction still have adherents, and have inspired some striking and successful Hollywood films, but so far they have not panned out, certainly not in the case of computer networks. The structure of the Internet—or lack of structure—has worked against centralized state control. The thing has a billion heads. It is defiantly ground-up. Since it has become a factor in world events, governments everywhere have found it harder to keep secrets and to escape the public’s gaze. The “data-net” has proved so far to be a tool less of oppression than of liberation. And the architects of worms and viruses aren’t the heroic rebels battling state tyranny imagined by Brunner, but nihilists and common criminals.

  In the mid-1970s, the only large computer networks that existed were at university, business, or government centers. Many of the young computer geeks who would create the Internet age, and in some cases amass great fortunes, first stumbled into the larger potential for such networks by borrowing processing time (with or without permission) to play games or show off their hacking skills. Gates and Allen used the computer provided to privileged students at Lakeside Prep, and when they outgrew it they persuaded the school to lease time for them on an outside one. There were few barriers to access, because computing power and connectivity were seen as entirely beneficial. Openness was essential to the movement’s appeal.

  The first sour notes in this techno-Eden were simple devilry. The early computer networks were plagued by savvy outlaws, “cyberpunks,” who used their knowledge of operating systems to play pranks, writing juvenile slogans across the monitors of compromised computers the way graffiti artists scrawl their initials on urban walls. There was a playful quality to such efforts, undertaken often just to show off the hacker’s skill. The term was not entirely derogatory. Hackers took some pride in the designation, and had fans. Most of what they did was harmless. The grungy long-haired geek living in his parents’ basement, fueled by pizza, soda, and junk food—the picture first painted by Weizenbaum—remains a Hollywood cliché to this day, bedeviling the powerful with his antisocial genius, thwarting malevolent syndicates, running rings around the “official” experts. These pioneer miscreants came to symbolize the anarchic spirit of the Internet movement, the maverick genius at war with the establishment.

  But as the Internet has rapidly evolved, so have its predators. The newness of computer networks, and their global nature, posed novel problems for law enforcement. In many jurisdictions, preying on people in cyberspace is not officially criminal, and often in places where it is, there is little urgency in prosecuting it. In his 1989 best seller, The Cuckoo’s Egg, Cliff Stoll told the story of his stubborn, virtually single-handed hunt for an elusive hacker in Germany who was sneaking around inside Stoll’s computer network at the Lawrence Berkeley National Laboratory and using it as a back door to U.S. Defense Department computers. The hacker was hunted down but never prosecuted, in part because there were no clear laws against such behavior. For many people, The Cuckoo’s Egg introduced the netherworld of gamesmanship that still defines computer security. Stoll’s hacker never penetrated the most secret corners of the national-security net, and even relatively serious breaches like the one Stoll described were still more of a nuisance than a threat. A group calling itself the Legion of Doom had a good run in the 1990s, invading computer networks and showing off while not doing much damage. The group published a technical newsletter to advertise its exploits, and members gave themselves colorful comic-book-style monikers. There were other hacker groups like it, including the New York–based Masters of Deception. Some members of these clubs were hauled in and prosecuted by federal authorities in the 1990s, considerably upping the price of such stunts. Little of the old glamour still attaches itself to serious hackers; the game has evolved into something far bigger, smarter, and more menacing.

  Real trouble arrived with the big DDoS (Distributed Denial of Service) attacks of the 1990s, which aimed tidal waves of service requests at targeted websites. Instead of showcasing the skill of a hacker, the purpose of a DDoS attack was wholly malicious, sometimes political, often vengeful. A DDoS attack capitalizes on the openness of Internet traffic to simply overwhelm the capacity of an organization to respond. Those orchestrating such attacks employed computer networks to automatically generate request after request, many times per second, until they brought to a halt the servers of credit card companies, banks, the White House, government agencies, the Holocaust Historical Museum, political parties, universities, and any other vulnerable website that was deemed offensive. The worst DDoS attack came on October 21, 2002, when the Internet’s thirteen root servers were hit simultaneously. This was clearly an effort to bring down not just individual websites but the Internet itself. The root servers survived the hourlong assault, but only barely. The episode forced the root servers’ operators to invest in heavily redundant capacity, enough to absorb massive future attacks.
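
  The arithmetic behind such an attack is easy to sketch. The toy simulation below is not drawn from any real incident, and every number in it is invented; it shows only how a server that can answer a fixed number of requests per second falls hopelessly behind once automated traffic exceeds that capacity.

```python
# A toy model (all numbers invented) of why a flood of automated requests
# takes a site down: once requests arrive faster than the server can answer
# them, the unanswered backlog grows without bound and legitimate visitors
# queue behind the bogus traffic.

SERVER_CAPACITY = 1_000   # requests the server can answer per second
NORMAL_TRAFFIC = 800      # legitimate requests arriving per second
ATTACK_TRAFFIC = 5_000    # bogus requests generated by the attacking machines

backlog = 0
for second in range(1, 6):
    arrived = NORMAL_TRAFFIC + ATTACK_TRAFFIC
    served = min(SERVER_CAPACITY, backlog + arrived)
    backlog += arrived - served
    print(f"t={second}s  arrived={arrived}  served={served}  backlog={backlog}")
```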

  This event was important. It was a sobering demonstration for those paying attention, which is to say, the Tribe. This was a very small, select group of people. The vast majority of Internet users remained oblivious. So long as Google and YouTube and Facebook kept humming along, everyone else was happy. By the twenty-first century, the Internet was a given. It was there on your phone, in your car, on your iPad. It was everywhere, through either a WiFi or a phone connection. There were myths about its invulnerability. It could not be shut down, because it lacked any kind of central control or routing system, or so the story went . . . and there was some truth to that belief. The way the Internet routed information was entirely new, an advance over all previous communications systems, and one that was inherently sturdier.

  Finding your way on the Internet isn’t as direct as, say, routing a telephone call. Telephone lines carry the electrical impulses of an outgoing call along wires down the shortest available path to the number being called. The big difference between the Internet and telephone networks, or the interstate highway system, for that matter, is that traffic does not flow down clearly defined, predictable pathways. There are detailed printed maps of telephone networks and highways, and the paths taken by calls and vehicles can always be clearly traced. One of the major conceptual breakthroughs that enabled the creation of the Internet was to do away with this clarity.

  The idea was called “packet-switching.” The concept apparently came almost simultaneously to two cold war scientists working in the 1960s: Donald Davies, at Britain’s National Physical Laboratory; and an American immigrant scientist from Poland named Paul Baran, at the RAND Corporation. Both researchers were trying to invent a new, more robust communications network. Baran was specifically tasked with designing one that might withstand a nuclear attack. Davies was just looking for an improvement over the existing telephone switching networks, but there is little doubt that the experience of prolonged German aerial bombardment during World War II lurked somewhere in the back of his mind. Traditional phone networks had critical trunk lines and central switching stations that, if destroyed, could effectively short-circuit the entire network. Both Baran and Davies wanted a system that could survive such blows, that could not be taken out. The alternative that seemed to work best was modeled after the human brain.

  Neurologists knew that after severe head injuries, the brain began to power up alternative neural pathways that avoided areas of damaged or destroyed cells. Often patients completely recovered functions that, at first glance, might have seemed hopelessly lost. The brain seemed to possess enough built-in redundancy to compensate for even seemingly catastrophic blows.

  Abandoning the most direct pathway would not have worked for telephone grids, because the farther the message traveled through the network’s wires and switches, and the more times its direction shifted, the more degraded the signal became. Digital messages, on the other hand, messages composed in the ones and zeros of computer object code, never degraded. They could bounce around indefinitely without losing their integrity, and still arrive pristine. There was another advantage to the digital approach. Since messages were broken down by the computer into long lists of ones and zeros, why not break them down further into smaller bits, or “packets,” and then reassemble them at the end point? That way even a message as simple as an email might take dozens of different pathways to its destination. It was more like teleportation than simple transmission: You disassembled the data into many distinct packets; cast them out on the network, where each packet found its own way; and then reassembled the data at the end point, all in microseconds, which are perceived by humans as real time. No delay. Diagrams of the proposed “packet-switching” network looked more like drawings of interlinked brain cells than a road map or a telephone grid. Such a network required minimal central planning, because each new computer node that connected just enlarged and strengthened the web. You could not destroy such a network easily, because even if you managed to take out a large chunk, traffic would automatically seek out surviving nodes.
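
  The scheme is simple enough to sketch in a few lines of code. In the toy fragment below, the message and the packet size are arbitrary, and a random shuffle merely stands in for packets taking different routes and arriving out of order; a sequence number on each packet is all the receiver needs to reassemble the original.

```python
# A toy sketch of packet-switching: chop a message into numbered packets,
# let them arrive in any order, and rebuild the original at the far end.
# Real protocols (e.g., TCP/IP) are far more involved.
import random

def to_packets(message: str, size: int = 8):
    """Chop a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the message from packets, whatever order they arrived in."""
    return "".join(chunk for _, chunk in sorted(packets))

message = "Even a message as simple as an email may take dozens of paths."
packets = to_packets(message)
random.shuffle(packets)   # stand-in for packets taking different routes
assert reassemble(packets) == message
```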

  This gave the Internet an especially hardy nature—a fact that buttressed the anarchic theology of the techno-utopians. But it was not invulnerable, as the massive 2002 DDoS attack demonstrated. The system’s root servers were critical, because all Internet traffic relied on at least one of the thirteen. If you could mount a sufficiently powerful assault, it was theoretically possible to overwhelm all thirteen and bring even this very resilient global network to a dead stop. It would take a mighty computer to mount an attack like that, or one very, very large botnet. By the turn of the century, botnets were the coming thing . . .

  . . . and they were getting easier to make.

  In the beginning, networks were created by wiring computers together manually, but as the infrastructure of the Internet solidified, interconnection was a given. Almost all computers today are connected to a network, even if only to their local ISP. So if you were clever enough to make all the computers on a network work together, you could effectively assemble yourself a supercomputer. There was even a poorly guarded infrastructure already in place to facilitate such work. Techies had long been using Internet Relay Chat (IRC) channels to maintain constant real-time dialogue with colleagues all over the world. IRC offered a platform for global communication that was controlled from a single point, the channel’s manager, and was used to host open-ended professional discussions, laboratory projects, and teleconferences before desktop applications for such things became widely known or available. Members of a group could use the channel to communicate directly and privately with one another but could also broadcast messages to the entire membership. Some of the earliest benign “bots” were crafted by IRC channel controllers to automatically monitor or manage discussion. The idea wasn’t completely new. Computer operators had long written programs to automate routine tasks on their networks. These early bots were useful and harmless. In the early 1970s, a Massachusetts researcher named Bob Thomas created a silly worm he called “Creeper,” which would display a message on infected machines: “I’m the Creeper, catch me if you can!” Creeper was more frog than worm. It hopscotched from target to target, removing itself from each computer as it jumped to the next. It was designed just to show off a little, and to make people laugh.
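
  A minimal sketch of such a channel-minding bot, with a placeholder server and channel (real IRC software handles far more of the protocol), shows the basic idea: a program sits in a channel like any other member and answers routine traffic automatically.

```python
# A minimal sketch of a benign IRC bot. The server and channel names are
# placeholders, not real ones; a genuine bot would register a unique nick
# on an actual IRC network.
import socket

SERVER, PORT = "irc.example.net", 6667   # hypothetical IRC server
CHANNEL, NICK = "#labchat", "minderbot"

sock = socket.create_connection((SERVER, PORT))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :channel minder\r\n".encode())
sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

while True:
    data = sock.recv(4096)
    if not data:                          # server closed the connection
        break
    for line in data.decode(errors="ignore").splitlines():
        if line.startswith("PING"):
            # Answer the server's keep-alive so the bot stays connected --
            # exactly the kind of routine chore the earliest bots automated.
            sock.sendall(line.replace("PING", "PONG", 1).encode() + b"\r\n")
        elif "PRIVMSG" in line and "hello" in line.lower():
            sock.sendall(f"PRIVMSG {CHANNEL} :hello!\r\n".encode())
```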

  But even those engaged in noble pursuits sometimes don’t play nice. Chat room members sometimes chose to commandeer these channels, to become, in effect, alternate controllers. One very effective way to hijack an IRC channel (and, in effect, create a botnet) was to bypass individual computer operators with a worm that could infect all the machines on the channel. The author seeded the network with his code and linked the infected machines to himself. The official manager of the channel would have no idea his network had been hijacked. The usurper could then marshal the power of the network to mount a DDoS attack against those with whom he disagreed or of whom he disapproved, or he could simply explore the network all he wished, collecting information from individual computers, spying, or issuing commands of his own. It was a tool ready-made for more nefarious purposes.

  On Friday, November 4, 1988, days before voters went to the polls nationwide to choose Vice President George H. W. Bush over Governor Michael Dukakis of Massachusetts for the White House, a headline in the New York Times read:

  “VIRUS” IN MILITARY COMPUTERS

  DISRUPTS SYSTEMS NATIONWIDE

  The writer, John Markoff, reported:

  In an intrusion that raises questions about the vulnerability of the nation’s computers, a Department of Defense network has been disrupted since Wednesday by a rapidly spreading “virus” program apparently introduced by a computer science student.

  . . . By late yesterday afternoon computer experts were calling the virus the largest assault ever on the nation’s computers.

  “The big issue is that a relatively benign software program can virtually bring our computing community to its knees and keep it there for some time,” said Chuck Cole, deputy computer security manager at Lawrence Livermore Laboratory in Livermore, Calif., one of the sites affected by the intrusion. “The cost is going to be staggering.”

  For those inclined to conspiracy theories, it was noted with particular interest that the twenty-three-year-old author of the “virus,” Robert Tappan Morris, a Cornell University graduate student, was the son of the chief scientist at the National Computer Security Center, a division of the National Security Agency. The younger Morris had grown up playing with computers. Typical of those in the hacking community, he had a fluency with networks and network security (such as it existed at that time, which is to say, hardly at all). By all accounts, he cooked up the worm on his own. Markoff reported that the grad student’s creation had clogged computer networks nationwide; in 1988, these networks still mostly belonged to the military, corporations, and universities. Cliff Stoll, then working as a computer security expert at Harvard University, told the newspaper, “There is not one system manager who is not tearing his hair out. It is causing enormous headaches.”

  The managers were annoyed, certainly, but also clearly impressed. More than one programmer described the Morris Worm as “elegant.” It consisted of only ninety-nine lines of code, and had a number of clever ways to invade computers, one of them by causing a buffer overflow (remember that technique?) in the “finger” user-lookup service that ran on many of the network’s machines. Morris launched his worm from a computer at MIT to cover his tracks at Cornell, expecting it to evade detection in the computers it infected. As smart as it was, the worm had a fatal flaw. In an effort to protect itself from being flushed out of a network, the code was designed to reproduce itself wantonly, and, much to Morris’s dismay, ended up spiraling out of control. When he realized that it was running amok, he said he tried to send out instructions to kill it, but the networks were so jammed with his worm’s traffic that the corrective could not get out.
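
  The flaw is easy to model. The toy simulation below uses invented numbers and is not the worm’s actual logic; it illustrates only the arithmetic of reinfection, in which copies occasionally reinstall themselves on machines already infected, so the population of copies, and the traffic it generates, snowballs instead of leveling off.

```python
# A toy model (all numbers invented, not the worm's real logic) of how
# reinfection makes copies pile up instead of settling at one per machine.
import random

MACHINES = 100
ATTEMPTS_PER_COPY = 5    # infection attempts each copy makes per round
REINFECT_CHANCE = 1 / 7  # odds of installing on an already-infected machine

copies = {m: 0 for m in range(MACHINES)}
copies[0] = 1            # patient zero

for rnd in range(1, 6):
    for _ in range(sum(copies.values()) * ATTEMPTS_PER_COPY):
        target = random.randrange(MACHINES)
        # An uninfected machine is always infected; an infected one is
        # sometimes reinfected, so copies multiply instead of leveling off.
        if copies[target] == 0 or random.random() < REINFECT_CHANCE:
            copies[target] += 1
    print(f"round {rnd}: {sum(copies.values())} copies on {MACHINES} machines")
```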

  Once the worm malfunctioned, Morris never tried to evade responsibility. He was later convicted under the new federal Computer Fraud and Abuse Act, fined $10,000, and sentenced to three years of probation and four hundred hours of community service. Perhaps a more lasting punishment has been lifelong notoriety, a quasi-hero status among those who admire acts of cybervandalism. He is today an associate professor at MIT, and insists he had intended nothing more than to quietly infect computers in order to count them. Prosecutors charged that he had, in fact, designed the worm to “attack” computers owned by Sun Microsystems, Inc., and the Digital Equipment Corporation, two of the institutions hardest hit.