Worm: The First Digital World War


  Wherever computer centers have become established, that is to say, in countless places in the United States, as well as in all other industrial regions of the world, bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at their computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be riveted as a gambler’s on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: Cokes, sandwiches. If possible, they sleep on cots near the computer. But only for a few hours—then back to the console or printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and the world in which they move. They exist, at least when so engaged, only through and for computers. These are computer bums, compulsive programmers. They are an international phenomenon.

  The Geek Tribe today has broadened to include a wider and more wholesome variety of characters—Phil played a lot of basketball in high school and actually went out with girls—and there is no longer any need for “printouts” to obsess over—everything is on-screen—but the Tribe remains international and utterly obsessed, linked 24/7 by email and a host of dedicated Internet chat channels. In one sense, it is strictly egalitarian. You might be a lonely teenager with pimples in some suburban basement, too smart for high school, or the CEO of some dazzling Silicon Valley start-up, but you can join the Tribe so long as you know your stuff. Nevertheless, its upper echelons remain strictly elitist; they can be as snobby as the hippest Soho nightclub. Some kind of sniff test applies. Phil himself, for instance, was kept out of the inner circle of geeks fighting this new worm for about a month, even though he and his team at SRI had been at it well before the Cabal came together, and much of the entire effort rested on their work. Access to a mondo mainframe or funding source might gain you some cachet, but real traction comes only with savvy and brainpower. In a way, the Tribe is as virtual as the cyber-world itself. Many members have known each other for years without actually having ever met in, like, real life. Phil seems happiest here, in the glow of his three monitors, plugged into his elite global confederacy of the like-minded.

  The world they inhabit didn’t even exist, in any form, when Phil was born in 1966. At that point the idea of linking computers together was just that, an idea, and a half-baked one. It was the brainchild of a group of forward-thinking scientists at the Pentagon’s Advanced Research Projects Agency (ARPA). The agency was housed in and funded by the Pentagon, and this fact has led to false stories about the Internet’s origins, that it was official and military and therefore inherently nefarious. But ARPA was one of the least military enterprises in the building. Indeed, the agency was created and sustained as a way of keeping basic civilian research alive in an institution otherwise entirely focused on war. One of the things ARPA did was underwrite basic science at universities, supporting civilian academic scientists in projects often far afield from any obvious military application. Since at that time the large laboratories were using computers more and more, one consequence of coordinating ARPA’s varied projects was that it accumulated a variety of computer terminals in its Pentagon offices, each wired to mainframes at the different labs. Every one of these terminals was different. They varied in appearance and function, because each was a remote arm of the hardware and software peculiar to its host mainframe. Each had its own method of transferring and displaying data. ARPA’s Pentagon office had begun to resemble the tower of Babel.

  Computers were then so large that if you bought one, you needed a loading dock to receive it, or you needed to lift off the roof and lower it into position with a crane. Each machine had its own design and its own language and, once it had been put to work in a particular lab, its own culture, because each was programmed and managed to perform certain functions peculiar to the organization that bought it. Most computers were used to crunch numbers for military or scientific purposes. As with many new inventions that have vast potential, those who first used them didn’t look far past their own immediate needs, which were demanding and remarkable enough, like calculating the arc through the upper atmosphere of a newly launched missile, or working out the variable paths of subatomic particles in a physics experiment. Computers were very good at solving large, otherwise time-consuming calculations very quickly, thus enabling all kinds of amazing technological feats, not the least of which was to steer six teams of astronauts to the surface of the moon and back.

  Most thinkers were busy with all of the immediate miracles computers had made suddenly doable; only those at the farthest speculative frontiers were pondering the machines’ broader possibilities. The scientists at ARPA, J. C. R. Licklider and Bob Taylor and Larry Roberts, as described in Where Wizards Stay Up Late, by Katie Hafner and Matthew Lyon, were convinced that the computer might someday be the ultimate aid to human intelligence, that it might someday be, in a sense, perched on mankind’s shoulder making instant connections that few would have the knowledge, experience, or recall to make on their own, connecting minds around the world in real time, providing instant analysis of concepts that in the past might require years of painstaking research. The first idea was just to share data between labs, but it was only a short leap from sharing data to sharing resources: in other words, enabling a researcher at one lab to tap into the special capabilities and libraries of a computer at a distant one. Why reinvent a program on your own mainframe when it was already up and running elsewhere? The necessary first step in this direction would be linkage. A way had to be found to knit the independent islands of computers at universities and research centers into a functional whole.

  There was resistance. Some of those operating mainframes, feeling privileged and proprietary and comfortably self-contained, saw little or no advantage in sharing them. For one thing, competition for computing time in the big labs was already keen. Why invite more competition from remote locations? Since each mainframe spoke its own language, and many were made by competing companies, how much time and effort and precious computing power would it take to enable smooth communication? The first major conceptual breakthrough was the idea of building separate computers just to resolve these issues. Called Interface Message Processors (IMPs), they grew out of an idea floated by Washington University professor Wesley Clark in 1967: instead of asking each computer operator to design protocols for sending and receiving data to every other computer on the net, why not build a subnet just to manage the traffic? That way each host computer would need to learn only one language, that of the IMP. And the IMPs would manage the routing and translating problems. This idea even dangled before each lab the prospect of a new mainframe to play with at no extra cost, since the government was footing the bill. It turned an imposition into a gift. By the early 1970s, there were dozens of IMPs scattered around the country, a subnet, if you will, managing traffic on the ARPANET. As it happens, the first two computers linked in this way were a Scientific Data Systems (SDS) 940 model in Menlo Park, and an older model, SDS Sigma-7, at UCLA. That was in October 1969. Phil Porras was just out of diapers.
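
  The arithmetic behind Clark’s insight is easy to sketch: with n mutually incompatible hosts, direct interconnection needs a translator for every pair of machines, while an IMP subnet needs only one host-to-IMP interface per machine. A few lines of Python make the comparison concrete (the host counts below are arbitrary examples, not historical figures):

```python
# Back-of-the-envelope arithmetic behind the IMP idea: with n incompatible
# hosts, direct interconnection needs a translator for every pair of
# machines, while a message-processing subnet needs one interface per host.
def pairwise_translators(n: int) -> int:
    return n * (n - 1) // 2  # number of distinct host pairs


def imp_interfaces(n: int) -> int:
    return n  # one host-to-IMP interface per machine


for n in (4, 20, 100):
    print(f"{n} hosts: {pairwise_translators(n)} pairwise translators "
          f"vs. {imp_interfaces(n)} IMP interfaces")
```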

  The ARPANET’s designers had imagined resource- and data-sharing as its primary purpose, and a greatly simplified way to coordinate the agency’s scattered projects, but as the authors of new life-forms have always discovered, from God Almighty to Dr. Frankenstein, the creature immediately had ideas of its own. From its earliest days, the Internet was more than the sum of its parts. The first hugely successful unforeseen application was email, the ability to send messages instantly anywhere in the world, followed closely by message lists, or forums that linked in real time those with a shared interest, no matter where they were. Message lists or chat lines were created for disciplines serious and not so serious—the medieval fantasy game “Dungeons and Dragons” was a popular early topic. By the mid-1970s, at about the time microcomputers were first being marketed as build-it-yourself kits (attracting the attention of Harvard undergrad nerds Bill Gates and Paul Allen), the ARPANET had created something new and unforeseen: in the words of Hafner and Lyon, “a community of equals, many of whom had never met each other yet who carried on as if they had known each other all of their lives . . . perhaps the first virtual community.”

  This precursor web relied on telephone lines to carry information, but in short order computers were being linked by radio (the ALOHANET in Hawaii connected computers on four islands in this way) and increasingly by satellite (the quickest way to connect computers on different continents). Pulling together this rapidly growing variety of networks meant going back to the idea of the IMP: creating a new subnet to facilitate linkage—call it a sub-subnet, or a network of networks. Computer scientists Vint Cerf of Stanford and Bob Kahn, then at ARPA, presented a paper in 1974 outlining a new method for moving data between these disparate systems, called Transmission Control Protocol, or TCP. It was another eureka moment. It enabled any computer network established anywhere in the world to plug into the growing international system, no matter how it transmitted data.
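
  To a programmer, that universality appears as a simple abstraction: a program asks for a reliable connection to a named machine and lets TCP, and the layers beneath it, worry about whether the bytes travel over copper, radio, or satellite. The Python sketch below illustrates the idea; the host example.com and the timeout are arbitrary choices:

```python
import socket

# Ask for a reliable byte stream to a host and port; TCP (with IP beneath it)
# hides whether the data rides over telephone lines, radio links, or
# satellites. "example.com" and the 10-second timeout are arbitrary choices.
with socket.create_connection(("example.com", 80), timeout=10) as conn:
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(conn.recv(256).decode(errors="replace"))
```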

  All of this was happening years before most people had ever seen an actual computer. For its first twenty years, the Internet remained the exclusive preserve of computer scientists and experts at military and intelligence centers, but it was becoming increasingly clear to them that the tool had broader application. Today it serves more than two billion users around the world, and has increasingly become the technological backbone of modern life.

  Its growth has been bottom-up, in that beyond ad hoc efforts to shape its technical undergirding, no central authority has dictated its structure or imposed rules or guidelines for its use. This has generated a great deal of excitement among social theorists. The assignment of domain names and IP addresses was handed off by SRI in 1998 to the closest thing the Internet has to a governing body, the Internet Corporation for Assigned Names and Numbers (ICANN). Headquartered in Marina del Rey, California, ICANN is strictly nonprofit and serves little more than a clerical role, but, as we shall see, is capable of exerting important moral authority in a crisis. Domain names are the names (sometimes just numbers) that a user selects to represent his presence on the Internet—yahoo.com, nytimes.com, etc. Many domains also establish a website, a “page” or visible representation of the domain’s owner, be it an individual, a corporation, an agency, an institution, or whatever. Not all domains establish websites. The physical architecture of the Internet rests on thirteen root servers, labeled A, B, C . . . through M. Ten of these are in the United States, and one each in Great Britain, Japan, and Sweden.* The operators of the root servers maintain very large computers to direct the constant flow of data worldwide. These servers also maintain comprehensive and dynamic lists of domain-name servers, which keep the flow moving in the right direction, from nanosecond to nanosecond.
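
  The name-to-number translation that makes this work is easy to watch from any connected machine. A few lines of Python, using only the standard library, ask the domain name system to map a name onto the numeric addresses that routers actually use; example.com here is simply a stand-in for any domain:

```python
import socket

# Ask the domain name system (DNS) to translate a human-readable name into
# the numeric IP addresses that routers use. "example.com" is a stand-in.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 80, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```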

  The system works more like an organism than any traditional notion of a machine. The best effort at a visual illustration was created by researchers at Bar-Ilan University in Israel, who produced a gorgeous image that resembles nothing so much as a single cell. It shows a dense glowing orange nucleus of eighty or so central nodes surrounded by a diffuse protoplasmic periphery of widely scattered yellow-and-green specks representing isolated smaller nodes, encircled by a dense blue-and-purple outer wall or membrane of directly linked, peer-to-peer networks. The bright hot colors indicate high-traffic links, like root servers or large academic, government, or corporate networks; the cooler blues and purples of the outer membrane suggest the low-traffic networks of local Internet Service Providers (ISPs) or companies. There is something deeply suggestive in this map, reminiscent of what Douglas Hofstadter called a “strange loop” in his classic work, Gödel, Escher, Bach, the notion that a complex system tends toward self-reference, and inevitably rises toward consciousness. It is possible, gazing at this remarkable picture of the working Internet, to imagine it growing, multiplying, diversifying, and some day, in some epochal instant, blinking and turning and looking up, becoming alive. Planetary consciousness. The global I.

  The Internet is not about to wink at us just yet, but it helps explain some of the reverence felt by those engaged in conceptualizing, building, and maintaining the thing. It represents something entirely new in human history, and is the single most remarkable technological achievement of our age. Scientists discovered the great advantage of sharing lab results and ideas instantaneously with others in their field all over the world, and grew excited about the possibilities of tying large networks together to perform unprecedented research. Social theorists awoke to the thing’s potential, and a new vision of a techno utopia was born. All human knowledge at everyone’s fingertips! Ideas shared, critiqued, tested, and improved! Events in the most remote corners of the world experienced everywhere simultaneously! The web would be a repository for all human knowledge, a global marketplace for products and ideas, a forum for anything that required interaction, from delicate international diplomacy to working out complex differential equations to buying office supplies—and it would be entirely free of regulation and control! Governments would be powerless to censor information. Journalism and publishing and research would no longer be in the hands of a wealthy few. Secrets would be impossible to keep! The Internet promised a truly global egalitarian age. That was the idea, anyway. The international and unstructured nature of the thing was vital to these early Internet idealists. If knowledge is power, then power at long last would reside where it belonged, with the people, all people! Tyrants and oligarchs would tremble! Bureaucracy would be streamlined! Barriers between nation-states and cultures would crumble! Humankind would at last be . . . !

  . . . you get the picture.

  Some of this was undeniable. Few innovations have taken root so fast internationally, and few have evolved in such an unfettered, democratic way. The Internet has made everyone, in a virtual sense, a citizen of the world, a development that has already had profound consequences for millions, and is sure to have more. But in their early excitement, the architects of the Internet may have overvalued its anarchic essence. When the civilian Internet began taking shape, mostly connecting university labs to one another, the only users were people who understood computers and computer languages. Techno-utopia! Everyone can play! Information for free! Complete transparency! No one wrote rules for the net; instead, people floated “Requests for Comments.” Ideas for keeping the thing working smoothly were kicked around by everyone until a consensus arose, and given the extreme flexibility of software, anything adopted could readily be changed. Nobody was actually in charge. This openness and lack of any centralized control are both a strength and a weakness. If no one is ultimately responsible for the Internet, then how do you police and defend it? Unless everyone using the thing is well-intentioned, it is vulnerable to attack, and can be used as easily for harm as for good.

  Even though it has become a part of daily life, the Internet itself remains a cloudy idea to most people. It’s nebulous in a deeper way than previous leaps in home technology. Take the radio. Nobody knew how that worked, but you could picture invisible waves of electromagnetic particles arriving from the distance like the surf, distant voices carried forth on waves from the edges of the earth and amplified for your ears. If you lived in a valley or the shadow of a big building, the mountains or the walls got in the way of the waves; if you lived too far from the source of the signal, then the waves just petered out. You got static, or no sound. A fellow could understand that much. Or TV . . . well, nobody understood that, except that it was like the damn radio only the waves, the invisible waves, were more complex, see, and hence delivered pictures, too, and the sorting mechanism in the box, the transistors or vacuum tubes or some such, projected those pictures inside the tube. In either case you needed antennae to pick up the waves and vibrate just so. There was something going on there you could picture, even if falsely. But the Internet is just there. It is all around us, like the old idea of luminiferous ether. No antenna. No waves—at least, none of the kind readily understood. And it contains not just a voice or picture, but . . . the whole world and everything in it: pictures, sounds, text, movies, maps, art, propaganda, music, news, games, mail, whole national libraries, books, magazines, newspapers, sex (in varieties from enticing to ghastly), along with close-up pictures of Mars and Jupiter, your long-forgotten great-aunt Margaret, the menu at your local Thai restaurant, everything you ever heard of and plenty you had not ever dreamed about, all of it just waiting to be plucked out of thin air.

  Behind his array of three monitors in Menlo Park, Phil Porras occupies a desk in the very birthplace of this marvel, and sees it not in some vague sense, but as something very real, comprehensible, and alarmingly fragile. By design, a portion of the virtual ranch he surveys is left unfenced and undefended. It is thus an inviting target for every free-roaming strain of malware trolling cyberspace. This is his petri dish, or honeynet. Inside the very large computer he gets to play with, Porras creates a network of “virtual computers.” These are not physical machines, just individual operating systems within the large computer that mimic the functions of distinct, small ones. Each has its own IP address. So Phil can set up the equivalent of a computer network that exists entirely within the confines of his digital ranch. These days if you leave any computer linked to the Internet unprotected, you can just sit back and watch it “get popped” or “get pwned,” in the parlance. (The unpronounceable coinage “pwned” was an example of puckish hacker humor: geeks are notoriously bad spellers, and someone early on in the malware wars had typed “p” instead of “o” in the word “owned.” It stuck.) If you own an Internet space as wide as SRI’s, you can watch your virtual computers get pwned every few minutes.
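
  The mechanics of the lure are simple enough to caricature. The toy Python sketch below leaves a single port open on an otherwise idle address and logs whoever comes knocking; the port number is an arbitrary choice, and SRI’s actual honeynet instrumentation is of course far more elaborate:

```python
import datetime
import socket

# Toy illustration of the honeynet idea: listen on an otherwise unused
# address and simply record every unsolicited connection attempt.
# The port number (2323) is an arbitrary choice for this sketch.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 2323))
listener.listen(5)

while True:
    conn, (addr, port) = listener.accept()
    print(f"{datetime.datetime.now().isoformat()} probe from {addr}:{port}")
    conn.close()
```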