Anatomy of Malice
For anyone worried about viruses and worms, perhaps the best advice is Know Thy Enemy
One moment an executive is working on an e-mail to an important client. The next, her PC has been converted into an expensive paperweight, paralyzed by a piece of malicious software.
From New York to New Delhi, this scenario is all too familiar. Nor do infections cause only local damage. Increasingly, computers are being attacked by software that enables remote intruders to gain access or enlist computers as hapless foot soldiers in an information war.
The perils of such enlistment hit the headlines last year when sites like eBay and CNN were brought low by a battalion of 75 computers flooding targets with junk data and blocking access by legitimate users. The attacker was a Canadian teenager, who had to hack into each computer individually. But autonomous, self-replicating software could create not a battalion, but an army, and wreak havoc on the communal infrastructure of the Internet.
The Usual Suspects
Malicious software can be classified into three groups: viruses, Trojans, and worms. These divisions reflect how the software infects its target and might replicate after infection. How a virus or worm affects a computer depends on the payload it carries. The payload is the portion of the virus that can spell the difference between a minor irritation and complete disaster for computer users and administrators. But even a virus with a benign or no payload may do harm by using up computer resources such as network capacity.
A virus hides and replicates itself in a computer’s file system. To trigger an infection, the virus must be in a piece of software that is executed by the system. Many viruses soon copy themselves into essential system files, making them hard to remove. Typically, viruses spread from system to system as software is exchanged between users, but the use of Trojans and worms to deliver viruses is also common. Once executed, most viruses take up residence in the computer’s memory and try to infect other programs.
Like the wooden horse of legend, Trojans work by pretending to be something they are not, in order to bypass defenses. Masquerading as a useful or amusing piece of software, they can carry a dangerous payload that executes on the target computer with all the privileges of the user who ran the Trojan program. Writing a Trojan requires no more effort than writing any normal piece of software. It does not reproduce itself and so cannot spread throughout a file system or across a network. It relies upon somehow convincing individual users to run it as a trusted piece of software, a tactic that normally precludes epidemics. This limitation is not always a drawback to someone trying to break into a computer system; a Trojan program that is apparently well behaved and that draws little attention to itself can be used by a would-be intruder to monitor a network or provide a backdoor into a computer system at a later date.
A worm is a piece of software that propagates itself across computer networks. Unlike Trojans and viruses, it can get itself executed on a target system without human intervention. It gets into a system by exploiting bugs or overlooked features in commonly used network software already running on the target. A worm can exist purely in memory, never existing in a file, making it invisible to file-scanning antivirus software.
Fear of just such a disaster fueled the urgent warnings that accompanied the recent outbreak of the Code Red worm. The target—the White House Web server—dodged the attack, but the aftershocks are still being felt. In fact, sampling nearly any Internet traffic stream reveals Code Red-like probes by copycat software looking for vulnerable computers to infect.
As in controlling the spread of real diseases, the key to effective defenses is to understand the cause and mechanism of infection, not to focus on the symptoms. A computer virus that erases a user’s files may seem very different from one that merely prints out the occasional annoying message, but chances are, they both got into his or her system in a similar fashion.
Evolution of a sickness
Malicious software falls, by and large, into three classes: Trojans, viruses, and worms [see sidebar, "The Usual Suspects"]. The first to appear were the Trojans, which date back to the early 1970s. Their existence prompted Fred Cohen, then a graduate student at the University of Southern California in Los Angeles, to begin experimenting with hostile and defensive software in 1983. Cohen read about the various Trojan horse programs being found in user directories on timesharing systems, and as he remembers it, “I realized that if a program was [not only] a Trojan but also reproduced itself, it would spread from program to program and user to user, acting like a disease.” Now a practitioner in residence in the computer forensics program at the University of New Haven, in Connecticut, Cohen is credited with having coined the term computer virus.
By 1986, Brain, the first virus to be widely transmitted among PC users, had been created in Pakistan. It eventually found its way to the United States, triggering an outbreak at the University of Delaware, in Newark, in October 1987. Although the virus did little damage, it marked the end of an age of innocence.
In 1988, another landmark event occurred: the first Internet worm. At its peak the Morris worm infected some 6000 hosts, or 10 percent of the nascent Internet. Attacking on several fronts, the worm exploited bugs in software on the target systems and tried to guess obvious user passwords. Ultimately, it was a victim of its own success. Because it was poor at determining whether or not a system was already infected, targets were soon infected with multiple copies of the worm running simultaneously. As the copies scanned for new targets, the resulting exponential increase in the load on individual computers and network connections tipped off system administrators.
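The multiplying copies were the worm's undoing, and the effect is easy to model. The toy simulation below is purely illustrative (the host count, number of rounds, and one-copy-spawns-one-copy rule are made-up assumptions, not measurements of the Morris worm), but it shows why the missing already-infected check mattered:

```python
import random

def simulate(rounds, hosts, check_first, seed=1):
    """Toy model of worm spread. Each round, every running copy picks a
    random host to infect. A worm that checks first (check_first=True)
    skips hosts that are already infected; the Morris worm did not."""
    random.seed(seed)
    copies = [0] * hosts          # number of worm copies on each host
    copies[0] = 1                 # patient zero
    for _ in range(rounds):
        targets = []
        for host in range(hosts):
            targets.extend(random.randrange(hosts) for _ in range(copies[host]))
        for t in targets:
            if check_first and copies[t] > 0:
                continue          # already infected: leave it alone
            copies[t] += 1        # otherwise, one more copy starts running
    infected = sum(1 for c in copies if c > 0)
    return infected, sum(copies)  # hosts reached, total running copies
```

With the check, the number of running copies stays equal to the number of infected hosts; without it, the copies double every round regardless of how many hosts remain to be infected, and it was that runaway load that tipped off administrators.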
The counterattack takes hold
In response to the Morris worm incident, the U.S. Defense Advanced Research Projects Agency (Darpa), Arlington, Va., set up the Computer Emergency Response Team. The group is now known as the CERT Coordination Center and is based at Carnegie Mellon University, in Pittsburgh. “It was decided that there needed to be an organization that could coordinate responses to events like this,” explained Marty Lindner, team leader for incident handling at CERT.
Antivirus software companies sprang up too. One well-known vendor is Symantec, headquartered in Cupertino, Calif. As senior director of the company’s Security Response office in Santa Monica, Calif., Vincent Weafer recalled how his staff watched viruses evolve. “Probably the biggest technology leap that occurred was the introduction of macro viruses” in the mid-1990s, Weafer said. A macro is a package of instructions used to automate tasks in large applications, such as the Microsoft Office suite, which provide so-called script engines to create and run macros.
If the application has been ported to several different platforms, the script engine ensures that the same macro will run on those platforms. Previously, differences between platforms meant that viruses could not cross the computer version of the species barrier and infect, say, both PC and Macintosh computers.
Script engines removed that barrier, and worse, provided a high-level language environment for virus writers. “All of a sudden,” Weafer said, “we went from [virus writers] who had to understand assembly...and low-level code, to people who could write viruses in macro [languages]....We saw an explosion of macro viruses as a lot of people, not necessarily equipped with a great deal of knowledge, started to get involved.”
Among those unsophisticated users was a new type of computer vandal called a script kiddie: a (typically) relatively unskilled adolescent who creates his or her own viruses and worms with virus-writing tools created by others.
Then, in 1998, a new type of virus appeared that combined features from all three classes: viruses, Trojans, and worms. These were the mass mailers, which arrived attached to e-mail messages. Melissa was the first big one. “Suddenly we had global epidemics in a matter of days, not months or weeks, as we used to,” said Weafer. According to CERT, it took three days for Melissa to infect over 100 000 computers, compared to the months it took for Brain to infect a few thousand computers 10 years previously.
Under the skin
A detailed look at Melissa demonstrates just how viruses in general get into a system, replicate, and deliver their payloads. Melissa targeted the Microsoft Office software suite, probably because of its widespread availability and its tight integration of such components as a word processor and an e-mail client.
Melissa’s first appearance was on 26 March 1999 in the alt.sex newsgroup, lurking in a posted Microsoft Word document that contained a list of user names and passwords for a variety of pornographic Web sites.
The virus was in a macro called Document_Open, which, as the name suggests, is executed when the document is opened—if macros are permitted to run. Although Microsoft Word pops up a warning against permitting macros to execute, users caught in the first wave were sufficiently intrigued by the content to ignore the warning—a perfect example of a Trojan attack.
The virus’s first act was to disable the macro security tools. These tools allow users to block macros from running and receive warnings about the presence of macros in a document file.
As a worm might do, Melissa then opened the user’s Microsoft Outlook e-mail address book and mailed the infected document, along with the virus, to the first 50 names in each address list. Cleverly, it also composed a subject line for these e-mails that read “Important Message From,” followed by the infected user’s name, also from Outlook. The body of the e-mail was set to “Here is that document you asked for...don’t show anyone else.” This convinced recipients that the document was from a trusted source, so they, too, ignored the initial warning against enabling macros.
Melissa then moved to its viral stage, attempting to infect other Word documents. First, it invaded Word’s default template, copying itself into the Document_Close macro. The default template contains various settings used by Word when creating and editing documents.
Subsequently, when a Word document was closed by the user, the Document_Close macro executed, triggering Melissa, which copied itself into its original hiding place in a Document_Open macro. This meant Melissa was not confined to its original Trojan horse of a list of pornographic Web sites. Instead, it could hitch a ride on documents legitimately mailed between users, virtually guaranteeing that it would be part of a trusted e-mail and that the initial macro warning would be ignored.
Melissa then tidied up after itself, making sure the infected document was properly saved, so that the user would be unaware that anything was amiss. Finally the payload was executed. If the minute of the hour equaled the calendar day (say, 1:21 on 21 March), a quote from the animated TV show, “The Simpsons,” was printed on the user’s screen.
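The trigger itself was a one-line date comparison. Here is a sketch of the logic (the function name is hypothetical, and Melissa's real payload was a Visual Basic macro, not Python):

```python
from datetime import datetime

def payload_due(now: datetime) -> bool:
    """Melissa's payload trigger: fire only when the minute of the hour
    equals the day of the month, e.g. 1:21 on the 21st."""
    return now.minute == now.day
```

The condition is true for only one minute of each hour, which kept the payload rare enough that most infected users never saw it.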
Although the payload was only a distraction, Melissa did considerable damage by clogging mail servers and provoking their shutdown. CERT reports that one site alone received 32 000 e-mail copies of Melissa in 45 minutes. Many system administrators chose to turn off their mail servers rather than attempt to weather the storm. “The real financial impact came when people took their e-mail servers off-line, with the following loss of productivity,” said Symantec’s Weafer.
Melissa’s creator learned from the mistakes of the past. Unlike the Morris worm, Melissa carefully checks to see if the machine has already been infected, by looking in the Windows registry for an entry called “...by Kwyjibo” (Kwyjibo is another reference to “The Simpsons”). If the machine has not been infected, it adds this entry to the registry and mails itself out. If the machine has already been infected, it refrains from mailing itself and just sets about infecting documents. By limiting itself in this way, Melissa reduced, at least initially, its chances of being detected as the Morris worm was.
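Stripped of its malicious context, this is a run-once guard keyed on a persistent marker. A generic sketch, with a plain dict standing in for the Windows registry (all names here are hypothetical):

```python
def guarded_run(registry, marker, noisy_step, quiet_step):
    """Melissa-style guard: the conspicuous step (mass mailing) runs only
    on first infection; the quiet step (infecting documents) runs every time."""
    if marker not in registry:
        registry[marker] = True   # leave the "already infected" marker
        noisy_step()              # loud, one-time action
    quiet_step()                  # low-profile action, repeated safely
```

Run twice against the same registry, the noisy step fires once and the quiet step twice, which is how Melissa traded some spreading speed for a lower profile.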
Many of the macro viruses that came later, including the Love Bug with its infamous subject line of “I LOVE YOU,” followed the same pattern. Employing a clever bit of social engineering to gain users’ trust allowed the virus to spread rapidly.
Burrowing through the Internet
A worm, however, must use a different approach. A virus like Melissa, before it can propagate itself, requires a human to move it forward, said CERT’s Lindner. A worm “is actively seeking out more machines to infect, and each machine that it infects starts the same vicious cycle. No human has to get involved,” he explained.
Many computers on the Internet run several different programs, such as Web and telnet servers that listen to network traffic. When a program is processing network traffic, it is said to be providing a service. It can also be a doorway for worms.
A common way for a worm to use a service to infect a computer is through a buffer overflow. Code Red used a so-called buffer overflow exploit to attack computers running Microsoft’s IIS Web server.
When data is passed from the network to a particular service, such as the text of an e-mail to a mail server, the data is often held temporarily by the service in a memory space called a buffer.
If the size of the data being transferred to the buffer is larger than the space allocated to the buffer, the computer keeps writing the overflowing data into unexpected areas of memory. In certain circumstances, if this overflowing data represents valid computer instructions, the operating system will execute those instructions, infecting the computer.
Creating a buffer overflow exploit requires a detailed knowledge of how the target’s operating system and software work, as well as a familiarity with low-level programming. But once created, such an attack can be automated.
Buffer overflow exploits can be guarded against by having the service program check to see that the data will fit into the buffer before transferring it. Few compilers do this checking automatically, so the check must be coded manually. Often, at the time the program is created, the programmer is focusing on getting the software’s primary function (say, a Web or e-mail server) to work, rather than thinking about security. “The problem is, we have created a generation of computer programmers who don’t understand anything about protection,” said the University of New Haven’s Cohen.
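The missing check amounts to only a few lines. Python is memory-safe, so the sketch below can only simulate the situation (BUFFER_SIZE and the function name are illustrative assumptions), but it shows the guard a service must apply before copying:

```python
BUFFER_SIZE = 256  # fixed space the service allocated (illustrative)

def receive(data: bytes) -> bytearray:
    """Copy incoming network data into a fixed-size buffer, performing
    the bounds check that vulnerable services omitted."""
    if len(data) > BUFFER_SIZE:
        # Without this check, a C-style copy would run past the end of
        # the buffer, overwriting adjacent memory -- the buffer overflow.
        raise ValueError("input larger than buffer; rejecting it")
    buf = bytearray(BUFFER_SIZE)
    buf[:len(data)] = data
    return buf
```

Rejecting (or truncating) oversized input keeps attacker-supplied bytes out of the memory beyond the buffer, which is exactly where an exploit like Code Red's plants its instructions.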
CERT’s Lindner wants vendors to make higher quality software. As well as thinking about security early in a product’s development, “if the vendors spent more time testing their code, validating their code, before it went to market, then there wouldn’t be these vulnerabilities, and system administrators wouldn’t have to spend most of their time patching,” he said.
Another drawback, Cohen feels, is that software manufacturers allow time-to-market considerations to outweigh quality control. “Because they have so little liability, the only real test is: if it doesn’t crash so often that the users refuse it, you’re O.K.,” he said. Most commercial software requires users to accept a license that in other industries would be considered a joke. For Microsoft’s Office XP Resource Kit, for instance, the license reads that no warranty is given for “fitness for a particular purpose, of accuracy or completeness of responses, of results, of workmanlike effort, of lack of viruses, and of lack of negligence.”
That the creators of the information infrastructure have been able to use these licenses incenses Cohen, who believes “there’s an implied warranty of sale that cannot be waived. If people started suing these companies, they might find that they would win, regardless of what the license says.”
What to do?
Is using an alternative to proprietary software a way to minimize the risk of infection? Open Source software (OSS) prides itself on being immune to issues such as time-to-market because the software is developed by a collection of individual programmers, not a company. “Not only do [Open Source developers] produce more robust and higher quality software, they also do audits of the software and find [vulnerabilities] more quickly and fix them more quickly,” said Cohen. “The time to repair a vulnerability is typically 24-48 hours, and a notice is sent out immediately. There are many cases where commercial products don’t upgrade these things for many months.”
But Symantec’s Weafer calls this “a religious debate.” If the software is proprietary, less is known about its vulnerabilities, “and the script kiddies can’t exploit it. If you do discuss it, you come up with fixes faster. There’s no perfect answer and I think both viewpoints will be around for a long time.”
Lindner also doesn’t see OSS as a magic bullet. “Everything has vulnerabilities. The real question from the bad guy’s point of view is, ‘What is it worth to me? Where do I get my bigger bang for the buck when it comes to looking for exploits?’ I get it in the bigger numbers...the target is the thing with the bigger installed base, which just happens to be Microsoft.”
No defense is perfect, and since so many viruses appear all the time, how can users and system administrators manage their risk? The first thing to realize is that “new types of viruses are not appearing all the time; [once] we know how a given type of virus attacks computers, we can defend against it,” said virus commentator Rob Rosenberger, the editor of Vmyths.com, a Web site dedicated to debunking both virus hoaxes and antivirus vendor hyperbole.
But, without spending all day scouring the antivirus and hacker Web sites, is there any way for users and administrators to get a warning of a new type of malicious software? Fortunately, there is a pattern. While there is no guarantee that a new type of virus won’t appear that is both virulent and destructive, viruses and worms often show up first in proof-of-concept form—they spread and replicate using some new technique. Bearing either no payload or a harmless one, they rarely make the headlines, but will be noted in technology-focused news media. The trouble starts a few weeks or months later, when someone adds a destructive payload to the original virus.
As CERT’s Lindner explained: “Every vulnerability has a cycle that seems to have survived the test of time. It takes someone in the know to create the first exploit...eventually that exploit gets in the wild. And people refine it and make better tools. And eventually, it gets to the level where they’re published on the Internet and you have all these script kiddies, who have no idea of what they’re really doing—someone else has done the thinking for them, all they have to do is click.”
Sensible user policies also help, but they should be implemented in software as much as possible. Telling users never to open an attachment “is ridiculous,” said Cohen. Rather, he believes a better approach would be a filter in the mail server that blocks executable files and detects macro viruses.
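Such a server-side filter could start with something as simple as an extension screen. The blocked list below is an illustrative assumption, not a complete policy, and real macro-virus detection would require scanning the document contents, which this sketch omits:

```python
import os

# File types commonly abused by mass mailers (illustrative, not exhaustive).
BLOCKED_EXTENSIONS = {".exe", ".com", ".vbs", ".js", ".scr", ".bat", ".pif"}

def should_block(filename: str) -> bool:
    """Flag attachments with executable extensions before delivery."""
    _, ext = os.path.splitext(filename.lower())
    return ext in BLOCKED_EXTENSIONS
```

A screen like this would have stopped the Love Bug's .vbs attachment at the server; catching a macro virus riding inside an ordinary .doc file, by contrast, requires inspecting the document itself.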
In the long run, then, probably the best defense against many viruses is simply, as Cohen puts it, “some rational business decisions by people in authority in the organization, and the decision by [software] manufacturers to provide the means by which those policies can be reasonably enforced.”