Gary McGraw (http://www.rstcorp.com/~gem) is a research scientist at Reliable Software Technologies (RST). He holds a dual Ph.D. in Cognitive Science and Computer Science from Indiana University and a B.A. in Philosophy from the University of Virginia. Dr. McGraw is a noted speaker, consultant, and author on Java security. He recently completed a book, "Java Security: Hostile Applets, Holes, & Antidotes" (John Wiley and Sons, 1997), with Professor Ed Felten of Princeton University. Besides his books, Dr. McGraw's research in Cognitive Science and Software Engineering has resulted in over thirty-five technical publications. Dr. McGraw is principal investigator on grants from the National Science Foundation and the Defense Advanced Research Projects Agency (DARPA). Electronic mail: firstname.lastname@example.org
FM: Several United States government servers, notably the Justice Department, have been compromised over the past few months. What do you think might be the problem?
Gary McGraw: One problem is that the press usually blows these things all out of proportion. The CIA site hack, for example, only compromised a sacrificial lamb machine (albeit a very public one). The CIA PR folks run that site. PR folks are generally not too security savvy, even CIA PR folks. I'm not so sure about the DOJ hack. People infer, incorrectly, that because some Web facade was compromised that it's just as easy to hack the "real" machines. I doubt this is the case.
On the other side of the coin, whenever one of these prominent government sites gets hacked, it's major egg on the face of the agency or department involved. Departments that are worried about public perception should do all they can to protect their Web servers. So here's the rub: if they are doing all they can and they aren't any good at it, then what? Why not burn all the pages onto a CD-ROM that hangs off the server? That would at least make the content a bit less amenable to "change." Sites that are perpetually updating their pages might not like this suggestion.
Another problem with the hacked sites is that it is fairly easy for a hacker to add pages with dangerous content that might go on to attack people who surf over to the hacked sites. The hacker might use a hostile applet to implement a port scan (as demonstrated by a couple of English hackers last month), or perhaps they might implement one of the recent highly-publicized attacks against Internet Explorer. Or they might install a sniffer. The problem is, once a site is compromised, it can be used to attack other machines.
The good thing about the press coverage, regardless of its hyperbole level, is that it makes people think about security. People feel generally safe when they surf the Web - for no apparent reason. They think Web browsing is anonymous. (It isn't.) They think all Web sites are safe. (They aren't.) The Web, though it is safe for the most part, has dangerous areas on it, just like there are bad neighborhoods in the real world. I'm careful not to drive my Toyota minivan into bad neighborhoods in Washington, D.C. I'm also careful about pointing my browser (which has all sorts of fun things like Java built in) to certain iffy sites.
FM: A public domain program, Crackerjack, gave system administrators a way to test passwords. Unfortunately, one version of Crackerjack also supplied a list of accounts and passwords to unauthorized individuals. Is this the basic risk in using a public domain application?
Gary McGraw: I hadn't heard about this one. Sounds like either somebody misused the tool and appended sensitive data, or somebody re-wrote the tool as a Trojan Horse that exported password information off site. The moral of the story is, be careful what you download. Try to find utilities that have been MD5'ed or otherwise signed (thumbprinted) by somebody you trust. Be aware that Trojan Horses are a common hacking tool.
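McGraw's advice about checking an MD5 thumbprint before trusting a download can be sketched in a few lines. This is a minimal illustration, not a complete verification tool; the file path and published digest in the usage are hypothetical.

```python
import hashlib

def md5_digest(path):
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, published_digest):
    """Compare a downloaded file's digest against the value published
    by somebody you trust; a mismatch means the file was altered."""
    return md5_digest(path) == published_digest.lower()
```

The important part is where the published digest comes from: it has to arrive over a channel you trust more than the download itself, or it proves nothing.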
My point of view on security tools like Satan, Crack, and ISS is that they will end up improving computer security in the long run. I think utilities that aid in improving site security are good. I believe in co-evolution. More power to the Dan Farmers of the world! The real problem in computer security is not that easy-to-use hacking tools are readily available, it's that the "penetrate-and-patch" mentality once again prevails. The same old bugs keep showing up over and over. Software gets released on a ridiculously short production schedule, gets installed out in the world, and then ends up being debugged by users. Users and hackers find holes, vendors patch holes, sometimes users install patches. If we really want to solve many of today's most prevalent security problems on the Net and Web, we need to focus on stomping out these bugs during development. One of RST's DARPA-sponsored research projects is looking into this problem.
FM: Some have called intelligent agents just new forms of viruses. What do you think?
Gary McGraw: Let's assume an intelligent agent is really a process that runs remotely on someone else's machine. Sounds a lot like executable content to me. Just like other problems in computer security, the main idea is to assess your risk and then manage it appropriately. Agents are likely to do useful things in the world. That's a good thing. But with agent use (and hosting) comes some risk. Is it worth it to allow an agent to run on your machine? Will agents turn out to be useful? All this cool new functionality, like Java, has nice benefits. I like the idea of interacting with a program off the Web. I like the idea of agents at my beck and call out there looking for stuff I need. The catch is that there are piles of new risks in our highly-interconnected network. Agents add new risks of their own. If I give an agent my credit card number and some limited buying power and send it off to get me movie tickets, how can I be sure that it won't be mugged in a host somewhere out there and left bruised and bleeding in somebody's RAM after giving away its secret information? That's a risk.
The key to all this is to weigh the benefits against the risks - in other words, the key is to manage the risks. Driving a car is risky, but it is a risk I am willing to take. Using the Internet is risky, but it is also a risk I am willing to take.
Another point is that viruses propagate and spread by infection. (They may even change themselves along the way.) Agents shouldn't need to behave like that. Agents that replicate like viruses and "flood" the network would be a pretty dumb implementation.
FM: In October, 1996, a spam was sent from an America Online account, advertising the availability of child pornography videos and other materials. The person's account was compromised, allowing someone to forge this posting - in this case, a massive spam - causing a great deal of trouble. It seems to be getting easier to send mail from someone else's account. Or is it?
Gary McGraw: Let's digress a bit and talk about forging mail, which is clearly related (and, I should add, much easier than stealing somebody's account to spam from).
Forging mail has always been rather easy. One of the Internet "rites of passage" is to telnet to port 25 and send some fake mail to a friend. This game is very well known. The scheme is easily debunked, though. Unless the forger does something more than what I just described, the sendmail daemon marks the "forged" mail (in its header) with the IP number of the machine that connected to port 25. This makes it very easy to discern which machine actually sent the mail in most cases. Of course, on big university and corporate machines with hundreds of users, tracking down the actual person who originated the telnet connection to port 25 may not be completely trivial. Dynamic IP addresses (often used by independent Internet service providers) also make it hard to find out just who forged the mail.
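The port-25 game McGraw describes is just a short sequence of SMTP commands typed by hand. As a sketch (all addresses and host names here are hypothetical), this builds the command sequence a forger would issue, without actually connecting to anything:

```python
def forged_smtp_session(fake_sender, recipient, body):
    """Build the SMTP command sequence a port-25 forger types by hand.
    The MAIL FROM address is whatever the forger claims -- sendmail does
    not verify it, though it does record the connecting machine's IP in
    a Received: header, which is how such forgeries get debunked."""
    return [
        "HELO forger.example.org",       # hypothetical connecting host
        "MAIL FROM:<%s>" % fake_sender,  # the forged identity
        "RCPT TO:<%s>" % recipient,
        "DATA",
        body,
        ".",                             # a lone dot ends the message
        "QUIT",
    ]
```

Nothing in this exchange authenticates the MAIL FROM line, which is the whole point: the protocol takes the sender's word for it.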
It is easy to detect that mail has been forged by looking carefully at the header and noticing that the machine that the mail says it is from in the "From" line differs from the machine cited in the "Received:" line. Most users (and mail readers like pine) look only at the "From" line, but systems people know to look at both. Here is an example of a piece of mail that I forged from my home machine (tigger.mediasoft.net) to my work account. Note how the "From" and "Received:" lines differ.
From email@example.com Wed Jul 24 19:33:56 1996
Received: from tigger.mediasoft.net by rstcorp.com (4.1/SMI-4.1)
id AA21199; Wed, 24 Jul 96 19:33:54 EDT
Received: from rstcorp.com (firstname.lastname@example.org[220.127.116.11])
by tigger.mediasoft.net (8.6.12/8.6.9) with SMTP id SAA00966
for email@example.com; Wed, 24 Jul 1996 18:30:31 -0400
Date: Wed, 24 Jul 1996 18:30:31 -0400
This is forged mail.
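The From-versus-Received comparison McGraw describes can be mechanized. A rough sketch follows; the parsing is deliberately simplistic (real headers can relay through many hops) and would need hardening for production use:

```python
import re

def forgery_suspect(header_lines):
    """Flag mail whose claimed sending host (envelope 'From ' line)
    differs from the host recorded in the topmost 'Received: from'
    line -- the check systems people perform by eye."""
    claimed = received = None
    for line in header_lines:
        if line.startswith("From ") and claimed is None:
            # envelope line looks like: "From user@host date..."
            m = re.search(r"@([\w.-]+)", line)
            claimed = m.group(1) if m else None
        m = re.match(r"Received: from ([\w.-]+)", line)
        if m and received is None:
            received = m.group(1)
    return claimed is not None and received is not None and claimed != received
```

Run against the forged header above, the claimed host and the relay in the first "Received:" line disagree, so the message is flagged.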
The AOL attack you cited did not involve "forged" mail. What apparently happened is that somebody's account was hacked and actual mail was sent from their account. It is much harder to do that. But I think the real question is: how easy is it to do dangerous fake mail attacks? Java applets provide an interesting new twist on the standard approach to mail forging.
Because applets load across the network and run on a Web surfer's machine, a mail-forging applet can cause the standard sendmail daemon that monitors port 25 to report that mail is coming from the Web surfer's machine (and not the machine where the applet originated). This fact can be leveraged to "doubly forge" mail in the following way: Imagine that someone hits a Web page and an applet downloads over the Net and begins running on the client machine. By using the unsuspecting victim's machine to forge mail from the victim (that is, to forge mail that is apparently both from the victim's machine and from the victim's account), the "doubly forged" mail will appear not to have been forged at all! With many standard configurations of sendmail, this forging attack is possible.
I think it is clear that we need a way to ensure that mail can't be forged. Authentication systems using encryption will probably be part of the solution. It is technically possible to "digitally sign" e-mail so that the receiver can tell for certain who the author is. Some people already sign all of their messages. This will eventually lead to separate streams of mail, "trusted mail" that can be authenticated and "untrusted mail" that may or may not be telling the truth about who sent it. If mail signing becomes the norm, it will be much easier to stop mail abuse through forgery.
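The tamper-evidence idea behind signed mail can be illustrated with a keyed digest. The real systems McGraw alludes to (PGP, for instance) use public-key signatures so that no shared secret is needed; this shared-secret HMAC sketch just shows how a receiver can detect a forged or altered message:

```python
import hashlib
import hmac

def sign_message(secret, message):
    """Attach a keyed digest to a message. (A stand-in for real mail
    signing schemes, which use public-key cryptography instead of a
    shared secret.)"""
    tag = hmac.new(secret, message.encode(), hashlib.sha256).hexdigest()
    return message, tag

def verify_message(secret, message, tag):
    """A receiver holding the secret can tell whether the message was
    written by another holder of the secret and arrived unaltered."""
    expected = hmac.new(secret, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Any change to the message body, or any attempt to sign without the key, makes verification fail; that is the split between "trusted mail" and "untrusted mail" in miniature.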
FM: Do you think that the mathematics behind most cryptographic schemes is fundamentally simple? Or not?
Gary McGraw: No. The math is complicated, but it seems to work well. The main problem with many of the best encryption schemes today is key management. You can have the most mathematically-sound encryption method possible, but if somebody leaves their private key lying around, all bets are off.
FM: Should I be worried about using a credit card to buy books via the Internet? Or to subscribe to this journal?
Gary McGraw: It depends on whether you are using a secure connection or not. The MIT Press Web site uses the SSL capability built into today's best browsers, so I regularly order books from them over the Web. I would never send my credit card number across the Net unencrypted though. It's too easy to write a little packet sniffer that looks for credit-card-like numbers. SSL encryption is fairly solid (timing hacks aside), but European users of SSL-enabled browsers should be aware that their encryption schemes have been dumbed down to meet U.S. export restrictions.
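A "little packet sniffer that looks for credit-card-like numbers" needs nothing more than a pattern match plus the Luhn checksum that real card numbers satisfy. A sketch, scanning a plaintext string standing in for an unencrypted packet payload:

```python
import re

def luhn_valid(digits):
    """The Luhn checksum that real card numbers satisfy; checking it
    lets a scanner discard most random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Scan plaintext for 16-digit runs (spaces or dashes allowed
    between groups) that pass the Luhn check."""
    hits = []
    for m in re.finditer(r"\b(?:\d[ -]?){15}\d\b", text):
        digits = re.sub(r"\D", "", m.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

The same few lines serve defenders, too: scanning your own outbound traffic or databases for card-like numbers is a cheap way to find data that should have been encrypted.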
SSL only ensures that the traffic between your browser and the site you are connecting to is encrypted. The merchant at the other end will decrypt the information for use in purchasing. Hopefully they are trustworthy! But that problem aside, vendors often store sensitive information in insecure databases, in plaintext. These databases may be accessible via the Web. It doesn't matter how the number gets there; it isn't safe once it's there. Secure payment protocols like the Cybercash model avoid this problem of sending around your credit card number. SET standardizes these sorts of secure payment protocols.
Currently, Internet traffic is made up of packets of information (IP) with unencrypted data portions. A reasonable analogy likens this sort of Internet traffic to postcards. IP packets include a sending address (which is spoofable, but that's another story), a target address, and some data. In the near future, IP will probably use some sort of encryption for authentication and coded messaging. That will make sending around the postcards a little less worrisome. To push the analogy a bit, it will be more like sending around messages in envelopes with seals that can only be opened by the intended recipient.
FM: In your new book "Java Security" (Wiley, 1997), you bemoan the lack of logs in Java. Without logs, it's difficult to prove anything. Are logs just part of the answer?
Gary McGraw: Yes, but an important part. Logs are a universal capability that computer security experts rely on, no matter what platform is involved or what program is being run. Often the only way to reconstruct an intrusion is by carefully and painstakingly reading log files. This provides three benefits: 1) it allows the victim to figure out what damage was done, 2) it provides clues about how to thwart similar future attacks, and 3) it provides evidence for possible legal proceedings against the intruder.
Java has no logging capability at the current time. It is not possible to figure out either which applets were loaded and run or what those applets might have done. Things that should obviously be logged include, at the very least, file system and network access. It would also be good to capture applet byte code for analysis in case an applet ends up doing something hostile. It is often easier to recover from an intrusion if you know what caused it and what happened during the event.
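As a sketch of what the missing capability might look like, here is a hypothetical append-only audit log recording file and network accesses per applet. The API and field names are invented for illustration; they are not part of Java or any browser:

```python
import time

class AppletAuditLog:
    """Hypothetical sketch of the logging McGraw calls for: record
    which applet touched which file or network resource, append-only,
    so an intrusion can be reconstructed after the fact."""

    def __init__(self):
        self.entries = []

    def record(self, applet, kind, target):
        """Append one event: applet origin, access kind, and target."""
        self.entries.append({
            "time": time.time(),
            "applet": applet,   # e.g. the applet's origin URL
            "kind": kind,       # "file" or "net"
            "target": target,   # path or host:port touched
        })

    def by_applet(self, applet):
        """All recorded events for one applet, for post-mortem reading."""
        return [e for e in self.entries if e["applet"] == applet]
```

The crucial property, as the interview notes later, is that applets themselves must not be able to edit the log, or they could cover their tracks.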
In the book, we show how an applet can delay its processing until a later time of its choosing. Given that applets can do this, logging becomes even more important. An especially crafty hostile applet can wait until some other Web site becomes the main suspect before doing its dirty work. It won't surprise me if the most malicious applets in the future turn out to be the craftiest.
One of the lessons emphasized in the book "Takedown" (by Tsutomu Shimomura, with John Markoff) is that without a log file it is impossible to prosecute computer criminals. This means that if your site were hit by an attack applet today that erased or stole critical information - and you even knew who perpetrated the attack - you couldn't do anything about it. Applet logging is an essential security feature that should be made available immediately. Care must be taken that applets can't edit these log files, otherwise they could cover their tracks.
FM: Most Web browsers keep all sorts of information in their caches. What kinds of risks do these caches pose?
Gary McGraw: Here's an example. One of the Java security holes we talk about in Chapter 3, "Serious Holes in the Security Model", is called the "Slash and Burn" attack. In March 1996, David Hopwood at Oxford University found a flaw that allowed an attacker to trick Java into treating the attacker's applet as trusted (and that made use of the cache in the process). This flaw allowed full system penetration. It affected Netscape Navigator 2.01 and was fixed in Netscape Navigator 2.02. This was back in the days before Microsoft Internet Explorer supported Java.
For this attack, if a bad guy wants to pass off a piece of code as trusted, two steps must be carried out: 1) getting the malicious code onto the victim's disk, and then 2) tricking the victim's browser into loading it.
Perhaps the most effective way to inject code is to take advantage of the browser's cache. Most Web browsers keep on-disk copies of files that they have recently accessed. This allows repeated accesses to the same web documents to be satisfied more quickly. Unfortunately, it also gives a malicious applet a way to get a file onto the victim's machine. The applet could load the file across the Net, pretending that it was an image or a sound-file. Once this was done, the file would be on the victim's disk in the cache. If the applet knew how the browser organized its cache, it would know where on the victim's disk the file resided.
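The injection step depends on the cache layout being predictable. As an illustration (this particular layout is invented, not Netscape's actual scheme), a browser that derives cache file names from a hash of the URL is exactly as predictable to an attacker who knows the scheme as it is to the browser itself:

```python
import hashlib
import os

def predicted_cache_path(cache_dir, url):
    """Hypothetical cache layout: name each cached file by a hash of
    its URL, bucketed into subdirectories by the hash prefix. Anyone
    who knows the scheme can compute where a fetched 'image' landed
    on disk -- which is all the Slash and Burn attacker needed."""
    name = hashlib.md5(url.encode()).hexdigest()
    return os.path.join(cache_dir, name[:2], name)
```

A layout that instead uses a per-user random component unknown to applets would deny the attacker this knowledge, which is one reason later browsers randomized their caches.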
Once the file is on the victim's disk, the second part of the attack involves tricking the victim's browser into loading the file. This is supposed to be impossible in Java. Since the browser only looks up class names relative to the user's current directory, the attacker would have to place a file into the victim's directories, something that should not be possible. But a bug in the implementation allowed exactly that.
One reason that this attack was possible (besides the more obvious bug that allowed the loading, which we focus on in the book) is that code loaded from the cache was partially trusted by Java. That's because it came off the local disk. The rules that allowed code from the cache to be trusted were changed without much public fanfare during the move to 2.02. Classes from the local disk are no longer trusted unless they are in the CLASSPATH.
So the "Slash and Burn" attack in its original guise was very powerful because code off the disk was granted too much power. We talk about the different levels of trust afforded to Java class files in Chapter 2, "The Java Security Model." Just recently, another Java attack involving the cache was announced. Major Malfunction and Ben Laurie, two English hackers, wrote an attack that works only against the Microsoft Internet Explorer. They claim "this loophole allows an attacker to connect to any TCP/IP port on the client's machine." That's a bit of an overstatement, but interesting information about listening ports can be gathered (for possible later use). This may leave a firewalled host more susceptible to standard TCP/IP based attacks. That's bad.
The Java Security Manager usually disallows port scanning behavior. But the crackers use the well-known trick of sticking some Java code in the browser's cache and later executing it through a file: URL (using frames). Since Microsoft's cache layout is transparent, this attack works. The attackers cheat a bit for demonstration purposes by having the patsy clear their cache to begin with, but even without this exercise, guessing the cache location (one of four possibilities) would not be all that much of a challenge.
The applet the crackers stuff in your cache is a port scanner. In this case, the port scanning attack works because an applet is allowed to open a socket connection back to where it came from. Guess where it came from? The client machine. So a port scan is carried out by their cache-bomb applet. Port scanning is very bad since a cracker might be able to discover things like weak sendmails listening on port 25.
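The probe such a cache-bomb applet performs is an ordinary TCP connect() scan: try each port, and a completed connection means something is listening. A sketch in Python rather than Java (run it only against machines you own):

```python
import socket

def scan_ports(host, ports, timeout=0.2):
    """Attempt a TCP connect() to each port on host; a completed
    connection means a service is listening there. This is the same
    probe the hostile applet performs from inside the firewall."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            open_ports.append(port)
        except OSError:
            pass  # refused or timed out: nothing listening (or filtered)
        finally:
            s.close()
    return open_ports
```

The significance in the applet case is the vantage point: run from the victim's machine, the scan sees ports that a firewall hides from the outside world.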
All these direct attacks aside, caches provide lots of information about where you have been surfing. If you're concerned about your Web privacy, you want to make sure your cache contents are not read by someone else.
If you are a Netscape user and you wonder what might be sitting in your cache, you can type "about:cache" on the Location line and find out. Caching is a good idea, because it saves time when you're loading frequently accessed pages. (As an example, many Web sites re-use key images throughout the site. If you cache the image once, there is no need to fetch it again and again.) But caching also provides an information source about where you have been surfing and what you might have looked at along the way.
FM: What sort of limitations do you see with packet filters?
Gary McGraw: Firewalls use filtering to disallow certain IP packets (our famous postcards) from entering a protected site. The idea is that you might allow only limited services, say telnet and http, while disallowing everything else. In general, the major limitation I see with packet filtering is that it is very easy to mis-configure the "rules" that a filter applies to traffic. If filters are incorrectly configured, then users may not be getting the level of protection that they think they're getting.

Packet filters generally make up a firewall for a site. It is important not to consider a firewall a complete security solution. Firewalls are a useful tool, but it is important to have other security mechanisms in place as well. An explicitly-stated security policy is a good place to start. There's a joke in the security community that refers to sites that rely solely on their firewall for protection (and ignore Intranet security entirely) as "crunchy on the outside and chewy in the middle."

There are a couple of other limitations worth mentioning. Packet filters don't do much to stop data-driven attacks. For example, a valid HTTP request may get through the filter and go on to do something malicious. Web sites generally don't want to stop Web hits, so they let most requests through. Another problem is that packet filtering is more of a reactive solution than a proactive solution (in its most common usage). You can block traffic from a known bad guy, but only after you know who the bad guy is. Of course, filters that block incoming traffic from sites deemed unacceptable are susceptible to IP spoofing.
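A toy model of first-match packet filtering makes the mis-configuration point concrete. The rule format here is invented for illustration and far simpler than any real filter's:

```python
def filter_packet(rules, packet):
    """First matching rule wins; if nothing matches, default deny.
    A rule is (action, dst_port), where dst_port None means 'any
    port' -- a deliberately tiny model of the filters discussed."""
    for action, dst_port in rules:
        if dst_port is None or dst_port == packet["dst_port"]:
            return action
    return "deny"

# McGraw's example policy: allow only telnet (23) and http (80).
rules = [("allow", 23), ("allow", 80), ("deny", None)]
```

Even in this tiny model, ordering matters: put a catch-all allow rule first and every later deny becomes dead code, which is exactly the kind of silent mis-configuration McGraw warns about.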
Firewalls are good, but they are not a "silver bullet" for security.
FM: Bruce Schneier wrote in the second edition of his book "Applied Cryptography" and I quote:
"The moral is that you can never really trust a piece of software if you cannot trust the hardware it is running on. For most people, this kind of paranoia is unwarranted. For some, it is very real." Is this paranoia real or imaginary?
Gary McGraw: It is real. On machines with multiple users, or machines on which untrusted external code is running, it is important to protect users from one another (and processes from one another). This is only truly possible with special hardware.
But as Schneier points out (in his outstanding tome), most people don't need to be quite so paranoid. Governments with state secrets and Defense Departments worry about these things, but the usual Web surfer probably doesn't have to...yet. This may change as electronic commerce emerges. Once you involve money, and you do things like store money on personal machines, the whole game will change. Recall the Chaos Computer Club's famed ActiveX attack on the German version of Intuit's Quicken. They set things up so an unsuspecting Web (and Quicken) user would transfer money to the CCC account. I predict that's a taste of things to come.
The reason Internet and Web security is becoming a hot topic is that more people than ever before are using the Net and the Web. Even my 89-year-old grandmother just got WebTV so she could be part of the McGraw family e-mail fest! Plus, businesses are falling all over themselves to get on the Net.
As businesses go online and Web commerce evolves, security will become even more critical than it is today. In the Information Age, Information is the currency. Unscrupulous people have always wanted to steal things. The new thing to steal is electronic blips.
The other thing about the Web is how rapidly it is changing. The Web only started in 1993! Here we are four years later with Java, ActiveX, Plugins, and now server-push executables. These systems are new, and still have some kinks to be worked out. Plus they all interact in often unpredictable ways. Some of us are working as hard as we can to assess these systems and inform people about their risks in an objective way. That's why Ed Felten and I wrote the Java security book, and that's why we like to do interviews like these.
I would like to close with a little poem penned (or rather keyed in) for us by Peter Neumann. I think it provides a great summary of my point of view about Java.
Copyright © 1997, First Monday
FM Interviews: Gary McGraw
First Monday, volume 2, number 4 (April 1997).