Tangled in the Threads
Jon Udell, December 24, 2001
Dartmouth's Security Think Tank
A Visit to the Institute for Security Technology Studies
A few months ago I learned that my home state of New Hampshire boasts a national center for cyber-terrorism research: the Institute for Security Technology Studies at Dartmouth College. Last week I visited the ISTS to learn about its mission and research agenda. Founded two years ago, and primarily supported by the National Institute of Justice, the ISTS is chartered to:
Understand risks to information infrastructure
Assess law enforcement needs
Create better tools and training for intrusion detection and response
Develop technology that addresses mid- to long-term threats
Collaborate with counter-terrorism and cyber-security researchers nationwide
Provide public education on infrastructure risk and protection
In the eyes of the general public, the urgency of the ISTS' mission was thrown into sharp relief on September 11. Soon after, on September 22, the institute published Cyber Attacks During the War on Terrorism: A Predictive Analysis. In that report, Michael Vatis -- director of the ISTS, and previously founder and director of the National Infrastructure Protection Center -- noted that there is a strong correlation between political conflict and cyber attacks. When a U.S. surveillance plane collided with a Chinese fighter jet in April 2001, for example, "conflict between the two major powers was accompanied by an online campaign of mutual cyber attacks and website defacements, with both sides receiving significant support from hackers around the globe." Four days later, in testimony to the House Committee on Government Reform, Vatis warned that "information systems associated with critical infrastructures (such as banking and financial institutions, voice communications systems, electrical power supplies, water resources, and oil and gas delivery systems) must be considered a likely target for terrorists, nation-states, and anti-U.S. hackers," and called for a "Manhattan project for counterterrorism technology."
The ISTS researchers I met with are painfully aware that, while 9/11 may have temporarily heightened both the risk and the awareness of risk, there's no way to maintain a state of high alert indefinitely. Garry Davis, manager of the Cybersecurity Research Group (CRG) at the ISTS, acknowledges the dilemma. "We know the risk to infrastructure is high," he says. But in trying to focus attention on the problem, there is a real danger of becoming the boy who cried wolf. Buffer overflows, e-mail worms, Microsoft Windows vulnerabilities, denial-of-service attacks, and CGI exploits are everyday occurrences tracked by, among other sources, Security in the News, a daily summary sponsored by the Investigative Research for Infrastructure Assurance group (IRIA) of the ISTS. "We know a lot about individual exploits and incidents," says Eric Goetz, a research associate who edits the news service, "but it's the big picture that's the problem."
What, for example, is the full extent of vulnerability to distributed denial-of-service attacks? At its peak, according to Garry Davis, Code Red II's intense scanning caused sporadic slowdowns and outages. (My own DSL connection was affected, and I had to reboot my Cisco 675 repeatedly until I upgraded the router.) So far these outbreaks have occurred serially, but what would happen if a number of distinct DDoS attacks were launched in coordination, all at once? To answer such questions, ISTS researchers are using software to model and simulate large-scale networks. David Nicol, director of the CRG, is one of the principal architects of the Scalable Simulation Framework (SSF), an open standard for discrete-event simulation of large-scale communication networks. There are various implementations of SSF. Dartmouth's implementation, DaSSF, is open-source software, written in C++, with support for shared-memory or distributed-memory configurations. In a 1999 paper the architects of SSF wrote that "we now stand on the threshold of being able to model the Internet at realistic scales and levels of detail." Such a capability is sorely needed because, as the ISTS researchers I met with admit, we simply don't know what the effects of a large-scale coordinated attack might be. The commercial market, which has begun to deliver effective tactical solutions (improved firewalls, NATs, anti-virus software, etc.), is not motivated to tackle this kind of long-term strategic issue. It's hard enough, notes Garry Davis, to extract from vendors the configuration information and protocol details required to do effective simulations. So I'm happy to report that tax dollars are supporting research and development that the market alone would not.
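To make the discrete-event idea concrete, here's a toy simulator, sketched in Python rather than in DaSSF's C++. The topology, rates, and attack model are my own illustrative inventions, not anything DaSSF or the ISTS has published; the point is only the shape of the technique -- a clock, a priority queue of timestamped events, and handlers that schedule further events.

    import heapq
    import random

    class Simulator:
        def __init__(self):
            self.clock = 0.0
            self.events = []   # min-heap of (time, seq, action)
            self.seq = 0       # tie-breaker so actions are never compared

        def schedule(self, delay, action):
            self.seq += 1
            heapq.heappush(self.events, (self.clock + delay, self.seq, action))

        def run(self, until):
            while self.events and self.events[0][0] <= until:
                self.clock, _, action = heapq.heappop(self.events)
                action()

    class Link:
        """Counts offered packets against a nominal capacity."""
        def __init__(self, capacity_pps):
            self.capacity = capacity_pps
            self.arrivals = 0

        def send(self):
            self.arrivals += 1

    def flood(sim, link, rate_pps, zombies):
        # each zombie fires packets at exponentially distributed intervals
        def fire():
            link.send()
            sim.schedule(random.expovariate(rate_pps), fire)
        for _ in range(zombies):
            sim.schedule(random.expovariate(rate_pps), fire)

    sim = Simulator()
    backbone = Link(capacity_pps=10000)
    flood(sim, backbone, rate_pps=25, zombies=800)  # coordinated DDoS sources
    sim.run(until=5.0)
    print("offered ~%.0f pps vs capacity %d pps"
          % (backbone.arrivals / 5.0, backbone.capacity))

A real framework like SSF supplies what this sketch omits: realistic protocol models, topologies of Internet scale, and parallel execution on shared- or distributed-memory machines.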
Tools of the trade
Long-term research notwithstanding, a number of ISTS projects do address more immediate tactical needs. One researcher, Bill Stearns, is working to reduce the administrative complexity of running a honeynet -- that is, a network that is designed to be compromised, and to capture the methods and tools used by attackers. As distinct from a honeypot, which is a single machine assigned to this role, a honeynet looks like (indeed, is) a complete production network. As such, the honeynet can capture a wide range of malicious behavior. Deploying and effectively managing a honeynet, however, is a major challenge. The Honeynet Project, run by a non-profit group that has pioneered the use of honeynets for security research, has identified virtual honeynets -- easier to deploy, more efficient at data collection -- as a key next-generation technology. For Bill Stearns at ISTS, the virtualization tool of choice is User-Mode Linux. A UML-based honeynet contained within a single box could become an easily distributable plug-and-play solution. And, Stearns notes, the architecture of UML -- which represents complete guest filesystems as single (if large) files on the host -- greatly simplifies the collection and transmission of forensic evidence.
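To see why that single-file representation matters, consider the host-side evidence-collection step. Here's a minimal sketch -- the paths are hypothetical, and SHA-1 is my arbitrary choice of fingerprint -- that copies a guest's filesystem image and verifies the copy's integrity, the kind of chain-of-custody check forensic work demands.

    import hashlib
    import shutil

    GUEST_IMAGE = "/honeynet/guests/victim1/root_fs"  # hypothetical path
    EVIDENCE_COPY = "/evidence/victim1-root_fs.img"   # hypothetical path

    def fingerprint(path, chunk=1 << 20):
        """Hash a large file incrementally, never holding it all in memory."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk)
                if not block:
                    break
                h.update(block)
        return h.hexdigest()

    original = fingerprint(GUEST_IMAGE)
    shutil.copyfile(GUEST_IMAGE, EVIDENCE_COPY)
    copy = fingerprint(EVIDENCE_COPY)
    assert original == copy, "copy does not match original image"
    print("evidence copy verified, sha1 =", copy)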
Analysis of that evidence, however it's collected, is a daunting task. It takes extraordinary knowledge and skill even to approach the problem competently. Success requires, in addition, almost superhuman effort. The Honeynet Project researchers use an 80:1 metric to quantify the problem -- that is, for every half hour of attacker activity, 40 hours of analysis are needed to unravel it. "We're trying to drive that ratio down to 1:1," says ISTS researcher Brett Tofel. As a former system administrator, Tofel well understands the iterative process: query the logs, generate a hypothesis, use the hypothesis to formulate a new query, and repeat. He's working to develop software that will both accelerate that generate-and-test cycle and, equally crucial, make the procedure more broadly available to less expert law-enforcement personnel.
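Here's a minimal sketch of that generate-and-test cycle over a plain text log. The log format, field layout, and file name are hypothetical stand-ins for real IDS or syslog records, and the two "rounds" below compress what is, in practice, a much longer iteration.

    from collections import Counter

    def load_events(path):
        # assume whitespace-separated fields: timestamp src_ip dst_port action
        with open(path) as f:
            return [line.split() for line in f if line.strip()]

    def query(events, **criteria):
        """Filter events on exact field matches, addressed by field name."""
        idx = {"time": 0, "src": 1, "port": 2, "action": 3}
        return [e for e in events
                if all(e[idx[k]] == v for k, v in criteria.items())]

    events = load_events("honeynet.log")  # hypothetical log file

    # Round 1: which source triggered the most denied connections?
    denied = query(events, action="DENY")
    top_src, count = Counter(e[1] for e in denied).most_common(1)[0]

    # Round 2, new hypothesis: what else has that source been probing?
    probes = query(events, src=top_src)
    print(top_src, "hit ports:", sorted({e[2] for e in probes}))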
I wondered, as an aside, how ISTS researchers regard the expertise of law enforcement in matters of cyber security. In general, they admit, it's a game of catch-up, although forensic tools -- notably EnCase -- have recently helped even the odds. I was encouraged, though, by the ISTS assessment of security expertise in the elite ranks of law enforcement. "Part of our mission was to find the experts," says researcher Paul Gagnon, "and we found out there are a lot more of them than you might think."
A honeynet is, of course, a pure research tool. It does not, itself, protect anything. There's also a pressing need to improve the intrusion detection systems that guard live networks. To that end, ISTS researchers are exploring pattern analysis techniques that can make IDS technology more flexible. High-fidelity logging, notes chief research engineer Trey Gannon, is an expensive proposition. It would be useful to have an early warning system that could see trouble in the making (e.g., a slow scan) and automatically jack up logging to levels not normally feasible.
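Here's a sketch of that early-warning idea: watch each source for the signature of a slow scan -- many distinct ports touched over a long window -- and escalate logging when one is suspected. The thresholds and the two-level logging switch are my own illustrative assumptions, not a description of the ISTS design.

    import time
    from collections import defaultdict

    WINDOW = 3600        # look back one hour
    PORT_THRESHOLD = 20  # this many distinct ports from one source = suspicious

    touches = defaultdict(list)  # src_ip -> [(timestamp, port), ...]
    log_level = "summary"

    def observe(src_ip, port, now=None):
        """Call once per connection attempt seen by the sensor."""
        global log_level
        now = time.time() if now is None else now
        touches[src_ip].append((now, port))
        # forget anything older than the window
        touches[src_ip] = [(t, p) for t, p in touches[src_ip]
                           if now - t <= WINDOW]
        distinct = {p for _, p in touches[src_ip]}
        if len(distinct) >= PORT_THRESHOLD and log_level != "full":
            log_level = "full"  # jack up logging before the real attack lands
            print("slow scan suspected from %s: %d ports in %ds; "
                  "full logging enabled" % (src_ip, len(distinct), WINDOW))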
Identity and trust
Issues of identity and trust lie at the heart of cybersecurity. As readers of this column know, I'm a longtime proponent of digital IDs. I've often alluded to the asymmetry that exists when -- as part of the SSL protocol -- we authenticate servers to clients, while failing to authenticate clients to servers. But do we even really know who the servers are? Unfortunately we don't, says Sean Smith, a Dartmouth computer scientist affiliated with ISTS. Inspired by the web spoofing research by Princeton's Ed Felten, Smith's team set out to show that an SSL connection to a server can be spoofed. Their demonstration, while not completely seamless, is clearly good enough to fool a great many casual users.
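Closing the client-side half of that asymmetry is something TLS has, in principle, always supported: a server can demand a certificate from the client. Here's a sketch using Python's modern ssl module (tooling that postdates this column), with assumed certificate filenames; real deployments founder not on the protocol but on issuing and managing all those client certificates.

    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("server-cert.pem", "server-key.pem")  # assumed files
    context.load_verify_locations("trusted-client-ca.pem")        # assumed file
    context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a cert

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with context.wrap_socket(server, server_side=True) as tls_server:
            conn, addr = tls_server.accept()  # handshake verifies client cert
            print("authenticated client:", conn.getpeercert()["subject"])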
For starters, says ISTS researcher Ed Feustel, we've got to improve the ways in which browsers interact with users who are making trust judgements. A team of ISTS programmers is exploring how to do that, using Mozilla as a testbed. More generally, Feustel and Smith are interested in how to empower software, as well as people, to make PKI-based trust judgements. They're also interested in applications of secure coprocessors. Before joining ISTS, Smith worked at IBM, where he helped create and validate the IBM 4758, a tamper-proof general-purpose computer (on a PCI card) with hardware support for cryptographic operations. As Smith and Feustel point out, spoofing isn't the only problem we face. Connecting securely to a legitimate host does no good if the host has been subverted. So they're exploring ways to involve trusted co-servers in secure transactions. Secure coprocessors, using certificates that encode rich sets of attributes, could assure that all parties to a secure transaction comply with security policy.
Winning the arms race
It's heartening to see that the government's anti-cyber-terrorism agenda is being advanced at centers such as ISTS. At the end of the day, of course, the responsibility falls on all of us cyber-citizens to implement best practices: up-to-date systems and applications, strong passwords, anti-virus software, intrusion detection and firewalling. A lot of progress has been made in these areas, but the truth is that it's still way too hard for average users and system administrators to fully and competently protect themselves. And the cost of failure is as yet unknown. The Honeynet Project measures the "life expectancy" of a newly connected system exposed directly to the Net -- that is, its mean time until hacked -- in hours, not days. Vandalism is the most visible, but in many ways the least serious, outcome. More ominous is recruitment of such a system as a covert spy, or as a zombie waiting for the order to strike at other systems.
The market can and should do much more to help us. Default installations of all operating systems should be paranoid and minimal, not trusting and complete. Intrusion detection should be built in, not bolted on. For broadband users, device-level firewalling should be standard rather than optional. The reality, in most cases, is that any security measure that isn't automatic will simply be forgone. So let's send a clear message to the market: basic security must be automatic, a no-brainer, foolproof and unavoidable.
Long-term, it's an ongoing arms race. The market, on its own, has no stomach for such competition. Dedicated volunteers like those at the Honeynet Project have taken up the cause, but with limited resources. We expect government to protect critical infrastructure, and it has promised to do so. I'm glad to see that, on the issue of cyber-terrorism, the government has started to put some of our money where its mouth is.
Jon Udell (http://udell.roninhouse.com/) was BYTE Magazine's executive editor for new media, the architect of the original www.byte.com, and author of BYTE's Web Project column. He is the author of Practical Internet Groupware, from O'Reilly and Associates. Jon now works as an independent Web/Internet consultant. His recent BYTE.com columns are archived at http://www.byte.com/tangled/
This work is licensed under a Creative Commons License.