CISE - Funding
OVERVIEW OF NSF FY04 CYBER TRUST AND RELATED CAREER AND ITR AWARDS
Computing, communications, and storage resources worldwide continued to grow at
exponential rates in 2004. Unfortunately, computer security incidents continued to show
parallel growth patterns. As the use of the Internet for commercial purposes continues to
grow, so do the opportunities for its abuse by criminals, for example through theft of
sensitive information via "phishing" attacks.
Today's problems call for research of many kinds, both to understand better the
complex system of systems on which society depends and to lay the foundations for a world in
which today's attacks can perhaps be ruled out by design. In its 2004 awards, NSF's Cyber
Trust program seeks to advance relevant research on many fronts, to educate students in how
to design and build more trustworthy systems, and to inform the public about safe ways to
use the systems on which they depend. This note summarizes 35 Cyber Trust awards, 5 related
FY04 ITR awards, and 10 related CAREER awards.
Most research projects have several dimensions, such as the expected time to yield
results, where the project lies on scales ranging from empirical to theoretical work, from
foundational to applied, and across domains and disciplines of study. Any attempt to group
them into categories will consequently succeed better for some than for others. The
framework in which the projects are described below is intended to help readers relate
projects to each other and to provide an overall picture of the program, but these
categories were not provided by the principal investigators. The project descriptions below
are based on the proposals and the award abstracts.
The categories used here are as follows:
A. Security of next generation OS and networking issues
B. Forensic and law enforcement foundations
C. Human computer interface for security functions
D. Cross-disciplinary approaches
E. Theoretical foundations and mechanisms for privacy, security, trust
F. Composable systems and policies
G. Presenting security concepts to the average user
H. Improved ability to certify system security properties
I. Improved ability to analyze security designs, build systems correctly
J. More effective system monitoring, anomaly detection, attack recognition and defense
K. Integrating hardware and software for security
Other categorizations of the awards are provided at the end of this
document.
A. Research on Next Generation OS and Networking Issues
Operating Systems:
1. OS Support for Application Installation, Execution, and Management in an
Untrustworthy World. Steven Gribble and Henry Levy, University of Washington.
Unfortunately, modern operating systems do little to help users address the security and
vulnerability challenges of the modern networked environment. For example, it is difficult
to determine what programs are installed or running on a system, or what code is responsible
for generating visible activity (such as network traffic, file system activity, or windowing
activity). It is even more difficult for users to install or remove code from a system
cleanly or completely. The proposed research seeks to address these shortcomings of modern
operating systems by reconsidering the OS architecture in light of the modern world. In
particular, a goal is to provide new program installation, execution, and management
abstractions within the OS.
NSF Award #0430477
2. Securing Untrusted Software with Interposition. David Mazieres, NYU,
Frans Kaashoek and Robert Morris, MIT, Edward Kohler, UCLA. Current operating systems have
vulnerabilities precisely because it is so difficult to write secure programs for them. A
new operating system design, Asbestos, is proposed that allows one to secure applications
without fully understanding them. The fundamental Asbestos security primitive is
interposition. One or more programs can easily interpose upon, monitor, and control any and
all interactions between an application and the rest of the system. Interposers correspond
to security policies. They can block unwanted accesses or even virtualize parts of the
system, so that legacy applications that demand inappropriately high privilege can run in a
less-privileged setting. Interposers themselves need have no more privilege than the
applications on which they interpose. Network firewalls make a good analogy: they secure
network interactions between applications they don't fully understand by controlling their
communication. Asbestos interposers are like per-application firewalls. Asbestos aims to
make secure programming easier and more accessible.
NSF Award #0430425
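The interposition idea can be illustrated with a minimal sketch. This is not the Asbestos API (all names below are hypothetical); it only shows the pattern the summary describes: a policy object that sits between an application and a resource, observing every interaction and blocking those the policy forbids, with no more privilege than the application itself.

```python
class ReadOnlyInterposer:
    """Toy analogue of interposition: wrap a target object, forward
    allowed calls, and block mutating calls without needing to
    understand the wrapped application."""
    BLOCKED = {"write", "delete", "rename"}

    def __init__(self, target):
        self._target = target
        self.denied = []          # audit log of blocked interactions

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if name in self.BLOCKED and callable(attr):
            def deny(*args, **kwargs):
                self.denied.append((name, args))
                raise PermissionError(f"policy blocks '{name}'")
            return deny
        return attr


class FakeStore:
    """Stand-in for a resource a legacy application might use."""
    def __init__(self):
        self.data = {}
    def read(self, key):
        return self.data.get(key)
    def write(self, key, value):
        self.data[key] = value


store = FakeStore()
store.data["x"] = 1
guarded = ReadOnlyInterposer(store)
print(guarded.read("x"))            # reads pass through
try:
    guarded.write("x", 2)           # writes are intercepted
except PermissionError as e:
    print(e)
```

As with the firewall analogy in the summary, the interposer enforces a policy on interactions it mediates rather than on code it understands.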
3. SecureCore for Trustworthy Commodity Computing and Communications.
Ruby Lee and Mung Chiang, Princeton, Cynthia Irvine, Naval Postgraduate School, Terry
Benzel, USC-ISI. The SecureCore project will design and develop a secure integrated core for
trustworthy operation of mobile computing devices consisting of: a security-aware
general-purpose processor, a small security kernel and a small set of essential secure
communications protocols. The research will use a clean slate approach to investigate a
minimal set of architectural features required for such a secure core, for use in platforms
exemplified by pocket devices (e.g., contact-less smart card), secure embedded systems
(e.g., computer in a heart monitor), and mobile computing devices (e.g., handheld
web-enabled computer). The goal is to achieve security without compromising performance,
size, cost or energy consumption.
NSF Award #0430487
Networking
1. Privacy and Surveillance In Wireless Systems. Dirk Grunwald and Greg
Grudic, University of Colorado, Boulder. This research will investigate how privacy may be
provided at the physical layer in future wireless networks. A dialectic study of location
privacy in wireless networks will be conducted -- a "good cop" vs. "bad cop" exploration of
the technical limits and abilities for surveillance in common data networking. Two paths
of research will be pursued: mechanisms for wireless privacy will be investigated, and those
techniques will be subjected to the statistical extrapolations made possible by machine
learning. From these measurements, the research will address both location privacy -- the
hiding of information in a location-aware wireless system, and surveillance -- the
observation of actors in wireless networks.
NSF Award #0430593
2. Trustworthy and Resilient Location Discovery in Wireless Sensor
Networks. Peng Ning, North Carolina State U., Wenliang Du, Syracuse U. This
research will investigate a suite of techniques to prevent, detect, or survive malicious
attacks against location discovery in sensor networks. The effort will study key management
schemes suitable for authenticating beacon messages, explore techniques to make existing
location discovery schemes more resilient, seek beaconless location discovery that uses
deployment knowledge instead of beacon nodes, and investigate methods to integrate
techniques cost-effectively for sensor network applications. In this way, sensor network
applications can be developed with inherent, built-in security.
NSF Award #0430223
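Beacon-message authentication of the kind this project studies can be sketched with a symmetric-key MAC. The sketch assumes a pre-shared key; designing key-management schemes suitable for establishing such keys in sensor networks is precisely the research problem, so this is an illustration of the goal, not of the project's techniques.

```python
import hmac
import hashlib

KEY = b"pre-shared beacon key"  # assumed established by a key-management scheme

def make_beacon(node_id: str, x: float, y: float, seq: int) -> bytes:
    """A beacon node announces its location with an authentication tag."""
    msg = f"{node_id}|{x}|{y}|{seq}".encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def verify_beacon(beacon: bytes) -> bool:
    """A non-beacon node rejects beacons whose tag does not verify."""
    msg, _, tag = beacon.rpartition(b"|")
    expected = hmac.new(KEY, msg, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

b = make_beacon("beacon-7", 12.5, 3.0, seq=42)
print(verify_beacon(b))                             # authentic beacon
print(verify_beacon(b.replace(b"12.5", b"99.9")))   # forged location fails
```

Authentication alone does not make location discovery resilient (compromised beacons can lie with valid tags), which is why the project also pursues resilient and beaconless schemes.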
3. CAREER: Secure and Resilient Sensor Network Communication
Infrastructure. Adrian Perrig, Carnegie Mellon U. This research studies the
problem of secure and attack-resilient communication in sensor networks. The result of this
research is an easy-to-use secure communication infrastructure. This secure communication
infrastructure will enable sensor network designers to construct secure and attack-resilient
networks, without requiring security experts. The resulting sensor network will provide data
secrecy (provide robustness against eavesdropping) and data authenticity (prevent message
injection), provide resistance against malicious resource consumption attacks, and provide
resistance against physical sensor node capture and compromise.
NSF Award #0347807
4. Controlling Internet Denial of Service with Capabilities. David
Wetherall and Thomas Anderson, University of Washington. This research addresses Internet
denial of service problems at the level of internet architecture. If successful, this
change can act as a solid foundation for higher-level network services; patches cannot. The
key question is whether it is possible to design a scalable, heterogeneous, and open network
that resists Denial of Service attacks. The approach targets the root cause of flooding
attacks: that any node is able to send packets to any destination at any time. In the
proposed architecture, destinations (and paths) are given control over the network resources
used to reach them. Capabilities are the mechanism used to effect this control.
Senders must first obtain “permission to send” from the destination; a receiver provides
tokens, or limited use capabilities, to those senders whose traffic it agrees to accept.
The senders then include these tokens in packets. This enables verification points
distributed around the network to check that traffic is certified as legitimate by both
endpoints, and to cleanly discard unauthorized traffic.
NSF Award #0430304
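The capability mechanism can be sketched as follows. For simplicity the sketch assumes verification points share the receiver's key, which is not how the proposed architecture distributes trust; it only illustrates the flow the summary describes: the receiver grants a limited-use token, and in-network checkpoints verify it without contacting the receiver.

```python
import hmac
import hashlib

SECRET = b"receiver secret"  # hypothetical key held by the destination

def grant_capability(sender: str, dest: str, expires: int) -> str:
    """Receiver issues a token authorizing sender->dest traffic until 'expires'."""
    msg = f"{sender}|{dest}|{expires}".encode()
    return msg.decode() + "|" + hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def router_check(packet_token: str, now: int) -> bool:
    """A verification point accepts a packet only if its token is valid."""
    sender, dest, expires, tag = packet_token.split("|")
    msg = f"{sender}|{dest}|{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and now < int(expires)

tok = grant_capability("alice", "server", expires=1000)
print(router_check(tok, now=500))                            # accepted
print(router_check(tok, now=2000))                           # expired
print(router_check(tok.replace("alice", "mallory"), now=500))  # forged
```

Unauthorized traffic never carries a valid token, so flooding packets can be discarded inside the network instead of at the victim.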
5. Real-Time Internet Routing Anomaly Detection and Mitigation.
Zhuoqing Mao, University of Michigan. This research will address the detection and
prevention of security problems of the Internet routing infrastructure. A distributed
routing Intrusion Detection System (Router IDS) will be developed for performing real-time
Internet routing anomaly detection and mitigation to improve the robustness of the routing
infrastructure. Router IDS detects routing anomalies by combining publicly available routing
data from multiple vantage points to check consistency and identify deviations from past
routing behavior. It disambiguates uncertainties by correlating routing data with both
passively collected traffic data as well as actively triggered light-weight probe packets
targeting destinations relevant to the routing updates observed. Proactive measures are
undertaken to mitigate anomalies by preventing the local router's forwarding table from
being changed or polluted. Routing policies are modified, or the suspicious routing
updates are filtered, to avoid using the incorrect route. If there are
alternate routes to a destination, a suspicious routing update can be excluded as part of
the route selection process as a safety precaution. If there is no access to the router,
overlay routing is used to bypass the router using the incorrect route.
NSF Award #0430204
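One of the consistency checks such a system might run can be sketched as a toy origin-change detector: track which origin ASes have historically announced each prefix, and flag updates from a previously unseen origin for further probing. This is an illustrative fragment, not the Router IDS design, which correlates many data sources.

```python
from collections import defaultdict

# prefix -> set of origin ASes observed in past routing data
history = defaultdict(set)

def observe(prefix: str, origin_as: int) -> bool:
    """Return True if this update deviates from past routing behavior."""
    anomalous = bool(history[prefix]) and origin_as not in history[prefix]
    history[prefix].add(origin_as)
    return anomalous

print(observe("192.0.2.0/24", 64500))  # first sighting: baseline, not anomalous
print(observe("192.0.2.0/24", 64500))  # consistent with history
print(observe("192.0.2.0/24", 65001))  # new origin: flag for probing
```

In the project's terms, a flag like this would trigger the active light-weight probes that disambiguate a legitimate routing change from a hijack.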
6. (ITR) Large-Scale Network Simulation for Security and Survivability
Evaluation. George Riley (Ga Tech). The Internet routing infrastructure and
domain name infrastructure are huge and highly dynamic systems. This research is focused
on the detailed study and analysis of the behavior of these systems, using newly developed
high-performance simulation tools capable of modeling networks consisting of hundreds of
thousands or millions of network elements. A large part of the effort will be focused on
survivability analysis for these critical Internet infrastructure services, in the presence
of either deliberate or accidental failures or mis-configurations. If this work succeeds,
the Internet will become more resilient to failures and more dependable for end users.
NSF Award #0427700
Storage
1. Design and Implementation of Hydra: A Platform for Survivable and Secure
Storage Systems. Lihao Xu, Washington University. Supporting the availability,
survivability, persistence, confidentiality and integrity of information is becoming more
and more crucial. This calls for secure and reliable data storage systems that distribute
information over networks, enabling users to store and access critical data in a
continuously available and highly trustable fashion. The goal of this project is to design
and implement a general platform for data storage systems to meet such objectives. The
project's novelty lies in its use of MDS array codes designed in previous related
projects, and in applying a single MDS code to both error detection/correction and
encryption.
NSF Award #0430224
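As a degenerate illustration of the coding idea (not the project's MDS array codes), a single-parity (k+1, k) XOR code is the simplest MDS code: data is split across k servers plus one parity server, and any one lost or failed share can be rebuilt from the survivors.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_shares):
    """(k+1, k) single-parity code: append the XOR of all data shares."""
    parity = reduce(xor_bytes, data_shares)
    return list(data_shares) + [parity]

def recover(shares, missing_index):
    """Rebuild the share at missing_index by XOR-ing the k survivors."""
    survivors = [s for i, s in enumerate(shares)
                 if i != missing_index and s is not None]
    return reduce(xor_bytes, survivors)

stored = encode([b"ab", b"cd", b"ef"])        # distribute across 4 servers
lost = 1                                       # one server fails
survivors = [None if i == lost else s for i, s in enumerate(stored)]
print(recover(survivors, lost) == b"cd")       # the lost share is rebuilt
```

General MDS codes extend this to tolerate multiple simultaneous failures with minimal storage overhead, which is what makes them attractive for survivable distributed storage.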
B. Forensic and Law Enforcement Foundations
1. ForNet: Design and Implementation of a Network Forensics System.
Nasir Memon, Hervé Brönnimann, Douglas Salane, Adina Schwartz, and Joel Wein, Polytechnic
University, Brooklyn, New York. This project addresses the lack of effective forensic tools for network level
investigation of malicious activity on the Internet and other IP networks. Future networks
are expected to have forensic support integrated into them to deter cybercrimes. The main
goal of this research is to develop tools and techniques to realize this vision and build a
proof of concept prototype network forensics system. The resulting Forensic Network will
support more thorough investigations; provide evidence of probable cause for law enforcement
to obtain warrants; and in some cases supply proof for use in civil and criminal trials.
Synopsized network traffic will be captured to answer the following kinds of questions:
- Where are the zombies that participated in a DDOS attack? When were they planted?
What system or network planted them? Where are additional zombies that have not yet been
activated?
- Was the defendant's computer hacked? When? What system did the hacker come from? Did the
hacker commit the crime the defendant is accused of? Was a zombie planted? When?
- Has a specific confidential file been sent through or from our intranet? When? Where did
it go? Did the request to send the document come over the intranet? From where?
NSF Award #0430444
2. Blind Detection of Digital Photograph Tampering. Shih-Fu Chang, Ravi
Ramamoorthi, Columbia University. The ability to ensure information integrity throughout
the information distribution chain is one of the core requirements for building a
trustworthy cyber environment. This research will develop a completely blind and passive
detection system, i.e., no extra encryption, signature extraction, or information embedding
processes are needed. Given a digital image in question at the point of checking, the goal
is to detect content tampering operations by analyzing the natural signal/scene
characteristics in the image without incurring any other overhead. Recent advances in
signal processing, computer graphics, and statistical machine learning have brought about
great potential for breakthroughs in the above mentioned direction. This project initiates
a collaborative effort combining cross-disciplinary expertise in signal processing and
computer graphics. A multi-level system for detecting image forgeries at both the low-level
signal and high-level scene is proposed. At the signal level, image authenticity is defined
as the signal quality of original images, which comes directly from imaging devices such as
cameras and scanners. At the scene level, image authenticity is defined as the
quality of a consistent 3-D scene. The signal processing approach involves innovative use
of higher-order signal statistics, image decomposition, and image structural analysis to
identify the image-splicing effect, which is direct evidence of photomontaging. The
computer graphics approach includes novel techniques of 3D geometry estimation,
illumination field recovery, and scene reconstruction to detect inconsistency at the scene
level like shadows, shading, and geometry. The signal level and the scene level detection
form a powerful combination - research combining both directions presents a great
opportunity for innovation but it has not been explored to date. The goal is to discover
fundamental possibilities and limits of the new paradigm.
NSF Award #0430258
3. Center for Internet Epidemiology and Defenses. Stefan Savage,
Geoffrey M. Voelker, George Varghese, Univ. California San Diego, and Vern Paxson, Nick
Weaver, International Computer Science Institute, Berkeley CA. Understanding the scope
and emergent behavior of Internet-scale worms seen in the wild constitutes an emerging
new science termed Internet epidemiology. This center-scale activity award includes a
significant focus on large-scale network forensics including analysis of victims and
attribution of attacks. Key tools in this pursuit are the Center's construction and
operation of a distributed network telescope of unprecedented scale that in turn feeds
a honeyfarm collection of vulnerable "honeypot" servers whose infection serves to indicate
the presence of an Internet-scale worm.
NSF Award #0433668
4. Using generative models to evaluate and strengthen biometrically enhanced
systems. Fabian Monrose, Johns Hopkins University, Daniel Lopresti, Lehigh
University, Michael Reiter, Carnegie Mellon University. This research will investigate the
difficulty of synthesizing voice or handwriting characteristics used in generating
passwords hardened with biometric data. The import for forensics is that it will help
establish the effort an attacker might incur in attempting to masquerade as a different
user.
NSF Award #0430338
C. Human-Computer-Interfaces for security functions
1. Exploring Risk Perception And Ultimate Trust in Online Environments:
Viewpoints of the Visually-Impaired. Juline Mills, Purdue.
This research addresses system interfaces for establishing trust from the perspective of
a visually impaired user. The study will specifically seek to: 1) identify the salient
factors important in developing visually-impaired consumer trust in online businesses; 2)
identify the interpersonal trust factors and the corresponding levels of interpersonal
trust critical to the adoption or continued use of the Internet by visually-impaired
consumers; 3) explore variations in trust antecedents based on the demographic
characteristics (age, gender, and ethnic and cultural background) of visually-impaired
consumers; 4) develop a trust typology model for visually-impaired consumers;
5) develop educational programs to help users and non-users increase trust online.
NSF Award #0430406
2. (ITR) Panoply: Enabling Safe Ubiquitous Computing Environments.
Leonard Kleinrock, Gerald Popek, Peter Reiher, UCLA. This research will investigate a new
concept of access control intended to be more understandable and convenient for a broad
range of users. The main concept underlying access control in Panoply is based on "spheres
of influence." These spheres divide the ubiquitous computing world into logical and
physical groups, and the research will carefully define the ways in which devices interact
with each other and move between spheres. The concepts will be tested in an art museum,
where mobile computing devices will be used by children and others.
NSF Award #0427748
D. Cross-disciplinary approaches
1. An Economic Approach to Security. Joan Feigenbaum, Dirk Bergemann,
Yale University, Scott Shenker, International Computer Science Institute. This is a
three-year, multi-institutional, multi-disciplinary research project on the economics of
security in networked environments. Specific research topics to be pursued include
security of interdomain routing, adoptability of trusted platforms, compatibility of
"host security" mechanisms (such as OS file-protection systems) and "network security"
mechanisms (such as firewalls), the tension between universal access to a subnetwork
and security of that subnetwork's assets (and the sensitivity of this question to
subnetwork size), and markets for private information. The goal of the proposed activity
is to broaden and deepen the nature of security research so that it (1) includes full
consideration of adoption incentives and other relevant economic issues and (2) fully
integrates multiple subdisciplines of CS research.
NSF Award #0428422
2. Network Security Begins at Home: Changing Consumer Behavior for
i-Safety. Robert LaRose, Michigan State University. This project extends online
privacy research to develop a theoretical model of online safety behavior, evaluates and
tests that model in the context of current security interventions, and develops and tests
a consumer online safety tool. The term i-Safety connotes information safety and also the
role that all individuals play in preserving it. The model synthesizes theories of human
behavior and theories of consumer information processing. Specifically, it examines the
relationships among safety involvement, knowledge of online safety hazards, the expected
outcomes of safe and unsafe online behavior, self-efficacy beliefs in one's abilities to
avoid risk and to take preventive actions, and social norms about online behavior, the
performance of both risky and preventive behavior, and the formation of safe online habits.
Phase 1 will extend and validate a theoretical model of online safety behavior through a
national panel survey of 1000 Internet users using structural equation modeling techniques.
In Phase 2 panel members will be recontacted to participate in experimental studies of
the effects of online safety interventions on consumer information processing and safety
behavior. An online safety auditing application will be developed to measure effectiveness.
In Phase 3 the safety auditing application will be expanded into a personalized safety
assessment tool that will facilitate personalized online safety instruction, and the
effectiveness of the application will be evaluated.
NSF Award #0430318
3. Defense from Cyber-Attack Using Deception. Neil Rowe, Naval
Postgraduate School. Deception is a key feature of human social interaction not much
studied in either information security or artificial intelligence. This research aims to
develop testable computational models of deception, including the major sub-phenomena of
trust, expectation, suspicion, surprise, deception plans, and manufactured patterns. Such
a theory can be used to explain both offensive deceptions (to gain some advantage) and
defensive deceptions (to foil someone else's plans). This theory will be used to build and
test deceptive software for a second line of defense for computer systems against attack.
NSF Award #0429411
4. (see Bertino/Anton project, E.1 below)
E. Theoretical foundations for concepts such as privacy or availability
1. A Comprehensive Policy-Driven Framework for Online Privacy Protection:
Integrating IT, Human, Legal and Economic Perspectives. Elisa Bertino, Melissa
Dark, Ninghui Li, Robert Proctor, Victor Raskin, Purdue University, Ana Anton and Ting Yu,
North Carolina State University. This three-year, multi-institution, multi-disciplinary
research project will provide a comprehensive framework for protecting online privacy,
covering the entire privacy policy life cycle. It will advance the state of the art in
methods and techniques dealing with privacy policy specification, deployment, enforcement
and communication. Expected results include: (1) An expressive language for specifying
privacy policies that has an intuitive and precise semantics based on a rich ontological
resource. (2) A comprehensive framework for authoring, enforcing, and auditing privacy
policies. This includes access control theory and tools to enforce privacy policies in
distributed information systems (e.g., databases), theory and tools for policy analysis,
tools for authoring policies, as well as information flow control theory based on privacy
policies. (3) Methodologies and tools for empowering users with control of their own
privacy, through user-friendly and ontology-based interfaces. (4) Methodologies and tools
for evaluating today's privacy practices as well as studying the economic and legal
factors in online privacy protection. Trustworthy privacy protection can only be attained
when broad consideration is given not only to IT (information technology) solutions, but
also to a wide range of perspectives from other disciplines. To this end, the project will
systematically integrate privacy-relevant human, legal and economic perspectives in the
framework.
NSF Award #0430274
2. Privacy-Aware Information Release Control. Sushil Jajodia and Claudio
Bettini, George Mason University, and Xiaoyang Wang, University of Vermont. The starting
point of this project is the realization that privacy concerns take different forms for
different data sets. In order to preserve the privacy of individuals, the privacy concerns
must be formalized. When data is released, whether used in privacy-preserving data mining
or simply published to a third party or the general public, these privacy rules need to
be satisfied. This is termed privacy-aware information release control. Two general
approaches are adopted: query anonymization and online data checking. Query anonymization
means that each query is evaluated to see how much private information it would disclose.
If the query discloses too much, it is modified so that the required privacy level is
maintained. Here, the technical challenge is how to ensure that the system will
release the maximum information but without any privacy violation. Online data checking
means that when data is released, privacy rules will be checked on the to-be-released
data to find any privacy violation. The technical challenge of online checking is its
efficiency. These two methods are complementary and can sometimes be used together in a
practical system. The above techniques are based on knowing the privacy level that the
data requester is allowed to have. Once data is released, depending on the level of private
data contained in the output, some obligations may be attached. This project also tackles
the problems related to management of such obligations.
NSF Award #0430402
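One concrete privacy rule an online release checker might enforce can be sketched with k-anonymity: before a table is released, flag it if any combination of quasi-identifier values singles out fewer than k individuals. This is only an illustration of the "online data checking" step; the project's formalized privacy rules are more general.

```python
from collections import Counter

def violates_k_anonymity(rows, quasi_ids, k=3):
    """Flag a to-be-released table in which some quasi-identifier
    combination matches fewer than k rows (re-identification risk)."""
    counts = Counter(tuple(row[c] for c in quasi_ids) for row in rows)
    return any(n < k for n in counts.values())

table = [
    {"zip": "47906", "age": "30-39", "dx": "flu"},
    {"zip": "47906", "age": "30-39", "dx": "cold"},
    {"zip": "47906", "age": "30-39", "dx": "flu"},
    {"zip": "47907", "age": "40-49", "dx": "asthma"},  # unique combination
]
print(violates_k_anonymity(table, ["zip", "age"], k=3))  # last row is re-identifiable
```

Query anonymization attacks the same risk from the other side, rewriting or coarsening queries so that released answers never violate such a rule in the first place.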
3. Experiments in CyberSpace. Roy Maxion and Dan Siewiorek, Carnegie
Mellon University. The main objective of this research is to bring experimental and
analytic rigor to bear on a range of problems important to operations in cyberspace
(metrics, measurements, and evaluation), as well as to demonstrate the effectiveness of
that rigor in actually solving real problems and producing useful and broadly employed
tools in these same areas. The planned results of this project include:
- A publicly available set of well-vetted metrics for intrusion detection algorithms.
- A suite of tools for generating custom reference data sets to be used for testing the
operational range of detector algorithms.
- A set of gold-standard reference data sets that anyone can use for testing new or
revised detector algorithms, and that can be used for replicating the work and claims of
others on a common basis.
- A tool for running complex experiments to broadly evaluate the strengths and limitations
and operational effectiveness of detection algorithms for computer security.
- An introductory curriculum for rigorous experimental computer science.
NSF Award #0430474
4. Privacy-Protecting Mechanisms for Data Escrow and Transaction
Monitoring. Stanislaw Jarecki, UC-Irvine. Mechanisms that limit the privacy
threats posed by data collection and monitoring applications, while still enabling their
efficient operation, are the goal of this research. The conflict between privacy and
monitoring can only be resolved if the monitoring agency does not require unconstrained
access to the data. For example, if the agency only needs access to data that satisfy
some pre-defined suspicious patterns, then cryptographic mechanisms may enforce both the
correctness of the accessed data and the secrecy and anonymity of the data that do not
meet the searched-for patterns. The research will (1) identify patterns likely to generate
the most exciting and beneficial applications of data escrow and monitoring, (2) discover
cryptographic techniques that enable efficient privacy-protected monitoring for data
patterns within the identified classes, (3) search for practical solutions that admit, for
example, weakened measures of privacy or only probabilistic correctness and detectability
of the monitored patterns. Preliminary investigations identified the link between simple
privacy-protected data escrow applications and deterministic encryptions, unlinkable
signatures on ciphertexts, and fair two-party computation of probabilistic
functionalities.
NSF Award #0430622
5. (ITR) Privacy-Preserving Data Integration and Sharing. Christopher
Clifton, Ahmed Elmagarmid, Purdue U., Dan Suciu, U. Washington, AnHai Doan, U. Illinois,
Gunther Schadow, Regenstrief Institute for Healthcare. This project will develop the
technology needed to create and manage federated databases while controlling the disclosure
of private data. While the emphasis will be on general techniques for data integration that
preserve privacy, the project will work in the context of diverse but particularly relevant
problem domains, including scientific research and emergency preparedness. Domain experts
from these fields will participate in developing and testing the techniques. Research
thrusts will investigate (1) developing a privacy framework for data integration that is
both flexible and understandable to end users, (2) establishing semantic correspondence
between schemas while limiting disclosure, (3) querying across sources while preserving
privacy, and (4) object matching and consolidation without revealing data sources.
NSF Award #0428168
6. (ITR) The Design and Use of Digital Identities. Elisa Bertino, Howard
Sypher, Purdue University.
This project addresses a wide variety of digital identity needs by developing required
Flexible, Multiple and Dependable Digital Identity (FMDDI) technology, based on a sound
underlying set of definitions and principles. The FMDDI technology developed in the project
will support multiple forms of identity, including nyms, partial identities, and a variety
of user properties, credentials, and roles. Relevant research thrusts in the project
include: identity schemes and representation formats; use of ontology and issues related
to identity interoperability; anonymity, dependability, accountability, and
forensic-friendly identification schemes; psychological and social aspects related to the
use of digital identities.
NSF Award #0428554
F. Create composable systems and policies
1. Integrating Security and Fault Tolerance in Distributed Systems.
Andrew Myers, Ken Birman, and Fred Schneider, Cornell University. This research focuses on
the construction of trustworthy distributed systems: systems that tolerate both malicious
attacks and benign faults while preserving data integrity and confidentiality. The
development of high-assurance systems has been dominated by work on two separate themes:
security and fault tolerance. Security dogma holds that a trustworthy system must be able
to defend against malicious attacks, building from a trusted computing base. Fault
tolerance dogma maintains that a trustworthy system cannot depend on any single component
functioning correctly, because that component becomes a vulnerability. These two views are
incompatible because a trusted computing base could become a single point of failure, and
because efficient fault-tolerant replication protocols assume nonmalicious failures. This
project will explore new ways to synthesize these two approaches. The goal is new methods
for constructing distributed systems that are trustworthy in the aggregate even when some
nodes in the system have been compromised by malicious attackers.
NSF Award #0430161
2. A Survivable Information Infrastructure for National Civilian
BioDefense. Yair Amir and Brian Coan, Johns Hopkins, Cristina Nita-Rotaru,
Purdue, and Rafail Ostrovsky, UCLA. This research aims to develop the theoretical
foundation and the protocols that facilitate a survivable information infrastructure
that meets all the critical requirements of a national emergency response system. A
key part of the work will focus on a survivable messaging infrastructure that will
continue to function in an environment where some servers are compromised. The research
will attempt to identify general principles that will help construct other national
emergency systems sharing similar characteristics and requirements.
NSF Award #0430271
3. Byzantine Replication for Trustworthy Systems. Lorenzo Alvisi and
Michael Dahlin, University of Texas. In a world where economics dictates that few
components be rigorously tested or verified, methods for building trustworthy systems
from untrustworthy components are essential. An attractive approach toward managing the
complexity inherent in building trustworthy distributed systems is to model a
compromised component as faulty according to the Byzantine failure model, the weakest
of all failure models, which allows faulty components to deviate arbitrarily and
maliciously from their correct specification. This research explores the feasibility
of this approach, both with respect to its fundamental assumptions and to its engineering
viability. On the first front, the focus is on (1) the challenge of reconciling
fault tolerance and privacy by developing a firewall with formally verifiable privacy
guarantees and (2) the establishment of a solid, quantitative basis for measuring failure
independence of replicas against security attacks in Byzantine fault-tolerant
architectures. On the second front, the emphasis is on exploring novel ways to implement
Byzantine services that provide low latency, high throughput, and can be quickly and
unobtrusively reconfigured to improve their performance in response to changes in the
environment in which they operate. Addressing these issues successfully will enable a
strategy for assembling untrustworthy components to obtain trustworthy systems.
NSF Award #0430510
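The quorum reasoning behind the Byzantine failure model described above can be sketched briefly (an illustrative toy, not the protocols under study in this project): with n = 3f + 1 replicas, any value vouched for by at least f + 1 of them must come from at least one correct replica, so up to f arbitrarily malicious nodes cannot forge an accepted answer.

```python
# Illustrative sketch of a Byzantine-tolerant read: accept a value only if
# at least f + 1 replicas report it, so at least one correct replica backs it.
from collections import Counter

def byzantine_read(replies, f):
    """Return a value vouched for by at least f + 1 replicas, else None."""
    value, votes = Counter(replies).most_common(1)[0]
    return value if votes >= f + 1 else None

# With 4 replicas tolerating f = 1 Byzantine fault, one lying replica
# cannot outvote the three correct ones.
print(byzantine_read(["x", "x", "x", "evil"], f=1))  # -> x
```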
4. High Fidelity Methods for Security Protocols. John Mitchell,
Stanford, Andre Scedrov, Univ. of Pennsylvania, Vitaly Shmatikov, Univ. of Texas - Austin,
and Daniele Micciancio, Univ. of California - San Diego. This project focuses on three
topics concerning the design and security analysis of protocols that use cryptographic
primitives: foundations of protocol analysis, automated tools, and application of tools
and methods to selected protocols. Foundational work centers on relating and combining
two previously separate approaches: logical methods based on symbolic execution of
protocols, and computational methods involving probability and polynomial-time computation. The
symbolic approach uses a highly idealized representation of cryptographic primitives
and has been a successful basis for automated tools. Conversely, the computational
approach yields more insight into the strength and vulnerabilities of protocols, but is
difficult to apply and only accessible to a small number of true experts in the field.
Building on past success using several automated tools, the project will devise tool-based
methods that leverage new scientific foundations. Three likely application areas are
secure group communication and key agreement protocols, schemes for privacy-preserving
computations, and wireless networking and applications. In each of these areas, there is
current demand for new secure protocols from user communities, there is ongoing activity
in the research and standardization communities, and the value of combining symbolic and
computational analysis concepts is evident.
NSF Award #0430594
G. Present security concepts effectively to the average user
1. Secure Personalization: Building Trustworthy Recommender Systems.
Robin Burke and Bamshad Mobasher, DePaul University. Many ordinary users of e-commerce
systems today depend on the recommendations presented by a variety of commercial web
sites; this format of information is evidently accepted by a wide range of users. This
research will conduct a comprehensive study of the robustness of recommender systems in
the face of malicious attacks. The analysis will be multidimensional, examining the
effects of a range of attack types on a comprehensive set of recommendation algorithms
including hybrid approaches using different types of user profiles in diverse
recommendation domains. Formal models will be developed for the analysis of robustness
in recommender systems. Different attack types will be explored and modeled; attack
detection and countermeasures will be considered, and techniques for enhancing the
robustness of recommender systems will be explored.
NSF Award #0430303
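The robustness problem this project studies can be illustrated with a toy profile-injection ("shilling") attack on the simplest possible recommender, one that predicts an item's rating as the average of all user ratings; the numbers below are made up purely for illustration and do not come from the project.

```python
# Toy illustration: a handful of injected 5-star profiles shifts the
# predicted rating of a target item in a plain average-rating recommender.
def predicted_rating(ratings):
    return sum(ratings) / len(ratings)

genuine = [2, 3, 2, 3, 2]                 # honest users rate the item ~2.4
before = predicted_rating(genuine)
attacked = genuine + [5] * 5              # attacker injects five fake 5-star profiles
after = predicted_rating(attacked)
print(round(before, 2), round(after, 2))  # -> 2.4 3.7
```

Real recommendation algorithms (collaborative filtering, hybrids) are less naive, which is exactly why the project's formal models of attack types and robustness are needed.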
2. LaRose (D.2 above) NSF Award #0430318
3. Kleinrock (C.2 above) NSF Award #0427748
H. Improved ability to certify system security properties
1. Type Qualifiers for Software Security. Alexander Aiken, Stanford,
David Wagner, UC-Berkeley, Jeffrey Foster, U Maryland. This research has the goal of
establishing that very large software components are free of specific kinds of security
vulnerabilities. Thus, the focus is not just on finding security holes in software, but
on verifying their absence. The approach is based on static analysis, which by analyzing
source code can model all possible executions of a program. Previous experience has
shown that simple, approximate tools do not find all or even nearly all security
vulnerabilities; the higher assurance given by verification is needed. The experimental
goal is to apply these techniques to the Linux kernel, a security-critical application
with millions of lines of code.
NSF Award #0430378
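The qualifier idea can be illustrated with a toy taint check (hypothetical names, not the project's actual analysis): values carry a "tainted" or "untainted" qualifier, taint propagates through every operation, and a tainted value reaching a trusted sink is rejected before the program ever runs.

```python
# Minimal sketch of type-qualifier checking for taint: any tainted operand
# taints the result, and tainted data may never reach a trusted sink.
TAINTED, UNTAINTED = "tainted", "untainted"

def qualifier_of(*quals):
    # Taint is "sticky": the result is tainted if any operand is.
    return TAINTED if TAINTED in quals else UNTAINTED

def check_sink(qual):
    if qual == TAINTED:
        raise TypeError("tainted value reaches trusted sink")
    return True

user_input = TAINTED             # e.g. a string read from the network
constant = UNTAINTED
combined = qualifier_of(user_input, constant)
print(combined)                  # -> tainted
check_sink(constant)             # untainted data: accepted
# check_sink(combined)           # would be rejected by the checker
```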
2. Trusted Certification Tools. Warren Hunt and J Strother Moore, Univ.
of Texas - Austin. This research focuses on the goal of producing high assurance systems, in
which the buyer can specify the required software/hardware properties formally and the
untrusted vendor can provide with the delivered product a machine-checkable proof that the
product has the specified properties. Achieving this vision requires providing
specification paradigms (so buyers can afford to specify what they want), making the task
of finding the first proof less labor intensive (so implementors can afford to prove their
claims), and making the proof tools trustworthy (so their flaws cannot be exploited). This
research aims to advance elements of all three in the context of the widely used ACL2
theorem proving system. (a) The automation and scalability of ACL2 will be increased, (b) a
demonstration of how ACL2 allows an untrusted contractor to deliver a trusted artifact will
be provided, and (c) methods for producing a trustworthy but industrial-strength
certification tool will be explored.
NSF Award #0429591
I. Improved ability to analyze security designs, build systems correctly
1. Cryptographic Foundations of Cyber Trust. Shafi Goldwasser, MIT.
The design of cryptographic protocols is a complex endeavor, which must be accompanied by
a security analysis which rests on sound theoretical foundations. This research will
address challenges that arise in the design of cryptographic protocols at multiple
levels, from the mathematical underpinnings of computational difficulty, through modeling
and analysis of protocols, to deployment and run-time issues. The following objectives will
be pursued: diversifying cryptographic hardness assumptions; adequate modeling and analysis
of cryptographic protocols in complex environments; analyzing the security of current
practices; and designing new cryptographic protocols which achieve stronger levels of
security. The diversity of the challenges addressed will have a significant impact on
the design and practice of cryptographic protocols in the future.
NSF Award #0430450
2. New Complexity-theoretic techniques in cryptography. Salil Vadhan,
Harvard. Most of modern cryptography relies on computationally intractable problems, and
computational complexity theory is the study of such problems. The research focuses on
techniques from computational complexity that may help in addressing important open
problems in cryptography and security. The research is foundational in nature, yet it can
advance a variety of current applications in trustworthy computing, such as doing
cryptography with human-memorizable passwords, mitigating the effect of key exposure, and
extracting cryptographic keys from biometrics. New applications may include proving the
security of Fiat-Shamir-type digital signatures, doing public-key cryptography from one-way
functions with no trapdoor, and understanding the extent to which obfuscation,
watermarking, and homomorphic encryption are possible.
NSF Award #0430336
3. CAREER: Efficient Cryptographic Protocols for Secure and Private
Cryptographic Transactions. Anna Lysyanskaya, Brown U. Electronic transactions,
such as managing bank accounts or reading e-mail, require privacy and security. Since data
aggregation is simple to do, it is desirable to minimize the information transmitted in
each transaction without compromising its authenticity. Cryptographic schemes that make
this possible are the intended outcome of this research. This project investigates the
security requirements of the basic protocols that a system for secure and private
electronic transactions would comprise and develops efficient and provably secure digital
signature schemes and other primitives that lend themselves to the design of such
protocols.
NSF Award #0347661
4. CAREER: Strengthening Cryptography by Reducing Assumptions about the
Adversary. Tal Malkin, Columbia U. Cryptographic security models are defined in
terms of the capabilities of the adversary, including computational limitations and what
access is allowed to the system. The security of protocols is then proven with respect to
such adversaries. Proofs based on fewer and weaker assumptions about the adversary show
stronger protocols. This research will expand the traditional cryptographic foundations
so as to withstand attacks by stronger, more realistic adversaries. In particular, the
classical assumption that the adversary has no access whatsoever to the legitimate
parties' secret keys will be relaxed. The research will study the strongest existing
models, design new models, develop protocols, and explore the limits of what is possible
to achieve, for chosen ciphertext attacks, tampering attacks, and key exposure attacks.
NSF Award #0347839
5. Temporal Aspects. Alan Jeffrey, Radha Jagadeesan, James Riely,
DePaul U., Glenn Bruns, Lucent. This research investigates the use of aspect-oriented
programming language techniques as a safe means to update the functions and security
policies of a high-confidence computer system while the system is running. The
specification, implementation, and verification of secured components will be studied
in an aspect-oriented style. The addition of new software components will be modeled as
dynamic aspects that can modify software during its execution. Dynamic aspects introduce
the possibility for subtle bugs to be introduced in the interaction between conflicting
aspects. A similar problem (known as the Feature Interaction Problem) has been studied in
the telecommunications field. The experience and techniques from that area will be brought
to bear on security features modeled as aspects. Tools will be developed to support these
methods, and they will be applied in case studies.
NSF Award #0430175
6. CAREER: Language-based Distributed System Security. Stephan A.
Zdancewic, U. Pennsylvania. This research addresses the problem of building distributed
systems and reasoning about their security by developing programming languages that
provide better abstractions for describing security policies and communication protocols.
The project builds on existing work on security-typed languages, which protect data
confidentiality and integrity. This research generalizes these information-flow policies
to make them more dynamic, enabling them to accommodate the changing environment in which
distributed systems run. For example, dynamic information-flow policies should integrate
cleanly with traditional authentication and access control mechanisms to provide end-to-end
guarantees about data confidentiality. The second stage of this project is to develop
language technology that applies dynamic security policies to distributed programs. The
idea for this part of the research is to develop a theoretical basis for secure
distributed computing using existing process calculi augmented with a heterogeneous trust
model and the policy language outlined above. In this setting, types describe
communication protocols, properties of which can be verified statically by the compiler.
The result will be a programming language with a sound type system that aids the programmer
in writing correct, security-critical distributed programs.
NSF Award #0346939
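The core information-flow check that security-typed languages enforce can be sketched with a minimal two-level lattice (this is a generic illustration of the information-flow discipline, not the project's policy language): an assignment is legal only if the source's label may flow to the destination's label, so confidential data can never be written into a public location.

```python
# Sketch of static information-flow checking: labels form a lattice in which
# LOW may flow to HIGH but not vice versa, ruling out HIGH -> LOW leaks.
LOW, HIGH = 0, 1   # two-point lattice: public (LOW) and confidential (HIGH)

def flows_to(src_label, dest_label):
    return src_label <= dest_label

def check_assign(src_label, dest_label):
    if not flows_to(src_label, dest_label):
        raise TypeError("illegal information flow: confidential -> public")
    return True

check_assign(LOW, HIGH)    # public data into a secret variable: accepted
# check_assign(HIGH, LOW)  # secret into public: rejected before execution
```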
7. CAREER: Programming Languages for Reliable and Secure Low-level
Systems. Michael Hicks, U. Maryland. This project aims to develop, implement,
apply, and evaluate programming language technologies to ensure the security and
reliability of low-level systems: systems that require careful control over hardware
resources, such as operating systems, embedded systems, and communications systems. The
approach is to employ novel static analysis techniques, primarily type checking and
inference systems, for automatically checking proper usage of idioms common to low-level
software. These idioms include manual memory management, concurrency, and dynamic
reconfiguration; their incorrect usage can lead to service failures, data corruption, and
security exploits. For assessment, the new techniques will be incorporated into a new
C-like programming language called Cyclone, which is then used to build or port real
low-level software, including device drivers, network packet processors and servers, and
embedded control software. These systems are experimentally compared against
traditionally-developed systems to evaluate their flexibility, usability, and
performance.
NSF Award #0346989
8. CAREER: Type Systems for Secure Code Migration. James Riely, DePaul
U. Distributed systems increasingly rely on forms of code migration, such as client-side
scripting, downloaded plugins, application service providers, and networked class loading.
In executing migrating code, trust becomes an important issue: why should a host trust
some newly arrived code to run locally? And why should a migrating agent trust the host
where it is now running? One part of a trust architecture can be the use of type-checking:
a host trusts a newly arrived agent if it can type-check it. This project uses semantic
techniques to provide a formal basis for trust issues in distributed object-oriented
systems with code migration. The formal models are a basis for a prototype language
implementation that provides a secure infrastructure for distributed application
development. Both the problem of a host trusting potentially hostile mobile code and
that of mobile code trusting a potentially hostile host will be addressed. The problems are
formalized using type systems incorporating trust and models of encryption and digital
signatures in order to transmit trust across the network.
NSF Award #0347542
9. CAREER: The Test-Driven Development of Secure and Reliable Software
Applications. Laurie Williams, North Carolina State University. This research
will extend, validate, and disseminate a software development practice to aid in the
prevention of computer-related disasters. The practice is based upon test-driven
development (TDD), a software development technique with tight verification and validation
feedback loops. This project extends the TDD practice and provides a supportive open-source
tool for explicitly situating security as a primary attribute considered in these tight
feedback loops. Additionally, it examines the composition of TDD and pair programming/pair
testing as a security- and reliability-enhancing tuple of development practices. The study
will also examine the potential of pair programming/pair testing for improving the
success/retention of socially-oriented women, men, and minorities in the IT workforce.
NSF Award #0346903.
10. Trustworthy Data Sharing and Management for Collaborative Pervasive
Computing Applications. Stephen Yau, Arizona State U. Data sharing and management
in collaborative pervasive computing applications have the following trustworthiness
requirements: (a) flexible and adaptive access control to shared data, (b) efficient and
secure shared data discovery and retrieval/delivery, (c) authentication of group member
devices, (d) scalable and lightweight group key management, (e) detection of attacks and
malicious users, (f) availability of shared data whenever needed, (g) shared data quality
assurance, and (h) inference prevention. This research will generate a new trustworthy
shared data service management technique, including an OWL-based shared data service
specification language, an automated service generation technique, a shared data service
discovery protocol, and a lightweight situation-aware access control framework. The
approach will be based on the Web Services architecture, the emerging OWL technology, and the
Reconfigurable Context-Sensitive Middleware (RCSM) and Secure Group Communication Service
(SGCS). The expected results will be implemented as a set of middleware components and
services. A demonstration application will be developed and used to evaluate the
results.
NSF Award #0430565
11. (ITR) Secure Remote Computing Services. Jason Nieh, Columbia U.
This research will investigate the hypothesis that a combination of lightweight process
migration, remote display technology, overlay-based security and trust-management access
control mechanisms, driven by an autonomic management utility, can result in a significant
improvement in overall system reliability and security. Secure remote computing services
(SRCS) will move all application logic and data from insecure end-user devices, which
attackers can easily corrupt, steal and destroy, to autonomic server farms in physically
secure, remote data centers that can rapidly adapt to computing demands especially in
times of crisis. Users can then access their computing state from anywhere, anytime, using
simple, stateless Internet-enabled devices.
NSF Award #0426623
12. CAREER: XML Middleware for Privacy-Preserving Database Publishing.
Alin Deutsch, U. California -San Diego. This project aims to help data owners deal with
the ever-increasing demand of publishing proprietary data on the Web as XML, in useful
yet controlled ways. One major thrust is the development of tools allowing data publishers
to check that the privacy of sensitive data is not inadvertently violated by what is
exposed to the outside view (even indirectly). The publisher specifies the sensitive
data via a secret query S against the proprietary schema and the tools check that
attackers cannot compute the answer to S (exactly or approximately) from the published
data. The research approach includes development of a hierarchy of privacy guarantees,
qualified by the accuracy of the approximation and the attacker's computational resources.
The second project goal is the delivery of tools for integrating web publishing with the
limited query interfaces supported by current "web services" technologies. The project
seeks solutions for (i) the query set specification using concise, intuitive visual
paradigms and (ii) the rewriting of client queries to execution plans, which invoke only
supported queries. The developed technologies will benefit all categories of data owners
who publish on the Web, from commercial/governmental/academic institutions to private
individuals.
NSF Award #0347968
J. More effective system monitoring, anomaly detection, attack recognition
and defense
1. An Adaptive Integrated Behavior Monitoring and Modeling Approach for High
Performance/High Speed Network Computing Environments. Rayford Vaughn, Susan
Bridges, Yogindar Dandass, Mississippi State Univ. This research focuses on the security
of high performance clusters of computers with dedicated workloads. Because the workloads
are dedicated, hence reasonably stable, anomaly detection may be particularly successful
in identifying incorrect or sabotaged system behavior. This research will employ runtime
multi-resolution system behavior monitoring to collect data used to generate behavioral
models. These models will be applied to detect anomalous behavior. Dedicated coprocessors
will be investigated as a means to offload the modeling and anomaly detection workload
from the operational cluster.
NSF Award #0430354
2. Security Through Interaction Modeling (STIM). Michael Reiter, Bruce
Maggs, Dena Tsamitas, Chenxi Wang, Jeanette Wing, Carnegie Mellon U. Computer misuse is
often easier to recognize in particular instances than it is to specify in general, and is
highly sensitive to experience and context. Nevertheless, few computer security
technologies adequately utilize models of experience and context in defending against
misuse. This research explores the thesis that many computer defenses can be dramatically
improved, in both efficacy and usability, by modeling experience and context in a way
that allows the models to become an integral element for defending the system. The
interactions that can be modeled and potentially exploited are ubiquitous: they exist
among persons (e.g., different user roles in access control), among computers and
networks (e.g., what computers and networks typically correspond with what others), and
even among attacks (e.g., what attacks realize the preconditions of others). Developing
security technologies that better utilize such interactions forms the core of the research
agenda in "security through interaction modeling" (STIM). This effort promises advances in
diverse areas of security technology, such as attack traffic filtering, more usable
authorization systems, and intrusion detection and response. A central goal of the STIM
activity is education and outreach. Its efforts here include the construction of a security
education portal and cybersecurity curricula for many education levels, ranging from
children through college faculty.
NSF Award #0433540
3. CAREER: Biologically Motivated Models for the Dynamics of Computer Networks:
Performance, Growth and Pathological Conditions. Biplab Sikdar, Rensselaer
Polytech Inst. This project will develop biologically motivated models for characterizing
the dynamics of computer networks including their performance, growth and pathological
conditions. In contrast to existing frameworks, which typically focus on the steady-state
behavior, this work will develop models for the dynamic behavior of networks. The research
focuses on three topics: (1) Models for the dynamics and propagation of network
instabilities which encompass models for malicious worm attacks, network and human factors
influencing them, and developing mechanisms to detect such attacks; (2) Models for the
spatio-temporal aspects of performance metrics like traffic characteristics, delays and
packet losses in large scale networks; (3) Models for the dynamics of wireless networks
like growth patterns, battery consumption and network life, and spatial characteristics
of cluster formations. Four groups of models from biological sciences including population
models, genetic models, models based on pattern formation and epidemic models are used to
address these research topics.
NSF Award #0347623
4. Detection of self-propagating malicious code. Hayder Radha,
Michigan State U.
Statistical and information-theoretic techniques will be used to develop novel methods for
real-time detection of anomalies in network traffic that might be caused by malicious
code. Stochastic modeling techniques will be used to characterize user- and network-level
traffic and provide a basis for defining Neyman-Pearson null hypotheses. Portable software
will be developed to support real-time and adaptive intrusion detection.
NSF Award #0430436
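The Neyman-Pearson framing mentioned above amounts to a likelihood-ratio test between two traffic hypotheses. A toy version (rates and threshold are illustrative assumptions, not the project's models): treat packet counts per interval as Poisson with rate r0 under normal traffic and rate r1 > r0 under worm scanning, and flag an interval when the log-likelihood ratio exceeds a threshold.

```python
# Toy Neyman-Pearson-style detector: flag an interval as anomalous when the
# Poisson log-likelihood ratio of "scanning" vs "normal" exceeds a threshold.
import math

def log_likelihood_ratio(count, r0, r1):
    # log [ P(count | rate r1) / P(count | rate r0) ]; the count! terms cancel.
    return count * math.log(r1 / r0) - (r1 - r0)

def is_anomalous(count, r0=10.0, r1=30.0, threshold=5.0):
    return log_likelihood_ratio(count, r0, r1) > threshold

print(is_anomalous(9))    # typical interval  -> False
print(is_anomalous(40))   # scanning burst    -> True
```

By the Neyman-Pearson lemma, thresholding this ratio maximizes detection probability for a given false-alarm rate, which is why the stochastic traffic models matter: they supply the two hypotheses.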
5. DefCOM - Distributed Defense against DDoS Attacks. Jelena Mirkovic,
U. Delaware, Peter Reiher, UCLA. An experimental distributed system to defend against
distributed denial of service (DDoS) attacks will be prototyped and evaluated. The
system, DefCOM, combines victim-end defenses (for attack detection) and source-end
defenses (for response and separation of legitimate traffic from attack traffic).
Backbone routers are enlisted to control attack traffic in partial deployment scenarios
where many potential sources do not deploy a source-end defense. DefCOM's response to
attacks is twofold: defense nodes reduce the attack traffic, freeing the victim's
resources, and they also cooperate to detect legitimate traffic within the suspicious
stream and ensure its correct delivery to the victim. Because networks deploying defense
nodes directly benefit from their operation, DefCOM has a workable economic model to spur
its deployment. DefCOM further offers a framework for existing security systems to join
the overlay and cooperate in the defense. These features can motivate wide deployment, and
may significantly reduce the effects of DDoS attacks.
NSF Award #0430228
6. CAREER: A Multiresolution Approach to Network Anomaly and Intrusion
Detection. Paul Barford, U. Wisconsin. This project aims to measure traffic and
detect anomalies and intrusions in wide area networks. The measurements will support
precise, real-time identification of anomalies/intrusions, enabling future networks to
function securely and efficiently. New, in-situ measurement systems will be built and
deployed to collect traffic data from many locations across the Internet and store it
in a centralized repository in an anonymized, privacy preserving format. Research access
to the data will be provided through a private, secure web interface. In conjunction with
the measurement and data collection activity, the research will develop multiresolution
analysis methods based on wavelets, which isolate distinct traffic characteristics in
both frequency and time, thus enabling accurate and timely detection of
anomalies/intrusions. A library of measurement/analysis tools and the data repository
will be made available to the research community. This project will also develop a
series of laboratory experiments to provide secondary and university students a hands-on
means of learning about the Internet. This effort involves collaboration on both the
measurement and analysis activities with network operators, researchers in industry and
researchers in disciplines outside of networking including applied math, statistics and
signal processing.
NSF Award # 0347252
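The multiresolution idea can be illustrated with one level of a Haar wavelet decomposition, the simplest possible instance (the project's analysis methods are far richer): pairwise averages capture slow, coarse traffic trends, while the detail coefficients expose abrupt local changes, so a sudden burst stands out in the detail band.

```python
# One level of a Haar wavelet decomposition of a traffic time series:
# averages = coarse trend, details = local change; a burst dominates details.
def haar_level(signal):
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    details  = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, details

traffic = [10, 10, 11, 9, 10, 50, 10, 10]   # one sudden burst at index 5
avgs, dets = haar_level(traffic)
print(dets)   # -> [0.0, 1.0, -20.0, 0.0]  (the burst dominates the detail band)
```

Applying the transform recursively to the averages yields coarser and coarser views, which is what lets anomalies be isolated in both frequency and time.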
K. Integrating Hardware and Software for Security
1. SecureCore for Trustworthy Commodity Computing and Communications.
Lee et al. (see A. Operating Systems 3 above)
NSF Award #0430487
2. Privacy and Surveillance In Wireless Systems. Grunwald et al.
(see A. Networking 1 above)
NSF Award #0430593
3. Trustworthy and Resilient Location Discovery in Wireless Sensor
Networks. Ning et al. (see A. Networking 2 above).
NSF Award #0430223
4. Design and Implementation of Hydra: A Platform for Survivable and Secure
Storage Systems. Lihao Xu (see A. Storage 1 above)
NSF Award #0430224
ALTERNATIVE AWARD CLASSIFICATIONS
As noted above, no single classification scheme for a set of awards can meet all
needs. Listed below are the types of awards in this set, followed by three additional sets
of technical categories that can also be used to organize these awards, linked to the
paragraphs above. A matrix categorizing each award according to these classifications,
with hyperlinks to the Fastlane abstract for the award, is provided in the PDF document
below.
Award Type Classifications for FY04 NSF core cyber security awards
- Cyber Trust Center Scale Activity: B.3, J.2
- Cyber Trust Team award: A.OS.2, A.OS.3,
B.1, B.2, B.4, D.1,
E.1, E.3, F.1, F.2,
F.3, F.4, H.1, H.2,
J.1
- Cyber Trust Individual/Small Group award: A.Net.1,
A.Net.2, A.Net.4, A.Net.5,
A.OS.1, A.Sto.1, C.1,
D.2, D.3, E.2, E.4,
G.1, I.1, I.10, I.2,
I.5, J.4, J.5
- Cyber security related CAREER award: A.Net.3, I.12,
I.3, I.4, I.6, I.7,
I.8, I.9, J.3, J.6
- Cyber security related ITR award: A.Net.6, C.2,
E.5, E.6, I.11
Technical area classifications for FY04 NSF core cyber security awards
A. Cyber Trust solicitation (broad categories)
- Multi-disciplinary / interdisciplinary: C.1, D.1,
D.2, D.3, E.1, E.5
- Application: B.2, B.4, E.2,
E.4, E.6, G.1,
I.12, I.3
- Systems: A.OS.1, A.OS.2,
A.OS.3, A.Sto.1, C.2,
I.10, I.11, I.8,
J.1, J.2
- Networking: A.Net.1, A.Net.2,
A.Net.3, A.Net.4, A.Net.5,
A.Net.6, B.1, B.3,
F.2, J.3, J.4, J.5,
J.6
- Foundations: E.3, F.1, F.3,
F.4, H.1, H.2, I.1,
I.2, I.4, I.5, I.6,
I.7, I.9
B. Security life cycle phase
- Understanding what to build: A.Net.1, C.1,
C.2, D.1, D.2, E.1,
E.4, E.6, F.1, F.3,
I.1, I.2, I.4
- Building things right: F.4, H.2,
I.12, I.5, I.6,
I.7, I.8, I.9
- Preventing attacks: A.Net.3, A.Net.4,
A.OS.1, A.OS.3, B.4,
E.2, E.5, H.1, I.10,
I.11, I.3
- Detecting / understanding attacks: A.Net.5,
A.Net.6, D.3, E.3,
J.1, J.2, J.3, J.4,
J.6
- Surviving attacks: A.Net.2, A.OS.2,
A.Sto.1, F.2, G.1,
J.5
- System Recovery / reconstitution:
- Forensics / dealing with perpetrators: B.1, B.2,
B.3
C. Security disciplines
- Operating system, filesystem, storage security: A.OS.1,
A.OS.2, A.OS.3, A.Sto.1
- Net security: A.Net.1, A.Net.2,
A.Net.3, A.Net.4, A.Net.5,
A.Net.6, B.3, D.1,
J.4, J.5, J.6
- Application / Database / Web security: B.2, B.4,
C.2, D.3, E.5, G.1
- Cryptography and applied crypto : E.4, F.4,
I.1, I.2, I.3, I.4
- Security/Privacy/Trust modeling and specification: C.1,
E.1, E.2, E.6,
I.12, J.2, J.3
- Secure system architecture: F.1, F.2,
F.3, I.10, I.11, J.1
- Secure system development: H.2, I.5,
I.6, I.7, I.8, I.9
- Security testing / evaluation: D.2, E.3,
H.1
- Forensics: B.1
Cyber Security FY04 Award Summary (PDF 187 KB)