National Science Foundation
Award Abstract #0430402
COLLABORATIVE RESEARCH: Privacy-aware Information Release Control


NSF Org: IIS
Division of Information & Intelligent Systems
Initial Amendment Date: September 22, 2004
Latest Amendment Date: September 2, 2008
Award Number: 0430402
Award Instrument: Standard Grant
Program Manager: Le Gruenwald
IIS Division of Information & Intelligent Systems
CSE Directorate for Computer & Information Science & Engineering
Start Date: October 1, 2004
Expires: September 30, 2009 (Estimated)
Awarded Amount to Date: $186,636
Investigator(s): Sushil Jajodia jajodia@gmu.edu (Principal Investigator)
Claudio Bettini (Co-Principal Investigator)
Sponsor: George Mason University
4400 UNIVERSITY DR
FAIRFAX, VA 22030 703/993-2295
NSF Program(s): ITR-CYBERTRUST
Field Application(s): 0104000 Information Systems
Program Reference Code(s): HPCC, 9218, 9216, 7484, 7254
Program Element Code(s): T855, T397, H343, 7456

ABSTRACT

With rapid advances in computer and network technology, it has become possible for an organization to collect, store, and retrieve vast amounts of data of all kinds quickly and efficiently. Data is of strategic and operational importance to many organizations. At the same time, these large information systems pose a potential threat to individual privacy because they contain a great amount of detailed information about individuals. Handling individual data poorly not only violates the fundamental rights of individuals and relevant federal and state laws; it is also a liability to businesses in terms of their trustworthiness and, eventually, their bottom line. There is therefore an urgent need for technology that organizations and businesses can adopt to protect the privacy of individuals without impeding the flow of information necessary to achieve their strategic and operational goals. Although this need is reflected in the recent increase of research activity in the privacy area, several problems, especially those related to a privacy-aware data release system, have yet to be addressed. The essential questions are: when a piece of data is released, to what extent is the privacy of individuals lost? And if the loss is excessive, how do we modify the data to be released so as to permit the maximum flow of information while still preserving privacy?

The starting point of this project is the realization that privacy concerns take different forms for different data sets. To preserve the privacy of individuals, these concerns must be formalized as privacy rules. Whenever data is released, whether for privacy-preserving data mining or simply for publication to third parties or the general public, these privacy rules must be satisfied; this is termed privacy-aware information release control. Two general approaches are adopted: query anonymization and online data checking. In query anonymization, every query is evaluated to determine how much private information it would disclose; if the disclosure is excessive, the query is modified so that the required privacy level is maintained. The technical challenge here is ensuring that the system releases the maximum information without any privacy violation. In online data checking, the privacy rules are checked against the to-be-released data itself to detect violations; here the technical challenge is efficiency. The two methods are complementary and can sometimes be used together in a practical system. Both techniques assume knowledge of the privacy level that the data requester is allowed to have. Once data is released, depending on the level of private data contained in the output, some obligations may be attached; this project also tackles the problems related to the management of such obligations.
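To illustrate the online data checking approach, one common privacy rule, k-anonymity, can be checked by verifying that every combination of quasi-identifier values in the released data appears in at least k records. The sketch below is illustrative only, not the project's actual system; the `violates_k_anonymity` function, the sample view, and the choice of quasi-identifiers are hypothetical.

```python
from collections import Counter

def violates_k_anonymity(records, quasi_identifiers, k):
    """Return True if some quasi-identifier value combination
    occurs in fewer than k records (a k-anonymity violation)."""
    counts = Counter(
        tuple(r[attr] for attr in quasi_identifiers) for r in records
    )
    return any(c < k for c in counts.values())

# Hypothetical released view: ZIP code and age act as quasi-identifiers.
view = [
    {"zip": "22030", "age": 34, "diagnosis": "flu"},
    {"zip": "22030", "age": 34, "diagnosis": "cold"},
    {"zip": "22031", "age": 51, "diagnosis": "flu"},
]
print(violates_k_anonymity(view, ["zip", "age"], 2))  # → True
```

The last record's (zip, age) pair is unique, so releasing this view with k = 2 would be blocked or the data further generalized before release.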


PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 6 of 6).

Chao Yao.  "Protecting Privacy in Released Database Views,"  Ph.D. Dissertation, George Mason University,  2006.

Chao Yao, Lingyu Wang, X. Sean Wang, Sushil Jajodia.  "Indistinguishability: the other aspect of privacy,"  Proc. 3rd VLDB Workshop on Secure Data Management (SDM'06), Springer Lecture Notes in Computer Science, Vol. 4165,  2006,  p. 1.

Chao Yao, X. S. Wang, Sushil Jajodia.  "Checking for k-Anonymity Violation by Views,"  VLDB 2005,  2005,  p. 910.

Claudio Bettini, X. S. Wang, Sushil Jajodia.  "Protecting Privacy Against Location-Based Personal Identification,"  VLDB Workshop on Secure Data Management,  2005.

S. Mascetti, C. Bettini, X.S. Wang, S. Jajodia.  "k-Anonymity in Databases with Timestamped Data,"  Proc. of 13th International Symposium on Temporal Representation and Reasoning,  2006.

X. S. Wang and S. Jajodia.  "Privacy Protection with Uncertainty and Indistinguishability,"  Digital Privacy: Theory, Technologies, and Practices, Taylor and Francis,  2007.



 

Please report errors in award information by writing to: awardsearch@nsf.gov.

 

 

Last Updated: April 2, 2007