National Science Foundation
Award Abstract #0430303
Secure Personalization: Building Trustworthy Recommender Systems


NSF Org: IIS, Division of Information & Intelligent Systems
Initial Amendment Date: September 8, 2004
Latest Amendment Date: October 2, 2007
Award Number: 0430303
Award Instrument: Continuing grant
Program Manager: James C. French, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer & Information Science & Engineering
Start Date: September 15, 2004
Expires: August 31, 2008 (Estimated)
Awarded Amount to Date: $300,000
Investigator(s): Robin Burke rburke@cs.depaul.edu (Principal Investigator); Bamshad Mobasher (Co-Principal Investigator)
Sponsor: DePaul University, 1 East Jackson Boulevard, Chicago, IL 60604, 312/341-8000
NSF Program(s): ITR-CYBERTRUST, DATA AND APPLICATIONS SECURITY
Field Application(s): 0104000 Information Systems
Program Reference Code(s): HPCC, 9218, 7254
Program Element Code(s): 7456, 7228

ABSTRACT

The purpose of this research is to explore the vulnerabilities of recommendation and personalization systems in the face of malicious attacks, explore techniques for enhancing their robustness, and examine methods by which attacks can be recognized and possibly defeated. Most research in computer security focuses on protecting assets inside an organization's security perimeter from unauthorized access and modification. This project examines the problem of security for systems that are designed to be accessed and modified by the general public. How do we protect such a system from the legal but biased inputs of an attacker trying to subvert its functionality?

The project will advance our understanding of the trustworthiness of recommender systems, now a crucial component in many areas from e-commerce and e-learning to content management systems. We will explore the spectrum of possible attacks against recommendation systems, and develop formal models characterizing these attacks and their impacts. We will investigate different metrics for assessing the robustness of recommendation algorithms, including accuracy, stability, and expected payoff to the attacker. In tandem with this theoretical work, we will conduct empirical investigations using data from a variety of domains. We will test a range of recommendation algorithms, including user-based, item-based, and model-based collaborative recommenders, and also explore hybrid recommendation by combining collaborative recommendation techniques with content-based and knowledge-based ones. Finally, informed by these results, we will consider how recommender systems can be secured, through improved algorithms but also by detecting attacks and responding appropriately.

Our research will have significant implications for a variety of adaptive information systems that rely on users' input for learning user or group profiles. Many such systems have open components through which a malicious user or an automated agent can affect the overall system behavior.
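To make the threat concrete, the following is a minimal sketch of a profile injection ("push") attack against a user-based kNN collaborative recommender. The toy ratings matrix and the simplified copy-style attack profile are invented for illustration; they are not the specific average or segment attack models studied in this project. It shows how a handful of injected profiles that mimic a target user can shift that user's predicted rating for a pushed item.

```python
import numpy as np

# Toy ratings matrix (hypothetical data): rows = users, cols = items; 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 2, 4],
], dtype=float)

def predict(R, user, item, k=3):
    """User-based kNN prediction: cosine similarity over co-rated items,
    similarity-weighted average of the k most similar raters of `item`."""
    target = R[user]
    sims = []
    for v in range(R.shape[0]):
        if v == user or R[v, item] == 0:
            continue
        mask = (target > 0) & (R[v] > 0)          # co-rated items only
        if not mask.any():
            continue
        a, b = target[mask], R[v][mask]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        sims.append((sim, R[v, item]))
    sims.sort(reverse=True)
    top = sims[:k]
    if not top:
        return 0.0
    return sum(s * r for s, r in top) / sum(abs(s) for s, r in top)

target_item = 2                                    # item the attacker wants promoted
before = predict(R, user=0, item=target_item)

# Push attack: inject profiles that rate the target item 5 and copy the
# target user's tastes on the other items, so they become near neighbours.
attack_profile = np.array([5, 3, 5, 1], dtype=float)
R_attacked = np.vstack([R] + [attack_profile] * 3)
after = predict(R_attacked, user=0, item=target_item)

print(f"prediction before attack: {before:.2f}")   # 2.00
print(f"prediction after attack:  {after:.2f}")    # 5.00
```

Because the injected profiles agree perfectly with the target user on the co-rated items, they displace the genuine neighbours in the top-k set, and the prediction for the pushed item jumps to the maximum rating. This "prediction shift" is one of the robustness measures the abstract refers to.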


PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Bamshad Mobasher, Robin Burke, JJ Sandvig.  "Model-Based Collaborative Filtering as a Defense Against Profile Injection Attacks,"  Proceedings of the 21st National Conference on Artificial Intelligence (AAAI'06),  2006, 

Bamshad Mobasher, Robin Burke, Runa Bhaumik, and Chad Williams.  "Towards Trustworthy Recommender Systems: An Analysis of Attack Models,"  ACM Transactions on Internet Technology,  v.7(4),  2007,  p. 23.

Robin Burke, Bamshad Mobasher, Chad Williams, Runa Bhaumik.  "Classification Features for Attack Detection in Collaborative Recommender Systems,"  Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,  2006, 

Robin Burke, Bamshad Mobasher, Chad Williams, Runa Bhaumik.  "Detecting Profile Injection Attacks in Collaborative Recommender Systems,"  Proceedings of the 8th IEEE Conference on E-Commerce Technology (CEC' 06),  2006, 

Robin Burke, Bamshad Mobasher, Runa Bhaumik, Chad Williams.  "Segment-Based Injection Attacks against Collaborative Filtering Recommender Systems,"  Proceedings of the 2005 International Conference on Data Mining (ICDM'05),  2005,  p. 577.

J.J. Sandvig, Bamshad Mobasher, Robin Burke.  "Robustness of Collaborative Recommendation Based on Association Rule Mining,"  Proceedings of the 2007 ACM Conference on Recommender Systems (RecSys'07),  2007,  p. 105.



 

Please report errors in award information by writing to: awardsearch@nsf.gov.

 

 

The National Science Foundation, 4201 Wilson Boulevard, Arlington, Virginia 22230, USA
Tel: (703) 292-5111, FIRS: (800) 877-8339 | TDD: (800) 281-8749
Last Updated: April 2, 2007