National Science Foundation
Award Abstract #0081935
ITR: Augmented Reality and Computer Vision for Enhancing Human Assembly Skills


NSF Org: IIS
  Division of Information & Intelligent Systems
Initial Amendment Date: September 7, 2000
Latest Amendment Date: June 26, 2002
Award Number: 0081935
Award Instrument: Continuing grant
Program Manager: Ephraim P. Glinert
  IIS Division of Information & Intelligent Systems
  CSE Directorate for Computer & Information Science & Engineering
Start Date: September 1, 2000
Expires: August 31, 2003 (Estimated)
Awarded Amount to Date: $449,674
Investigator(s): Rajeev Sharma rsharma@advancedinterfaces.com (Principal Investigator)
Sponsor: Pennsylvania State Univ University Park
  110 Technology Center Building
  UNIVERSITY PARK, PA 16802 814/865-1372
NSF Program(s): INFORMATION TECHNOLOGY RESEARCH
Field Application(s): 0104000 Information Systems
Program Reference Code(s): HPCC, 9218, 1660, 1654
Program Element Code(s): 1640

ABSTRACT

This is the first-year funding of a three-year continuing award. The project addresses basic issues in enabling an augmented reality interface using computer vision. Augmented reality aims to enhance a person's perception of the surrounding world, offering the potential for the computer to be integrated into the user's activities as a personalized helper. There are two key challenges in enabling such an interface. The first is "sensing", which would allow the augmentation to be matched to the state of the world as the user interacts with it. The second is developing systematic "augmentation schemes" that result in user-centered information flow.

To aid in conceptualizing the problem and for experimental verification, the main focus of the project is on an "assembly domain": a human engaged in assembling a mechanical object from its components. The focus on the assembly domain allows us to suitably formulate and address the sensing, augmentation, and other issues in the novel human-computer interface. At the same time, it allows us to examine specific interactive assembly tasks using augmented reality, such as guiding and training during assembly and evaluating prototype assembly sequences.

To address the key problem of tracking the "context", the project seeks to advance the state of the art in computer vision techniques for recognizing assembly states. A combination of appearance-based and CAD-based approaches will be used to address the problem of simultaneously tracking a large number of known assembly parts. A probabilistic approach is proposed to improve the performance of assembly state recovery over time as the assembly task progresses. Another focus is efficient approaches for building the model spaces for subassemblies with a larger number of parts. Geometric modeling and analysis of the assembly domain will be used to develop a systematic flow of augmentation to aid various assembly tasks.

Two experimental setups will be used: a see-through head-mounted display, and a computer monitor with graphics overlaid on live video. Specific assembly task scenarios will be devised to determine the practical feasibility of an augmented reality interface for the assembly domain. Another goal is to experimentally evaluate the effectiveness of the augmented reality interface as a training tool compared with other means of instruction. The resulting augmented reality interface is expected to impact many applications that benefit from on-line, scene-dependent presentation of multimodal information, such as assembly prototyping, aircraft maintenance, repair of space vehicles, cost-effective training of factory workers, and perhaps guiding inexperienced users through complex machinery repairs.
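As a concrete illustration of the probabilistic state-recovery idea mentioned above, the following is a minimal sketch of a discrete Bayes (hidden Markov model) forward update over assembly states, written in Python. The state names, transition probabilities, and per-frame observation likelihoods are hypothetical placeholders, not values from the project; they only show how evidence accumulated across frames can sharpen the estimate of the current assembly state.

import numpy as np

# Hypothetical assembly states (placeholders, not the project's actual model).
states = ["base only", "base + bracket", "base + bracket + cover"]

# Transition model: the assembly mostly stays put or advances one step per frame.
T = np.array([[0.7, 0.3, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])

def forward_update(belief, likelihood):
    # Predict with the transition model, then weight by the vision
    # system's per-state observation likelihood and renormalize.
    predicted = belief @ T
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([1.0, 0.0, 0.0])  # start: only the base part is present

# Simulated per-frame likelihoods from an appearance/CAD-based recognizer.
for likelihood in [np.array([0.6, 0.4, 0.1]),
                   np.array([0.2, 0.7, 0.3]),
                   np.array([0.1, 0.3, 0.8])]:
    belief = forward_update(belief, likelihood)
    print(states[int(np.argmax(belief))], belief.round(3))

Similarly, the monitor-based setup (graphics overlaid on live video) could be sketched with OpenCV as below. The camera index, guidance text, and highlighted region are assumptions for illustration only; in the actual system the overlay would be driven by the recognized assembly state and registered to the tracked parts rather than drawn at fixed coordinates.

import cv2

cap = cv2.VideoCapture(0)  # assumes a default webcam is available
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Hypothetical guidance for an assumed current assembly state.
    cv2.putText(frame, "Next: attach bracket to base", (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)  # highlight a target region
    cv2.imshow("Assembly guidance overlay (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()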

 

Please report errors in award information by writing to: awardsearch@nsf.gov.

 

 
