Award Abstract #0541230
Data-Driven Appearance Transfer for Realistic Image Synthesis
NSF Org: CCF (Division of Computing and Communication Foundations)
Initial Amendment Date: January 18, 2006
Latest Amendment Date: August 8, 2007
Award Number: 0541230
Award Instrument: Continuing grant
Program Manager: Lawrence Rosenblum
CCF Division of Computing and Communication Foundations
CSE Directorate for Computer & Information Science & Engineering
Start Date: February 1, 2006
Expires: January 31, 2009 (Estimated)
Awarded Amount to Date: $311,924
Investigator(s): Alexei Efros efros@cs.cmu.edu (Principal Investigator)
Sponsor: Carnegie-Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213
412/268-8746
NSF Program(s): GRAPHICS & VISUALIZATION, COMPUTING PROCESSES & ARTIFACTS, INFORMATION TECHNOLOGY RESEARCH
Field Application(s): 0000912 Computer Science
Program Reference Code(s): HPCC, 9251, 9218
Program Element Code(s): 7453, 7352, 1640
ABSTRACT
Realistic image synthesis is a central goal of computer graphics. Major recent advances have allowed researchers to model a wide spectrum of complicated visual phenomena with a very high degree of realism. Yet even the best computer-generated feature films are a far cry from what one might consider "real". Curiously, the problem is generally not that computer graphics is unable to model the physics of the everyday visual world -- the problem is the world itself. It is simply too complex, too noisy, too rich and vivid to be recreated from scratch by even the most skilled and patient artist.
One solution is to use image-based methods and directly capture the visual appearance of everything in the world -- if only that were feasible. Instead, this research effort centers on transferring appearance from a large database of stored visual data into a novel scene. The rationale is that while capturing the details of a particular scene is very expensive and time-consuming, obtaining similar information from some relevant scene is relatively easy. A tremendous amount of visual data has already been captured and is readily available -- thousands of webcams all over the world, and millions of photographs placed on the Internet, depicting anything from sandstorms in the Sahara to glaciers in Alaska. And more data is being added every day. This research is developing a unified approach for appearance transfer. Two broad scenarios are considered: transfer in image stacks (e.g., webcams) and single-image transfer. In both cases, the major research issues involve: (1) grouping images and image stacks into regions with coherent material/geometry properties, (2) determining correspondences between the various groups in the scene and the database, and (3) transferring the correct appearance from the database by combining it with the large-scale structure of the input scene.
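The three steps above (group, correspond, transfer) can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the project's actual method: it "groups" pixels by a toy luminance threshold, matches each region to the nearest database region by mean color, and transfers only per-channel color statistics rather than true appearance. All function names and the random stand-in data are invented for this example.

```python
# Illustrative sketch of region-based appearance transfer (NOT the
# actual research method): (1) group pixels into coherent regions,
# (2) match each region to the closest database region, (3) transfer
# the matched region's color statistics onto the input region.
import numpy as np

def split_regions(img, thresh=0.5):
    """Toy 'grouping': split the image into dark/bright luminance regions."""
    lum = img.mean(axis=-1)
    return [lum < thresh, lum >= thresh]

def match_region(region_pixels, db_regions):
    """Pick the database region whose mean color is closest (step 2)."""
    mu = region_pixels.mean(axis=0)
    dists = [np.linalg.norm(mu - db.mean(axis=0)) for db in db_regions]
    return db_regions[int(np.argmin(dists))]

def transfer_stats(region_pixels, db_pixels):
    """Shift the region's per-channel mean/std toward the database's (step 3)."""
    mu, sd = region_pixels.mean(0), region_pixels.std(0) + 1e-8
    mu_db, sd_db = db_pixels.mean(0), db_pixels.std(0)
    return (region_pixels - mu) / sd * sd_db + mu_db

def appearance_transfer(img, db_regions):
    out = img.copy()
    for mask in split_regions(img):
        if mask.any():
            db = match_region(img[mask], db_regions)
            out[mask] = transfer_stats(img[mask], db)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
scene = rng.random((8, 8, 3))                          # stand-in input image
database = [rng.random((50, 3)), rng.random((50, 3))]  # stand-in DB regions
result = appearance_transfer(scene, database)
print(result.shape)  # prints (8, 8, 3)
```

In the actual research, each of these placeholders is a hard problem in its own right: grouping must respect material and geometry, correspondence must be robust across scenes, and transfer must preserve the input's large-scale structure rather than just matching color moments.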
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
(Showing: 1 - 2 of 2).
James Hays and Alexei A. Efros. "Scene completion using millions of photographs," Communications of the ACM, v.51, 2008.
M.H. Nguyen, J.-F. Lalonde, A.A. Efros, and F. de la Torre. "Image-based Shaving," Computer Graphics Forum, v.27, 2008.