Oh, Baby! ONR Research Links Child's Play to Robot Learning
By Warren Duffie, Office of Naval Research - April 7, 2016
ARLINGTON, Va. — The future of human-robot partnerships could be revolutionized by child's play — specifically, the play of babies.

A team of researchers led by Dr. Rajesh Rao, a professor of computer science and engineering at the University of Washington, recently published a paper showing how robots can learn much like children do: amassing data by watching adults perform a task, determining the goal of the action, and then deciding how to accomplish it on their own. Rao's work is sponsored by the Office of Naval Research (ONR).

"This is a major step in designing robots that can learn from watching humans," said Dr. Micah Clark, a program officer in ONR's Warfighter Performance Department who oversees Rao's research. "It could one day result in truly intelligent machines that understand the intent and goals behind certain tasks, and help humans achieve those goals."

For decades, scientists, writers and filmmakers have envisioned a future where robots make human life safer and easier, doing mundane household chores or helping troops in battle.

Rao believes this type of artificial intelligence might be achieved with inspiration from the most adorable and inquisitive of humans: babies.

"Babies learn about the world around them through play," said Rao, "grabbing toys, pulling them apart, banging them on the floor or pushing them off tables. This self-exploration helps babies learn the physics of their environments, and how their actions influence objects."

Rao collaborated with Dr. Andrew Meltzoff, a respected child psychologist and co-director of the Institute for Learning and Brain Sciences at the University of Washington. Meltzoff's work (not sponsored by ONR) shows that children as young as 18 months can infer the goal of an adult's actions and develop ways of reaching that goal themselves.

Using data from behavioral tests conducted by Meltzoff involving babies, Rao's team designed a machine-learning model to allow robots to explore how their actions result in diverse outcomes.

They tested the model in two types of experiments. The first was a "gaze" computer simulation where the robot learned to track the head movements of others to determine where they were looking. The second involved the robot watching humans move toys around on a tabletop, and then being left to play with the toys on its own.

Rao's team observed several patterns. Through trial and error, the robot figured out the consequences of its actions on the toys. It learned, for example, that a particular toy was harder to pick up than to push, and that pushing a toy too close to the edge would make it fall.
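As a rough illustration of that kind of trial-and-error bookkeeping, the Python sketch below tracks how often each action succeeds on each toy and turns the counts into reliability estimates. The class, toy names and probabilities are illustrative assumptions, not details taken from Rao's model.

import random
from collections import defaultdict

class OutcomeTracker:
    def __init__(self):
        # counts[(toy, action)] = [successes, attempts]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, toy, action, succeeded):
        stats = self.counts[(toy, action)]
        stats[1] += 1
        if succeeded:
            stats[0] += 1

    def reliability(self, toy, action):
        successes, attempts = self.counts[(toy, action)]
        # Laplace smoothing so an untried action is not assumed impossible
        return (successes + 1) / (attempts + 2)

# Simulated self-exploration: for this toy, "pick up" fails more often than "push"
true_success = {("block", "pick_up"): 0.3, ("block", "push"): 0.9}
tracker = OutcomeTracker()
for _ in range(50):
    for (toy, action), prob in true_success.items():
        tracker.record(toy, action, random.random() < prob)

for toy, action in true_success:
    print(toy, action, round(tracker.reliability(toy, action), 2))

After enough attempts, the estimates converge toward the toy's true behavior, which is the sense in which the robot "learns the physics" of its playthings.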

The robot could also watch a human act on a toy, infer the goal of that action, and then achieve the same goal with a different action it considered more reliable. For example, instead of picking up a toy and placing it at a particular spot on the table, it could push it there. It could even signal for human help when it judged its own actions too unreliable.
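A rough sketch of that decision step might look like the following; the threshold, goal label and action names are illustrative assumptions rather than details from the published work.

HELP_THRESHOLD = 0.5  # assumed cutoff for requesting help; not from the paper

# Reliability estimates the robot might have built up through self-exploration
learned_reliability = {
    ("move_toy_to_spot", "pick_up_and_place"): 0.35,
    ("move_toy_to_spot", "push"): 0.85,
}

def choose_action(goal, candidate_actions):
    """Pick the most reliable known action for the inferred goal, or ask for help."""
    best = max(candidate_actions,
               key=lambda a: learned_reliability.get((goal, a), 0.0))
    confidence = learned_reliability.get((goal, best), 0.0)
    if confidence < HELP_THRESHOLD:
        return "signal_human_for_help", confidence
    return best, confidence

action, confidence = choose_action("move_toy_to_spot", ["pick_up_and_place", "push"])
print(f"Inferred goal: move_toy_to_spot -> chosen action: {action} "
      f"(estimated reliability {confidence:.2f})")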

"To get a robot to perform a task like picking up a toy, you normally have to code instructions or physically move a robotic limb with a joystick or other controller," said Rao. "Our research might make it possible for people to eventually train and program robots through demonstration and speech, much like parents teach their children. This would be useful to our military in jobs like disarming explosive devices, fighting fires, transporting heavy equipment or going into combat zones, where there is a premium on teaching robots new skills on-the-fly."

Rao and his team plan to scale up their learning model and design more sophisticated robots to perform more complex tasks. His work is part of ONR's Science of Autonomy Program.

Warren Duffie is a contractor for ONR Corporate Strategic Communications.

Office of Naval Research Turns 70

ONR celebrates 70 years of innovation in 2016. For seven decades, ONR, through its commands (including ONR Global and the Naval Research Laboratory in Washington, D.C.), has been leading the discovery, development and delivery of technology innovations for the Navy and Marine Corps.

A University of Washington researcher conducts an experiment in which a robot learns to track human head movements. Sponsored by the Office of Naval Research, the work is led by Dr. Rajesh Rao, studying whether robots can learn through human speech or body movement. Photo courtesy of Dr. Rajesh Rao.