Gestures may be as important as words
Brown University
Whether in the kitchen or on a workshop floor, robot assistants that can fetch items for people could be extremely useful. Now, a team of Brown University researchers has developed a way of making robots better at figuring out exactly which items a user might want them to retrieve.
The new approach enables robots to use inputs from both human language and gesture as they reason about how to locate and retrieve target objects. In a study that will be presented on Tuesday, March 17, during the International Conference on Human-Robot Interaction in Edinburgh, Scotland, the researchers show that the approach had an 89% success rate in finding the correct object in complex environments, outperforming other object retrieval approaches.
“Searching for things requires a robot to navigate large environments,” said Ivy He, a graduate student at Brown and the study’s lead author. “With current technology, robots are pretty good at identifying objects, but when the environment is cluttered, things are moving around or things are hidden by other objects, that makes things much more difficult. So this work is about using both language and gesture to help in that search task.”
The research makes use of an approach to robot planning called a POMDP (partially observable Markov decision process), a mathematical framework that allows a robot to reason under uncertainty. In the real world, robots rarely have a perfect understanding of their surroundings. Different types of objects can look similar. There may be more than one of a particular object in a room. Items might be partially or completely hidden from view.
