Find it like a dog: Using Gesture to Improve Object Search

Document Type

Conference Proceeding

Role

Author

Journal Title

Proceedings of the Annual Meeting of the Cognitive Science Society

Volume

46

First Page

167

Last Page

175

Publication Date

2024

Abstract

Pointing is an intuitive and commonplace communication modality. In human-robot collaborative tasks, human pointing has been modeled using a variety of approaches, such as the forearm vector or the vector from eye to hand. However, models of the human pointing vector have not been uniformly or comprehensively evaluated. We performed a user study to compare five different representations of the pointing vector and their accuracies in identifying the human's intended target in an object selection task. We also compared the vectors' performance to that of domestic dogs, a non-human baseline known to be successful at following human points. Additionally, we developed an observation model that transforms the vector into a probability map for object search. We implemented our system on our robot, enabling it to locate and fetch the user's desired objects efficiently and accurately.
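The observation model mentioned in the abstract could, in broad strokes, be sketched as an angular likelihood over candidate objects. The sketch below is an illustrative assumption, not the paper's actual model: the function name `pointing_probabilities` and the Gaussian angular-noise parameter `sigma_deg` are hypothetical, and the paper's representation of the pointing vector may differ.

```python
import numpy as np

def pointing_probabilities(origin, direction, objects, sigma_deg=10.0):
    """Hypothetical sketch: score candidate objects by their angular
    deviation from a pointing ray, then normalize into a probability map."""
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    scores = []
    for obj in objects:
        to_obj = np.asarray(obj, dtype=float) - np.asarray(origin, dtype=float)
        to_obj = to_obj / np.linalg.norm(to_obj)
        # Angle between the pointing ray and the ray to the object.
        cos_ang = np.clip(np.dot(direction, to_obj), -1.0, 1.0)
        ang = np.degrees(np.arccos(cos_ang))
        # Assumed Gaussian angular noise model.
        scores.append(np.exp(-0.5 * (ang / sigma_deg) ** 2))
    scores = np.array(scores)
    return scores / scores.sum()

# Pointing along +x from the origin at three candidate objects:
p = pointing_probabilities(
    origin=[0.0, 0.0, 0.0],
    direction=[1.0, 0.0, 0.0],
    objects=[[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [1.0, 1.0, 0.0]],
)
# The object on the pointing axis receives the highest probability.
```

Under this toy model, the object nearest the pointing ray dominates the probability map; a robot could then search candidate objects in decreasing order of probability.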
