Human-robot joint action is a key requirement in many advanced robotic applications where robots are not only
expected to work alongside humans but also to collaborate with them on physical tasks. Robots are already programmed to model and predict human actions in order to ensure smooth collaboration and overall task efficiency. However, little is known about how humans represent and account for a robot's actions as part of their own plans. This paper presents a first joint psychological and HRI user study designed to answer this question in the context of human-robot handover scenarios.
Our analysis showed that the participants had a positive user experience of the interaction and adopted gaze patterns largely similar to those observed in human-to-human handover tasks.
The EEG analysis suggests that, compared to solo action, participants were in a state of higher motor readiness when preparing to hand the object over to the robot, either because they represented the robot's action in advance or because they anticipated that passing the object to the robot would be a more effortful action. Both interpretations highlight the increased demands of planning a human-to-robot interaction.
Our findings highlight the value of gaze as a positive method of non-verbal communication in HRI and provide new insights into the neural mechanisms that allow a person to plan an effective interaction with a robot.
ORCID of Submitting Author: https://orcid.org/0000-0001-9473-8636
Submitting Author's Institution: PAL Robotics
Submitting Author's Country: Spain