Towards Robust Gesture Synthesis for Social Robots

In my current research, I am working with a group of graduate students to enable social robots to recognize task-relevant objects and point to those objects during referring expression generation. The picture above shows the research group at the Mines Interactive Robotics Research Lab (MIRRORLab), supervised by Dr. Tom Williams. We are focusing on enabling deictic gestures on robots using the Distributed, Integrated, Affect, Reflection, Cognition (DIARC) architecture, so that deictic gestures can be requested by other architectural components.

Integration with the humanoid robot Pepper and the DIARC architecture

I am integrating Pepper with the Distributed, Integrated, Affect, Reflection, Cognition (DIARC) architecture, as implemented in the Agent Development Environment (ADE) robot middleware. ADE provides robust services for distributing complex robotic architectures across multiple computers: if any component fails, the ADE Registry can restart it automatically and restore its functionality. Moreover, DIARC provides language-oriented components for reference resolution and referring expression generation. DIARC’s Pepper components leverage Softbank Robotics’ Naoqi modules (e.g., motion, vision, and sensors) to control the robot, and are implemented as Java classes so they can communicate with other ADE components and the ADE Registry. This integrated approach allows Pepper’s components to take advantage of DIARC’s high-level cognitive representations, enabling human-like tasking through natural language.
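
To make this concrete, here is a minimal Java sketch of the kind of component described above: a class that other architectural components could call to request a pointing gesture. This is an illustration only; the class and proxy names (PepperGestureComponent, MotionProxy) and the joint mapping are placeholders, not the actual ADE or Naoqi API.

    /** Stand-in for the Naoqi motion proxy; the real Naoqi API differs. */
    interface MotionProxy {
        void setAngles(String jointName, double angleRadians, double fractionMaxSpeed);
    }

    public class PepperGestureComponent {
        private final MotionProxy motion;

        public PepperGestureComponent(MotionProxy motion) {
            this.motion = motion;
        }

        /**
         * Point the right arm toward a target in the robot's base frame
         * (x forward, y left, z up, in meters). Other components, such as
         * referring expression generation, would call this to request a
         * deictic gesture.
         */
        public void pointAt(double x, double y, double z) {
            // Simplified kinematics: map the target direction onto two shoulder
            // joints; a real component would use Pepper's full kinematic chain
            // and also orient the head and hand.
            double horizontal = Math.atan2(y, x);              // left/right sweep
            double vertical = Math.atan2(z, Math.hypot(x, y)); // up/down elevation

            motion.setAngles("RShoulderRoll", horizontal, 0.2);
            motion.setAngles("RShoulderPitch", -vertical, 0.2);
        }
    }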

What is a Wizard-of-Oz interface?

Wizard-of-Oz interfaces have been widely used within the Human-Robot Interaction community. This experimental tool allows a robot’s actions or behaviors to be controlled by a human puppeteer, standing in for technology that has not yet been implemented, while a participant performs a task. A well-designed Wizard-of-Oz control interface can provide better insight into the robot’s limitations, allow fast, iterative testing of robots, and help explore possible robot behaviors such as nonverbal feedback and natural language interfaces.

My short-term goal is to develop a first-person point-and-click graphical user interface (GUI). Experimenters can use this GUI in Wizard-of-Oz-style experiments to view a live camera stream from the robot and dynamically generate targeted deictic gestures. We propose to render animated vectors within the GUI to give an estimate of the current distance between the robot and the target referent. This visualization provides an interactive interface for experimenters to easily test gesture generation with human participants as well as explore the interaction design space for robot behaviors.
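
As a rough illustration of how a click in such a GUI could be turned into a gesture request, here is a simplified Java/Swing sketch that reuses the PepperGestureComponent placeholder from above. The camera intrinsics, the fixed depth, and the frame conventions are assumptions for illustration; the actual interface would also render the live camera stream and the animated distance vectors, which are omitted here.

    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import javax.swing.JPanel;

    public class WizardCameraPanel extends JPanel {
        // Assumed pinhole intrinsics (focal lengths and principal point, in
        // pixels) for the robot's forward camera; real values come from calibration.
        private static final double FX = 525.0, FY = 525.0, CX = 320.0, CY = 240.0;

        public WizardCameraPanel(PepperGestureComponent gestures) {
            addMouseListener(new MouseAdapter() {
                @Override
                public void mouseClicked(MouseEvent e) {
                    // Back-project the clicked pixel into a viewing direction
                    // (simple pinhole model), then point along that direction
                    // at a placeholder depth. The camera-to-base transform is
                    // omitted for brevity.
                    double dx = (e.getX() - CX) / FX; // image right
                    double dy = (e.getY() - CY) / FY; // image down
                    double depth = 1.0;               // assumed depth in meters

                    // Camera frame (x right, y down, z forward) reordered into
                    // the base frame expected by pointAt (x forward, y left, z up).
                    gestures.pointAt(depth, -dx * depth, -dy * depth);
                }
            });
        }
    }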

My long-term goal for this research is to combine methods from computer vision and machine learning to automatically learn which type of deictic gesture is most appropriate in the current context, so that robots can generate a wider class of deictic gestures beyond pointing, depending on the visibility or occlusion of the target referent and on geometric constraints such as the robot’s current distance from the referent and the referent’s size.
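
To illustrate the kind of decision such a model would make, here is a small hand-written Java sketch that selects a gesture class from these context features. The gesture classes, thresholds, and rules are placeholders for illustration, not results or design decisions from this project; a learned model would replace them with a classifier trained on interaction data.

    public class GestureSelector {

        /** Illustrative gesture classes beyond a simple point. */
        public enum DeicticGesture { POINT, OPEN_HAND_PRESENT, TOUCH, GAZE_ONLY }

        /**
         * Choose a gesture class from simple context features: whether the
         * referent is visible or occluded, the robot's distance to it, and
         * the referent's approximate size.
         */
        public DeicticGesture select(boolean visible, boolean occluded,
                                     double distanceMeters, double objectSizeMeters) {
            if (!visible || occluded) {
                // The referent cannot be singled out visually, so an arm point
                // would be ambiguous; fall back to gaze plus language.
                return DeicticGesture.GAZE_ONLY;
            }
            if (distanceMeters < 0.6) {
                // Within reach: touching the object is less ambiguous than pointing.
                return DeicticGesture.TOUCH;
            }
            if (objectSizeMeters > 0.5) {
                // Large referents can be indicated with an open-hand presentation.
                return DeicticGesture.OPEN_HAND_PRESENT;
            }
            return DeicticGesture.POINT;
        }
    }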

Weekly meeting

During the weekly all-hands meeting, I report successes and challenges to the team and my advisor, Dr. Tom Williams. I discuss several approaches to a problem and share their pros and cons with the team. After implementing a feature, I write a wiki or “Lesson Learned” document so my teammates can quickly replicate what I have done. I submitted a paper about this project to the 2018 Human-Robot Interaction Pioneers Workshop.
