Paper Award: The Nanjing City Prize
A photo of the experimental setup at the Intelligent Robotics and Automation Laboratory (IRAL) for pilot studies in child-robot interaction scenarios.
We are pleased to announce that a research team from the Intelligent Robotics and Automation Laboratory (IRAL) of the School of ECE, NTUA, has received a paper award (Nanjing City Prize) at the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2018), which took place in Nanjing, China, 28-31 August 2018.
Title of the awarded late-breaking research paper:
“A framework for robot learning during child-robot interaction with human engagement as reward signal”
This paper relates to ongoing research that explores new adaptive reinforcement learning schemes for robots in scenarios involving social interaction with children with autism spectrum disorder (ASD).
Research Team: Mehdi Khamassi (Visiting Researcher, CNRS/France), Georgia Chalvatzaki (PhD student), Theodore Tsitsimis (graduate student), George Velentzas (graduate student) and Costas Tzafestas (Associate Professor, team leader).
Abstract: Using robots as therapeutic or educational tools for children with autism requires robots to be able to adapt their behavior specifically for each child with whom they interact. In particular, some children may like to be looked into the eyes by the robot while some may not. Some may like a robot with an extroverted behavior while others may prefer a more introverted behavior. Here we present an algorithm to adapt the robot’s expressivity parameters of action (mutual gaze duration, hand movement expressivity) in an online manner during the interaction. The reward signal used for learning is based on an estimation of the child’s mutual engagement with the robot, measured through non-verbal cues such as the child’s gaze and distance from the robot. We first present a pilot joint attention task where children with autism interact with a robot whose level of expressivity is pre-determined to progressively increase, and show results suggesting the need for online adaptation of expressivity. We then present the proposed learning algorithm and some promising simulations in the same task. Altogether, these results suggest a way to enable robot learning based on non-verbal cues and to cope with the high degree of nonstationarities that can occur during interaction with children.
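To illustrate the general idea described in the abstract, the sketch below shows a minimal online adaptation loop: the robot repeatedly picks an expressivity setting (gaze duration, hand-movement expressivity), receives an engagement-based reward, and updates its preferences with a constant step size so that recent interactions outweigh older ones, a common way to track non-stationary behavior. This is not the algorithm from the awarded paper; the discretized action set, the epsilon-greedy rule, and the estimate_engagement placeholder are all assumptions made purely for illustration (in the real system the reward would come from the perception pipeline measuring the child's gaze and distance).

```python
import random

# Hypothetical discretized expressivity settings: (mutual gaze duration in
# seconds, hand-movement expressivity on a 0-1 scale). The paper treats such
# parameters as adaptable online; discretization is only for this sketch.
ACTIONS = [(gaze, expr) for gaze in (0.5, 1.0, 2.0) for expr in (0.2, 0.5, 0.8)]

ALPHA = 0.2    # constant learning rate -> recent rewards dominate (non-stationarity)
EPSILON = 0.1  # exploration rate

q_values = {a: 0.0 for a in ACTIONS}


def select_action():
    """Epsilon-greedy choice over the candidate expressivity settings."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_values, key=q_values.get)


def estimate_engagement(gaze_duration, hand_expressivity):
    """Hypothetical stand-in for the engagement estimator: in the real system
    this score would be computed from non-verbal cues (child's gaze, distance
    from the robot). Here it is a noisy preference for moderate expressivity."""
    target_gaze, target_expr = 1.0, 0.5
    dist = abs(gaze_duration - target_gaze) + abs(hand_expressivity - target_expr)
    return max(0.0, 1.0 - dist) + random.gauss(0.0, 0.05)


def update(action, reward):
    """Constant-step-size value update, gradually forgetting old interactions."""
    q_values[action] += ALPHA * (reward - q_values[action])


if __name__ == "__main__":
    for _ in range(200):
        action = select_action()
        reward = estimate_engagement(*action)  # real system: measured engagement
        update(action, reward)
    best = max(q_values, key=q_values.get)
    print(f"Preferred setting after 200 interactions: "
          f"gaze={best[0]}s, hand expressivity={best[1]}")
```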
This research is funded by the European Commission, EU Project BabyRobot (H2020-ICT-24-2015, grant agreement no. 687831, http://babyrobot.eu/).