AI and Emotion
February 23, 2007 · Posted by Johan in Cognition, Connectionism, Neural Networks, Social Neuroscience.
There is a story on BBC News about FEELIX Growing, a project that uses neural networks to design robots that interact with humans in flexible, adaptive ways. Interestingly, the researchers seem to think that the way to achieve this is to make the software sensitive to human emotion:
“The human emotional world is very complex but we respond to simple cues, things we don’t notice or we don’t pay attention to, such as how someone moves,” said Dr Canamero, who is based at the University of Hertfordshire.
From the FEELIX Growing website:
Adaptation to incompletely known and changing environments and personalization to their human users and partners are necessary features to achieve successful long-term integration. This integration would require that, like children (but on a shorter time-scale), robots develop embedded in the social environment in which they will fulfil their roles.
It’s an interesting proposition that robots should show some sensitivity to emotion in order to appear more human. The researchers will apparently rely on off-the-shelf robots running custom software, so don’t count on this leading to a robot that passes the Turing test. Nevertheless, acknowledging the role of affect in human-computer interaction is bound to be a good idea. In fact, this is already happening: supposedly, phone queueing software is designed to detect swearing, so that angry callers can be bumped to an operator immediately.
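The escalation idea at the end is simple enough to sketch. Below is a minimal, purely illustrative Python version, assuming the system receives a text transcript of the caller's speech; the function names and the cue list are my own inventions, not the actual software mentioned above.

```python
# Hypothetical sketch: route a caller to a human operator as soon as the
# transcribed speech contains an anger cue (here, a tiny keyword list).
# The cue list and all names are illustrative assumptions.

ANGER_CUES = {"damn", "hell", "ridiculous", "unacceptable"}

def should_escalate(transcript: str) -> bool:
    """Return True if the transcript contains any anger cue."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return not ANGER_CUES.isdisjoint(words)

def route(transcript: str) -> str:
    """Send angry callers to an operator; keep the rest in the queue."""
    return "operator" if should_escalate(transcript) else "queue"

print(route("This is ridiculous, I have waited an hour!"))  # operator
print(route("I would like to check my balance."))           # queue
```

A real system would of course work from acoustic features (pitch, loudness, speech rate) as much as from vocabulary, which is closer to the "simple cues" Dr Canamero describes.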