A Cognitive Model for Embodied Gesture Processing in Virtual Agents
Human social interaction encompasses both verbal and nonverbal channels, among which hand gesture is a widely used means of nonverbal communication. Processing gestures involves several cognitive and knowledge levels, from motor skills to the social intentions behind the movements.
During social interaction, all of these levels contribute to the cognitive processes of both perception and generation, as evidenced by neuropsychological studies. This video discusses the motor skills underlying gestures at different levels of abstraction, from movement trajectories up to higher levels of semantics. In this context, the aim is to engage humanoid virtual agents in social interaction with humans.
To this end, we developed a cognitive computational model that captures the embodied basis of hand gestures, and we demonstrate how it combines and bootstraps the online learning, perception, recognition, and generation of gestural movements in a human-agent interaction scenario.
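The interplay described above, in which one shared representation serves learning, recognition, and generation alike, can be sketched minimally. The class and method names below are hypothetical and only illustrate the idea of a shared trajectory-level store; they are not the actual model presented in the video:

```python
# Hypothetical sketch: a single store of trajectory prototypes shared by
# online learning, recognition, and generation. Trajectories are lists of
# (x, y) points; all names here are illustrative assumptions.
import math


class GestureModel:
    def __init__(self):
        self.prototypes = {}  # label -> (mean trajectory, observation count)

    @staticmethod
    def _distance(a, b):
        # Mean Euclidean distance between two equal-length trajectories.
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    def learn(self, label, trajectory):
        # Online learning: incrementally update the running-average
        # prototype for a label as new demonstrations arrive.
        if label not in self.prototypes:
            self.prototypes[label] = ([list(p) for p in trajectory], 1)
            return
        proto, n = self.prototypes[label]
        for p, q in zip(proto, trajectory):
            for i in range(len(p)):
                p[i] += (q[i] - p[i]) / (n + 1)
        self.prototypes[label] = (proto, n + 1)

    def recognize(self, trajectory):
        # Perception/recognition: nearest prototype in trajectory space.
        return min(
            self.prototypes,
            key=lambda lb: self._distance(self.prototypes[lb][0], trajectory),
        )

    def generate(self, label):
        # Generation reuses the very representation learned from observation,
        # which is the "bootstrapping" idea in miniature.
        return [tuple(p) for p in self.prototypes[label][0]]
```

A usage round-trip would observe a few demonstrations via `learn`, classify a new movement via `recognize`, and replay the learned prototype via `generate`, all against the same stored trajectories.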