Modelling interaction between Real Humans and Virtual Humans (VH) or Social Robots

This half-day tutorial will discuss ongoing research on Human-Machine Interaction, in particular between real humans and Virtual Humans or Social Robots.

The discussion will cover the important problems to solve and what is still missing for convincing simulations. The research area is highly interdisciplinary, spanning computer animation, computer vision, computer graphics, speech analysis, decision making, and avatar/social robot emotion and motion modelling. A generic Virtual Human/social robot platform will also be shown and discussed to introduce the various research projects that made this interaction possible.

Major methods of gesture recognition will be shown, including gaze, hand and body recognition. Some AI techniques for recognition will be explained, particularly for hand gestures. Part of the tutorial will discuss the recognition and analysis of emotions.
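
To give a flavor of the simplest end of hand-gesture recognition, here is a hypothetical sketch that classifies a hand pose from per-finger extension features with a nearest-centroid rule. The templates, feature encoding and labels are invented for illustration; the techniques covered in the tutorial (e.g., learning-based recognizers) are far richer.

```python
# Hypothetical sketch: nearest-centroid hand-pose classification.
# Features are 5 values (one per finger): 1.0 = fully extended, 0.0 = curled.
# All templates and values below are invented for illustration.
import math

TEMPLATES = {
    "open_palm": [1.0, 1.0, 1.0, 1.0, 1.0],
    "fist":      [0.1, 0.1, 0.1, 0.1, 0.1],
    "point":     [0.1, 1.0, 0.1, 0.1, 0.1],  # index finger extended
}

def classify_hand_pose(features):
    """Return the template label closest (Euclidean distance) to the features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: dist(features, TEMPLATES[label]))

print(classify_hand_pose([0.2, 0.9, 0.2, 0.1, 0.1]))  # prints "point"
```

In practice the features would come from a vision pipeline (hand landmark detection), and the classifier would be learned rather than template-based; the sketch only shows the shape of the problem.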

The personality, mood and emotions of Virtual Humans/social robots will be introduced and demonstrated, along with the classification of VH emotions. Social agents with strong, interesting personalities lead to memorable interactions for users. Varying nonverbal communication provides one mechanism for crafting an agent's perceived personality. We will discuss both the concept of personality and how it is influenced by movement variation.
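
One common way to link emotions and mood computationally is the PAD (Pleasure-Arousal-Dominance) space. The sketch below is a hypothetical minimal mood update in PAD space; the decay constant and emotion impulses are illustrative values, not taken from the platform presented in the tutorial.

```python
# Hypothetical sketch: 3-D PAD (Pleasure-Arousal-Dominance) mood update.
# Mood decays toward a neutral baseline, then each emotion adds a PAD
# impulse; every axis is clamped to [-1, 1]. All constants are invented.

def update_mood(mood, emotion_impulse, decay=0.9):
    """Decay the PAD mood toward neutral, add the emotion's impulse, clamp."""
    return tuple(
        max(-1.0, min(1.0, m * decay + e))
        for m, e in zip(mood, emotion_impulse)
    )

# Illustrative impulses: "joy" raises pleasure/arousal; "fear" lowers
# pleasure and dominance while raising arousal.
JOY  = (0.4, 0.2, 0.1)
FEAR = (-0.4, 0.3, -0.5)

mood = (0.0, 0.0, 0.0)
mood = update_mood(mood, JOY)   # -> (0.4, 0.2, 0.1)
mood = update_mood(mood, FEAR)  # ≈ (-0.04, 0.48, -0.41)
```

A personality model would typically set the baseline mood and the decay rate (e.g., a more neurotic agent decays more slowly from negative moods), which is one concrete way personality shapes observable behavior.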

Social agents should display a wide variety of multimodal behaviors when interacting with human users. Computational models that drive the generation of multimodal behaviors will be presented. We will focus on communicative behaviors as well as expressions of emotion and social attitude.
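
At its simplest, a behavior-generation model maps a communicative intent and a social attitude to a set of multimodal signals. The rule table below is a hypothetical toy, in the spirit of rule-based behavior planners; the actual computational models presented in the tutorial are considerably more sophisticated.

```python
# Hypothetical sketch: rule-based selection of multimodal behaviors from
# (communicative intent, social attitude) pairs. The rules are invented
# for illustration, not taken from any specific planner.

RULES = {
    ("greet", "friendly"):  ["smile", "wave", "head_nod"],
    ("greet", "dominant"):  ["raised_chin", "firm_nod"],
    ("refuse", "friendly"): ["apologetic_smile", "head_shake"],
}

def plan_behaviors(intent, attitude):
    # Fall back to a neutral gaze when no rule matches the pair.
    return RULES.get((intent, attitude), ["neutral_gaze"])

print(plan_behaviors("greet", "friendly"))  # ['smile', 'wave', 'head_nod']
```

Real systems must additionally schedule these signals over time and synchronize them with speech, which is where much of the modeling difficulty lies.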

We will also discuss episodic memory, which gives social entities the ability to remember and recall what has happened, adapt to user preferences, and learn from past experience.
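
As a minimal illustration of the idea, the hypothetical sketch below stores timestamped episodes and recalls the best match for a cue by keyword overlap with a small recency tie-breaker. Episodic-memory models for social agents are much more elaborate; the class, scoring rule and data here are invented.

```python
# Hypothetical sketch: a minimal episodic memory with cue-based recall.
# Episodes are (time, keyword set, detail); recall scores keyword overlap
# and slightly penalizes older episodes so recent matches win ties.

class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def remember(self, time, keywords, detail):
        self.episodes.append((time, set(keywords), detail))

    def recall(self, cue, now):
        """Return the detail of the best-scoring episode, or None if empty."""
        cue = set(cue)
        def score(episode):
            t, keywords, _ = episode
            return len(cue & keywords) - 0.001 * (now - t)
        best = max(self.episodes, key=score, default=None)
        return best[2] if best else None

mem = EpisodicMemory()
mem.remember(1, ["coffee", "morning"], "User takes coffee with no sugar")
mem.remember(5, ["coffee", "meeting"], "User skipped coffee before the meeting")
print(mem.recall(["coffee", "morning"], now=10))
# prints "User takes coffee with no sugar"
```

Even this toy shows the two ingredients the paragraph mentions: storage of what happened, and retrieval that can feed adaptation to user preferences.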

A generic Virtual Human/social robot platform will be shown and discussed, along with several research case studies illustrating these problems. Examples of Virtual Humans and realistic human-like social robots, such as Nadine (https://en.wikipedia.org/wiki/Nadine_Social_Robot), interacting with real humans will also be shown. At the end of the course, attendees should have a clear view of the research in the area.

Organizer: Daniel Thalmann, EPFL, Switzerland

Lecturers:

Catherine Pelachaud, Sorbonne, Paris, France

Michael Neff, University of California, Davis, USA

Daniel Thalmann, EPFL, Switzerland

 

Short bios


Daniel Thalmann chaired the VRLab at EPFL until recently and is now an Honorary Professor of EPFL. He is also director for research and development at MIRALab Sarl. He is one of the pioneers of research on Virtual Humans. He received his PhD in Computer Science in 1977 from the University of Geneva. He was awarded the Eurographics Distinguished Career Award in 2010 and the Canadian Human Computer Communications Society Achievement Award in 2012, and received an Honorary Doctorate from University Paul Sabatier in Toulouse, France, in 2003.

Catherine Pelachaud is Director of Research in the ISIR laboratory, Sorbonne University. Her research interests include embodied conversational agents, nonverbal communication, expressive behaviors and socio-emotional agents. With her research team, she has been developing Greta, an interactive virtual agent platform that can display emotional and communicative behaviors. She is the recipient of the 2015 ACM SIGAI Autonomous Agents Research Award and was awarded the title of Doctor Honoris Causa by the University of Geneva in 2016.

Michael Neff is a Professor in Computer Science and Cinema & Digital Media at the University of California, Davis, where he leads the Motion Lab, an interdisciplinary research effort in character animation and embodied interaction. He holds a Ph.D. from the University of Toronto and is also a Certified Laban Movement Analyst. His interests include character animation, especially modeling expressive movement, nonverbal communication, gesture, and applying performing arts knowledge to animation. Selected distinctions include an NSF CAREER Award and the Alain Fournier Award.

Intended Audience:
People with some knowledge of multimodal interaction techniques who want to gain a broader understanding of the state of the art, as well as a quick entry into embodied machines (Virtual Humans and Social Robots). People in Computer Animation, Computer Vision, Social Robotics or Virtual Human modelling who want to learn about ongoing research, the state of the art, and the problems that remain open are most welcome.

Length: half day