Social Signal Processing
We humans are social animals: we seek interaction with other humans, and we accomplish it by producing and exchanging "social signals", each a combination of one or more so-called "behavioral cues". Behavioral cues can be categorized as verbal (e.g. what we say in a conversation) or nonverbal (e.g. facial expressions, hand gestures, physical distance from other people), and they can be expressed through our body or by other means, such as an emoticon in a chat message or an avatar in a virtual reality environment.
As a daring marriage between the social and computer sciences, social signal processing is a relatively recent research domain that employs computational models to analyze and synthesize behavioral cues in human-human and human-machine interaction.
Our current research focuses on the analysis of threatening behaviors for surveillance purposes, the study of the social dynamics that give rise to groups and crowds, the evolution of dyadic and group interactions, and leadership and deceptive behaviors. In particular, we are interested in the detection and modeling of nonverbal behaviors, which fall into five categories: 1) physical appearance, 2) gestures and postures, 3) face and eye behavior, 4) vocal behavior and 5) space and environment. Within these categories, we have so far developed or applied techniques for speaking activity, proxemics, gaze, head and body activity, and 2D body pose, using both unsupervised and supervised machine learning. In addition, we have been working on mimicry and interaction synchrony to analyze small-group social interactions using sequential data.
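As a minimal illustration of the unsupervised side of this work, the sketch below detects speaking activity from per-frame audio energy using a tiny 1-D 2-means clustering. The feature (frame energy), the synthetic data, and the function name `two_means_1d` are all assumptions chosen for illustration; real speaking-activity detectors use richer features and models.

```python
import numpy as np

def two_means_1d(values, iters=20):
    """Tiny 1-D k-means (k=2): an unsupervised split of audio frames
    into low-energy ('silence') and high-energy ('speech') clusters.
    Illustrative sketch only, not a production voice activity detector."""
    # Initialize the two centers at the extremes of the data.
    centers = np.array([values.min(), values.max()], dtype=float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # Assign each frame to its nearest center.
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        # Recompute each center as the mean of its assigned frames.
        for k in range(2):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return labels, centers

# Synthetic frame energies: 50 quiet frames, then 50 louder "speech" frames.
rng = np.random.default_rng(0)
energies = np.concatenate([
    rng.normal(0.1, 0.02, 50),  # non-speech frames
    rng.normal(0.9, 0.05, 50),  # speaking frames
])
labels, centers = two_means_1d(energies)
# Frames assigned to the higher-energy cluster are marked as speech.
speech = labels == centers.argmax()
print(speech.sum(), "frames detected as speech")
```

The same pattern, clustering simple per-frame features without labels, extends to other cues such as proxemic distances, though in practice supervised models trained on annotated interaction data are often preferred.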