Leadership Corpus

This corpus targets the automatic detection of emergent leaders in a meeting environment. It contains 16 meeting sessions. Each session consists of four unacquainted participants of the same gender (44 females and 20 males in total), with an average age of 21.6 years (standard deviation 2.24).

The experiment takes place inside a room, set up as follows.

  • Four chairs are placed at the corners of a square area, facing each other.
  • Behind each chair there is a camera (resolution 1280x1024 pixels, frame rate 20 frames per second) that faces the opposite chair and records only the person sitting in it.
  • A standard camera (resolution 1440x1080 pixels, frame rate 25 frames per second; used only for data annotation) at one side of the room records the whole scene.
  • Each group consists of four participants.
  • Audio is recorded with wireless lapel microphones, one per participant, each connected to that person's frontal camera (audio sample rate: 16 kHz).
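Because the frontal cameras (20 fps) and the overview annotation camera (25 fps) run at different frame rates, an instant annotated on one stream falls on a different frame index in the other. A minimal sketch of this conversion, assuming a simple seconds-based timestamp; the helper function is illustrative and not part of the corpus tooling:

```python
# Minimal sketch: mapping a timestamp in seconds to a frame index for
# each camera type. The fps values (20 and 25) come from the corpus
# description above; the function itself is an illustrative assumption.

def timestamp_to_frame(seconds: float, fps: float) -> int:
    """Index of the frame that contains the given instant."""
    return int(seconds * fps)

# The same instant (10.5 s into a session) lands on different frames:
frontal_frame = timestamp_to_frame(10.5, 20.0)   # frontal camera, 20 fps
overview_frame = timestamp_to_frame(10.5, 25.0)  # overview camera, 25 fps
```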

An example setup of the meeting is shown in the accompanying figure (not reproduced here).

The participants performed one survival task, randomly chosen from two options: the winter survival task and the desert survival task, which are among the most common tasks used to study small-group decision making.

The corpus includes the following:

  • Videos with audio
  • A file indicating, via timestamps, where video processing should start and end for each video
  • Audio files
  • Audio data after manual speaker diarisation (synchronized with the corresponding video)
  • Questionnaires (SYMLOG and GLIS)
  • Annotations of the most and the least emergent leaders
  • Designated leaders


When using this data for your research, please cite the following papers in your publication:

C. Beyan, N. Carissimi, S. Vascon, M. Bustreo, F. Capozzi, A. Pierro, C. Becchio, and V. Murino. Detecting emergent leader in a meeting environment using nonverbal visual features only. In 18th ACM International Conference on Multimodal Interaction (ICMI), 2016. [PDF]

C. Beyan, F. Capozzi, C. Becchio, and V. Murino. Identification of Emergent Leaders in a Meeting Scenario Using Multiple Kernel Learning. In International Workshop on Advancements in Social Signal Processing for Multimodal Interaction (ASSP4MI), held in conjunction with the International Conference on Multimodal Interaction (ICMI), 2016. [PDF]

For more information, please see the papers and/or contact their first author, Cigdem Beyan.


How to get the dataset

To obtain this dataset, please complete, sign, and return the form below. We will then send you the credentials to download it. Note that the dataset is available for RESEARCH purposes only.

  • Fill out this form: Request Form [DOCX] [PDF]
  • Send it to: Pavistech (note: the email must be sent from an address linked to your research institution/university)
  • Wait for the credentials
  • Download the dataset and the ground truth Here
  • PLEASE REMEMBER TO CITE OUR PAPERS GIVEN ABOVE
