Joint Challenge on Compound Emotion Recognition and Multimodal (Audio, Facial and Gesture) based Emotion Recognition (CER&MMER)
Emotion recognition plays a key role in affective computing. People express emotions through different modalities, and integrating verbal and non-verbal communication channels makes the conveyed message easier to understand. Broadening the focus to several forms of expression can advance research on emotion recognition as well as human-machine interaction. We plan to organise a competition around two important problems: (1) recognition of compound emotions, which requires, in addition to effective visual analysis, dealing with the recognition of micro emotions. The database includes 31250 facial images with different emotions from 115 subjects, with an almost uniform gender distribution. (2) recognition of multimodal emotions composed of three modalities, namely facial expressions, body movement and gestures, and speech. The corpus contains recordings registered in studio conditions, acted out by 16 professional actors (8 male and 8 female).
For the Multimodal (Audio, Facial and Gesture) based Emotion Recognition challenge please click here.
For the Compound Emotion Recognition challenge please click here.
Schedule:
- 5th Nov, 2019: Beginning of the quantitative competition; release of development data;
- 22nd Feb, 2020: Deadline for code submission;
- 22nd Feb, 2020: Release of final evaluation data decryption key. Participants start predicting results on the final evaluation data;
- 23rd Feb, 2020: Contest paper submission deadline;
- 28th Feb, 2020: Camera ready of contest papers;
- 4th March, 2020: End of the quantitative competition. Deadline for submitting the predictions over the final evaluation data. The organisers start the code verification by running it on the final evaluation data;
- 9th March, 2020: Deadline for submitting the fact sheets;
- 11th March, 2020: Release of the verification results to the participants for review. Participants are invited to follow the paper submission guide for submitting contest papers;
- 18-22 May, 2020: FG 2020 – Joint Challenge on Compound Emotion Recognition and Multimodal (Audio, Facial and Gesture) based Emotion Recognition (CER & MMER): results and presentations.
Team:
Dorota Kaminska. Scientist and educator at the Institute of Mechatronics and Information Systems.
E-mail: dorota.kaminska@p.lodz.pl
Tomasz Spanski. Ph.D. student at the Institute of Mechatronics and Information Systems.
E-mail: tomasz.spanski@p.lodz.pl
Kamal Nasrollahi. Associate professor at the Visual Analysis of People Laboratory, Aalborg University, Denmark.
E-mail: kn@create.aau.dk
Thomas B. Moeslund. Head of the Visual Analysis of People Laboratory at Aalborg University.
E-mail: tbm@create.aau.dk
Sergio Escalera. Associate professor at the Department of Mathematics and Informatics, Universitat de Barcelona.
E-mail: sergio@maia.ub.es
Cagri Ozcinar. Research fellow at the School of Computer Science and Statistics, Trinity College Dublin.
E-mail: ozcinar@scss.tcd.ie
Jüri Allik. Professor of Experimental Psychology at the University of Tartu.
E-mail: juri.allik@ut.ee
Gholamreza Anbarjafari. Head of the iCV Lab at the Institute of Technology, University of Tartu.
E-mail: shb@ut.ee
Maris Popens. Research assistant at the iCV Lab, Institute of Technology, University of Tartu.
E-mail: maris@icv.tuit.ut.ee