Facial Micro-Expression (FME) Workshop and Challenge 2021

- Advanced techniques for Facial Expressions Generation and Spotting

Update: Download Call for Papers (pdf version).

Facial micro-expressions (MEs) are involuntary movements of the face that occur spontaneously in high-stakes environments. Computational analysis and automation of micro-expression tasks is an emerging area of face research, with strong interest emerging from 2014. Only recently has the availability of a few spontaneously induced facial micro-expression datasets provided the impetus to advance further on the computational side. Particularly comprehensive are two state-of-the-art FACS-coded datasets: CASME II and SAMM. While much research has been done on these datasets individually, there have been no attempts to introduce a more rigorous and realistic evaluation of work in this domain. This is the inaugural workshop in this area of research; it aims to promote interaction between researchers and scholars within this niche area, as well as those from the broader fields of expression and psychology research.

Agenda

This workshop has two main agenda items:

  1. To solicit original works that address a variety of challenges in facial expression research, including but not limited to:
    • Facial expressions (both micro- and macro-expressions) detection/spotting
    • Facial expressions recognition
    • Multi-modal micro-expression analysis, combining modalities such as depth information, heart-rate signals, etc.
    • FME feature representation and computational analysis
    • Unified FME spot-and-recognize schemes
    • Deep learning techniques for FME detection and recognition
    • New objective classes for FME analysis
    • New FME datasets
    • Facial expression data synthesis
    • Psychology of FME research
    • Facial Action Unit (AU) detection and recognition
    • Emotion recognition using AUs
    • FME Applications
  2. To organize a Facial Micro-Expression (FME) Challenge, involving FME generation and spotting.

Description of Challenge tasks

Databases

There are three state-of-the-art datasets for the ME recognition task: the Chinese Academy of Sciences Micro-Expression Database II (CASME II) with 247 FMEs at 200 fps, SMIC-E with 157 FMEs at 100 fps, and the Spontaneous Facial Micro-Movement Dataset (SAMM) with 159 FMEs at 200 fps.

Moreover, researchers have begun to put effort into databases containing long video sequences: SAMM Long Videos with 147 long videos at 200 fps (average duration: 35.5 s) and CAS(ME)2 with 97 long videos at 30 fps (average duration: 148 s). In this challenge, we use CAS(ME)2 and SAMM Long Videos for the task of micro- and macro-expression spotting.

Facial micro-expression generation task:

The goal of this task is to generate a specific micro-expression (source) on given template faces (target).

  • Guidelines: download file here
  • Source and Target samples: file here
  • Please apply to download the complete databases according to the description in the guidelines document.
  • As the submitted results will be evaluated by three experts, there is no baseline method or result for the generation task.

Facial macro- and micro-expression spotting task:

  • Guidelines: download file here
  • Baseline method:
    Please cite:
    Yap, C.H., Yap, M.H., Davison, A.K., Cunningham, R. (2021), Efficient Lightweight 3D-CNN using Frame Skipping and Contrast Enhancement for Facial Macro- and Micro-expression Spotting, arXiv:2105.06340 [cs.CV], https://arxiv.org/abs/2105.06340.
  • Baseline code: As the paper describing the baseline method is currently under review, the code will be shared once the paper is accepted.
  • Baseline results:
    Dataset                                   MaE      ME       Combined
    SAMM-LV                                   0.1863   0.0409   0.1193
    CAS(ME)2-cropped (for final evaluation)   0.0401   0.0118   0.0304
    CAS(ME)2 (could be used as reference)     0.0686   0.0119   0.0497
  • Frequently Asked Questions:
    1. Q: How should spotted intervals that overlap be handled?
      A: We consider that each ground-truth interval corresponds to at most one spotted interval. If your algorithm detects multiple overlapping intervals, you should merge them into a single optimal interval. The fusion method is part of your algorithm, and the final evaluation considers only the resulting merged interval (a minimal merging sketch follows these FAQs).
    2. Q: How should the algorithm be validated and the final result obtained when using machine learning / deep learning methods? Will a test set be released?
      A: As the amount of micro-expression data is small, all samples from CAS(ME)2-cropped and SAMM-LV should be used for result evaluation. The common validation protocol is leave-one-subject-out (LOSO) cross-validation (see the LOSO sketch below).
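Regarding FAQ 1, the following is a minimal illustrative merging strategy in Python: it sorts the raw spotted (onset, offset) frame intervals and unions any that overlap. The challenge deliberately leaves the exact fusion method to each participant's algorithm, so treat this purely as a sketch, not as official evaluation code.

    def merge_overlapping_intervals(intervals):
        """Union overlapping (onset, offset) frame intervals.

        One illustrative fusion strategy only; participants may use any
        other method of reducing overlapping detections to a single
        interval per ground-truth interval.
        """
        merged = []
        for onset, offset in sorted(intervals):
            if merged and onset <= merged[-1][1]:
                # Overlaps the previous interval: extend it.
                merged[-1] = (merged[-1][0], max(merged[-1][1], offset))
            else:
                merged.append((onset, offset))
        return merged

    # Example: the first two detections overlap and are merged.
    print(merge_overlapping_intervals([(10, 45), (40, 60), (100, 130)]))
    # -> [(10, 60), (100, 130)]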
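Regarding FAQ 2, the skeleton below illustrates the LOSO protocol. The (subject_id, video, annotations) sample layout and the train_and_spot callback are hypothetical placeholders, not anything prescribed by the challenge; the point is only the fold structure, in which each subject is held out exactly once and the predictions from all folds are pooled for the final evaluation.

    def loso_evaluate(samples, train_and_spot):
        """Leave-one-subject-out cross-validation skeleton.

        samples        -- list of (subject_id, video, annotations) tuples
                          (hypothetical layout, for illustration only)
        train_and_spot -- hypothetical callback: trains on the training
                          split and returns spotted intervals for the
                          held-out subject's videos
        """
        subjects = sorted({subject_id for subject_id, _, _ in samples})
        pooled_predictions = {}
        for held_out in subjects:
            train_split = [s for s in samples if s[0] != held_out]
            test_split = [s for s in samples if s[0] == held_out]
            pooled_predictions[held_out] = train_and_spot(train_split, test_split)
        # Together the folds cover every sample exactly once, matching the
        # requirement that all CAS(ME)2-cropped and SAMM-LV samples be used.
        return pooled_predictions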