Workshops
We invite workshop proposals for the 2026 IEEE Conference on Automatic Face and Gesture Recognition (FG 2026) in Kyoto, Japan. Accepted workshops will be held on either May 25 or May 29, 2026, at the same venue as the FG 2026 main conference. Complementary to the main conference program, we especially encourage workshop proposals relating to emerging fields or new application domains of face and gesture analysis and synthesis.
Submission Procedure
Workshop proposals should include the following information:
- Workshop title
- Workshop motivation, expected outcomes, and impact
- List of organizers including affiliation, email address, and a short bio
- Tentative length of the workshop (half-day or full-day)
- Style of the workshop and related activities, e.g., poster/oral paper presentations, invited talks, round tables, competitions, hackathons, etc.
- Tentative paper submission and review schedule (Ideally, the camera-ready deadline will coincide with the main conference deadline on April 21)
- Planned advertisement, website hosting, potential sponsorships
- Paper submission procedure (submission website) if applicable
- Paper review procedure (single/double-blind, internal/external, solicited/invited-only, the pool of reviewers, etc.)
- Tentative program committee, and invited speakers, if any
- Estimated number of submissions and acceptance rate
Proposals should be submitted through CMT: https://cmt3.research.microsoft.com/FG2026
Contact
For additional information and queries regarding the workshop proposal procedure, please contact the Workshop Co-chairs: Norimichi Ukita and Koichiro Niinuma.
Important Dates (all AoE)
- Workshop proposals due:
- Notification of acceptance:
- Workshop date: May 25 or 29, 2026
3rd International Workshop on Synthetic Data for Face and Gesture Analysis (SD-FGA)
Organizers: Vitomir Struc, Xilin Chen, Fadi Boutros, Naser Damer, Deepak Kumar Jain
The landscape of computer vision and artificial intelligence is evolving rapidly with the rise of powerful generative models, transforming how data-driven challenges are addressed. Building on the success of the first and second editions of the Synthetic Data for Face and Gesture Analysis (SD-FGA) workshop, SD-FGA 2026 (at FG 2026) will spotlight advances in synthetic data generation, from GANs and VAEs to diffusion models, and their growing impact on facial analysis, gesture recognition, and behavioral understanding, where privacy, ethics, and data diversity remain key concerns. This third edition expands the scope to synthetic data for attack generation and detection across both physical and behavioral dimensions, reflecting the urgent need for robust, secure AI in open-world settings. The workshop will bring together researchers and practitioners to examine how synthetic data can enable fair and reliable model development while also supporting the simulation, generation, and detection of adversarial and spoofing attacks, ultimately advancing secure, ethical, and scalable face and gesture analysis systems.
From Generation to Authentication: First Workshop on Trustworthy Face Avatars (TrustFA)
Organizers: Ammar Alsherfawi, Allam Shehata, Jianhang Zhou, Yasushi Yagi, Bob Zhang, Ruben Tolosana, Luis Gomez, Laura Pedrouzo-Rodriguez, Ruben Vera-Rodriguez
TrustFA 2026 (From Generation to Authentication: First Workshop on Trustworthy Face Avatars) brings together researchers and practitioners working at the intersection of photo-realistic face avatar generation and robust authentication to make avatar technologies safer, more reliable, and more accountable. As face reenactment, 2D/3D avatar generation, and real-time facial animation rapidly mature and raise new concerns around impersonation, deepfakes, and privacy, this workshop will highlight advances in trustworthy generation, detection and verification, watermarking and traceability, secure capture and control, evaluation protocols and datasets, and emerging applications in telepresence, AR/VR, and human–computer interaction. The program will feature invited talks and contributed presentations, fostering discussion and new collaborations on building face avatar systems people can trust.
4th Workshop on Learning with Few or No Annotated Face, Body and Gesture Data
Organizers: Maxime Devanne, Mohamed Daoudi, Stefano Berretti, Guido Borghi, Germain Forestier, Jonathan Weber
One of the main limitations of Deep Learning is that it requires large-scale annotated datasets to train efficient models. Gathering face, body or gesture data and annotating them can be very time-consuming and laborious. This is particularly the case in areas where domain experts are required, such as the medical domain, where crowdsourcing may not be suitable, also due to privacy concerns and regulations. The goal of this 4th edition of the workshop is to explore approaches to overcome such limitations by investigating ways to learn from few annotated samples, to transfer knowledge from similar domains or problems, to generate synthetic data, or to engage the community in gathering novel large-scale annotated datasets.
Multimodal Foundation Models for 3D/4D Facial Expression Analysis and Synthesis (MFM-FE 2026)
Organizers: Muzammil Behzad, Yante Li, Guoying Zhao, Ajmal Mian, Xiaobai Li, Hui Yu, Zheng Lian
Facial expression analysis has long been central to understanding human affect, behavior, and communication. However, the emergence of foundation models, spanning vision, language, and multimodal learning, has transformed how subtle and dynamic facial behaviors can be modeled, interpreted, and generated. Traditional CNN- or RNN-based approaches, while effective in constrained settings, struggle to generalize across identities, cultures, and real-world variability. In contrast, large-scale pre-trained multimodal architectures offer scalable, transferable, and interpretable representations for 3D/4D facial dynamics, micro- and macro-expression recognition, and text-guided expression synthesis. This workshop aims to explore how multimodal and foundation model paradigms can advance facial expression research, thereby moving beyond static emotion recognition to dynamic, context-aware, and linguistically grounded understanding of human affect. It seeks to bring together researchers from affective computing, multimodal learning, behavioral signal processing, and generative modeling to define the next generation of human-centered AI for expressive behavior.
Facial Micro-Expression (FME) Workshop 2026: Pushing Boundaries in Temporal and Spatial Subtle Movement Analysis
Organizers: Adrian Davison, Xinqi Fan, Jingting Li, John See, Su-Jing Wang, Moi Hoon Yap
Facial micro-expressions (MEs) are involuntary facial movements that occur spontaneously when a person experiences an emotion but attempts to suppress or repress the expression, typically in high-stakes environments. MEs are very short, generally lasting no more than 500 milliseconds, and the data are often very challenging to work with given the limited number of labelled ME samples. It is also nearly impossible to standardise ME labelling across different annotators. This workshop aims to explore advanced techniques for micro-expression analysis using a multimodal approach. We expect new advancements in multimodal micro-expression methods, combining the usual visual and temporal imagery with other metadata. In addition, the rise of large language models and visual language models will further push the boundaries of analysis and overall performance.
Empathic AI: Face, Gesture, and Accessibility Technologies (EmpAI 2026)
Organizers: Yutong Zhou, Von Ralph Dane Herbuela, Haifeng Zhang, Nobutaka Shimada, Mariza Ferro
EmpAI 2026 is the first international workshop dedicated to advancing "Empathic Intelligence": AI that goes beyond identification to understand and assist the diverse emotional, sensory, motor, and cognitive states of all humans. While face, gesture, and multimodal AI have achieved remarkable recognition performance, current systems remain limited when processing non-normative signals arising from disability, aging, and neurodiversity. We unify empathy, accessibility, and face-gesture research for the first time, aiming to redefine how AI interprets, responds to, and collaborates with diverse users. Bringing together communities across computer vision, HCI/HRI, multimodal LLMs, affective computing, accessibility technologies, and cognitive robotics, this workshop invites researchers, practitioners, and students to establish empathic intelligence as a new research direction, toward AI that not only perceives humans but also connects with, adapts to, and truly supports them.
1st Workshop on Behavior and Emotion Analysis through wearable Technology (BEAT)
Organizers: Louis Simon, Arianna De Vecchi, Cristina Palmero, Felix Dollack, Ting Dang, Mohamed Chetouani
Wearable movement and physiology sensors offer lightweight, non-invasive, and ecologically valid means to monitor human activity, affective state, and social behavior. With the rise of commercially deployed devices and new wearable foundation models, opportunities for scalable human behavior analysis continue to grow. The 1st Workshop on Behavior and Emotion Analysis through wearable Technology (BEAT) aims to foster collaboration between researchers from various backgrounds (ML, HCI, biomedical engineering) around the topic of wearable devices for human behavior analysis. Challenges such as resource efficiency, irregularly sampled data, multimodal fusion, and privacy-preserving AI will be addressed. We welcome contributions spanning various application domains, including Affective Computing, Mobile Health, Action Recognition, Social Interaction, and HRI.