Special Session and Panel Proposals
The 20th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2026) will be held in Kyoto, Japan, on May 25–29, 2026. FG is the premier international forum for research in image- and video-based face, gesture, and body movement recognition.
What is a special session in FG 2026?
A special session focuses on a specific topic within the broader domain of face and gesture recognition. Unlike a workshop, a special session is fully integrated into the main conference program. Papers submitted to a special session will be included in the conference proceedings and undergo the same rigorous peer-review process as regular submissions. All papers must present original research and adhere to the general FG submission guidelines.
Upon acceptance of a special session proposal, a dedicated Subject Area will be created in the paper submission system (CMT). The special session organizers will serve as Area Chairs and oversee the review process for submissions within their topic. If enough papers from the special session are accepted, an oral session will be scheduled during FG 2026, and the organizers will be invited to chair that session.
Proposals may address any topic within the broader area of face and gesture recognition, modeling, and analysis. We particularly encourage submissions that explore emerging research directions, novel application domains, and new challenges related to face and gesture. Interdisciplinary topics that bring new perspectives to the FG community are also highly welcome. Proposers should demonstrate a strong track record in the proposed field.
What is a panel in FG 2026?
A panel is a focused discussion session that brings together experts to debate and explore timely, controversial, or emerging topics in the field of face and gesture recognition. Unlike a special session, a panel does not involve paper submissions or formal proceedings. Instead, it provides a dynamic forum for interactive dialogue, diverse viewpoints, and thought-provoking exchanges between panelists and the audience. Panel sessions aim to stimulate new ideas, address key challenges, and foster active community engagement.
Proposal Submission Guidelines
Proposals should be submitted to the Special Session and Panel Chairs with the subject field “FG 2026 Proposal: [title of session]”.
A proposal must include the following information:
- The title of the proposed Special Session or Panel.
- A brief description of the topic, including how it stands apart from the regular FG topics/sessions.
- Contact information and short bio of the organizers.
- For a Special Session: a list of proposed contributions to the special session (including authors, title, and short abstract).
- For a Panel: a list of proposed panelists and their expected perspectives/contributions.
Important Dates (all AoE)

| | Special Session | Panel |
| --- | --- | --- |
| Proposals due | | March 10, 2026 |
| Proposal notification | | March 20, 2026 |
All proposals should be sent to the FG 2026 Special Session & Panel Chairs:
Shiqi Yu and Matteo Ferrara
We look forward to your proposals and contributions to FG 2026!
Special Session 1: Foundation & Generative Models for Face and Gesture Recognition
Organizers: Hatef Otroshi Shahreza (Idiap Research Institute, Switzerland), Arun Ross (Michigan State University, USA), and Sebastien Marcel (Idiap Research Institute, Switzerland; University of Lausanne, Switzerland).
Recent developments in foundation and generative models have revolutionized AI, creating enormous opportunities in many fields, including face and gesture recognition. Foundation models (such as CLIP, GPT, etc.) enable robust feature extraction and transfer learning. In addition, generative models allow synthetic data generation, privacy-preserving learning, and advanced data augmentation techniques. Foundation and generative models are reshaping the field by improving the accuracy, robustness, and interpretability of automatic face and gesture recognition. This special session aims to bring together researchers to discuss state-of-the-art advancements, applications, and challenges in applying foundation and generative models to face and gesture recognition. The special session will foster discussions that inspire innovation and address challenges in real-world applications of these advanced models.
Topics of interest include but are not limited to:
- Foundation and Generative Models in:
- Face and Gesture Recognition; Face Analysis
- Body Action and Activity Recognition; Gesture Recognition and Analysis
- Affective Computing and Multi-modal Interaction
- Psychological and Behavioral Analysis
- Template Security
- Generating Synthetic Datasets for Face, Body and Gesture
- Privacy and Ethical Aspects of Applying Foundation Models to Face and Gesture Analysis, such as
- Unlearning in foundation and generative models
- Safety of foundation models
Special Session 2: SMILE: Silent Motion Interpretation and Lip-based Evaluation
Organizers: Georgia Fargetta (University of Catania, Italy), Massimo Orazio Spata (University of Catania, Italy), Giulia Orru (University of Cagliari, Italy), and Alessandro Ortis (University of Catania, Italy).
The session aims to consolidate research efforts in lip-based communication analysis, visual speech recognition, silent interaction, and lip-motion-driven biometrics. SMILE is closely aligned with the Silent Lipreading Competition 2026, creating a unified venue for presenting benchmark results, novel methodologies, multimodal evaluation protocols, and emerging applications in silent communication understanding. By fostering contributions related to lip reading, lip-driven gesture analysis, multimodal accessibility, and facial motion–based deepfake generation or detection, the session seeks to advance the study of silent human communication and its role in assistive technologies, privacy-preserving AI, and human-centered machine perception.
We invite submissions addressing, but not limited to, the following topics:
- Silent and audio-visual lip reading;
- Visual speech recognition and lip-based expression understanding;
- Lip-motion-based biometric identification and authentication;
- Sign language recognition integrating lip motion cues;
- Evaluation methodologies, benchmark protocols, and dataset bias analysis;
- Generative, self-supervised, and cross-modal learning for lip features;
- Robust lip reading under occlusion, pose, and illumination variations;
- Human-centered and assistive AI applications;
- Deepfake reenactment generation and detection based on lip motion and facial dynamics;
- 3D modelling and reconstruction of lip, face, and head motion.
Submission Instructions
When you create a Round 2 submission in the CMT submission system, you can select your preferred special session from the Subject Areas.