[visionlist] ExFMA-2026 CfP - Explainability and Fairness in Multimedia Analysis - Deadline 20 April 2026
Chiara Galdi
Chiara.Galdi at eurecom.fr
Wed Apr 1 04:38:58 -05 2026
/Apologies for multiple postings. Please feel free to forward to anyone
who may be interested./
[ExFMA-2026] Explainability and Fairness in Multimedia Analysis
https://cbmi2026.sciencesconf.org/resource/page/id/18#exfma
Recent advances in machine learning, and in particular deep learning,
have led to remarkable performance gains in multimedia analysis tasks.
However, these advances have also raised questions about the
reliability, explainability, and fairness of model predictions used for
decision-making (e.g., the black-box nature of deep models and the risk
of biased outcomes). This lack of transparency and potential unfairness
raises many ethical and political concerns that hinder wider adoption of
this potentially highly beneficial technology, especially when such
systems are deployed in high-stakes or socially sensitive domains.
Most multimedia applications, such as person detection/tracking, face
recognition, or lifelog analysis, involve sensitive personal
information. This raises both legal issues, such as data protection and
compliance with the ongoing European AI regulation, and ethical concerns
related to discrimination, demographic bias, and potential misuse of
these technologies.
These challenges are particularly acute in multimedia applications,
where models operate on high-dimensional, multimodal data, and where
predictions frequently rely on subtle semantic cues that are difficult
to interpret even for human experts. Biases may emerge from data
imbalance, annotation practices, model design, or deployment contexts,
and may disproportionately affect certain individuals or communities. It
is therefore crucial not only to understand how predictions relate to
human perception and expert decision-making, but also to assess whether
they are equitable across groups and aligned with societal values. The
objective of eXplainable AI (XAI) and Fair AI is to improve
transparency, mitigate bias, and foster meaningful human understanding
of AI systems.
This special session focuses on methods and applications for explainable
and fair multimedia analysis, with an emphasis on explanations that are
faithful to the underlying models, meaningful to end users, actionable
for domain experts, and supportive of bias detection and mitigation. The
goal is to bring together researchers and practitioners working on
theoretical, methodological, and applied aspects of explainability,
fairness, evaluation, and interaction in multimedia AI systems.
Topics of interest include (but are not limited to):
* Analysis of the factors influencing the final decision, as an
essential step toward understanding and improving the underlying
processes.
* Methods for bias detection, fairness assessment, and mitigation in
multimedia datasets and models.
* Fairness-aware learning strategies for multimedia analysis.
* Information visualization for models or their predictions.
* Visual analytics and interactive applications for XAI.
* Performance evaluation metrics and protocols for explainability.
* Performance evaluation metrics and protocols for fairness.
* Sample-centric and dataset-centric explanations, including subgroup
analyses.
* Attention mechanisms for XAI.
* XAI-based pruning.
* XAI for multimedia systems supporting domain experts (e.g.,
healthcare, security, cultural heritage).
* Open challenges from industry or existing and emerging regulatory
frameworks.
* Industrial use cases and deployment challenges.
The special session aims to collect high-quality scientific
contributions that advance the state of the art in explainable and fair
multimedia analysis, and to foster interdisciplinary discussion on how
transparency, fairness, and accountability can be jointly addressed in
multimedia AI systems. By integrating explainability and fairness, the
session seeks to promote trustworthy AI technologies that enhance
societal benefit while minimizing risks of bias, discrimination, and
unintended harm.
*Important dates:*
Paper deadline: 20 APRIL 2026
Notification: 22 MAY 2026
Camera-ready: 15 JUNE 2026
Paper submission: Author Guidelines
<https://cbmi2026.sciencesconf.org/resource/page/id/12>
Please indicate in the submission comments that the paper is for SS *ExFMA-2026*.
*SS chairs*
* Chiara Galdi, EURECOM, Sophia Antipolis, France.
* Romain Bourqui, Université de Bordeaux, France.
* Martin Winter, JOANNEUM RESEARCH - DIGITAL, Graz, Austria.
* Romain Giot, Université de Bordeaux, France.
--
Cordiali saluti / Bien cordialement / Kind regards,
Chiara
--
Chiara GALDI, PhD
Assistant Professor
Dept. of Digital Security
EURECOM Campus SophiaTech
450 Route des Chappes
06410 Biot Sophia Antipolis
FRANCE
galdi at eurecom.fr
Phone : +33 (0)4 93.00.81.67
Fax : +33 (0)4 93.00.82.00
http://www.eurecom.fr/~galdi