4th WORKSHOP ON EXPLAINABLE & INTERPRETABLE ARTIFICIAL INTELLIGENCE FOR BIOMETRICS at ECCV 2024

September 2024 | Milan, Italy

Submit Your Papers!

About xAI4Biometrics

The xAI4Biometrics Workshop at ECCV 2024 aims to promote research on explainable and interpretable AI that facilitates the adoption of AI/ML in the biometrics domain and, specifically, fosters transparency and trust. The xAI4Biometrics workshop welcomes submissions that focus on biometrics and promote the development of: a) methods to interpret biometric models, in order to validate their decisions, improve the models, and detect possible vulnerabilities; b) quantitative methods to objectively assess and compare different explanations of automatic decisions; c) methods to generate better explanations; and d) more transparent algorithms. xAI4Biometrics is organised by INESC TEC.

Important Info

Where? At ECCV 2024 - Milan, Italy
When? 30th of September, 2024 (Morning)

Full Paper Submission: 19th July 2024 (FIRM DEADLINE)
Author Notification: 12th August 2024
Camera Ready & Registration: 26th August 2024 (FIRM DEADLINE)

Keynote Speakers

Call for Papers

The xAI4Biometrics Workshop welcomes submissions that focus on biometrics and promote the development of: a) methods to interpret biometric models, in order to validate their decisions, improve the models, and detect possible vulnerabilities; b) quantitative methods to objectively assess and compare different explanations of automatic decisions; c) methods to generate better explanations; and d) more transparent algorithms.

Interest Topics

The xAI4Biometrics Workshop welcomes work that focuses on biometrics and promotes the development of:

  • Methods to interpret biometric models, in order to validate their decisions, improve the models, and detect possible vulnerabilities;
  • Quantitative methods to objectively assess and compare different explanations of automatic decisions;
  • Methods and metrics to study/evaluate the quality of explanations obtained by post-model approaches and improve the explanations;
  • Methods to generate model-agnostic explanations;
  • Transparency and fairness in AI algorithms, avoiding bias;
  • Interpretable methods able to explain decisions of previously built and unconstrained (black-box) models;
  • Inherently interpretable (white-box) models;
  • Methods that use post-model explanations to improve the models’ training;
  • Methods to achieve/design inherently interpretable algorithms (rule-based, case-based reasoning, regularization methods);
  • Studies on causal learning, causal discovery, causal reasoning, causal explanations, and causal inference;
  • Natural language generation for explanatory models;
  • Methods for adversarial attack detection, explanation, and defense ("How can we interpret adversarial examples?");
  • Theoretical approaches to explainability (“What makes a good explanation?”);
  • Applications of all the above, including proofs of concept and demonstrators of how to integrate explainable AI into real-world workflows and industrial processes.

You can download the official call for papers here!
Don't forget to submit your paper by the 19th of July 2024. All submitted papers must use the official ECCV 2024 template. For more guidelines, please see the ECCV submission policies.

Programme

Organisers

GENERAL CHAIRS
Ana F. Sequeira INESC TEC
Jaime S. Cardoso FEUP & INESC TEC
Cynthia Rudin Duke University
Paulo Lobato Correia Instituto Superior Técnico
Peter Eisert Humboldt University & Fraunhofer HHI
PROGRAMME COMMITTEE
Adam Czajka University of Notre Dame
Fadi Boutros Fraunhofer Institute for Computer Graphics Research IGD
Giulia Orrù Università degli Studi di Cagliari
Hugo Proença Universidade da Beira Interior
João R. Pinto Vision-Box Portugal
Marta Gomez-Barrero Universität der Bundeswehr München
Meiling Fang Yangzhou University
Pedro Neto INESC TEC & FEUP
Wilson Silva Utrecht University & The Netherlands Cancer Institute
ORGANISING COMMITTEE
Alfonso Ortega De la Puente Universidad Autónoma de Madrid
Eduarda Caldeira INESC TEC & FEUP
Helena Montenegro INESC TEC & FEUP
Isabel Rio-Torto INESC TEC & FCUP
Kiran Raja NTNU
Leonardo Capozzi FEUP & INESC TEC
Marco Huber Fraunhofer IGD
Rafael Mamede INESC TEC & FEUP
Tiago Gonçalves INESC TEC & FEUP

Support

Thanks to everyone helping us make xAI4Biometrics an awesome workshop!
Want to join this list? Check our sponsorship opportunities (to be added shortly)!