3rd WORKSHOP ON EXPLAINABLE & INTERPRETABLE ARTIFICIAL INTELLIGENCE FOR BIOMETRICS at WACV 2023

January 2023, Online

Submit Your Papers!

About xAI4Biometrics

The xAI4Biometrics Workshop at WACV 2023 aims to promote research on Explainable & Interpretable AI that facilitates the adoption of AI/ML in the biometrics domain and, in particular, fosters transparency and trust. The workshop welcomes submissions that focus on biometrics and promote the development of: a) methods to interpret biometric models in order to validate their decisions, improve the models, and detect possible vulnerabilities; b) quantitative methods to objectively assess and compare different explanations of automatic decisions; c) methods to generate better explanations; and d) more transparent algorithms. xAI4Biometrics is organised by INESC TEC.

The xAI4Biometrics Workshop will run remotely, and we will do our best to ensure that it runs smoothly. We expect to repeat the good experience of WACV 2021 and 2022, when everything was remote and the feedback we received from participants was very positive. Each workshop will have its own Zoom channel, and all the talks (keynotes and paper presentations) will be given through that platform. If you have any questions, please do not hesitate to contact us.

Important Info

Where? At WACV 2023, Online
When? January 2023

Full Paper Submission: 25 October 2022 (extended from 11 October 2022)
Author Notification: 11 November 2022
Camera Ready & Registration: 19 November 2022

Keynote Speakers

Speakers will be announced soon.

Call for Papers

The xAI4Biometrics Workshop welcomes submissions that focus on biometrics and promote the development of: a) methods to interpret biometric models in order to validate their decisions, improve the models, and detect possible vulnerabilities; b) quantitative methods to objectively assess and compare different explanations of automatic decisions; c) methods to generate better explanations; and d) more transparent algorithms.

Interest Topics

The xAI4Biometrics Workshop welcomes works that focus on biometrics and promote the development of:

  • Methods to interpret biometric models in order to validate their decisions, improve the models, and detect possible vulnerabilities;
  • Quantitative methods to objectively assess and compare different explanations of automatic decisions;
  • Methods and metrics to study and evaluate the quality of explanations produced by post-model approaches, and to improve those explanations;
  • Methods to generate model-agnostic explanations;
  • Transparency and fairness in AI algorithms, avoiding bias;
  • Interpretable methods able to explain decisions of previously built and unconstrained (black-box) models;
  • Inherently interpretable (white-box) models;
  • Methods that use post-model explanations to improve the models’ training;
  • Methods to achieve/design inherently interpretable algorithms (rule-based, case-based reasoning, regularization methods);
  • Studies on causal learning, causal discovery, causal reasoning, causal explanations, and causal inference;
  • Natural language generation for explanatory models;
  • Methods for adversarial attack detection, explanation, and defense ("How can we interpret adversarial examples?");
  • Theoretical approaches to explainability (“What makes a good explanation?”);
  • Applications of all of the above, including proofs of concept and demonstrators showing how to integrate explainable AI into real-world workflows and industrial processes.

The workshop papers will be published in IEEE Xplore as part of the WACV 2023 Workshops Proceedings and will be indexed separately from the main conference proceedings. Papers submitted to the workshop should follow the same formatting requirements as the main conference. For more details and templates, please refer to the WACV 2023 author guidelines.

Don't forget to submit your paper by 25 October 2022 (extended from 11 October 2022).

Programme

The programme will be announced soon.

Organisers

GENERAL CHAIRS
Jaime S. Cardoso, FEUP & INESC TEC
Ana F. Sequeira, INESC TEC
Cynthia Rudin, Duke University
Peter Eisert, Humboldt University & Fraunhofer HHI
PROGRAMME COMMITTEE
Hugo Proença, Universidade da Beira Interior
Adam Czajka, University of Notre Dame
Christoph Busch, NTNU
João R. Pinto, Bosch Portugal & FEUP
PUBLICITY CHAIRS
Sara Oliveira, INESC TEC & FEUP
Isabel Rio-Torto, INESC TEC & FCUP
Tiago Gonçalves, INESC TEC & FEUP
Pedro Neto, INESC TEC & FEUP

Support

Thanks to those helping us make xAI4Biometrics an awesome workshop!
Want to join this list? Check our sponsorship opportunities here!