About xAI4Biometrics

The xAI4Biometrics Workshop at WACV 2022 aims to promote a better understanding, through explainability and interpretability, of currently common and accepted practices in a wide variety of biometrics applications. These applications, in scenarios including identity verification for access/border control, watchlist surveillance, anti-spoofing measures embedded in biometric recognition systems, and forensic applications, among many others, affect the daily life of an ever-growing population. xAI4Biometrics is organised by INESC TEC and co-organised by the European Association for Biometrics (EAB).

Important Info

Where? At WACV 2022, in Waikoloa, HI, USA
When? 4 January 2022 (Afternoon)

Abstract Submission: 4 October 2021
Paper Submission: 25 October 2021
Author Notification: 15 November 2021
Camera Ready & Registration (Firm deadline): 19 November 2021

Call for Papers

The xAI4Biometrics Workshop welcomes submissions that focus on biometrics and promote the development of: a) methods to interpret biometric models, in order to validate their decisions, improve the models, and detect possible vulnerabilities; b) quantitative methods to objectively assess and compare different explanations of automatic decisions; c) methods to generate better explanations; and d) more transparent algorithms.

Topics of Interest

The xAI4Biometrics Workshop welcomes works that focus on biometrics and promote the development of:

  • Methods to interpret biometric models, in order to validate their decisions, improve the models, and detect possible vulnerabilities;
  • Quantitative methods to objectively assess and compare different explanations of automatic decisions;
  • Methods and metrics to study/evaluate the quality of explanations obtained by post-model approaches and improve the explanations;
  • Methods to generate model-agnostic explanations;
  • Transparency and fairness in AI algorithms, avoiding bias;
  • Interpretable methods able to explain decisions of previously built and unconstrained (black-box) models;
  • Inherently interpretable (white-box) models;
  • Methods that use post-model explanations to improve the models’ training;
  • Methods to achieve/design inherently interpretable algorithms (rule-based, case-based reasoning, regularization methods);
  • Studies on causal learning, causal discovery, causal reasoning, causal explanations, and causal inference;
  • Natural language generation for explanatory models;
  • Methods for adversarial attack detection, explanation, and defense ("How can we interpret adversarial examples?");
  • Theoretical approaches to explainability (“What makes a good explanation?”);
  • Applications of all the above, including proof-of-concepts and demonstrators of how to integrate explainable AI into real-world workflows and industrial processes.

The workshop papers will be published in IEEE Xplore as part of the WACV 2022 Workshops Proceedings and will be indexed separately from the main conference proceedings. Papers submitted to the workshop should follow the same formatting requirements as the main conference. For more details and templates, click here.

Don't forget to submit your paper by 25 October.