As AI models continue to advance in complexity and sophistication, understanding how they work and make decisions is becoming increasingly challenging. This challenge has prompted a surge of research into developing methods and tools that can enhance the transparency and explainability of these models. So many such methods are now available that it has become unclear which method is appropriate for which application.
This workshop will specifically explore the diverse applications of explainable artificial intelligence (XAI) methods in various areas. These areas include, but are not limited to, XAI in Healthcare, Natural Science, Auditing, Fairness, Natural Language Processing, and Law. By examining the use of XAI in these fields, the workshop will provide attendees with insights into the latest trends and challenges within the different domains.
The workshop discussions aim to delve into the latest advancements in applied XAI and devise ways to further progress the field. The objective is to foster an open and productive dialogue that enhances our understanding of the potential opportunities and constraints of XAI and its impact across different domains. The purpose of this discourse is to identify strategies that can extend the frontiers of applied XAI and make notable progress in this rapidly evolving area. Specifically, the workshop aims to:
- Examine various applications of XAI from the past and present
- Discuss potential applications of XAI in the future
- Identify the obstacles that hinder progress in each use case and how we can overcome them
- Explore the necessary methodological requirements for applying XAI
- Identify new domains where XAI can be useful in the future
- Understand the inherent limitations of XAI
- Explore whether insights gained from one use case can be transferred to other use cases
The workshop will provide a valuable learning opportunity for researchers, practitioners, and students seeking to apply XAI in their work, as it will feature presentations by experts in the field, as well as interactive discussions and insights into the latest trends and future directions in applied XAI. By bringing together a diverse group of participants with a shared interest in XAI, the workshop aims to foster collaboration, innovation, and knowledge sharing in this rapidly growing field.
Note: all deadlines are in Anywhere on Earth.
Submission deadline - October 2 (23:59 GMT), 2023
Author notification - October 25 (23:59 GMT), 2023
Camera ready deadline - November 10 (23:59 GMT), 2023
Date: December 16, 2023
To be announced.
UC Santa Cruz
Paul G. Allen School of Computer Science & Engineering
Ulrike von Luxburg
University of Tübingen
University of California, Irvine
Distinguished research scientist and senior manager
PhD student at UCSD, her interests lie in XAI, Secure Verification, Auditing and societal impacts of deep generative models
Machine Learning Research Scientist at Bosch Research, she has been focused on developing the foundations of explainable machine learning
Machine Learning Research Scientist at Bosch Research, her research lies at the intersection of machine learning and energy systems
Research Scientist at eBay Research, his research interests focus on supplying explanations for data science applications
Postdoctoral research fellow at Harvard University, his research focuses on developing the foundations for interpretable machine learning
PhD student at the University of Tübingen and a sabbatical student at Bosch Research, his research focuses on the development of interpretability techniques for vision classifiers
Assistant Professor at Harvard University who focuses on the algorithmic and applied aspects of explainability, fairness, robustness, and privacy of machine learning models
Associate Professor at CMU and chief scientist at Bosch Research, his work spans the intersection of machine learning and optimization
Dotan Di Castro
Research scientist and lab manager at Bosch Research, his research focuses on Reinforcement Learning and Computer Vision
Associate Professor at UCSD and a Research Scientist at Meta AI, her research interests lie in the foundations of trustworthy machine learning
- Email: appliedXAI.neurips2023 [AT] gmail.com
- Twitter: @XAI_in_Action