About
As AI models grow in complexity and sophistication, understanding how they work and make decisions is becoming increasingly challenging. This challenge has prompted a surge of research into methods and tools that enhance the transparency and explainability of these models. So many such methods are now available that it has become unclear which of them are suited to which applications.
This workshop will explore the diverse applications of explainable artificial intelligence (XAI) methods across areas including, but not limited to, Healthcare, Natural Science, Auditing, Fairness, Natural Language Processing, and Law. By examining the use of XAI in these fields, the workshop will give attendees insight into the latest trends and challenges within each domain.
Workshop discussions will delve into the latest advances in applied XAI and devise ways to move the field forward. The objective is to foster an open and productive dialogue that deepens our understanding of the opportunities and constraints of XAI and its impact across domains, and to identify strategies for extending the frontiers of this rapidly evolving area. Specifically, the workshop aims to:
Topics covered
- Examine various applications of XAI from the past and present
- Discuss potential applications of XAI in the future
- Identify the obstacles that hinder progress in each use case and how we can overcome them
- Explore the necessary methodological requirements for applying XAI
- Identify new domains where XAI can be useful in the future
- Understand the inherent limitations of XAI
- Explore whether insights gained from one use case can be transferred to other use cases
The workshop will provide a valuable learning opportunity for researchers, practitioners, and students seeking to apply XAI in their work, featuring presentations by experts in the field, interactive discussions, and insights into the latest trends and future directions in applied XAI. By bringing together a diverse group of participants with a shared interest in XAI, the workshop aims to foster collaboration, innovation, and knowledge sharing in this rapidly growing field.
Accepted Papers
A full list of accepted papers can be found here.
Dates
Note: all deadlines are in Anywhere on Earth (AoE).
Paper Submission
Submission deadline - ~~September 22~~ October 2 (23:59 AoE), 2023
Author notification - ~~October 20~~ October 27 (23:59 AoE), 2023
Camera ready deadline - November 22 (23:59 AoE), 2023
Workshop Event
Date: December 16, 2023
Schedule
| Time | Event | Additional Information |
|---|---|---|
| 8:50 - 9:00 AM | Opening Remarks | |
| 9:00 - 9:30 AM | IT1: Sameer Singh | Title: Explanations: Let's talk about them! |
| 9:30 - 10:00 AM | IT2: Ulrike von Luxburg ※ | Title: Theoretical guarantees for explainable AI? |
| 10:00 - 10:30 AM | Coffee Break & Interactive Games | |
| 10:30 - 11:00 AM | IT3: Su-In Lee | Title: Explainable AI, where we are and how to move forward for health AI. |
| 11:00 - 12:00 PM | Panel Discussion | Moderator: Kamalika Chaudhuri. Panelists: Shai Ben-David, Julius Adebayo, Sameer Singh, Su-In Lee, Leilani Gilpin. |
| 12:00 - 1:30 PM | Lunch | |
| 1:30 - 2:00 PM | IT4: Julius Adebayo | Title: Confronting the Faithfulness Challenge with Post-hoc Model Explanations. |
| 2:00 - 3:00 PM | Poster Session 1 | |
| 3:00 - 3:30 PM | Coffee Break & Interactive Games | |
| 3:30 - 4:00 PM | IT5: Leilani Gilpin | Title: Explaining Self-Driving Cars for Accountable Autonomy. |
| 4:00 - 4:30 PM | Contributed Talks | |
| 4:30 - 5:30 PM | Poster Session 2 | |
※ Indicates virtual participation
All times are in Central Standard Time
Speakers
Julius Adebayo
Postdoctoral Fellow
Prescient Design
Leilani Gilpin
Assistant Professor
UC Santa Cruz
Su-In Lee
Professor
Paul G. Allen School of Computer Science & Engineering, University of Washington
Ulrike von Luxburg
Professor
University of Tübingen
Sameer Singh
Associate Professor
University of California, Irvine
Organisers
Chhavi Yadav
PhD student at UCSD; her interests lie in XAI, secure verification, auditing, and the societal impacts of deep generative models
Michal Moshkovitz
Machine Learning Research Scientist at Bosch Research; she focuses on developing the foundations of explainable machine learning
Bingqing Chen
Machine Learning Research Scientist at Bosch Research; her research lies at the intersection of machine learning and energy systems
Nave Frost
Research Scientist at eBay Research; his research focuses on providing explanations for data science applications
Suraj Srinivas
Postdoctoral research fellow at Harvard University; his research focuses on developing the foundations of interpretable machine learning
Valentyn Boreiko
PhD student at the University of Tübingen and sabbatical student at Bosch Research; his research focuses on developing interpretability techniques for vision classifiers
Hima Lakkaraju
Assistant Professor at Harvard University who focuses on the algorithmic and applied aspects of explainability, fairness, robustness, and privacy of machine learning models
Zico Kolter
Associate Professor at CMU and Chief Scientist at Bosch Research; his work spans the intersection of machine learning and optimization
Dotan Di Castro
Research Scientist and Lab Manager at Bosch Research; his research focuses on reinforcement learning and computer vision
Kamalika Chaudhuri
Associate Professor at UCSD and Research Scientist at Meta AI; her research interests lie in the foundations of trustworthy machine learning
Contact information
- Email: appliedXAI.neurips2023 [AT] gmail.com
- Twitter: @XAI_in_Action