XAI in Action: Past, Present, and Future Applications

NeurIPS 2023 Workshop


About

As AI models continue to advance in complexity and sophistication, understanding how they work and make decisions is becoming increasingly challenging. This challenge has prompted a surge of research into methods and tools that enhance the transparency and explainability of these models. So many such methods are now available that it has become unclear which methods are best suited to which applications.

This workshop will specifically explore the diverse applications of explainable artificial intelligence (XAI) methods in various areas, including, but not limited to, healthcare, the natural sciences, auditing, fairness, natural language processing, and law. By examining the use of XAI in these fields, the workshop will provide attendees with insights into the latest trends and challenges within the different domains.

The workshop discussions aim to delve into the latest advancements in applied XAI and to devise ways to further the field's progress. The objective is to foster an open and productive dialogue that enhances our understanding of the opportunities and constraints of XAI and its impact across different domains, and to identify strategies that extend the frontiers of applied XAI in this rapidly evolving area. Specifically, the workshop aims to:

Topics covered

  • Examine various applications of XAI from the past and present
  • Discuss potential applications of XAI in the future
  • Identify the obstacles that hinder progress in each use case and how we can overcome them
  • Explore the necessary methodological requirements for applying XAI
  • Identify new domains where XAI can be useful in the future
  • Understand the inherent limitations of XAI
  • Explore whether insights gained from one use case can be transferred to other use cases

The workshop will provide a valuable learning opportunity for researchers, practitioners, and students seeking to apply XAI in their work: it will feature presentations by experts in the field, interactive discussions, and insights into the latest trends and future directions in applied XAI. By bringing together a diverse group of participants with a shared interest in XAI, the workshop aims to foster collaboration, innovation, and knowledge sharing in this rapidly growing field.

Dates

Note: all deadlines are in the Anywhere on Earth (AoE) time zone.

Paper Submission

Submission deadline (extended) - October 2 (23:59 GMT), 2023
Author notification (extended) - October 25 (23:59 GMT), 2023
Camera ready deadline - November 10 (23:59 GMT), 2023

Workshop Event

Date: December 16, 2023

Schedule

To be announced.

Speakers

Organisers

Chhavi Yadav

PhD student at UCSD; her interests lie in XAI, secure verification, auditing, and the societal impacts of deep generative models.

Michal Moshkovitz

Machine Learning Research Scientist at Bosch Research; she focuses on developing the foundations of explainable machine learning.

Bingqing Chen

Machine Learning Research Scientist at Bosch Research; her research lies at the intersection of machine learning and energy systems.

Nave Frost

Research Scientist at eBay Research; his research interests focus on providing explanations for data science applications.

Suraj Srinivas

Postdoctoral research fellow at Harvard University; his research focuses on developing the foundations of interpretable machine learning.

Valentyn Boreiko

PhD student at the University of Tübingen and a sabbatical student at Bosch Research; his research focuses on developing interpretability techniques for vision classifiers.

Hima Lakkaraju

Assistant Professor at Harvard University; she focuses on the algorithmic and applied aspects of explainability, fairness, robustness, and privacy of machine learning models.

Zico Kolter

Associate Professor at CMU and Chief Scientist at Bosch Research; his work spans the intersection of machine learning and optimization.

Dotan Di Castro

Research scientist and lab manager at Bosch Research; his research focuses on reinforcement learning and computer vision.

Kamalika Chaudhuri

Associate Professor at UCSD and Research Scientist at Meta AI; her research interests lie in the foundations of trustworthy machine learning.

Contact information

  • Email: appliedXAI.neurips2023 [AT] gmail.com
  • Twitter: @XAI_in_Action