CFP: ECAI 2024 - Workshop "Multimodal, Affective and Interactive eXplainable Artificial Intelligence" (MAI-XAI 24)

* Apologies for cross-postings *

Dear Colleagues,

We are organizing the 1st Workshop on "Multimodal, Affective and Interactive eXplainable AI" (MAI-XAI 24), to be held as part of the ECAI 2024 conference on October 19-20, 2024, in Santiago de Compostela, Spain.

All details about the workshop are at:
https://sites.google.com/view/mai-xai24/home

The aim of this workshop is to offer researchers and practitioners an opportunity to identify promising new research directions in XAI, and to provide a forum for dissemination and discussion, with special attention to multimodal, affective and interactive XAI.

The workshop is made up of three tracks:

Multimodal XAI is concerned with building and validating multimodal resources that contribute to the generation and evaluation of effective multimodal explanations. Attendees are encouraged to present case studies of real-world applications where XAI has been successfully applied, emphasizing the practical benefits and the challenges encountered.
Affective XAI addresses challenges, opportunities and solutions for applying explainable machine learning algorithms in affective computing (also known as artificial emotional intelligence), i.e., machine systems that sense, recognize, respond to and influence emotions.
Interactive XAI asks how to achieve, improve and measure users’ understanding and their ability to operate effectively at the center of the XAI process, as a basis for dynamically and interactively adapting explanations to users’ needs and level of understanding.

The topics of interest include (but are not limited to):

MULTIMODAL XAI
XAI for Multi-modal Data Retrieval, Collection, Augmentation, Generation and Validation: from data explainability to understanding and mitigating data bias.
XAI for Human-Computer Interaction (HCI): from Explanatory User Interfaces to interactive and interpretable machine learning approaches with human-in-the-loop.
Augmented Reality for Multi-modal XAI.
XAI approaches leveraging application-specific domain knowledge: from concepts to large knowledge repositories (ontologies) and corpora.
Design and Validation of Multi-modal explainers: from endowing explainable models with multi-modal explanation interfaces to measuring model explainability and evaluating quality of XAI systems.
Quantifying XAI: defining metrics and methodologies to assess the effectiveness of explanations in enhancing user understanding and trust.
Large knowledge bases and graphs that can be used for Multi-modal Explanation generation.
Large language models and their generative power for Multi-modal XAI.
Proof-of-concepts and demonstrators of how to integrate effective and efficient XAI into real-world human decision-making processes.
Ethical, Legal, Socio-Economic, and Cultural (ELSEC) Considerations in XAI: Examining ethical implications surrounding the use of high-risk AI applications, including potential biases and the responsible deployment of sustainable “green” AI in sensitive domains.

AFFECTIVE XAI
Explainable Affective Computing in Healthcare, Psychology and Physiology
Explainable Affective Computing in Education, Entertainment and Gaming
Privacy, Fairness and Ethical considerations in Affective Computing and Explainable AI applied in Affective Computing
Bias in Affective Computing and Explainable AI applied in Affective Computing
Multimodal (textual, visual, vocal, physiological) Emotion Recognition Systems
User environments for the design of systems to better detect and classify affect
Sentiment Analysis and Explainability
Social Robots and Explainability
Emotion Aware Recommender Systems
Accuracy in Emotion Recognition and Explainable AI applied in Affective Computing
Affective Design
Machine learning using biometric data to classify biosignals
Virtual Reality in Affective Computing
Human-computer interaction (HCI) and human in the loop (HITL) approaches in Affective Computing

INTERACTIVE XAI
Dialogue-based approaches to XAI
Use of multiple modalities in XAI systems
Approaches to dynamically adapt explainability in interaction with a user
XAI approaches that use a model of the partner to adapt explanations
Methods to measure and evaluate the understanding of the users of a model
Methods to measure and evaluate the ability to use models effectively in downstream tasks
Interactive methods by which a system and a user can negotiate what is to be explained
Modelling the social functions and aspects of an explanation
Methods to identify a user’s information and explainability needs

Papers that have been submitted to the ECAI main track and are under review for the conference cannot be submitted to the workshop. If a paper is rejected from ECAI, the authors may request, by 11 July 2024 and via one of the emails listed under Contact details, that it be considered for the workshop. Notifications for those submissions will be sent by 18 July 2024.

Accepted manuscripts for the MAI-XAI 24 workshop will be published in CEUR Workshop Proceedings (CEUR-WS.org). Papers must be written in English and prepared for double-blind review using the CEUR-WS template <https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org/>. The following types of submissions are allowed:
Regular/Long Papers (10-15 pages): describing substantial/mature work
Short Papers (5-9 pages): describing work in progress, a demonstration, a system, etc.

Submissions are made through the Easychair website (https://easychair.org/my/conference?conf=maixai24). 

Registering an abstract of your paper (around 100-300 words in plain text) is required in advance of the paper submission deadline; you will be asked to provide additional information (such as keywords) at that time.

The workshop is planned as an in-person event. Each accepted paper will be assigned either an oral presentation slot or a combined poster/spotlight presentation slot.

Important Dates (all times are 23:59 Anywhere on Earth, UTC-12):
Abstract submission: May 8th, 2024
Paper submission: May 15th, 2024
Acceptance/rejection notification: July 1st, 2024
Camera-ready paper submission: July 26th, 2024
Conference dates: October 19-24, 2024 (the MAI-XAI 24 workshop will take place on October 19-20, 2024)

Organizing Committee:
Jose M. Alonso-Moral, Universidade de Santiago de Compostela, CiTIUS-USC, Spain 
Zach Anthis, Neapolis University Paphos, Cyprus
Rafael Berlanga, Universitat Jaume I (UJI), Spain
Alejandro Catalá, Universidade de Santiago de Compostela, CiTIUS-USC, Spain
Philipp Cimiano, Bielefeld University, Germany
Peter Flach, University of Bristol, UK
Eyke Hüllermeier, LMU Munich, Germany
Tim Miller, University of Queensland, Australia
Oana Mitruț, National University of Science and Technology POLITEHNICA of Bucharest, Romania
Gabriela Moise, Petroleum-Gas University of Ploiesti, Romania
Alin Moldoveanu, National University of Science and Technology POLITEHNICA of Bucharest, Romania
Florica Moldoveanu, National University of Science and Technology POLITEHNICA of Bucharest, Romania
Kacper Sokol, ETH Zurich, Switzerland
Aitor Soroa, Universidad del País Vasco, HiTZ, Spain

Contact details: 
Jose M. Alonso-Moral, https://citius.gal/es/team/jose-maria-alonso-moral  
Philipp Cimiano, https://ekvv.uni-bielefeld.de/pers_publ/publ/PersonDetail.jsp?personId=15020699&lang=EN 
Zach Anthis, https://www.nup.ac.cy/faculty/zach-anthis/ 
Oana Mitruț, oana.balan@upb.ro <mailto:oana.balan@upb.ro> 


Prof. Dr. Philipp Cimiano
AG Semantic Computing
Coordinator of the Cognitive Interaction Technology Center (CITEC)
Co-Director of the Joint Artificial Intelligence Institute (JAII)
Universität Bielefeld

Tel: +49 521 106 12249
Fax: +49 521 106 6560
Mail: cimiano@cit-ec.uni-bielefeld.de
Personal Zoom Room: https://uni-bielefeld.zoom-x.de/my/pcimiano

Office CITEC-2.307
Universitätsstr. 21-25
33615 Bielefeld, NRW
Germany

Received on Monday, 8 April 2024 08:20:42 UTC