Final CfP: 2nd Workshop on Semantic Explainability (SemEx 2020), co-located with ISWC 2020

FINAL CALL FOR PAPERS

2nd Workshop on Semantic Explainability (SemEx 2020)

Co-located with the 19th International Semantic Web Conference (ISWC 2020)

** ONE WEEK LEFT **

** Although it is not necessary for authors to register abstracts in 
advance, doing so helps us organize the reviews. Therefore, please go 
ahead and submit your abstracts soon. **


Venue: Virtual
Date: November 2 or 3, 2020
Website: http://www.semantic-explainability.com/

******************************************************

IMPORTANT DATES

Paper Submission Deadline: August 10, 2020
Notification of Acceptance: September 11, 2020
Deadline Camera-Ready: September 21, 2020
Workshop: November 2 or 3, 2020

******************************************************

OVERVIEW

In recent years, the explainability of complex systems such as decision 
support systems, automatic decision systems, machine 
learning-based/trained systems, and artificial intelligence in general 
has been expressed not only as a desired property, but also as a 
property that is required by law. For example, the General Data 
Protection Regulation’s (GDPR) “right to explanation” demands that the 
results of ML/AI-based decisions be explained. The explainability of 
complex systems, especially of ML-based and AI-based systems, becomes 
increasingly relevant as more and more aspects of our lives are 
influenced by these systems’ actions and decisions.

Several workshops address the problem of explainable AI. However, none 
of these workshops has a focus on semantic technologies such as 
ontologies and reasoning. We believe that semantic technologies and 
explainability coalesce in two ways. First, systems that are based on 
semantic technologies must be explainable like all other AI systems. 
Second, semantic technologies seem predestined to help make systems 
that are not themselves based on semantic technologies explainable.

Turning a system that already makes use of ontologies into an 
explainable system could be supported by those ontologies, as ideally 
they capture some aspects of the users’ conceptualizations of a problem 
domain. However, how can such systems make use of these ontologies to 
generate explanations of the actions they performed and the decisions 
they took? Which criteria must an ontology fulfill so that it supports 
the generation of explanations? Do we have adequate ontologies that 
allow us to express explanations and to model and reason about what is 
understandable or comprehensible for a certain user? What kind of 
lexicographic information is necessary to generate linguistic 
utterances? How can a system’s understandability be evaluated? How can 
ontologies be designed for system understandability? What are models of 
human-machine interaction that let users interact with the system until 
they understand a certain action or decision? How can explanatory 
components be reused in systems they were not designed for?

Systems that are not yet based on ontologies but on sub-symbolic 
representations/distributed semantics, such as deep learning-based 
approaches, might also be turned into explainable systems with the 
support of ontologies. Some efforts in this field have been referred to 
as neural-symbolic integration.

This workshop, the second workshop on semantic explainability, aims to 
bring together international experts interested in the application of 
semantic technologies to the explainability of artificial 
intelligence/machine learning, in order to stimulate research, 
engineering, and evaluation – towards making machine decisions 
transparent, re-traceable, comprehensible, interpretable, explainable, 
and reproducible. Semantic technologies have the potential to play an 
important role in the field of explainability, since they lend 
themselves very well to the task: they make it possible to model users’ 
conceptualizations of the problem domain. However, this field has so 
far been only rarely explored.

******************************************************

TOPICS OF INTEREST

Topics of interest include, but are not limited to:

– Explainability of machine learning models based on semantics/ontologies
– Exploiting semantics/ontologies for explainable/traceable recommendations
– Explanations based on semantics/ontologies in the context of decision 
making/decision support systems
– Semantic user modelling for personalized explanations
– Design criteria for explainability-supporting ontologies
– Dialogue management and natural language generation based on 
semantics/ontologies
– Visual explanations based on semantics/ontologies
– Multi-modal explanations using semantics/ontologies
– Interactive/incremental explanations based on semantics/ontologies
– Ontological modeling of explanations and user profiles
– Real-world applications and use cases of semantics/ontologies for 
explanation generation
– Approaches to capturing human expertise/knowledge for use in 
semantic/ontology-based explanation generation

******************************************************

AUTHOR INSTRUCTIONS

We invite research papers and demonstration papers, either in long (16 
pages) or short (8 pages) format.

All papers have to be submitted electronically via EasyChair:
https://easychair.org/conferences/?conf=semex2020

All submissions must be in English and no longer than 16 pages for long 
papers or 8 pages for short papers (including references).

Submissions must be in PDF, formatted in the style of the Springer 
Publications format for Lecture Notes in Computer Science (LNCS). For 
details on the LNCS style, see Springer’s Author Instructions: 
http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0

Accepted papers will be published as CEUR workshop proceedings. At least 
one author of each accepted paper must register for the workshop and 
present the paper there.

******************************************************

WORKSHOP ORGANIZERS

– Philipp Cimiano – Bielefeld University
– Basil Ell – Bielefeld University, Oslo University
– Agnieszka Lawrynowicz – Poznan University of Technology
– Laura Moss – University of Glasgow
– Axel-Cyrille Ngonga Ngomo – Paderborn University

******************************************************

PROGRAM COMMITTEE

Ahmet Soylu – Norwegian University of Science and Technology / SINTEF 
Digital, Norway
Andreas Harth – University of Erlangen–Nuremberg, Germany
Anisa Rula – University of Milano – Bicocca, Italy
Axel-Cyrille Ngonga Ngomo – Paderborn University, Germany
Axel Polleres – Wirtschaftsuniversität Wien, Austria
Basil Ell – Bielefeld University, Germany and University of Oslo, Norway
Benno Stein – Bauhaus-Universität Weimar, Germany
Christos Dimitrakakis – Chalmers University of Technology, Sweden
Ernesto Jimenez-Ruiz – The Alan Turing Institute, UK
Francesco Osborne – The Open University, UK
Gong Cheng – Nanjing University, China
Heiko Paulheim – University of Mannheim, Germany
Heiner Stuckenschmidt – University of Mannheim, Germany
Jürgen Ziegler – University of Duisburg-Essen, Germany
Mariano Rico – Universidad Politécnica de Madrid, Spain
Maribel Acosta – Karlsruhe Institute of Technology, Germany
Martin G. Skjæveland – University of Oslo, Norway
Michael Kohlhase – Friedrich-Alexander-Universität Erlangen-Nürnberg, 
Germany
Pascal Hitzler – Wright State University, USA
Philipp Cimiano – Bielefeld University, Germany
Ralf Schenkel – Trier University, Germany
Serena Villata – Université Côte d’Azur, CNRS, Inria, I3S, France
Stefan Schlobach – Vrije Universiteit Amsterdam, The Netherlands
Steffen Staab – University of Koblenz-Landau, Germany

-- 

Dr. Basil Ell
AG Semantic Computing
Bielefeld University
Bielefeld, Germany
CITEC, 2.311
+49 521 106 2951
