CFP - SemEx 2019 (Abstract deadline: June 21): First Workshop on Semantic Explainability @ ISWC 2019

This is a kind reminder: the abstract deadline for the Workshop on
Semantic Explainability is approaching (June 21).

On 31.05.19 14:29, Basil Ell wrote:
>
> ------------------------------------------------------------------------------------------------------------------------------ 
>
> Call For Research Papers
> ------------------------------------------------------------------------------------------------------------------------------ 
>
>
> 1st Workshop on Semantic Explainability (SemEx 2019) - 
> http://www.semantic-explainability.com/
> co-located with The 18th International Semantic Web Conference (ISWC 
> 2019)
> October 26–30, 2019, The University of Auckland, New Zealand
>
> Dates
>
> – Abstract: June 21, 2019
> – Submission: June 28, 2019
> – Notification: July 24, 2019
> – Camera-ready: August 16, 2019
> – Workshop: October 26 or 27, 2019
>
> We are very pleased to announce that we'll have an invited talk given
> by Dr. Freddy Lecue.
> Dr. Freddy Lecue is the Chief Artificial Intelligence (AI) Scientist
> at CortAIx (Centre of Research & Technology in Artificial Intelligence
> eXpertise) at Thales in Montreal, Canada. He is also a research
> associate at INRIA, in the WIMMICS team, Sophia Antipolis, France. His
> research team works at the frontier of learning and reasoning
> systems, with a strong interest in Explainable AI, i.e., AI systems,
> models, and results that can be explained to human and business
> experts; cf. his recent research and industry presentations.
>
> ------------------------------------------------------------------------------------------------------------------------------ 
>
> Overview
> ------------------------------------------------------------------------------------------------------------------------------ 
>
> In recent years, the explainability of complex systems such as 
> decision support systems, automatic decision systems, machine 
> learning-based/trained systems, and artificial intelligence in general 
> has been expressed not only as a desired property, but also as a 
> property that is required by law. For example, the General Data 
> Protection Regulation's (GDPR) "right to explanation" demands that the
> results of ML/AI-based decisions be explained. The explainability of
> complex systems, especially of ML-based and AI-based systems, becomes
> increasingly relevant as more and more aspects of our lives are
> influenced by these systems' actions and decisions.
>
> Several workshops address the problem of explainable AI. However, none 
> of these workshops has a focus on semantic technologies such as 
> ontologies and reasoning. We believe that semantic technologies and 
> explainability intersect in two ways. First, systems that are based on
> semantic technologies must be explainable like all other AI systems.
> Second, semantic technologies seem predestined to help make systems
> that are not themselves based on semantic technologies explainable.
>
> Turning a system that already makes use of ontologies into an
> explainable system could be supported by those ontologies, as ideally
> they capture some aspects of the users' conceptualizations of the
> problem domain. However, how can such systems make use of these
> ontologies to generate explanations of the actions they performed and
> the decisions they took? Which criteria must an ontology fulfill so
> that it supports the generation of explanations? Do we have adequate
> ontologies that allow us to express explanations and to model and
> reason about what is understandable or comprehensible for a certain
> user? What kind of lexicographic information is necessary to generate
> linguistic utterances? How can a system's understandability be
> evaluated? How should ontologies be designed for system
> understandability? What are models of human-machine interaction in
> which the user can interact with the system until he or she
> understands a certain action or decision? How can explanatory
> components be reused with systems they were not designed for?
>
> Turning systems that are not yet based on ontologies but on
> sub-symbolic representations/distributed semantics, such as deep
> learning-based approaches, into explainable systems might likewise be
> supported by the use of ontologies. Some efforts in this field have
> been referred to as neural-symbolic integration.
>
> This workshop aims to bring together international experts interested
> in the application of semantic technologies to the explainability of
> artificial intelligence/machine learning, in order to stimulate
> research, engineering, and evaluation towards making machine decisions
> transparent, re-traceable, comprehensible, interpretable, explainable,
> and reproducible. Semantic technologies have the potential to play an
> important role in the field of explainability, since they lend
> themselves very well to modelling users' conceptualizations of the
> problem domain. However, this field has so far been only rarely
> explored.
>
> ------------------------------------------------------------------------------------------------------------------------------- 
>
> Topics of Interest
> ------------------------------------------------------------------------------------------------------------------------------- 
>
>
> Topics of interest include, but are not limited to:
>
> – Explainability of machine learning models based on semantics/ontologies
> – Exploiting semantics/ontologies for explainable/traceable 
> recommendations
> – Explanations based on semantics/ontologies in the context of 
> decision making/decision support systems
> – Semantic user modelling for personalized explanations
> – Design criteria for explainability-supporting ontologies
> – Dialogue management and natural language generation based on 
> semantics/ontologies
> – Visual explanations based on semantics/ontologies
> – Multi-modal explanations using semantics/ontologies
> – Interactive/incremental explanations based on semantics/ontologies
> – Ontological modeling of explanations and user profiles
> – Real-world applications and use cases of semantics/ontologies for
> explanation generation
> – Approaches to human expertise/knowledge capture for use in
> semantic/ontology-based explanation generation
>
> ------------------------------------------------------------------------------------------------------------------------------ 
>
> Author Instructions
> ------------------------------------------------------------------------------------------------------------------------------ 
>
>
> We invite research papers and demonstration papers, either in long (16 
> pages) or short (8 pages) format.
>
> All papers have to be submitted electronically via EasyChair 
> (https://easychair.org/conferences/?conf=semex2019).
>
> All research submissions must be in English and no longer than 16
> pages for long papers or 8 pages for short papers (including
> references).
>
> Submissions must be in PDF, formatted in the style of the Springer 
> Publications format for Lecture Notes in Computer Science (LNCS). For 
> details on the LNCS style, see Springer’s Author Instructions: 
> http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0
>
> Accepted papers will be published as CEUR workshop proceedings. At 
> least one author of each accepted paper must register for the workshop 
> and present the paper there.
>
> ------------------------------------------------------------------------------------------------------------------------------ 
>
> Workshop Organizers
> ------------------------------------------------------------------------------------------------------------------------------ 
>
>
> – Philipp Cimiano – Bielefeld University
> – Basil Ell – Bielefeld University, Oslo University
> – Agnieszka Lawrynowicz – Poznan University of Technology
> – Laura Moss – University of Glasgow
> – Axel-Cyrille Ngonga Ngomo – Paderborn University
>
> If you have any questions, do not hesitate to contact us.
>
> Basil Ell on behalf of the SemEx 2019 chairs
>
-- 

Dr. Basil Ell
AG Semantic Computing
Bielefeld University
Bielefeld, Germany
CITEC, 2.311
+49 521 106 2951
