2nd CFP - ISWC Semantic Web Challenge - SeMantic Answer Type and Relation Prediction (SMART2021)

SeMantic Answer Type and Relation Prediction Task (SMART 2021)

ISWC Semantic Web Challenge in conjunction with ISWC 2021
<https://iswc2021.semanticweb.org/>

*********************************************************************************

Challenge website: https://smart-task.github.io/2021/

Datasets: https://github.com/smart-task/smart-2021-dataset

Slack: https://smart-task-iswc.slack.com/
<https://join.slack.com/t/smart-task-iswc/shared_invite/zt-vqn3vmc3-qIDSDru_P7~E7OzzcwNAVQ>


Conference date and location: 24 - 28 October 2021 (Online - Virtual)

Submission deadline: October 4, 2021 (Extended)

*********************************************************************************

UPDATES:

- Test data for both tasks have been released.

- The submission deadline is extended to October 4, 2021.

- We provide training data specific to each task for developing the
corresponding individual modules. However, if you already have a KBQA
system that produces SPARQL queries from text in an end-to-end manner, we
also provide scripts for converting those queries into the expected output
format of these tasks.
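As a rough illustration of what such a conversion involves, the sketch
below naively pulls prefixed predicates out of a SPARQL query string with a
regular expression. This is only a minimal sketch, not the organizers'
official script; the scripts in the dataset repository handle the actual
conversion.

```python
import re

def extract_relations(sparql: str) -> list[str]:
    """Naively pull prefixed predicates (e.g. dbo:starring) out of a
    SPARQL query string. Illustrative only -- the official conversion
    scripts in the dataset repository are more robust."""
    # Match prefixed names such as dbo:director or wdt:P57.
    candidates = re.findall(r'\b(?:dbo|dbp|wdt):[A-Za-z_]\w*', sparql)
    # Preserve order while dropping duplicates.
    seen, relations = set(), []
    for name in candidates:
        if name not in seen:
            seen.add(name)
            relations.append(name)
    return relations

query = """SELECT ?actor WHERE {
  ?film dbo:director dbr:William_Shatner ;
        dbo:starring dbr:William_Shatner ;
        dbo:starring ?actor . }"""
print(extract_relations(query))  # ['dbo:director', 'dbo:starring']
```

A real converter would parse the query properly (e.g. with rdflib) rather
than rely on regular expressions, which miss full IRIs and property paths.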

Brief Background

Knowledge Base Question Answering (KBQA) is a popular task in the field of
Natural Language Processing and Information Retrieval, in which the goal is
to answer a natural language question using the facts in a Knowledge Base.
KBQA can involve several subtasks such as entity linking, relation linking,
and answer type prediction. In the SMART 2021 Semantic Web Challenge, we
focus on two subtasks in KBQA.

Task Descriptions

This year, in the second iteration of the SMART challenge, we have two
independent tasks (for two KBs, DBpedia and Wikidata):

*Task 1 - Answer Type Prediction*: Given a question in natural language,
the task is to predict the type of the answer from a target ontology.

- Which languages were influenced by Perl? --> dbo:ProgrammingLanguage or
wd:Q9143

- How many employees does IBM have? --> number

*Task 2 - Relation Prediction*: Given a question in natural language, the
task is to predict the relations needed to extract the correct answer from
the KB.

- Who are the actors starring in movies directed by and starring William
Shatner? --> [dbo:starring, dbo:director] or [cast member (P161),
director (P57)]

- What games can be played in schools founded by Fr. Orlando? -->
[dbo:sport, dbo:foundedBy] or [founded by (P112), sport (P641)]
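For concreteness, a submission file for these tasks might pair each
question with its predicted types or relations as JSON, along the lines of
the sketch below. The field names here are illustrative assumptions; the
authoritative schema and evaluation scripts are in the dataset repository
(https://github.com/smart-task/smart-2021-dataset).

```python
import json

# Hypothetical predictions using the examples above; field names are
# illustrative -- consult the dataset repository for the exact schema.
predictions = [
    {   # Task 1: answer type prediction (DBpedia target ontology)
        "id": "q1",
        "question": "Which languages were influenced by Perl?",
        "category": "resource",
        "type": ["dbo:ProgrammingLanguage"],
    },
    {   # Task 2: relation prediction
        "id": "q2",
        "question": "Who are the actors starring in movies directed by "
                    "and starring William Shatner?",
        "relations": ["dbo:starring", "dbo:director"],
    },
]

with open("submission.json", "w") as f:
    json.dump(predictions, f, indent=2)
```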

Datasets

We have created four datasets, one per task/KB combination:
SMART-2021-AT-DBpedia (41K train / 10K test), SMART-2021-AT-Wikidata (54K
train / 11K test), SMART-2021-RL-DBpedia (34K train / 8K test), and
SMART-2021-RL-Wikidata (30K train / 6K test). Participants can compete in
any or all of these four.

Submissions and publication

Participants can submit their system's output for the test data together
with a system paper. Papers will be peer-reviewed and published in a CEUR
volume, similar to last year's: http://ceur-ws.org/Vol-2774/.

Please join the Slack workspace and contact the organizers with any
inquiries about the tasks and datasets. Thank you, and we look forward to
your submissions!

Organizers

Nandana Mihindukulasooriya, IBM Research AI

Mohnish Dubey, University of Bonn, Germany

Alfio Gliozzo, IBM Research AI

Jens Lehmann, University of Bonn, Germany

Axel-Cyrille Ngonga Ngomo, Paderborn University, Germany

Ricardo Usbeck, University of Hamburg, Germany

Gaetano Rossiello, IBM Research AI

Uttam Kumar, University of Bonn, Germany

Received on Wednesday, 15 September 2021 11:43:30 UTC