Final CFP: Semantic Reasoning Evaluation Challenge (SemREC) at ISWC 2021

Dear all,

TL;DR
************************
We are organizing a challenge centered on reasoning. We invite submissions
in one or more of the following categories:
1) An ontology developed for a real-world application that has proved
challenging for existing reasoners.
2) A traditional description logic reasoner that was developed, or
substantially improved, in the last few years.
3) A neuro-symbolic reasoner that approximates entailments or predicts
missing axioms.

*Extended Deadline: 22 July 2021, 23:59 (AoE)*

If you have any questions, please contact me.

Further details are on the website: https://semrec.github.io/.

************************
Longer version
*************************

Despite the development of several ontology reasoning optimizations,
traditional methods either do not scale well or cover only a subset of the
OWL 2 language constructs. As an alternative, neuro-symbolic approaches are
gaining significant attention. However, the existing methods still cannot
handle very expressive ontology languages. To find and fix the performance
bottlenecks of reasoners, we ideally need several real-world ontologies
that span a broad spectrum of size and expressivity. Such ontologies are,
however, rarely available. One potential reason ontology developers do not
build ontologies that vary in size and expressivity is precisely the
performance bottleneck of the reasoners. This challenge includes three
tasks that aim to tackle this chicken-and-egg problem.

Task 1 - Submit a real-world ontology that is challenging in terms of the
reasoning time or memory consumed during reasoning. We will evaluate the
submitted ontologies based on the time taken and the memory consumed for a
reasoning task such as classification.
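
To give a rough idea of what such a measurement can look like, the sketch
below times ontology classification with owlready2 and its bundled HermiT
reasoner. The file path is a placeholder, and the choice of toolkit is only
an assumption for illustration; the official evaluation setup may differ.

import time
from owlready2 import get_ontology, sync_reasoner

# Hypothetical path to a candidate ontology; replace with your own file.
onto = get_ontology("file:///path/to/candidate.owl").load()

start = time.perf_counter()
with onto:
    sync_reasoner()  # classifies the ontology with HermiT (bundled with owlready2)
elapsed = time.perf_counter() - start
print(f"Classification took {elapsed:.2f} s")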

Task 2 - Submit a description logic reasoner that uses traditional
techniques such as tableau algorithms or saturation rules. We will evaluate
the performance and scalability of the submitted systems on the challenge
datasets, based on the time taken and the memory consumed for the ontology
classification task. This will provide insight into the progress in
reasoner development since the last reasoner evaluation challenge (ORE
2015).

Task 3 - Submit an ontology/RDFS reasoner that uses neuro-symbolic
techniques for reasoning and optimization. We will evaluate two types of
neuro-symbolic systems: (a) systems that approximate entailment reasoning
to address the time-complexity problem, and (b) systems that predict
missing, plausible axioms for ontology completion. We will evaluate the
submitted systems on the test datasets based on the time taken, memory
consumed, precision, and recall.

*Submission Details*

Participants are requested to submit a manuscript describing their entry.

For Task 1, we expect a detailed description of the ontology along with an
analysis of its reasoning performance, the workarounds, if any, that were
used to make the ontology less challenging (for example, dropping a few
axioms, redesigning the ontology, etc.), and the (potential) applications
in which the ontology could be used.

For Tasks 2 and 3, we expect a detailed description of the system,
including an evaluation of the system on the provided datasets.


   - For Task 2, a link to the code repository in the paper is sufficient.
   Please make sure there are clear instructions for building and running
   the code. In addition, or in cases where it is not possible to share the
   code, it would be very helpful if the binary/executable is also made
   available to us (as supplementary material or as part of the code
   repository). We plan to evaluate the submitted systems on a Linux-based
   CPU server.
   - For Task 3, we provide an eval.py
   <https://github.com/semrec/semrec.github.io> file for the subsumption
   task. It is provided only to give an idea of the kind of submission we
   expect from the participants. Participants are requested to make the
   changes mentioned in the file so that it evaluates their embeddings for
   the supported reasoning task (e.g., class subsumption, class membership,
   etc.); an illustrative sketch follows this list. We require the class
   embeddings of your model, along with a readme describing the changes
   made to the evaluation file and how to use it. We plan to evaluate the
   submitted systems on a Linux-based GPU server.
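
For illustration only, here is a minimal sketch of the kind of subsumption
evaluation we have in mind. It assumes the class embeddings are given as a
mapping from class IRI to a vector and that a candidate subsumption is
scored by cosine similarity against a threshold; the embedding format, the
scoring function, and all names below are assumptions made for this
example, and the provided eval.py remains the authoritative reference.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def evaluate_subsumption(embeddings, test_pairs, threshold=0.8):
    # embeddings: dict mapping class IRI -> numpy vector
    # test_pairs: list of (sub, sup, label), label 1 if "sub SubClassOf sup" holds
    tp = fp = fn = 0
    for sub, sup, label in test_pairs:
        predicted = cosine(embeddings[sub], embeddings[sup]) >= threshold
        if predicted and label:
            tp += 1
        elif predicted and not label:
            fp += 1
        elif not predicted and label:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example, only to show the expected call pattern.
emb = {"ex:Dog": np.array([1.0, 0.0]),
       "ex:Animal": np.array([0.9, 0.1]),
       "ex:Car": np.array([0.0, 1.0])}
pairs = [("ex:Dog", "ex:Animal", 1), ("ex:Dog", "ex:Car", 0)]
print(evaluate_subsumption(emb, pairs))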

Submissions can be either short papers of 5 pages or long papers of 10-12
pages. All submissions must be in English and follow the 1-column CEUR-ART
style (Overleaf template)
<https://www.overleaf.com/latex/templates/ceurart-template-for-submissions-to-semrec-challenge/qktzxsbyhsdp>.
The proceedings will be published as a volume of CEUR-WS
<http://ceur-ws.org/>. Submissions should be made as a PDF document on
EasyChair <https://easychair.org/conferences/?conf=semrec2021>.

Website: https://semrec.github.io/


*Organizers*

Gunjan Singh, IIIT-Delhi, India.
Raghava Mutharaju, IIIT-Delhi, India.
Pavan Kapanipathi, IBM T.J. Watson Research Center, USA.

Best regards,
Gunjan
