CALL FOR PARTICIPATION - ONTOLOGY ALIGNMENT EVALUATION INITIATIVE (OAEI) 2015
--Apologies for cross-posting--

Since 2004, OAEI has been supporting the extensive and rigorous evaluation of ontology matching and instance matching techniques.

In 2015, OAEI will have the following tracks (http://oaei.ontologymatching.org/2015/):

    Benchmark

    Anatomy

    Multifarm

    Interactive Matching (New datasets)

    Large Biomedical Ontologies

    Instance Matching (New datasets)

    Ontology Alignment for Query Answering


NEW DATASETS:

- Interactive Matching. The Interactive Matching track will include the Conference, Anatomy and LargeBio datasets. The addition of the large ontologies to this track brings new challenges for optimizing user interaction. Moreover, we will also simulate domain experts with a variable error rate, which reflects a more realistic scenario in which a (simulated) user does not always provide a correct answer. In these scenarios, asking the user a large number of questions may also have a negative impact (see the sketch after the link below).

http://oaei.ontologymatching.org/2015/interactive/index.html
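
As an illustration of this setting, here is a minimal Python sketch of a simulated domain expert who answers mapping questions with a configurable error rate. The class name, interface and parameters are hypothetical and are not part of the official OAEI/SEALS evaluation framework; the sketch is only meant to convey the idea of an oracle that is not always correct.

    import random

    class SimulatedExpert:
        """Simulated domain expert answering mapping questions with a given
        error rate. Illustrative only: names and interface are hypothetical
        and do not correspond to the actual OAEI/SEALS evaluation API."""

        def __init__(self, reference_alignment, error_rate=0.1, seed=42):
            # reference_alignment: set of (source_entity, target_entity) pairs
            self.reference = reference_alignment
            self.error_rate = error_rate
            self.rng = random.Random(seed)
            self.questions_asked = 0  # asking many questions may be penalized

        def ask(self, source_entity, target_entity):
            # Return the ground-truth answer, flipped with probability
            # error_rate to model an expert who is not always correct.
            self.questions_asked += 1
            truth = (source_entity, target_entity) in self.reference
            if self.rng.random() < self.error_rate:
                return not truth
            return truth

    # Example: an expert with a 20% error rate over a one-pair reference.
    expert = SimulatedExpert({("a:Heart", "b:Heart")}, error_rate=0.2)
    print(expert.ask("a:Heart", "b:Heart"))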


- Instance Matching. The Instance Matching Track aims at evaluating the performance of matching tools when the goal is to detect the degree of similarity between pairs of items/instances expressed in the form of OWL ABoxes. The track is organized in five independent tasks. To participate in the Instance Matching Track, submit results for one, several, or all of the tasks. Each task is articulated in two tests with different scales (i.e., number of instances to match):

i) Sandbox (small scale). It contains two datasets, called source and target, as well as the set of expected mappings (i.e., the reference alignment).

ii) Mainbox (medium scale). It contains two datasets, called source and target. This test is blind, meaning that the reference alignment is not given to the participants.

In both tests, the goal is to discover the matching pairs (i.e., mappings) between the instances in the source dataset and the instances in the target dataset (a scoring sketch follows the link below).

http://islab.di.unimi.it/im_oaei_2015/index.html
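
For the Sandbox case, where the reference alignment is public, the following minimal Python sketch shows how a set of discovered instance mappings can be scored against the reference alignment using precision, recall and F-measure. Function and variable names are hypothetical; the official results are computed by the track organizers.

    def evaluate_alignment(system_mappings, reference_mappings):
        """Precision, recall and F-measure of a set of discovered instance
        mappings against a reference alignment. Both arguments are sets of
        (source_instance, target_instance) pairs. Illustrative sketch only."""
        true_positives = len(system_mappings & reference_mappings)
        precision = true_positives / len(system_mappings) if system_mappings else 0.0
        recall = true_positives / len(reference_mappings) if reference_mappings else 0.0
        f_measure = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
        return precision, recall, f_measure

    # Example: two of the three discovered mappings appear in the reference.
    system = {("s:i1", "t:j1"), ("s:i2", "t:j2"), ("s:i3", "t:j9")}
    reference = {("s:i1", "t:j1"), ("s:i2", "t:j2"), ("s:i4", "t:j4")}
    print(evaluate_alignment(system, reference))  # (0.666..., 0.666..., 0.666...)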


IMPORTANT DATES

    July 10th: datasets available for prescreening.

    July 31st: datasets are frozen.

    July 31st to August 31st: participants can send their wrapped systems
for test runs (note that in the OAEI 2015 edition we have updated the
SEALS client and its tutorial).

    August 31st: participants send final versions of their wrapped tools.

    September 28th: evaluation is executed and results are analyzed.

    October 5th: final paper due.

    October 12th: Ontology matching workshop.

    November 16th: Final version of system papers due (sharp).

-- 
Ernesto Jiménez-Ruiz
Research Assistant
Department of Computer Science
University of Oxford
Wolfson Building, Parks Road, Oxford OX1 3QD, UK

http://krono.act.uji.es/people/Ernesto
http://www.cs.ox.ac.uk/people/ernesto.jimenez-ruiz/
