OWL Interoperability Benchmarking - Call for participation

(Apologies for multiple postings)


                 OWL Interoperability Benchmarking

                      Call for participation

http://knowledgeweb.semanticweb.org/benchmarking_interoperability/owl/

Benchmarking motivation
-----------------------

The technology that supports the Semantic Web is highly diverse and,
while all these tools use some kind of ontology, not all of them share
a common knowledge representation model; this causes problems when the
tools try to interoperate.

OWL is the language recommended by the World Wide Web Consortium for
defining ontologies, and it currently seems the right choice as a
language for interchanging them. However, the current interoperability
between Semantic Web tools using OWL is unknown, and evaluating to what
extent one tool is able to interchange ontologies with others is quite
difficult, as there are no means available for easily doing so.

An ideal scenario would be one in which tools interchange ontologies
with minimal loss or addition of knowledge. However, the
interoperability of current tools is far from this scenario. One way
to improve interoperability is to benchmark the tools.

Benchmarking is a process for obtaining continuous improvement in a
set of tools by systematically evaluating them and comparing their
performance with that of the tools considered to be the best. This
makes it possible to extract the best practices used by the best tools
and to obtain superior performance across all of them.

The goals of the benchmarking are:

  * To improve the interoperability of Semantic Web technology using
OWL as the interchange language.
  * To identify the fragment of the knowledge models that the tools
share and can use to interoperate.

Previously, we performed an RDF(S) interoperability benchmarking in
which we assessed the interoperability of tools using RDF(S) as the
interchange language [1]. This time we consider OWL as the interchange
language instead of RDF(S), and we aim for a fully automatic execution
of the experiments.

The benchmarking will be carried out by performing interoperability
experiments according to a common experimentation framework; the
results will then be collected, analysed, and written up in a public
report, along with the best practices and tool improvement
recommendations found.

Experiments to be performed
---------------------------

The experiment to be performed consists of measuring the
interoperability of the tools participating in the benchmarking by
interchanging ontologies from one tool to another. From these
measurements, we will extract the current interoperability between the
tools, the causes of any problems, and improvement recommendations.

In this benchmarking activity we consider interoperability between
tools using an interchange language. To interchange an ontology from
one tool to another, it must first be exported from the origin tool to
a file, which must then be imported into the destination tool. As
ontologies exported by a tool are usually represented in the RDF/XML
syntax, we will use this format for the interchange.
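
To make the export/import step concrete, the following minimal sketch
shows such a round trip using Apache Jena (chosen purely for
illustration; participating tools may use any API, and the file names
are hypothetical). The interchange preserves knowledge when the
original and re-imported graphs are isomorphic:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;

    public class RoundTrip {
        public static void main(String[] args) throws Exception {
            // Read the original benchmark ontology (hypothetical file).
            Model original = ModelFactory.createDefaultModel();
            original.read(new FileInputStream("benchmark.owl"),
                          null, "RDF/XML");

            // Export: serialise the ontology in RDF/XML, the
            // interchange format used in the benchmarking.
            try (FileOutputStream out =
                     new FileOutputStream("exported.owl")) {
                original.write(out, "RDF/XML");
            }

            // Import: read the exported file into a fresh model.
            Model reimported = ModelFactory.createDefaultModel();
            reimported.read(new FileInputStream("exported.owl"),
                            null, "RDF/XML");

            // No knowledge lost or added if the graphs are isomorphic.
            System.out.println("Knowledge preserved: "
                    + original.isIsomorphicWith(reimported));
        }
    }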

The execution of the experiments will be fully automatic. To that end,
the IBSE tool has been developed. To allow the experiments to be
performed automatically, a method must be implemented for each tool,
as described on the IBSE web page [3].
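
The exact method to implement is specified on the IBSE web page [3].
As a purely hypothetical sketch of the kind of wrapper involved (the
interface and method names below are illustrative, not the real IBSE
API), a participating tool would expose its import/export
functionality through a single entry point:

    // Hypothetical sketch only; see the IBSE web page [3] for the
    // real interface. All names here are illustrative.
    public interface ToolConnector {
        // Import the given RDF/XML file into the tool, then export
        // the ontology back to RDF/XML and return the path of the
        // resulting file.
        String importExport(String rdfXmlPath) throws Exception;
    }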

The ontologies that will be interchanged between all the tools are those
of the OWL Import Benchmark Suite.

Timeline
--------

The timeline for the benchmarking is the following:

30th June 2007    Implementation of the interfaces for the tools
15th July 2007    Execution of the experiments
20th August 2007  Analysis of the results

Participating in the benchmarking
---------------------------------

Every organisation is welcome to participate in the OWL
interoperability benchmarking. Organisations participating in the
benchmarking are expected to implement the required IBSE method for
their tool (an easy task) and to analyse the results obtained by their
tool.

If you want to participate in the benchmarking or have any further
questions about it, please contact Raúl García Castro at the following
email address: rgarcia ( a ) fi . upm . es .

Organisers:
-----------
  - Raúl García Castro
  - Asunción Gómez Pérez

This benchmarking activity is supported by the Knowledge Web Network of
Excellence (http://knowledgeweb.semanticweb.org/).

Further information:
--------------------
[1] RDF(S) Interoperability Benchmarking:
http://knowledgeweb.semanticweb.org/benchmarking_interoperability/rdfs/
[2] OWL Interoperability Benchmarking:
http://knowledgeweb.semanticweb.org/benchmarking_interoperability/owl/
[3] IBSE tool:
http://knowledgeweb.semanticweb.org/benchmarking_interoperability/ibse/

Best regards,

-- 

Raúl García Castro
http://delicias.dia.fi.upm.es/~rgarcia/

Ontology Engineering Group (http://www.oeg-upm.net/)
Universidad Politécnica de Madrid
Campus de Montegancedo, s/n - Boadilla del Monte - 28660 Madrid
Phone: +34 91 336 36 70 - Fax: +34 91 352 48 19
