Invitation to participate in the SEALS evaluation campaign

+++ Invitation to participate in the SEALS evaluation campaign for ontology
engineering environments +++ 

The SEALS Yardsticks For Ontology Management is an evaluation campaign
comprising a set of evaluations that assess the ontology management
capabilities of semantic technology tools, in order to answer questions
such as the following: 

*	Tool A can manage OWL DL ontologies, but to what extent can it
manage OWL Full ontologies? 
*	I am using an OWL Full ontology in Tool B and I want to use it in
Tool C, which only supports OWL DL. Can I do so with a minimal loss of
information? 
*	Somebody recommended Tool D to me, but I need to manage very large
ontologies. Can this tool do so efficiently? If not, which one can? 

The main tools targeted by these evaluations are ontology engineering tools
and ontology management frameworks and APIs. The evaluations will cover
three different characteristics: conformance, interoperability and
scalability. 
*	Conformance: We will evaluate tool conformance with regard to the
RDF(S) and OWL ontology languages, with the goal of analysing to what
extent the different ontology constructors are supported by tools. To this
end, we will use four different test suites covering the RDF(S), OWL Lite,
OWL DL, and OWL Full languages. 
*	Interoperability: We will evaluate the interoperability of tools
when interchanging ontologies using an interchange language, with the goal
of determining the effects of interchanging ontologies between tools. As in
the conformance evaluation, we will cover RDF(S), OWL Lite, OWL DL, and OWL
Full as interchange languages. 
*	Scalability: We will evaluate the scalability of tools when managing
ontologies of increasing size, with the goal of checking to what extent
tools are able to deal with large ontologies while maintaining their
efficiency. In the scalability evaluation we will use large established
ontologies as well as synthetically generated ontologies of increasing size.

The evaluation campaign will take place during the summer of 2010. 

Participation is open to developers interested in evaluating their tool or
to anyone who wants to evaluate a certain tool. 

Participants are only expected to help connect their tool to the SEALS
Platform, the infrastructure that will run all the evaluations
automatically. Once a tool is connected to the SEALS Platform, participants
will be able to check their results, compare them with those of other
tools, and run the evaluations on their own with these and future test
data. 

If you want to participate, simply register your tool in the SEALS Portal
and stay tuned to the evaluation campaign web page, where you can find
detailed descriptions of the evaluations that we will perform and the
latest information and results of the evaluation campaign. 
This evaluation campaign is taking place within the SEALS project. Visit
the SEALS web page and check the other evaluation campaigns that are taking
place this year. 

If you have any questions or comments about the evaluation campaign, please
contact us. 

We count on your participation!  

Received on Friday, 25 June 2010 12:34:46 UTC