- From: <lejla@sti2.org>
- Date: Fri, 25 Jun 2010 14:34:11 +0200
- To: <semantic-web@w3.org>
- Message-ID: <002d01cb1462$b70834f0$25189ed0$@org>
+++ Invitation to participate in the SEALS evaluation campaign for ontology engineering environments +++

The SEALS Yardsticks For Ontology Management is an evaluation campaign comprising a set of evaluations of the ontology management capabilities of semantic technology tools, designed to answer questions such as the following:

* Tool A can manage OWL DL ontologies, but to what extent can it manage OWL Full ontologies?
* I am using an OWL Full ontology in Tool B and I want to use it in Tool C, which only supports OWL DL. Can I do this with minimal loss of information?
* Somebody recommended Tool D to me, but I need to manage very large ontologies. Can this tool handle them efficiently? If not, which one can?

The main tools targeted by these evaluations are ontology engineering tools and ontology management frameworks and APIs. The evaluations will cover three characteristics: conformance, interoperability and scalability.

* Conformance: We will evaluate tool conformance with regard to the RDF(S) and OWL ontology languages, with the goal of analysing to what extent tools support the different ontology constructs. To this end, we will use four test suites covering the RDF(S), OWL Lite, OWL DL, and OWL Full languages.
* Interoperability: We will evaluate the interoperability of tools when interchanging ontologies using an interchange language, with the goal of determining the effects of interchanging ontologies between tools. As in the conformance evaluation, we will cover RDF(S), OWL Lite, OWL DL, and OWL Full as interchange languages.
* Scalability: We will evaluate the scalability of tools when managing ontologies of increasing size, with the goal of checking to what extent tools can deal with large ontologies while maintaining their efficiency. In the scalability evaluation we will use large established ontologies as well as synthetically generated ontologies of increasing size.
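The interoperability question above (interchanging an ontology between tools with minimal loss of information) can be pictured as a round-trip measurement: export from one tool, import into another, and compare the statements that survive. The sketch below is purely illustrative; the triples, tool behaviour, and the dropped OWL Full construct are hypothetical assumptions, not the SEALS Platform's actual metric or test data.

```python
# Hypothetical sketch of an interoperability round-trip measurement:
# information loss is the set of statements missing after the interchange.

def information_loss(original, round_tripped):
    """Return the statements of `original` missing after the round trip,
    plus a retention ratio in [0, 1]."""
    lost = original - round_tripped
    retention = 1 - len(lost) / len(original) if original else 1.0
    return lost, retention

# Toy ontology as (subject, predicate, object) triples. We assume the
# receiving OWL DL tool drops the OWL Full construct (a class also used
# as an instance) during the interchange.
original = {
    ("Person", "rdf:type", "owl:Class"),
    ("hasParent", "rdf:type", "owl:ObjectProperty"),
    ("Person", "rdf:type", "ex:Species"),  # class as instance: OWL Full only
}
after_round_trip = {
    ("Person", "rdf:type", "owl:Class"),
    ("hasParent", "rdf:type", "owl:ObjectProperty"),
}

lost, retention = information_loss(original, after_round_trip)
print(len(lost), round(retention, 2))  # prints: 1 0.67
```

A real evaluation would of course work on serialized ontology files rather than in-memory triple sets, but the comparison principle is the same.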
The evaluation campaign will take place during the summer of 2010. Participation is open to developers interested in evaluating their own tool and to anyone who wants to evaluate a certain tool. Participants are only expected to help connect their tool to the SEALS Platform, the infrastructure that will run all the evaluations automatically. Besides checking their results and comparing them with others, participants will also be able to run the evaluations on their own with these and future test data once their tool is connected to the SEALS Platform.

If you want to participate, simply register your tool in the SEALS Portal (http://www.seals-project.eu) and stay tuned to the evaluation campaign web page (http://www.seals-project.eu/seals-evaluation-campaigns/ontology-engineering-tools), where you can find detailed descriptions of the evaluations that we will perform and the latest information and results of the evaluation campaign.

This evaluation campaign is taking place within the SEALS project. Go to the SEALS web page (http://www.seals-project.eu) and check the other evaluation campaigns taking place this year. If you have any questions or comments about the evaluation campaign, please contact us. We count on your participation!
Received on Friday, 25 June 2010 12:34:46 UTC