1st International Evaluation Campaign for Semantic Technologies

------------------------
organised by the
Semantic Evaluation At Large Scale (SEALS) Initiative
http://www.seals-project.eu
------------------------

We are pleased to announce the first International Evaluation Campaign
for Semantic Technologies which will take place during Summer 2010.
This campaign is organised by the Semantic Evaluation At Large Scale
(SEALS) Project.

We cordially invite you to participate in this campaign in one or
more of the five core areas shown below. Participation is open to
anyone who is interested in benchmarking a semantic technology tool.
Detailed information regarding each area's campaign together with
terms and conditions and general information about SEALS can be found
on the SEALS Portal at http://www.seals-project.eu.

The Campaign

The 1st SEALS Evaluation Campaign is open to all and will focus on
benchmarking five core technology areas on the basis of a number of
criteria such as Interoperability, Scalability, Usability, Conformance
to Standards, and Efficiency. Each area's campaign will be largely
automated and executed on the SEALS Platform, thus reducing the
overhead normally associated with such evaluations.


Why get involved?

Broadly speaking, the benefits are threefold. Firstly, participation
in the evaluation campaigns provides you with a respected and reliable
means of benchmarking your semantic technologies. It provides an
independent mechanism for demonstrating your tool's capabilities and
performance to potential adopters and customers.

Secondly, since you will have perpetual, free-of-charge access to the
SEALS Platform, it gives you the highly valuable benefit of being able
to regularly (and confidentially) assess the strengths and weaknesses
of your tool relative to your competitors as an integral part of the
development cycle.

Thirdly, your participation benefits the wider community since the
evaluation campaign results will be used to create 'roadmaps' to
assist adopters new to the field in determining which technologies are
best suited to their needs, thus improving general semantic technology
market penetration.


How to get involved

Joining the SEALS Community is easy and carries no obligations. As a
member of the community you will receive the latest information about
the evaluation campaign, including details of newly published data
sets, tips and advice on how to get the most out of your
participation, and the availability of results and analyses. Join now
by going to:

http://www.seals-project.eu/join-the-community


Timeline for the campaign:

June         Registration opens
June         Data and documentation available
July         Participants upload tool(s)
August       Evaluation executed
September    Results analysis (by SEALS)
November     ISWC workshop (tbc)


The technology areas
-------------------------------

Ontology Engineering Tools

Addresses the ontology management capabilities of semantic
technologies in terms of their ontology language conformance,
interoperability and scalability. The main tools targeted are ontology
engineering tools and ontology management frameworks and APIs;
nevertheless, the evaluation is open to any other type of semantic
technology.

Ontology Storage and Reasoning Tools

Assesses a reasoner's performance in various scenarios resembling
real-world applications. In particular, their effectiveness
(comparison with pre-established 'golden standards'), interoperability
(compliance with standards) and scalability are evaluated with
ontologies of varying size and complexity.

Ontology Matching Tools

Builds on previous matching evaluation initiatives (OAEI campaigns)
and integrates the following evaluation criteria: (a) conformance with
expected results (precision, recall and generalizations); (b)
performance in terms of memory consumption and execution time; (c)
interoperability, measuring conformance with standards such as
RDF/OWL; and (d) the coherence of the generated alignments.
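To illustrate criterion (a), the conformance measures named above can be computed set-wise over correspondences. The sketch below is illustrative only and not part of the SEALS tooling: it models an alignment as a set of (source, target) entity pairs, a simplification of the actual alignment formats used in OAEI campaigns.

```python
# Hypothetical sketch: precision and recall of a produced alignment
# against a reference ("gold standard") alignment, with alignments
# modeled as sets of (source entity, target entity) pairs.

def precision_recall(found, reference):
    """Return (precision, recall) of a found alignment vs. a reference."""
    correct = len(found & reference)  # correspondences also in the reference
    precision = correct / len(found) if found else 0.0
    recall = correct / len(reference) if reference else 0.0
    return precision, recall

# Example data (invented entity names, for illustration only)
reference = {("a:Person", "b:Human"), ("a:City", "b:Town"), ("a:Car", "b:Auto")}
found = {("a:Person", "b:Human"), ("a:City", "b:Town"), ("a:Dog", "b:Cat")}

p, r = precision_recall(found, reference)
# Two of the three found correspondences are correct: p = 2/3, r = 2/3
```

Generalizations of these measures (e.g. relaxed or semantic precision/recall) weight near-miss correspondences instead of counting only exact matches.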

Semantic Search Tools

Evaluated according to a number of different criteria including query
expressiveness (means by which queries are formulated within the tool)
and scalability. Given the interactive nature of semantic search
tools, a core interest in this evaluation is the usability of a
particular tool (effectiveness, efficiency, satisfaction).

Semantic Web Services

Focuses on activities such as discovery, ranking and selection. In the
context of SEALS, we view a SWS tool as a collection of components
(platform services) of the Semantic Execution Environment Reference
Architecture (SEE-RA). Therefore, we require that SWS tools implement
one or more SEE APIs in order to be evaluated.

Details of each area's evaluation scenarios and methodology can be found at:

http://www.seals-project.eu/seals-evaluation-campaigns



About SEALS

The SEALS Project is developing a reference infrastructure known as
the SEALS Platform to facilitate the formal evaluation of semantic
technologies. This allows both large-scale evaluation campaigns to be
run (such as the one described in this communication) and ad-hoc
evaluations by individuals or organizations.

Find out more

More information about SEALS and the evaluation campaign can be found
from the SEALS portal http://www.seals-project.eu

If you would like to contact us directly:

SEALS Coordinator: Asuncion Gomez-Perez (asun@fi.upm.es)

Evaluation Campaign Coordinator: Fabio Ciravegna
(f.ciravegna@dcs.shef.ac.uk) 

Received on Monday, 7 June 2010 17:54:07 UTC