CFP: SEALS Evaluation Campaign 2012

[Apologies for cross-postings]
Call for Participation
SEALS Evaluation Campaign 2012

2nd International Evaluation Campaign for Semantic Technologies
organised by the
Semantic Evaluation At Large Scale (SEALS) Initiative

Following the success of the first campaign in 2010 (see the SEALS Whitepaper for more details), we are pleased to announce the second International Evaluation Campaign for Semantic Technologies, which will be conducted in spring 2012. This campaign is organised by the Semantic Evaluation At Large Scale (SEALS) Project.

We cordially invite you to participate in this campaign in one or more of the five core areas shown below. Participation is open to anyone who is interested in benchmarking a semantic technology tool. Detailed information regarding each area's campaign, together with terms and conditions and general information about SEALS, can be found on the SEALS Portal at

The Campaign
The SEALS Evaluation Campaign is open to all and will focus on benchmarking five core technology areas against a number of criteria such as interoperability, scalability, usability, conformance to standards, and efficiency. Each area's campaign will be largely automated and executed on the SEALS Platform, thus reducing the overhead normally associated with such evaluations.

Why get involved?
Broadly speaking, the benefits are threefold. Firstly, participation in the evaluation campaigns provides you with a respected and reliable means of benchmarking your semantic technologies, and an independent mechanism for demonstrating your tool's abilities and performance to potential adopters and customers.

Secondly, since you will have perpetual, free-of-charge access to the SEALS Platform, it gives you the highly valuable benefit of being able to regularly (and confidentially) assess the strengths and weaknesses of your tool relative to your competitors as an integral part of the development cycle.

Thirdly, your participation benefits the wider community: the evaluation campaign results will be used to create 'roadmaps' that help adopters new to the field determine which technologies are best suited to their needs, thus improving the general market penetration of semantic technology.

How to get involved
Joining the SEALS Community is easy and entails no obligations. As a member of the community, you will receive the latest information about the evaluation campaign, including details of newly published data sets, tips and advice on how to get the most out of your participation, and notification when results and analyses become available. Join now by going to

Timeline for the campaign
now	Registration
now	Data and documentation available
now	Participants upload tool(s)
March - April 2012	Evaluation executed (by SEALS)
April - May 2012	Results analysis (by SEALS)
June 2012	ESWC workshop (

The technology areas

Ontology Engineering Tools
Addresses the ontology management capabilities of semantic technologies in terms of their ontology language conformance, interoperability and scalability. The main tools targeted are ontology engineering tools and ontology management frameworks and APIs; nevertheless, the evaluation is open to any other type of semantic technology.

Ontology Storage and Reasoning Tools
Assesses reasoners' performance in various scenarios resembling real-world applications. In particular, their effectiveness (comparison with pre-established 'gold standards'), interoperability (compliance with standards) and scalability are evaluated using ontologies of varying size and complexity.

Ontology Matching Tools
Builds on previous matching evaluation initiatives (the OAEI campaigns) and integrates the following evaluation criteria: (a) conformance with expected results (precision, recall and generalizations); (b) performance in terms of memory consumption and execution time; (c) interoperability, measuring conformance with standards such as RDF/OWL; and (d) coherence of the generated alignments.

Semantic Search Tools
Evaluates tools according to a number of different criteria, including query expressiveness (the means by which queries are formulated within the tool) and scalability. Given the interactive nature of semantic search tools, a core interest in this evaluation is the usability of a particular tool (effectiveness, efficiency, satisfaction).

Semantic Web Services
Focuses on activities such as discovery, ranking and selection. In the context of SEALS, we view a SWS tool as a collection of components (platform services) of the Semantic Execution Environment Reference Architecture (SEE-RA). Therefore, we require that SWS tools implement one or more SEE APIs in order to be evaluated.

Details of each area's evaluation scenarios and methodology can be found at:

The SEALS Project is developing a reference infrastructure known as the SEALS Platform to facilitate the formal evaluation of semantic technologies. This supports both large-scale evaluation campaigns (such as the one described in this communication) and ad hoc evaluations by individuals or organisations.

Find out more
More information about SEALS and the evaluation campaign can be found on the SEALS Portal

If you would like to contact us directly:
SEALS Coordinator: Asuncion Gomez-Perez (
Evaluation Campaign Coordinator: Fabio Ciravegna (

Carmen Brenner Bakk.-techn.
STI Innsbruck
Semantic Technology Institute Innsbruck
ICT Technologiepark
Technikerstr. 21a
6020 Innsbruck, Austria


Received on Wednesday, 15 February 2012 13:25:28 UTC