- From: François Scharffe <francois.scharffe@lirmm.fr>
- Date: Wed, 09 May 2012 15:15:29 +0200
- To: public-lod@w3.org
Dear Samur,

We are preparing the OAEI benchmark for this year's edition. It would be
interesting to know your requirements in terms of size, domain, benchmark
structure (e.g. pairs of datasets or links across multiple datasets), or any
other evaluation parameter you would like to see.

Generally speaking, we have identified the need for a large curated testbed:
the last editions used links that were semi-automatically generated and then
manually checked for correctness, but with no guarantee of completeness. The
problem is that building such a testbed requires a substantial amount of
human intervention.

We will also continue to use the ISLab instance matching benchmark tool [1],
which generates benchmarks on demand by applying transformations to a source
dataset (see the sketch after the quoted message below).

Regards,
François

[1] http://islab.dico.unimi.it/iimb/

On 09/05/12 14:41, Samur Araujo wrote:
> Dear list, are there any reference alignments for evaluating Instance
> Matching methods over Linked Data?
>
> I am aware of the OAEI benchmark; are there any others?
>
> Thank you,
> Samur
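P.S. To illustrate the transformation-based approach mentioned above, here is
a minimal sketch in Python, not the actual IIMB implementation: it applies
simple value transformations (a typo, an abbreviation) to source records and
records each source-to-target correspondence, so the resulting reference
alignment is complete by construction. All names, URIs, and transformations
are illustrative assumptions.

    import random

    def typo(value, rng):
        # Illustrative transformation: swap two adjacent characters.
        if len(value) < 2:
            return value
        i = rng.randrange(len(value) - 1)
        chars = list(value)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

    def abbreviate(value, rng):
        # Illustrative transformation: reduce the first token to an initial.
        tokens = value.split()
        if len(tokens) < 2:
            return value
        return tokens[0][0] + ". " + " ".join(tokens[1:])

    def generate_benchmark(source, seed=42):
        # `source` maps instance URIs to label strings. Each target
        # instance is a transformed copy of a source instance, so every
        # pair (source URI, target URI) is a correct link by construction.
        rng = random.Random(seed)
        transformations = [typo, abbreviate]
        target = {}
        reference_alignment = set()
        for uri, label in source.items():
            transform = rng.choice(transformations)
            target_uri = uri + "-copy"  # hypothetical URI scheme
            target[target_uri] = transform(label, rng)
            reference_alignment.add((uri, target_uri))
        return target, reference_alignment

    def precision_recall(produced, reference):
        # Standard evaluation of a produced alignment against a
        # reference alignment; recall is meaningful here because the
        # reference is complete, unlike semi-automatically checked links.
        tp = len(produced & reference)
        precision = tp / len(produced) if produced else 0.0
        recall = tp / len(reference) if reference else 0.0
        return precision, recall

    source = {
        "http://example.org/person/1": "Samur Araujo",
        "http://example.org/person/2": "Francois Scharffe",
    }
    target, reference = generate_benchmark(source)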
Received on Wednesday, 9 May 2012 13:16:06 UTC