- From: Ghalem Ouadjed (EOWEO) <gouadjed@eoweo.com>
- Date: Sat, 23 Jun 2012 10:37:26 +0200
- To: adasal <adam.saltiel@gmail.com>
- CC: "semantic-web@w3.org" <semantic-web@w3.org>
On 22/06/2012 17:40, adasal wrote:
> Well, it's a very complex subject, isn't it?
> I have never done reasoner optimization, but e.g. Allegro claim their
> reasoner is faster over a certain data set than some other X. And I
> think theirs and others have reasoners which are plug-in. So the first
> step is understanding the significance of the underlying data store.
> Then there is the logic the reasoner supports. Some are optimised for
> different branches, but may do less well than X with some other logic
> set. I think choice of logic comes before choice of reasoner, though?
> So now we have the store, the logic, and the reasoner; add in the
> implementation language and the query language.
> If it is a complex store (OpenRDF?) we may also be looking at its
> component modules and their implementation.
> Don't forget versions.
> Now, what do you want to know?
> (Not that I would necessarily be able to answer, to be clear. But,
> given the above, think about it: very few people would.)
>
> Adam

Hi Adam,

Yes, the first thought when users describe their problems running a reasoner over their data concerns the data itself. They all use Pellet, and I can add that their data is not very clean, because it is produced from other semi-structured (XML-like) data.

As of today, the users follow a conservative process: they preserve their initial format and produce RDF/XML files in a way intended to improve the conclusions. But these enhancements are not stable; for example, the conclusions they get are "not always equivalent".

If we consider that the data is 80% responsible, which format would be the most interesting? My thought is that a Turtle-like format is interesting, since in my experience the reasoners we usually encounter are Prolog-based (thoughts?), and so N3 + rules could provide better results...?

Ghalem
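For concreteness, a minimal sketch of the "N3 + rules" idea, using a hypothetical example.org vocabulary that is not from the thread:

    @prefix : <http://example.org/> .

    # Ground data, as it might look after converting RDF/XML
    # to a Turtle-like syntax.
    :alice :parent :bob .
    :bob   :parent :carol .

    # An N3 rule: whenever ?x has parent ?y and ?y has parent ?z,
    # conclude that ?x has grandparent ?z.
    { ?x :parent ?y . ?y :parent ?z . } => { ?x :grandparent ?z . } .

Fed to an N3 rule engine such as cwm (run with its --think option) or EYE, this derives :alice :grandparent :carol. It is only a sketch of the rule syntax, not a claim about what the users' actual data or rules look like.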
Received on Saturday, 23 June 2012 08:37:58 UTC