
Re: fast inferencing with jena and "?"

From: Leo Sauermann <leo@gnowsis.com>
Date: Tue, 22 Mar 2005 17:58:49 +0100
Message-ID: <42404EC9.8050103@gnowsis.com>
To: Dave Reynolds <der@hplb.hpl.hp.com>
CC: semantic-web@w3.org

Hi Dave,

actually a colleague of mine is doing it, and it is a commercial project we
are doing for a telecommunications company, so we can't publish the triples :-|

Roughly, it's about checking whether two graphs A and B are "near" each other.
A and B describe resources, and the resources conform to schema S.
What we do is complete A and B using S, and then run a graph
matching algorithm combined with property matching.
So we combine A with S and B with S, and then use A(S) and B(S) to do the matching:

if type(A(S)) == type(B(S)) then "quite match"
and forallPropertiesOf( prop(A(S)) == prop(B(S)) ) then add to "quite match"

So there are a few find(s,p,o) calls that fire into the graph, which the graph
does not like.
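
To make the idea concrete, here is a rough, self-contained sketch of the "quite match" heuristic in plain Python (dicts stand in for the schema-completed Jena models; all names here are illustrative, not our actual code):

```python
# Sketch of the "quite match" heuristic over two schema-completed
# graphs A(S) and B(S). Each resource description is a dict:
#   {"type": <class URI>, "props": {<property>: <value>, ...}}
# Plain dict lookups stand in for the find(s,p,o) calls against the graph.

def quite_match(a, b):
    """Return a similarity score between two resource descriptions."""
    score = 0
    if a["type"] == b["type"]:  # type(A(S)) == type(B(S))
        score += 1
    # forallPropertiesOf: count shared properties with equal values
    shared = set(a["props"]) & set(b["props"])
    for prop in shared:
        if a["props"][prop] == b["props"][prop]:
            score += 1
    return score

# Example: two descriptions sharing a type and one property value
res_a = {"type": "ex:Customer", "props": {"ex:city": "Bonn", "ex:tier": "gold"}}
res_b = {"type": "ex:Customer", "props": {"ex:city": "Bonn", "ex:tier": "silver"}}
print(quite_match(res_a, res_b))  # -> 2 (same type, same city)
```

The expensive part in practice is not this comparison but the find(s,p,o) calls against the inference graph that produce the completed descriptions.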

we'll try the new Jena release and see what happens.


On 21.03.2005 12:16, Dave Reynolds wrote:

> Hi Leo,
>> The problem with Jena is: the Model RDFS_MEM_TRANS_INF is too slow to do
>> simple inference (and it was the fastest we found in jena)
> Which version of Jena? There was a bug fix affecting TRANS between 2.1 
> and 2.2beta1 and a performance problem fixed between 2.2beta1 and 
> 2.2beta2.
>> It has 200ms performance of matching two small rdf instance models
>> against a RDF/S ontology model (180 classes). 
> What do you mean by "matching" a model against an RDFS model?
> If you can show us what you are doing (ideally a self-contained code 
> example) then we might be able to advise on optimizations. Though code 
> exchange is probably better done over on jena-dev or off list.
>> We did everything we could to make it faster, including prefetching all
>> classes, properties, trying out different Jena inferencers, etc.
> If you prefetched all classes and properties then there is presumably 
> no inference left. If the performance wasn't good enough in that setup, 
> then you don't need faster inference, you need a faster algorithm or 
> reduced API overheads. That would make it even more interesting to see 
> exactly what you are doing, to figure out where the performance problem is.
> Cheers,
> Dave
Received on Tuesday, 22 March 2005 17:02:02 UTC