- From: Kingsley Idehen <kidehen@openlinksw.com>
- Date: Wed, 26 Sep 2012 09:59:56 -0400
- To: public-rdf-wg@w3.org
- Message-ID: <50630A5C.5090603@openlinksw.com>
On 9/26/12 1:51 AM, Antoine Zimmermann wrote:
> There are triple stores that do reasoning, and what they contain are
> datasets. Unfortunately, in this debate, I've not heard from the folks
> who implement them.
> I'd like to see what, e.g., OWLIM, Virtuoso are doing with named
> graphs when you switch on inferences, but looking at the
> documentation, I don't find a clear answer.
We use a pragma in SPARQL. I've published many examples of its use over
the years [1][2].
Pragma example:
DEFINE input:inference
"http://dbpedia.org/resource/inference/rules/dbpedia#"
Meaning: conditionally apply this inference context (an ontology URI
mapped to a rule) as a contextual lens over the eventual SPARQL
solution. Basically, use backward-chained inference to prepare the data
to which the SPARQL query will apply.
SPARQL Example:

## With Inference Context enabled

DEFINE input:inference "http://dbpedia.org/resource/inference/rules/dbpedia#"
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX dbpedia-owl: <http://dbpedia.org/ontology/>
SELECT COUNT(*)
FROM <http://dbpedia.org>
WHERE {
  ?person <http://dbpedia.org/property/dateOfBirth>
          "1967-08-21T00:00:00-04:00"^^xsd:dateTime
}
## Without Inference Context (the DEFINE pragma is commented out in full)

## DEFINE input:inference "http://dbpedia.org/resource/inference/rules/dbpedia#"
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX dbpedia-owl: <http://dbpedia.org/ontology/>
SELECT COUNT(*)
FROM <http://dbpedia.org>
WHERE {
  ?person <http://dbpedia.org/property/dateOfBirth>
          "1967-08-21T00:00:00-04:00"^^xsd:dateTime
}
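For anyone who wants to toggle the pragma programmatically, here is a minimal Python sketch that builds the query string with or without the inference context and prepares an HTTP GET URL for a SPARQL endpoint. The helper names (build_query, endpoint_url) and the endpoint default are my illustrative assumptions, not Virtuoso API; only the DEFINE pragma syntax itself comes from the examples above.

```python
# Sketch: optionally prepend Virtuoso's input:inference pragma to a
# SPARQL query, then form an endpoint GET URL. Helper names and the
# endpoint default are illustrative assumptions, not Virtuoso API.
import urllib.parse

QUERY_BODY = """\
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT COUNT(*)
FROM <http://dbpedia.org>
WHERE {
  ?person <http://dbpedia.org/property/dateOfBirth>
          "1967-08-21T00:00:00-04:00"^^xsd:dateTime
}
"""

def build_query(inference_context=None):
    """Prepend the inference pragma only when a context IRI is given."""
    if inference_context:
        return 'DEFINE input:inference "%s"\n%s' % (inference_context, QUERY_BODY)
    return QUERY_BODY

def endpoint_url(query, endpoint="http://dbpedia.org/sparql"):
    """Encode the query as a GET request URL for a SPARQL endpoint."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return endpoint + "?" + urllib.parse.urlencode(params)

# With the DBpedia inference context enabled:
with_ctx = build_query("http://dbpedia.org/resource/inference/rules/dbpedia#")
# Without it, the pragma line is simply absent:
without_ctx = build_query()
print(endpoint_url(with_ctx))
```

The point of the sketch is simply that switching inference on and off is a one-line difference in the query text, matching the two examples above.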
Links:
1. http://bit.ly/OEBP7N -- Virtuoso reasoning example using the DBpedia instance
2. https://plus.google.com/s/OWL%20Reasoning%20Linked%20Data%20Virtuoso
-- various G+ posts.
Kingsley
>
> AZ
>
> Le 26/09/2012 00:16, Sandro Hawke a écrit :
>>
>> As we're talking about Dataset Semantics, I'm wondering who will
>> implement reasoners that use them. I wonder this for two reasons.
>>
>> 1. We need folks to implement a spec, in order for a spec to become a
>> W3C Recommendation [1]. If it doesn't get implemented, it gets stuck
>> at Candidate Recommendation. If it's too tied to the other specs, they
>> could all get stuck. (Fortunately, we can just label the dataset
>> semantics text "at risk" in the spec so we can remove it, if necessary,
>> and let the other specs proceed.)
>>
>> 2. Some folks might implement it mostly because they like to be feature
>> complete (eg the Jena team, historically) but maybe some other folks
>> will implement it because they want to use it for some application. I
>> suggest these people should perhaps be given the strongest weight in the
>> Dataset Semantics discussion, if they speak up. If the proposed
>> semantics solve their problem, they're much more likely to
>> implement-to-spec and be happy.
>>
>> For myself, at this point I'm 70% convinced that I can implement all the
>> dataset use cases I understand (the ones I enumerated in the Federated
>> Phonebook examples, plus SPARQL dump/restore) without any standard
>> dataset semantics beyond having a standard place for metadata (eg the
>> default graph in TriG and the service description graph in SPARQL).
>>
>> -- Sandro
>>
>> [1] http://www.w3.org/2005/10/Process-20051014/tr#cfr
>>
>>
>
>
--
Regards,
Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Received on Wednesday, 26 September 2012 14:00:19 UTC