W3C home > Mailing lists > Public > public-rdf-dawg@w3.org > October to December 2006

Re: Prototype SPARQL engine based on Datalog and relation of SPARQL to the Rules Layer

From: Bijan Parsia <bparsia@cs.man.ac.uk>
Date: Mon, 4 Dec 2006 10:49:16 +0000
Message-Id: <3B705993-7212-44D9-8654-920A422C5E06@cs.man.ac.uk>
Cc: Fred Zemke <fred.zemke@oracle.com>, public-rdf-dawg-comments@w3.org, public-rdf-dawg@w3.org
To: axel@polleres.net

Background view: I think this is great stuff, but I suspect the group  
is swamped enough without trying to take on this additional chunk of  
work. It does suggest that a SPARQL/Next, or SPARQL/Extensions is  
worth a continuance.

I hate making this argument, because, esp. when a group has gone this  
long, there is little energy or will to do the "next" bit. And "easy  
wins" aren't necessarily so easy to get to spec, as I have to remind  
myself over and over again.

On Dec 4, 2006, at 8:39 AM, Axel Polleres wrote:

> Fred Zemke wrote:
>> Axel Polleres wrote:
>>> * I'd like to suggest to the working group some straightforward  
>>> extensions of SPARQL such as adding the set difference operator  
>>> MINUS, and allowing nesting of ASK queries in FILTER expressions  
>>> which come basically for free in the approach.
>>> * Finally, I discuss an extension towards recursion by allowing  
>>> bNode-free-CONSTRUCT queries as part of the query dataset, which  
>>> may be viewed as a light-weight, recursive rule language on top  
>>> of RDF.
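To make the MINUS idea concrete, here is a toy Python model of set difference over solution mappings (purely illustrative: this is not SPARQL syntax or an actual engine, and the data and variable names are made up):

```python
# Toy model (not real SPARQL): a solution mapping is a frozenset of
# (variable, value) pairs; MINUS keeps the left-hand solutions that are
# not compatible with any right-hand solution.
def compatible(mu1, mu2):
    """Mappings are compatible if they agree on every shared variable."""
    d1, d2 = dict(mu1), dict(mu2)
    return all(d2[v] == x for v, x in d1.items() if v in d2)

def minus(left, right):
    """Set difference in the MINUS spirit: drop solutions from `left`
    that are compatible with some solution in `right`."""
    return {mu for mu in left if not any(compatible(mu, nu) for nu in right)}

people   = {frozenset({("?x", "alice")}), frozenset({("?x", "bob")})}
managers = {frozenset({("?x", "bob")})}
print(sorted(dict(mu)["?x"] for mu in minus(people, managers)))  # ['alice']
```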

I think that'd be valuable, but bailiwick-wise it seems more of a RIF  
thing (though I don't exactly see how they could do it).

>> I think the ASK and CONSTRUCT ideas are very natural. I proposed  
>> them when I first took a look at SPARQL,
>> though my starting point was experience within SQL.
> yes, I mention explicitly that these are "easy wins" in my opinion.
>> SQL experience shows it is useful to be able to write subqueries.
>> In SPARQL, two natural places to put subqueries are ASK inside of  
>> and CONSTRUCT inside of FROM.
>> However, I don't follow you when you call CONSTRUCT inside of FROM
>> "recursive".  I don't see a way, for example, to construct triples  
>> with a verb
>> "is_ancestor_of" from a graph containing the verb "is_parent_of",
>> unless you have a priori information about the maximum number of
>> generations in the graph.
> I am not talking about nesting CONSTRUCT as you do (which, by the  
> way, is another interesting idea, but, as you correctly point out,  
> does not involve recursion). What I mean here is that CONSTRUCT  
> queries should be allowed *as part of the dataset*.
> I.e., the dataset is a set of RDF graphs *plus* views which define  
> implicit triples within these graphs.

People are sort of doing this at the protocol level.

> (in the spirit of views in SQL). Since these views can recursively  
> refer to the same dataset in the FROM clause, you have an  
> implicitly recursive view definition.
> Clearly, the CONSTRUCT as part of the dataset is recursively
> referring to the same dataset, so the semantics should be the  
> transitive closure in my opinion.
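That fixpoint reading can be sketched in a few lines of Python, with plain sets of triples standing in for an RDF store and a hand-written rule standing in for the in-dataset CONSTRUCT (the predicate names follow Fred's example; everything here is illustrative):

```python
# Naive fixpoint evaluation: re-apply the rule(s) until no new triples
# appear. The rule below plays the role of a bNode-free CONSTRUCT like
#   CONSTRUCT { ?x :is_ancestor_of ?z }
#   WHERE     { ?x :is_parent_of ?y . ?y :is_ancestor_of ?z }
# together with the base case ?x :is_parent_of ?y => ?x :is_ancestor_of ?y.
def fixpoint(graph, rules):
    while True:
        derived = set()
        for rule in rules:
            derived |= rule(graph)
        if derived <= graph:          # nothing new: fixpoint reached
            return graph
        graph = graph | derived

def ancestor_rule(g):
    base = {(x, "is_ancestor_of", y)
            for (x, p, y) in g if p == "is_parent_of"}
    step = {(x, "is_ancestor_of", z)
            for (x, p, y) in g if p == "is_parent_of"
            for (y2, q, z) in g if q == "is_ancestor_of" and y2 == y}
    return base | step

g = {("ann", "is_parent_of", "bob"), ("bob", "is_parent_of", "col")}
closed = fixpoint(g, [ancestor_rule])
print(("ann", "is_ancestor_of", "col") in closed)  # True
```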

That does seem reasonable and natural, if a bit tricky syntactically.  
If I may make a point I've generally argued *against*... there is  
nothing in SPARQL that forbids you from constructing graphs to be  
queried in this way.

Of course, that's not satisfactory in a number of ways. OTOH, do you  
want to have *inline* recursively queried CONSTRUCTs? I.e., a change  
to the syntax? If not, I think that defining a little rules language  
using SPARQL and RDF is great, but perhaps doesn't need to be *in*  
SPARQL itself.

The pieces are sort of obvious... you want a document format that can  
include both RDF data and construct queries. You could use Turtle for  
the RDF and regular SPARQL for the queries. The queries could be  
restricted to be CONSTRUCTs and could implicitly query (and add to)  
the current document.

I think RIF can do such a thing. Doesn't seem to step on anyone's  
toes. More complex scenarios are possible where the constructs are  
pulling in from other documents and "dumping" into the current  
context. Sounds like good fun.

> Advantages of allowing such implicit definition
> of metadata within RDF are e.g. explained in RIF use case 10, see
> http://www.w3.org/TR/rif-ucr/#Publishing_Rules_for_Interlinked_Metadata
> and what I wanted to achieve was using SPARQL CONSTRUCT as one  
> possible syntax for this.
> Hope this clarifies matters.
> Note I suggest here to allow only bNode-free CONSTRUCTs
> within graphs, since otherwise you won't reach a fixpoint
> when evaluating this query!

Is this necessarily true? I mean, it's definitely the case that if  
you are naive you'll run into trouble, but that seems surmountable.  
For example, you could require that the constructed triples in any  
round of evaluation produce a non-equivalent graph. Is there a case where  
something like this wouldn't ensure termination in the RDF case?

(You have to either go with BNodes as existentials and use  
equivalence/minimization, or you have to be very strict in the  
distinction between source nodes and construct-generated nodes.)
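To illustrate the termination question: a rule that mints a fresh blank node every round never stabilises under plain set comparison, but even a deliberately naive equivalence check that treats all blank nodes as indistinguishable does stop it. A real check would need proper graph equivalence or leanness; the names and encoding below are made up for the sketch:

```python
import itertools

def canon(graph):
    """Very crude 'equivalence': collapse every blank node (encoded here
    as a string starting with '_:') to a single token. A real
    implementation would use graph isomorphism / lean graphs."""
    blank = lambda t: "_:" if isinstance(t, str) and t.startswith("_:") else t
    return frozenset((blank(s), p, blank(o)) for (s, p, o) in graph)

fresh = itertools.count()

def bnode_rule(g):
    """For each "p" triple, construct a triple to a *fresh* blank node.
    Under naive set comparison this grows forever."""
    return {(s, "linked_to", f"_:b{next(fresh)}")
            for (s, p, o) in g if p == "p"}

def guarded_fixpoint(graph, rule):
    """Stop as soon as a round fails to produce a non-equivalent graph."""
    while True:
        bigger = graph | rule(graph)
        if canon(bigger) == canon(graph):
            return graph
        graph = bigger

result = guarded_fixpoint({("a", "p", "b")}, bnode_rule)
print(len(result))  # 2: the source triple plus one blank-node triple
```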

Received on Monday, 4 December 2006 10:49:35 UTC
