
Re: First order logic and SPARQL

From: Bijan Parsia <bparsia@cs.man.ac.uk>
Date: Thu, 9 Sep 2010 19:38:53 +0100
Message-Id: <4042C996-CE26-488C-84C7-EC97BFC43C47@cs.man.ac.uk>
Cc: www-archive@w3.org
To: Pat Hayes <phayes@ihmc.us>
On 9 Sep 2010, at 17:05, Pat Hayes wrote:

> On Sep 9, 2010, at 6:44 AM, Bijan Parsia wrote:
>> On 8 Sep 2010, at 15:58, Pat Hayes wrote:
>> The semantics of SPARQL (by which I mean the set of triples  
>> <Dataset, Query, Answers> sanctioned by the specification wrt RDF  
>> datasets)
> Ah. I didn't mean that. I meant a model theory, providing truth  
> conditions relative to an interpretation drawn from a set of  
> possible interpretations.

First, see Axel's paper.

Second, next time just read "consequence" where we say "entailment".  
Or ask us to use "consequence".

>> is rather well specified (with some exceptions, e.g., DESCRIBE).  
>> In the spec, this set is described primarily in terms of an  
>> algebra which is pretty much the relational algebra. Equivalently,  
>> Axel has shown that it can be described in terms of  entailment in  
>> (nonrecursive, I would think for simple entailment and SPARQL1.0)  
>> Datalog with stratified negation (well, I guess it's automatically  
>> stratified :)). If you conceptualize the triple as an axiom schema  
>> such that Dataset |- Grounding(Query, Answer) (where Answer is  
>> drawn from the set of answers), then we have a clear consequence  
>> relation (or something reasonably modeled by a consequence  
>> relation). This consequence relation is clearly non-monotonic.
> That is, yes. But monotonicity is traditionally defined in terms of  
> entailment, which refers to truth.

Actually, it's typically defined in terms of *consequences*. E.g.,
"A non-monotonic logic is a formal logic whose consequence relation  
is not monotonic. Most studied formal logics have a monotonic  
consequence relation, meaning that adding a formula to a theory never  
produces a reduction of its set of consequences. Intuitively,  
monotonicity indicates that learning a new piece of knowledge cannot  
reduce the set of what is known. "

"Consider the formal properties of a consequence relation from an  
abstract point of view. Let ⊢ be any relation between sets of  
premises and single sentences. The following properties are all  
satisfied by the consequence relation ⊨ of FOL:"

Just consider Default logic.

Or read Makinson.

In many cases these consequence relations can be given a model  
theoretic account. Like SPARQL. As Axel has done.

But a consequence (or entailment) relation is never purely  
semantically characterized. After all, we have to speak of the syntax  
of the assertions and the syntax of the consequences. For many  
standard logics, these coincide. But, quite obviously, they don't  
have to.

>> I would go further and say that this is a bog-standard and very  
>> useful way to conceptualize and reason about the situation (e.g.,  
>> complexity results fall out very naturally as do implementation  
>> techniques). There are other ways of conceptualizing it, but they  
>> need to capture the same feature, to wit,  there are cases where  
>> when you extend a Dataset with an additional triple, you lose "an  
>> answer", i.e. a consequence.
>> It's useful, even if you don't find it congenial, to talk this way  
>> even if we didn't have a plethora of model theoretic techniques to  
>> cope with it. Default logic was understandable without a model  
>> theoretical account
> Actually I disagree. The general idea was of course understood  
> intuitively, but there was a jumble of confusion, with many  
> varieties and subspecies of defaultitude, all loosely based on  
> linguistic intuitions, none of them clearly better or worse than  
> any other.

I mean Reiter's default logic defined in terms of fixed point  
equations. Well understood. Precisely characterized.
>> I'm not sure why you feel this is important. But I don't think we  
>> have to go "assertional", at least as  I understand it.
> Well, monotonicity is defined in terms of entailment which is  
> defined in terms of truth. But I see now that you have been using  
> 'consequence' as distinct from 'entailment' throughout.

I do this for your comfort.

>> Yes, of course, SPARQL in some sense "asserts" stuff about various  
>> RDF KBs. But SPARQL doesn't allow you to assert things (as we  
>> currently use it) about other SPARQL queries. That is, we only see  
>> SPARQL in consequents (just as in the relational calculus we only  
>> see certain constructs in the consequents). It's pretty common to  
>> say that this makes SPARQL a query language :)
> I agree: I thought this idea - of treating SPARQL as an assertion  
> language - was what *you* were urging.

SPARQL queries are never queried. They are only entailed. The  
language of SPARQL antecedents is RDF; the language of SPARQL  
consequences is SPARQL.

No syntactic change in an antecedent affects a given consequent. They  
are in no way syntactically coupled.

>>> An RDF graph is a set of triples. Adding a triple to a set  
>>> creates a *different* set.
>> But doesn't change the query.
> It changes what the query refers to,

Which is not a change in the syntax of the query.

> so it does change the query, implicitly.

If you mean that the very same query is not a consequence of the new  
dataset, sure. It is syntactically unchanged.

> The query always refers to the query graph.

	ASK { ?s :p :o }

does not contain any reference to any queried graph. It can be  
evaluated against any arbitrary RDF graph. It will evaluate to true  
or false against each graph.

> Adding a triple to that graph gives a different graph. That means  
> that the query itself has changed in what it refers to.

There is no syntactic change. The query is exactly unchanged. What it  
evaluates to changes, as would be no surprise. If the new graph is a  
superset of the old graph, this very same query might evaluate to  
false. Which means, y'know, it's not a consequent.
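To make the point concrete, here is a toy sketch in Python (plain sets of triples, not the SPARQL algebra; the triples, the `ask` predicate, and the NAF-style negative condition are all illustrative). The query text is fixed; only the graph it is evaluated against varies, and growing the graph loses the answer:

```python
# Toy illustration, NOT the SPARQL spec: a fixed "query" asking whether
# some subject s has (s, "p", "o") but NOT (s, "q", "o") -- a NAF-style
# negative condition, as with !bound / NOT EXISTS in SPARQL.

def ask(graph):
    """Is there an s with (s,'p','o') in the graph and no (s,'q','o')?"""
    return any((s, "p", "o") in graph and (s, "q", "o") not in graph
               for (s, _, _) in graph)

g1 = {("a", "p", "o")}
g2 = g1 | {("a", "q", "o")}   # a strict superset of g1

print(ask(g1))  # True  -- the answer is a "consequence" of g1
print(ask(g2))  # False -- lost by *adding* a triple: non-monotonic
```

The query `ask` is syntactically identical in both evaluations; what changes is whether it is a consequence.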

>>> So when we start talking about monotonicity, we need to be very  
>>> careful what it means to add a triple to a graph. In RDF this is  
>>> simply conjunction, but if the language itself can talk about  
>>> these sets of triples, then adding a triple to the antecedent  
>>> graph is a lot more than a simple logical conjunction: it  
>>> involves a change to the antecedent.
>> Pat, I think you got a bit lost here. If I have the propositional  
>> formula A and A |- B and then I conjoin something to A, e.g., to  
>> form A&C, then, well, I have a change to the antecedent.  
>> Trivially, right?
> Right, but that trivial sense is not what I meant.

But, alas, that's the one that's relevant.

> My point is that if B itself refers to A,

There is no syntactic reference.

There's no implicit syntactic reference.

In propositional logic, if I have a dataset D={A, B} and I have a  
"query" ?-C? D does not entail C. D'={A, B, C&D} *does* entail C. C  
is unchanged. It is a consequent of the second, but not of the first.

Now add a NAF negation \+. D does entail \+C and D' does not. \+C is  
syntactically unchanged. It is a consequence of the first, but not of  
the second.
> and if conjoining C to A *also* changes this reference to refer to  
> (A & C) rather than A, then B has also changed. So the correct way  
> to describe the result is not (A & C) |- B but rather (A & C) |-  
> B', where B' looks very similar to B but means something subtly  
> different.

Or, as is more standard and I would say sensible, say that B is no  
longer entailed because e.g., some model of A&C doesn't model B.

>> The question is what sort of change and what happens as a result  
>> of that change. In a monotonic logic if A |- B. then so with A&C |- 
>> B. an {A,C} |- B.
> See above.

See Makinson. It might really help.

>>> This is what I meant by deleting something: again, I apologise  
>>> for the misleading carelessness in my expression.
>> Even if I conceptualize adding a conjunction as "Deleting A and  
>> then adding A&C", or "Deleting {A} and adding {A,C}" it is still  
>> the case that these formula have a specific relation which I shall  
>> imprecisely characterise as "A" is a subset of "A&C" (from the  
>> correspondence in our logics between "A&C" and {A,C}. What we  
>> don't have is any assertions of the form "C is not asserted (or  
>> implied)". That is, "A" is not shorthand for {A, C is not  
>> asserted, or implied}.
> I entirely agree. That whole line of thinking is aside from my point.

I don't see how.

>>> I think not. At the very least it would require a model theory  
>>> for epistemic reflection.
>> I suspect the sticking point is that you want to reserve  
>> "entailment" for the semantic consequence relation.
> Um, yes. I believe that is what the word means.

Well, there isn't a single semantic consequence relation. We can vary  
the one under consideration in a number of ways. However, sometimes  
it's very helpful to consider consequence relations more abstractly.  
This is quite common in non-mon logic.

>> Fine. We do have that (see Axel's paper).

Oh yeah, we have that!

>> See above. It seems very odd to think that because the *question*  
>> is more sophisticated we've changed the *data*.
> Well, I think "the data" is changed as soon as anything has the  
> ability to refer to the data and talk *about* it rather than simply  
> draw conclusions *from* it.

I don't see how this can possibly be true. I haven't changed the  
language of Emma because I ask a question about it in French. I don't  
change any plot events if I ask how many pages.

> If I can do something as simple as count the number of triples in a  
> graph, its not just RDF any more.

I'm not able to assert that in the graph. So, nothing I write in  
SPARQL can change, e.g., the satisfiability of my dataset.

SPARQL, however, is  (semantically) sensitive to features of my  
dataset that I cannot express in RDF.

Similarly, if I have an ABox:
	bob loves mary.
	bob loves john.
	john != mary.

I can conclude that bob is an instance of the expression "min 2  
loves successors". min 2 doesn't exist in RDF. The existence of min 2  
doesn't change RDF. There is, of course, a superset language that  
tells us how min 2 is sensitive to models... so?
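A sketch of the ABox example (names and the helper are illustrative; the point is that, without a unique name assumption, successors only count as distinct when their inequality is asserted or entailed):

```python
# ABox: bob loves mary. bob loves john. john != mary.
loves = {("bob", "mary"), ("bob", "john")}
different_from = {frozenset({"john", "mary"})}  # the asserted inequality

def has_min_n_successors(subject, n):
    succ = {o for (s, o) in loves if s == subject}
    # No unique name assumption: successors are only provably distinct
    # if each pair's inequality is among the asserted facts.
    pairwise_distinct = all(frozenset({a, b}) in different_from
                            for a in succ for b in succ if a != b)
    return len(succ) >= n and pairwise_distinct

print(has_min_n_successors("bob", 2))  # True
```

Drop the `john != mary` fact and the min-2 conclusion is no longer warranted, even though the loves-triples are unchanged.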

>>> Its the one where you agree that a dataset really is a *set*, not  
>>> a thing with state which can retain its identity while having  
>>> information added to it.
>> As should be clear above, no one is appealing to stateful things.
> Are you sure? Isn't the 'query set' something that can grow or  
> shrink, so not actually a set?

Saying it can grow or shrink is meant as loose talk for "replaced  
with a super or sub set".

>> The nonmonotonicity is expressed in terms of relations between  
>> *sets* not in terms of mutation.

What I said here.

>>> I dont want to make that change, but I still disagree that the  
>>> consequence relation is nonmonotonic, if you state the truth  
>>> conditions for the SPARQL conclusion precisely, and in a form  
>>> which acknowledges the specs requirements that an RDF graph (note  
>>> the singular) is a *set* of triples. And of course that adding  
>>> something to a set gives you a new set, not the original set  
>>> 'enlarged'.
>> I really don't know where you're getting that from.
> That last sentence is plain old set theory. A union B =/= A, unless  
> B is a subset of A already.

No, I don't know why you think I meant anything else. I have no idea  
why you are attributing mutation semantics to our fairly standard use  
of "grow", "shrink", and "expansion".

The reason doesn't matter. Rest assured that nothing I've written  
depends on mutable sets.

>> By "expansion" I mean the belief revision theory operation (http:// 
>> en.wikipedia.org/wiki/Belief_revision) which does not involve  
>> mutation.
> ? Of course it does. Any change to a set is a "mutation".

Mutation implies identity through change (which is what you were  
harping on). Belief sets are characterized entirely set  
theoretically. The result of an AGM expansion is (typically) a new  
set (if you add something already in the set, obviously, you'll get  
your original set).
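The point about expansion being set-theoretic rather than mutating can be shown in two lines (a toy sketch with beliefs as strings; `expand` stands in for the AGM expansion operation, here just union):

```python
# AGM-style expansion: K + phi is a *new* set; K itself is untouched.

def expand(K, phi):
    return frozenset(K) | {phi}   # union returns a fresh set

K  = frozenset({"A", "B"})
K1 = expand(K, "C")
K2 = expand(K, "A")               # expanding by an existing belief

print(K1 == K)   # False -- new information yields a different set
print(K2 == K)   # True  -- you get (a set equal to) the original
print(K)         # K is unchanged throughout: no mutation anywhere
```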

>>> But it does have its problems, such as hallucinations of  
>>> nonmonotonicity... :-)
>> I really hope at this point you see that there's no hallucination.
>> OTOH, I don't know why the question wasn't immediately settled  
>> when Axel pointed to his paper which showed an answer preserving  
>> reduction from SPARQL 1.0 to Datalog with stratified negation.
> Basically because Im not very interested in Datalog semantics, and  
> indeed feel like I ought to use scare quotes when typing that  
> phrase. But I agree this isn't a very persuasive point to make in a  
> public debate.

I don't see it as persuasive in any debate. I trust it's obvious that  
it's not a support for "you have no model theoretic semantics" to  
say "because I don't like the model theoretic semantics you use".

>> That just seems to answer every point you might make.


>> Sure, but then that's a different consequence relation. I can  
>> define consequence relations using the RDF model theory in a  
>> number of ways:
>> 	A) I can restrict the syntactic form of the consequences
>> 	B) I can change how the consequence relation is sensitive to the  
>> semantics of the graphs
>> 		E.g., finite vs. infinite models
>> 		Minimal vs. all models
>> 	C) I can extend the syntactic form of the consequences
>> 		I can make the extended syntax sensitive to various aspects of  
>> the RDF models (i.e., combine with B)
>> 			E.g., I can count successor nodes in a relational structure.
>> 			I can pay attention to certain classes of model (as above).
>> Given a "standard" consequence relation (say, simple entailment),  
>> I can use these techniques to define a supra-standard (has more  
>> consequences) or sub-standard (has fewer consequences)  
>> relation or suprasub (missing some and adding some). Furthermore,  
>> if I have a supra relation, i can change the monotonicity of the  
>> relation wrt to the added consequences while leaving the  
>> "classical" or "standard" consequences alone. See (propositional)  
>> default logic.
>> It's reasonable to regard sparql as characterizing a supra-sub  
>> relation. We restrict some consequences (e.g., so that any  
>> particular query characterizes at most a finite number of answers)  
>> and we extend others (by having a construct which is sensitive to  
>> non-entailment, e.g., !bound). Those extended consequences can  
>> increase *or decrease* when the queried against KB merely  
>> increases (that is, we replace it with a superset).
>> Thus, again, that consequence relation is non-monotonic.
> Ah, I see. True, of course, nonmonotonicity comes cheap

? This thread feels expensive.

> when you take this much freedom to tinker with consequences.

Like, the way which is completely bog standard? See wikipedia,  
Makinson, Stanford Encyclopedia of Philosophy, arbitrary logic  
programming or database books etc.

> I was, all along, intending to refer to monotonicity with respect  
> to a genuine entailment relation,

You mean a consequence relation which doesn't characterize SPARQL?  
But why would that be relevant?

(And, see Axel's paper.)

(And, of course, not everything in logic needs to correspond to  
entailment, genuine or otherwise. We can have consequence relations  
which are sub or supra relations to classical, first order entailment  
characterized consequence relations. It's just maths in the end.)

> where entailment is defined in terms of truth conditions.

Truth conditions need things which have them. I.e., sentences,  
characterized syntactically.

And see Axel's paper. !bound's semantic conditions are in terms of  
(for example) minimal models. As is no surprise, such a consequence  
relation is non-monotonic.

> Anything else is just computational hacking

Regardless of your feelings about it, I don't think you can both  
claim that you are appealing to *standard* notions of non-monotonic  
logic *and* that all the standard ways of characterizing it are just  
computational hacking.
> on top of the actual semantics (which is fine: some of my best  
> friends do it; but it isn't entailment.)

Let me end with a reflection: I totally fail to see any analytic  
benefit you get from saying this, nor any clarity or pedagogic  
benefit. Contrariwise, it seems like you have to bend yourself into  
nasty shapes to maintain this (preference model semantics aren't  
semantics (screw finite model satisfiability), antecedent and  
consequent languages must be uniform, !bound implicitly, though not  
explicitly, mentions datasets, etc.).

I don't see how you can read any papers on non-monotonic logic. At  
all. Substructural logics must hurt you as well.


(image/jpeg attachment: dproves.jpg)

Received on Thursday, 9 September 2010 18:38:02 UTC
