Re: SPARQL 1.1 Update

> Actually, I was seeing blank nodes in this sense as being similar to
> an unbound variable, rather than giving it a different meaning. After
> all, interpretations on a graph with blank nodes can associate those
> nodes with anything (so long as the graph remains consistent). So
> treating them like unbound variables just seemed like a natural
> approach to me. (it would also be trivial to implement.  :-)

Trivial so long as users cannot expect:

   DELETE {
     <x:y> <z:x> _:x .
     _:x _:z "Foo" .
   }
   WHERE {
     ...
   }

to work like a query pattern (where both occurrences of _:x unify).
Otherwise that's effectively a two-phase query...
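
(For comparison, a sketch of how a user wanting that co-reference
could express it with an explicit variable bound by the WHERE clause
rather than a blank node in the template -- ?node and ?p are my own
names, and the pattern simply reuses the terms from the example above:)

   DELETE {
     <x:y> <z:x> ?node .
     ?node ?p "Foo" .
   }
   WHERE {
     <x:y> <z:x> ?node .
     ?node ?p "Foo" .
   }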

>> I mildly favour 3.  This is (1) without the enforcement.  Parsers  
>> may choose
>> to emit a warning
>
> When viewed as "(1) without the enforcement" then I see what you're
> getting at. However, I'm less comfortable with the semantics, in that
> it implied that blank nodes refer to nothing. That feels like I'm
> skolemizing them to something that doesn't exist.

I actually think this is more consistent: all three of CONSTRUCT,
INSERT, and DELETE have the same template format and admit blank
nodes, with the effect that a fresh set of blank nodes is generated
for each row on instantiation... it's just that for DELETE this will
never cause any deletion to occur. The blank nodes refer to
*something*, just not something in the graph, which is why no
deletions occur.
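
(To make that concrete, a small sketch under that reading -- the
<x:Thing> and <x:note> terms are made up purely for illustration:)

   # INSERT: _:b is minted fresh for each solution row, so new blank
   # nodes are added to the graph
   INSERT { ?s <x:note> _:b . }
   WHERE  { ?s a <x:Thing> }

   # DELETE: the freshly minted _:b cannot denote any node already in
   # the graph, so nothing is ever deleted
   DELETE { ?s <x:note> _:b . }
   WHERE  { ?s a <x:Thing> }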

Practically speaking, then, no user will ever want to do this, but at  
least it's simple and consistent.


>>>> * Re atomicity: it would seem that, for systems which will allow  
>>>> multiple
>>>> SPARQL/Update requests within a single transaction

<snip>

>> Some terminology confusion perhaps.  A "request" is several  
>> "operations" and
>> one request is one HTTP POST.  Need a terminology section - this is  
>> still
>> outstanding from my WD comments.

I'm not confused by the terminology -- I'm asking about the situation  
where requests are not issued over HTTP, or where a system allows the  
initiation of a transaction which can encompass multiple requests.

(Think of a relational database. BEGIN TRANSACTION, then run queries,  
add data, run more queries, delete data...)

An HTTP example:

POST /transaction/begin
POST /update
POST /update
POST /transaction/commit
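
(Purely for illustration -- the endpoint and data here are invented --
each /update POST might itself carry several operations, e.g.:)

   POST /update
     INSERT DATA { <x:a> <x:p> "one" } ;
     DELETE DATA { <x:a> <x:q> "two" }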

In such a system, one would ordinarily expect the transaction as a  
whole -- which contains two Update requests, each consisting of  
multiple operations -- to be atomic; the individual Update requests  
cannot themselves be atomic without support for sub-transactions.

In an interactive or programmatic system, one might even want the  
opportunity to respond to a failure in the second update without  
automatically aborting the whole transaction.

Phrased differently... not everyone uses HTTP, and not every  
application use-case fits into a single SPARQL Update request. (What  
if you need to do some computation before deciding whether to commit  
or rollback, or before adding more data?)

I would like to see the language in the spec address this broader  
view, rather than only the narrow, trivial case of one transaction  
per request, even if it doesn't mandate a particular behavior. Simply  
specifying "a SPARQL Update request should be atomic" is  
simultaneously too restrictive for implementations and too vague to  
be useful to users.

Hope that helps clarify, and thanks for the responses, Andy and Paul.

-R

Received on Thursday, 14 January 2010 22:41:02 UTC