Re: why I don't like default graphs in the DATASET proposal

On 10/2/2011 8:18 AM, Richard Cyganiak wrote:
> On 1 Oct 2011, at 17:53, Ian Davis wrote:
>>>> But do you have a use case that would be solved by a dataset
>>>> with default graph, that a dataset *without* default graph
>>>> would *not* solve?
>>>
>>> Backing up the contents of a SPARQL store as a dump, and loading
>>> it into a different SPARQL store.
>>
>> Where is SPARQL store defined?
>
> A store that supports SPARQL. Hence its data model is an RDF
> dataset.

Anzo supports SPARQL, but doesn't really use the RDF dataset as its
model: there's no default graph in the store, only named graphs.

The default graph is assembled dynamically for each query (from the
query's FROM clauses or the protocol's default-graph-uri parameters);
if no default graph is specified in the query or protocol, the query is
executed against an empty default graph. So the model we use is isomorphic to an RDF dataset
with an empty default graph, I suppose, but when we serialize an entire
Anzo store to TriG we don't explicitly include an empty default graph.
(Though I don't suppose that really matters...)
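To make that concrete, a TriG dump of such a store contains only
labeled graph blocks and no unlabeled default-graph block. A sketch
(the graph names and triples here are made up for illustration):

```trig
# Dump of a named-graphs-only store: every triple lives inside a
# labeled graph block, and no default-graph block is emitted.
# (All IRIs below are hypothetical.)
@prefix ex: <http://example.org/> .

ex:graph1 {
    ex:alice ex:knows ex:bob .
}

ex:graph2 {
    ex:bob ex:name "Bob" .
}
```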

>> The reality is that most graph stores have names for all the
>> graphs but designate one as the unnamed one for the purposes of
>> SPARQL.
>
> That's not true. Many stores have the union of all graphs in the
> default graph. In this common case the default graph isn't just a
> named graph designated as the default.

Or, in our case, have no default graph at all.

> Also, SPARQL conformance doesn't require the model you describe.

Right.

> What's more important: If the default graph isn't marked somehow in
> the dump file on export, then there's no way for the importing store
> to tell which of the named graphs it's supposed to use as the
> default.
>
> That's why a dump format for SPARQL stores needs a marker for the
> default graph.

SPARQL 1.1 Update calls this a "graph store", by the way.
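(For illustration, TriG would give you a natural marker of that sort:
triples outside any labeled block belong to the default graph. A
hypothetical dump with a marked default graph might look like this --
again, all IRIs are made up:)

```trig
# Hypothetical dump where the default graph is "marked" simply by
# appearing as an unlabeled top-level block, alongside named graphs.
@prefix ex: <http://example.org/> .

{
    ex:alice ex:knows ex:bob .
}

ex:graph1 {
    ex:bob ex:name "Bob" .
}
```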

Lee

> Best, Richard
>

Received on Sunday, 2 October 2011 15:16:17 UTC