Re: Putting metadata in the "default" graph Re: Dataset Syntax - checking for consensus

On Sep 26, 2012, at 13:35, Lee Feigenbaum <lee@thefigtrees.net> wrote:

> On 9/26/2012 1:25 PM, David Wood wrote:
>> Hi Lee,
>> 
>> On Sep 26, 2012, at 12:41, Lee Feigenbaum <lee@thefigtrees.net
>> <mailto:lee@thefigtrees.net>> wrote:
>> 
>>> On 9/26/2012 12:09 PM, David Wood wrote:
>>>> * Some designs for carrying metadata
>>>> 
>>>> PROPOSED: In our dataset syntax, we'll say that metadata goes in the default graph
>>>> +0.5, especially if it can be aligned with SPARQL service descriptions.
>>>> 
>>>> 
>>> 
>>> What do existing systems do when importing a TriG file that contains
>>> data in the "default" graph? The Anzo store has /no default graph/,
>>> and therefore either throws an error or throws away any information in
>>> a TriG "default graph". Similarly, all TriG exported from Anzo does
>>> not have a default graph /unless /it's the serialization of a SPARQL
>>> RDF dataset (which /by definition/ does have a default graph, of course).
>>> 
>>> I bring this up because I brought up a related thread on
>>> public-sparql-dev recently:
>>> 
>>> http://lists.w3.org/Archives/Public/public-sparql-dev/2012JulSep/0025.html
>>> 
>>> In that thread, I asked:
>>> 
>>> """
>>> Do all quad stores / named graph stores include a default graph? If
>>> the store that you develop or use does have a default graph, does that
>>> graph also have a name (URI)?
>>> """
>>> 
>>> The answers were:
>>> 
>>> Anzo -- no default graph (except ones assembled on the fly for querying).
>>> OWLIM -- has a default graph with no URI
>>> RDF::Query -- has a default graph with no URI
>>> 4store & 5store -- default graph is a view on existing graphs (&
>>> therefore, I assume, doesn't exist for purposes of /storing/ data) --
>>> uses a "special" named graph for writing default data
>>> TDB -- can either have an actual default graph or just use the default
>>> graph as a view onto the other named graphs
>>> 
>>> Additionally, there was input from 3 implementers (SteveH, GregW, and
>>> Chime) that if they could re-implement their systems they would not
>>> include a default/unnamed graph.
>>> 
>>> All of which is to say, I think there's a fair amount of evidence that
>>> the "default" or unnamed graph is not consistently used, and perhaps
>>> not widely used. We need to support it for compatibility, but I think
>>> it's a mistake to specify that anything important be put in that graph.
>> 
>> I certainly agree that default graphs are used in inconsistent ways in
>> existing systems and that a default graph in a system backing an HTTP
>> or SPARQL endpoint is generally a bad idea.  However,
>> that is not what is being proposed.
>> 
>> The proposal deals with syntax within a TriG-like document, specifically
>> where metadata describing a graph goes.
> 
> Isn't it about where metadata describing the _dataset_ goes?

Yes, apologies.  I just violated my own rule about never using the term 'graph' ;)  It was unintentional.

> 
>> It does not suggest where that
>> metadata would be stored in a system that parses such a document.  It
>> most certainly does not suggest that metadata in a default graph in a
>> TriG-like document be put in the default graph in a system that parses
>> such a document.
> 
> Well, but what I'm saying is that data in the named graph parts of a TriG document is going to be used pretty consistently when put into a store. If I take a TriG doc and load it into my favorite store, I can pretty predictably figure out how to access the data from the named graphs via SPARQL. I can't do the same thing with data that was in the unnamed TriG graph.
> 
> So, given the choice, I'd rather put information into named graphs in the TriG doc than into the unnamed graph. I'd even rather invent something entirely new (like a @meta directive) such that I can specify what ought to be done with it rather than rely on the inconsistently used default/unnamed graph.

I agree completely.  Eventually, we need to figure out how to provide guidance to the developer community about where they (we) should store metadata about datasets found in a TriG-like document.  That guidance will be useless unless we can SPARQL the results (a point Andy made first).
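For concreteness, here is a hedged sketch of the two placements under discussion (all prefixes, graph names, and URIs below are illustrative, not anything the group has agreed on). The first form puts dataset metadata in the TriG default graph, per the proposal; the second puts it in a dedicated named graph:

```trig
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex: <http://example.org/> .

# Option 1: dataset metadata in the unnamed default graph (the proposal).
# Stores without a default graph (e.g. Anzo) may drop or reject this.
ex:dataset1 dcterms:creator ex:alice ;
            dcterms:modified "2012-09-26"^^<http://www.w3.org/2001/XMLSchema#date> .

# The data itself lives in named graphs.
ex:graph1 {
    ex:subject ex:predicate ex:object .
}

# Option 2: the same metadata in a named graph (here ex:metadata,
# a hypothetical name), which survives loading into any quad store.
ex:metadata {
    ex:dataset1 dcterms:creator ex:alice .
}
```

Lee's queryability point is then that only option 2 is predictably reachable after loading, e.g.:

```sparql
# Retrieve the dataset metadata from the (illustrative) named graph.
SELECT ?p ?o
WHERE {
  GRAPH <http://example.org/metadata> {
    <http://example.org/dataset1> ?p ?o .
  }
}
```

With option 1, where the default-graph triples end up (if anywhere) depends entirely on the store's treatment of the unnamed graph.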

Regards,
Dave


> 
> Lee
> 
> 
>> It would be nice (IMO) if such systems had a standard place to describe
>> the graphs that they ingest.  That could happen via server-side
>> implementations of SPARQL service descriptions or in some other way.
>>  Right now, it happens in a wide variety of ways, many of which are
>> out-of-band to an underlying RDF store.  Clearly that is an area ripe
>> for standardization since almost everyone does it, but in non-standard ways.
>> 
>> Regards,
>> Dave
>> 
>> 
>> 
>>> 
>>> Lee
>>> 
>> 
> 

Received on Wednesday, 26 September 2012 17:41:53 UTC