Re: SPARQL Protocol for RDF

> I'm not entirely sure what you mean here. DESCRIBE allows you
> to "point at some URI" for a description, but you can't ever
> be sure what an arbitrary SPARQL processor will return for
> such a description.

I meant being able to point at a URI describing what kind of description 
is either requested or given.
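
For concreteness, a sketch of what I mean (the USING clause below is purely hypothetical, not part of any SPARQL draft):

```sparql
# Today: the processor decides what a "description" is.
# One store may return a CBD, another only the outgoing arcs.
DESCRIBE <http://example.org/people#giovanni>

# What I am after: being able to name the kind of description,
# e.g. something like (hypothetical syntax, not in any draft):
#
#   DESCRIBE <http://example.org/people#giovanni>
#     USING <http://example.org/descriptionForms#CBD>
```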

>
> Well, I think that anyone who has had any experience with SGML
> can tell you about what "optional features" can do to the adoption
> and consistent implementation of a standard.

People need lists; I see no way around them.
Named graphs, on the other hand, would have been the perfect topic for 
the optional section of the language.

> I'd rather just leave such "optional features" out of the standard,
> and make it clear that individual implementations are welcome to
> include value added functionality, and if the industry converges

Mm, I am led to think differently.
Individual implementations (say, Oracle's) are welcome to make certain 
features EFFICIENT, so that they win the enterprise markets while 
researchers etc. stick with the slow free versions.
But if fundamental features are not in the standard, and a number of 
real-world tasks make people go for proprietary versions, well, I think 
this fails the task altogether. People will end up writing different 
surrounding code etc.

If the "difficult to implement efficiently" features had been 
standardized, on the other hand, it would have been just a matter of 
performance: like I said, slow for the free versions, fast for the 
commercial ones.


> Also, note that the recursive functionality needed to obtain CBDs
> can also be provided by a rule layer working in conjunction with
> SPARQL, and might very well be better addressed at such a layer.
>
An external function is more like it.

>>
>> In a partly unrelated matter, does anyone know how one can cope in
>> SPARQL with more than one context?
>> Say one uses NGs to indicate who the author is. Ok.. then after some
>> time one also wants to distinguish between the "red" triples and the
>> "blue" ones, or other facets such as the original site where they
>> were posted, etc. Should one exponentially multiply the number of
>> named graphs (creating new graphs like fromGiovanni_red,
>> fromGiovanni_blue: (facet values)^(number of facets)), make a number
>> of graphs equal to the number of triples (in the case of fuzzy trust
>> values, for example), or simply duplicate triples once per facet?
>> (The same triple should appear in the giovanni graph AND in the red
>> graph, and of course I should remember to delete it from the red
>> graph when giovanni revokes it as well.)
>> is there a best practices suggestion for this already?
>
>
> I myself have no hard experience in this area, but for what it's
> worth, if I were approaching this problem tomorrow, I would maintain
> named graphs according to the source/management of the data, and
> as needed, infer other graphs (various intersections) by rules
> or other machinery.

By using graph names as sources you're sticking to one arbitrary 
possible context.
In our case (DBin) this was of no use, since information is freely 
replicated along the network: it doesn't really matter where you got a 
triple from, but rather who wrote it to begin with. That's why we came 
up with context attached at a lower level (the MSG), with the signature 
sticking there, so that context travels with the triples as they move 
through the P2P system. Context is not lost; it is a local property of 
the triples.
See our recently released RDF Context Tools: 
http://www.dbin.org/RDFContextTools.php. They are slow, but they 
deliver what at least our scenario seems to need: real context attached 
close to the statements.
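
Going back to the facet question: for what it's worth, the source-graphs-plus-inferred-intersections approach can at least be expressed directly in SPARQL, without materializing a combined fromGiovanni_red graph (all graph names below are hypothetical):

```sparql
PREFIX ex: <http://example.org/graphs#>

# Select the triples that appear in BOTH facet graphs:
# the "who said it" graph and the "red" graph. The shared
# variables join across the two GRAPH clauses.
SELECT ?s ?p ?o
WHERE {
  GRAPH ex:fromGiovanni { ?s ?p ?o } .
  GRAPH ex:red          { ?s ?p ?o }
}
```

This avoids the (facet values)^(number of facets) explosion at query time, though the revocation bookkeeping remains the application's problem.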

> Thus, most/all of the above would be functionality I would
> encapsulate in an API/Toolkit for working with named graphs,
> and not try to capture explicitly/persistently in the
> knowledgebase.
>

Information and context information have to live in the same 
knowledgebase, and there must be a way to dump it all in a single RDF 
file. In the case of NGs, I believe the suggestion is to use zip files 
containing a bunch of RDF files plus some XML to merge it all, if I am 
not mistaken.
Without clearly defining what the state is and where it resides, basic 
software engineering principles are violated.
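
For the single-file case, one candidate I am aware of is the TriG syntax proposed by Bizer and Cyganiak; whether it fits the NG recommendation is my assumption. A sketch, with made-up prefixes and graph names:

```trig
# One file holding several named graphs, each with its own triples
# (TriG syntax; prefixes and graph names are for illustration only).
@prefix ex:   <http://example.org/graphs#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

ex:fromGiovanni {
    <http://example.org/people#giovanni> foaf:knows
        <http://example.org/people#paolo> .
}

ex:red {
    # The same triple duplicated under a second facet, as discussed above.
    <http://example.org/people#giovanni> foaf:knows
        <http://example.org/people#paolo> .
}
```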

>
> sufficient deployment and experience with SPARQL services,
> that the industry at large will appreciate the broad utility
> of CBDs and even standardize on that form of description
> as a default response to DESCRIBE. We'll see...
>
> .......

>
> It's not either-or. It's which (or both) are best for a given
> application.


Yes, I agree it will work out eventually. I am also sure that NGs will 
find their proper use, but time and consensus are needed. Or maybe not, 
since they're being put at the core of the first official RDF query 
language?

happy vacation! :-)

Received on Saturday, 4 June 2005 23:49:50 UTC