RE: Semantic content negotiation (was Re: expectations of vocabulary)

--Reto, 

> > The HTTP protocol is not designed to do content partition.  
> Not sure, after all we have Byte Ranges for binary content.

Byte ranges exist for the purpose of transportation.  Again, let's use UPS
as an example: a byte range is like saying, "O.K., give me the first three
packages first," but not "give me all the packages that contain books,"
because what is inside a package is irrelevant to the carrier's task.
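To make the package analogy concrete: a byte range slices the serialized bytes with no regard for what they mean, whereas selecting "the packages that contain books" requires looking at the content. A small sketch (the Turtle document below is made up for illustration):

```python
# A byte range is a transport-level slice: it cuts the payload at byte
# boundaries, with no regard for what the bytes mean.
doc = (
    '@prefix : <http://example.org/ns#> .\n'
    ':a :category "dog" .\n'
    ':a :category "house" .\n'
)
payload = doc.encode("utf-8")

# What "Range: bytes=0-39" would return: an arbitrary 40-byte prefix,
# quite possibly cutting a statement in half.
prefix = payload[:40]

# Selecting every statement that uses :category, by contrast, requires
# understanding the content itself -- something byte ranges cannot do.
wanted = [line for line in doc.splitlines() if ":category" in line]

print(prefix)
print(wanted)
```

The point of the sketch is only that the first operation is blind to the triples while the second must parse them.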

> It's not about (arbitrary) queries but about selecting 
> the representation. Representation of a resource may be 
> incomplete, so if G is an appropriate representation of R 
> every subgraph of G is as well.

What is the definition of "appropriate"?  By that logic, is a picture of
your forehead an appropriate representation of your face?  Selecting a
partial representation is a totally different matter from selecting among
different kinds of representation.

Of course, in an open world, no one has complete information about a
resource, so any RDF graph is only a "partial" representation of the
resource.  However, whether this "partial" information is a complete or
adequate representation of the resource is the choice of the resource
owner, who must understand the consequences of that choice.

Take an HTML page as an example.  If I want to put a long article on the
web, I can publish it in one of two ways:

1. As one big HTML page under one URI.
2. As multiple pages, each of which has its own URI.

It is up to me to weigh the tradeoffs and make the appropriate decision.
It is an engineering/design issue, not a transportation issue.

> > I am not sure if I understand you.  What you suggested seems more
> > expensive.  If the eventual RDF set were made from vocabularies of n
> > different namespaces, the inference will be conducted at n places,
> > instead of once (at the client side).  Inference is not exactly a
> > cheap process.  I am not sure where you are going with this.
> Let's take Henry's example.
> 
> The server has the following 7 base triples:
> 
> <> a :CategoryList;
>    :category [ :scheme <http://eg.com/cats/>;
>                :term "dog" ];
>    :category [ :scheme <http://eg.com/cats/>;
>                :term "house" ].
> 
> 
> Doing OWL inference, it can infer the following additional 7 triples:
> 
> 
> <> a :McDonaldCategoryList;
>    :McCategory [ :McScheme <http://eg.com/cats/>;
>                :McTerm "dog" ];
>    :McCategory [ :McScheme <http://eg.com/cats/>;
>                :McTerm "house" ].
> 
> To allow clients knowing either of the two ontologies to
> understand the response, the server would deliver all 14
> triples. If requests carried an Accept-Vocabulary header, the
> client could in this case avoid getting too many redundant
> triples. Of course, the simple Accept-Vocabulary header doesn't
> solve the problem of distributing inference in the general case;
> whether inferred triples are valuable to the client or simply
> redundant depends on the client's capabilities and on the
> CPU/bandwidth ratio, and additional headers could give the
> server some hints.

I am still not sure I understand you.  What is the relationship between
:CategoryList and :McDonaldCategoryList?  Are they under the same namespace?
If so, how are they related to each other, and why would someone build two
sets of vocabularies to describe the same thing?

Or are they supposed to be under different namespaces?  If so, the server
you have in mind is more of an "ontology warehouse" than a plain ontology
server.  I am still not clear on what you intend to do.
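For what it is worth, here is how I read the proposal: the server holds all 14 triples (7 base plus 7 inferred) and filters them by the vocabulary the client declares.  A minimal sketch in Python, with made-up namespace URIs and a toy mapping standing in for the OWL inference step:

```python
# Hypothetical vocabulary namespaces -- placeholders, not real URIs.
BASE = "http://example.org/base#"
MC = "http://example.org/mc#"

# The 7 base triples from Henry's example, as plain (s, p, o) tuples.
base_triples = [
    ("<doc>", "rdf:type", BASE + "CategoryList"),
    ("<doc>", BASE + "category", "_:b1"),
    ("_:b1", BASE + "scheme", "<http://eg.com/cats/>"),
    ("_:b1", BASE + "term", '"dog"'),
    ("<doc>", BASE + "category", "_:b2"),
    ("_:b2", BASE + "scheme", "<http://eg.com/cats/>"),
    ("_:b2", BASE + "term", '"house"'),
]

def infer_mc(triples):
    """Toy stand-in for OWL inference: map each base term to its Mc twin."""
    mapping = {
        BASE + "CategoryList": MC + "McDonaldCategoryList",
        BASE + "category": MC + "McCategory",
        BASE + "scheme": MC + "McScheme",
        BASE + "term": MC + "McTerm",
    }
    return [(s, mapping.get(p, p), mapping.get(o, o)) for s, p, o in triples]

# The server's full graph: 7 base triples + 7 inferred ones = 14.
graph = base_triples + infer_mc(base_triples)

def filter_by_vocabulary(triples, ns):
    """Keep only triples whose predicate or object uses the requested
    vocabulary, as a server honoring Accept-Vocabulary might."""
    return [t for t in triples if t[1].startswith(ns) or t[2].startswith(ns)]

print(len(graph))
print(len(filter_by_vocabulary(graph, BASE)))
print(len(filter_by_vocabulary(graph, MC)))
```

On this reading, a client sending something like `Accept-Vocabulary: <http://example.org/mc#>` would receive only the 7 Mc triples instead of all 14 -- but again, both the header and the namespaces here are assumptions on my part.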

Xiaoshu

Received on Wednesday, 26 July 2006 13:43:38 UTC