Re: Semantic content negotiation (was Re: expectations of vocabulary)

Xiaoshu Wang wrote:
>> The feature is hardly implementable with traditional
>> file-based web servers, but what's the trade-off? They may
>> ignore the Accept-Vocabulary header, as most web servers ignore
>> the Accept and Accept-Language headers.
>>     
>
> The HTTP protocol is not designed to do content partitioning.
Not sure; after all, we have byte ranges for binary content.
> Of course, it
> cannot carry out the task.  As I wrote in the latter part of that message,
> the trade-off is breaking the orthogonality of protocols by asking a
> transport protocol to perform queries.
>   
It's not about (arbitrary) queries but about selecting the
representation. A representation of a resource may be incomplete, so if
G is an appropriate representation of R, every subgraph of G is as
well. Media types seem unsuitable for selecting the most appropriate
graph representation.
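
To make this concrete, a request carrying the proposed header might look
like the following sketch (the Accept-Vocabulary syntax is hypothetical,
here simply a comma-separated list of ontology namespace URIs the client
understands; FOAF and Dublin Core serve merely as example vocabularies):

GET / HTTP/1.1
Host: example.org
Accept: application/x-turtle
Accept-Vocabulary: http://xmlns.com/foaf/0.1/, http://purl.org/dc/elements/1.1/
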
> The difference between the Accept header and Accept-Vocabulary is that the
> server can ignore the former but not the latter.  If a client tries to get a
> JPEG image but gets back a PNG instead, the client can still figure it out
> due to the different MIME type returned.  How can a client know if the
> returned RDF graph is what it wanted if the server has the option to ignore
> the Accept-Vocabulary?
>   
The client may get more triples, but what it wants is always part of the
response. While it could be a MUST-level requirement for clients using
the Accept-Vocabulary header to accept arbitrary additional triples,
things are more difficult for the Accept header, as a response in a
non-acceptable media type may not be interpretable at all. Furthermore,
the server is free to return a 406 response or a response that doesn't
match the Accept header.
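
For example (a sketch with made-up data): a client that announced only
FOAF in Accept-Vocabulary might still receive a graph like

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/vocab#> .

<http://example.org/alice> a foaf:Person ;
    foaf:name "Alice" ;
    ex:shoeSize "42" .

The FOAF subgraph it asked for is fully contained in the response; the
ex:shoeSize triple can simply be skipped. A server ignoring the Accept
header, by contrast, might answer with text/html, from which the client
cannot extract any triples at all.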
>   
>> If you send the following HTTP-Request to dannyayers.com:
>>
>> GET / HTTP/1.1
>> Host: dannyayers.com
>> Accept: application/x-turtle
>> Accept-Language: en
>>
>> You'll get a lot of triples your client probably can't deal
>> with. If Danny turns on inference on the server, you would get
>> many triples the client could infer itself; as more RDF is
>> transferred over HTTP, plain serialization negotiation will no
>> longer be enough.
>>     
>
> I am not sure if I understand you.  What you suggested seems more expensive.
> If the eventual RDF set were made from vocabularies of n different
> namespaces, the inference would be conducted at n places, instead of once
> (at the client side).  Inference is not exactly a cheap process.  I am not
> sure where you are going with this.
Let's take Henry's example.

The server has the following 7 base triples:

<> a :CategoryList;
   :category [ :scheme <http://eg.com/cats/>;
               :term "dog" ];
   :category [ :scheme <http://eg.com/cats/>;
               :term "house" ].


Doing OWL inference, it can infer the following 7 additional triples:

<> a :McDonaldCategoryList;
   :McCategory [ :McScheme <http://eg.com/cats/>;
                 :McTerm "dog" ];
   :McCategory [ :McScheme <http://eg.com/cats/>;
                 :McTerm "house" ].

To allow clients that know either of the two ontologies to understand
the response, the server would have to deliver all 14 triples. If the
request carried an Accept-Vocabulary header, the client could in this
case avoid receiving redundant triples. Of course, the simple
Accept-Vocabulary header doesn't solve the problem of distributing
inference in the general case: whether inferred triples are valuable to
the client or merely redundant depends on the client's capabilities and
on the CPU/bandwidth ratio; additional headers could give the server
some hints.
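
As a sketch of how this could look for Henry's example (both the header
syntax and the ontology namespace URI are hypothetical), a client that
only understands the base ontology could send

GET /categories HTTP/1.1
Host: example.org
Accept: application/x-turtle
Accept-Vocabulary: http://example.org/category-ont#

and the server could answer with just the 7 base triples, while a
client advertising only the McDonald ontology would receive the 7
inferred triples instead.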

Reto

Received on Wednesday, 26 July 2006 08:52:25 UTC