Re: Semantic content negotiation (was Re: expectations of vocabulary)

Xiaoshu,

continuing the discussion, as I think the issue is crucial for
implementing software that fills the gap between the traditional and the
semantic web.
>> I implemented exactly this in KnoBot [1]: When requesting a
>> page with multiple articles, where not all articles are
>> available in the same set of languages, a polyglot user with
>> multiple languages in the Accept-Language header may get a
>> page with the navigation and some articles in his preferred
>> language and other articles in a language with a lower q-value [2].
>>     
>>> [...]
>>>       
>
> Reto, I never said you cannot.  But it won't be a standard practice.  It is
> a special application logic that does not apply to general cases. Agree?
>   
Why? The situation of partially translated content and polyglot users is
very common outside the US. Search engine result pages are only one
example.
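With plain HTTP/1.1 language negotiation (RFC 2616, section 14.4) such a
polyglot client sends a single header listing its languages in descending
preference, for example (values made up):

    Accept-Language: rm, de;q=0.7, en;q=0.3

The server then picks, per article, the best variant it has - nothing
beyond standard content negotiation is needed on the wire.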
> Besides, your article is a collection of articles, so that logic can apply.
> If I have a sentence, "Hello, [Xiaoshu]", with [Xiaoshu] in Chinese,
> returning only "Hello" according to Accept-Language: en would be
> outrageously wrong.
>   
That's right, but it's not an issue for subgraphs: according to
RDF Semantics, a graph entails all of its subgraphs.
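To make that concrete (URIs and names made up): the graph

    _:joe foaf:name "Joe" .
    _:joe vcard:ADR _:adr .

simply entails its subgraph

    _:joe foaf:name "Joe" .

so a response reduced to the triples the client can use is still entailed
by the full graph - quite unlike chopping words out of a sentence.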
>   
> [...]
> Again, take the following example: dereferencing a URI would return RDF
> like the following,
>
> <> n1:x1 n2:x2 .
> n2:x2 n3:x3 n4:x4 .
> n4:x4 n5:x5 ...
> ...
>
> If each ontology/namespace has at least one alternative ontology/namespace,
> think about how you are going to design the header to handle the n^2
> possibilities.
>   
Again, the situation is the same as for Accept and Accept-Language: if
you think the average semantic web application will understand a vastly
larger number of vocabularies than the average traditional web
application understands mime-types, then this has to be taken into
account when defining such an extension. Note also that the client only
lists the n vocabularies it accepts, not combinations; matching is the
server's job, just as with language tags (see the sketch below). And I
don't see why you assume the number of supported vocabularies would be
that high - I understood you earlier to assume such a high convergence
of vocabularies that content negotiation is not an issue (everybody
speaks FOAF, the rest is exotic) - puzzled.
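To sketch what I have in mind (the Accept-Vocabulary header is
hypothetical, its name and syntax made up for illustration): the client
lists the n vocabularies it understands, with q-values, and the server
filters against that list - no combinations ever need to be enumerated:

    GET /foo HTTP/1.1
    Host: eg.com
    Accept: application/rdf+xml
    Accept-Vocabulary: http://xmlns.com/foaf/0.1/;q=1.0,
     http://bar.com/newfoaf#;q=0.5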
>> A too narrow notion of the range of HTTP URIs as files or sets of files
>> doesn't make it a good basis for RESTful semantic web applications. The
>> HTTP namespace names things like ontologies, for which the existing
>> content negotiation mechanisms are extremely limited.
>>     
>
> You can design a hammer not limited to be just a hammer. You can "extend" it
> to make it a saw, and a drill, and a screwdriver, and a stop watch, and a
> radio.  The end result is that this super tool won't do any one thing
> well.  HTTP is limited because it is designed to be so.  That is not a
> valid argument.  Once again, take my example use case as an example, and
> propose a solution and see (1) if it is practical and (2) how it won't
> overlap with SPARQL?
>   
Your general argument applies much more to the gap from HTTP 1.0 to
1.1. HTTP has content negotiation that works for documents and for
natural languages but doesn't work for graphs. Of course you may say
"invent a Graph-Transfer-Protocol for this", but I think it's better to
see the semantic web as an extension of the current web and not as
something which is merely tunnelled over it.

SPARQL is a very versatile query language, but hey, everything that can
be done over HTTP could also be done with remote SQL queries. The point
of vocabulary negotiation is to have a way to dereference resources best
represented by graphs as easily as we dereference resources represented
by a set of documents.

As mentioned earlier, SPARQL requires the server to handle operations of
non-polynomial complexity, while the extension requires only some triple
filtering, which is feasible in linear time.
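A minimal sketch of that server-side filtering with Jena (the accepted
namespaces would come from the hypothetical Accept-Vocabulary header; one
linear pass over the statements, no joins or graph pattern matching):

    import com.hp.hpl.jena.rdf.model.*;
    import java.util.Set;

    class VocabularyFilter {
        // Keep only statements whose predicate is in an accepted vocabulary.
        static Model filterByVocabulary(Model full, Set<String> acceptedNamespaces) {
            Model out = ModelFactory.createDefaultModel();
            StmtIterator it = full.listStatements();
            while (it.hasNext()) {
                Statement stmt = it.nextStatement();
                if (acceptedNamespaces.contains(stmt.getPredicate().getNameSpace())) {
                    out.add(stmt);
                }
            }
            return out; // a subgraph of full, hence entailed by it
        }
    }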

> If you have a limited use case, like the Atom/RSS cases, go ahead and specify
> your own convention.  But I don't think it is worth considering for more
> general cases.
>   
No, it was just an example: if you look at existing ontologies you'll
see that overlapping ontologies are the rule rather than the exception
(try searching for calendaring, geography, picture or news ontologies).
 
>> But synonyms are only one case. Say I choose to use more specific
>> subproperties of foaf:knows to describe social relations: while I'm
>> confident that the trendy ontology I'm using will soon be widespread,
>> my server uses inference to add inferred statements with
>> foaf:knows. In this case either the statements with foaf:knows or the
>> ones using the subproperty will not be of any use to the client;
>> vocabulary negotiation could avoid this.
>>     
>
> If you invent a new property, say new:knowsWell, and you have a statement
> from
>
> http://eg.com/foo
>
> _:x http://bar.com/newfoaf#knowsWell _:y .
>
> The agent that takes this statement should further dereference
> http://bar.com/newfoaf#knowsWell and will probably get back a statement
> like
>
> http://bar.com/newfoaf#knowsWell rdfs:subPropertyOf foaf:knows .
>
> It is not at http://eg.com/foo where the inference is done.
>   
Sorry, that's nonsense. Property URIs are not necessarily
dereferenceable, and whatever graph representation happens to be
available may or may not contain that statement. Do you know of any FOAF
client behaving as you suggest it should? I don't, and I know I wouldn't
want to install one on my resource-limited mobile phone.

Even if your solution (inference is the business of the client,
properties must be dereferenceable, clients should dereference
recursively until nothing more can be understood) were indeed
implemented, it addresses the issue of unnecessarily transferred triples
only partially: what if the client is only interested in social
relations but cannot do anything with the postal address that is part of
the abstract notion of a personal profile?
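With negotiation, by contrast, the server simply serves the variant
matching the client's header (again using the hypothetical
Accept-Vocabulary): a client sending

    Accept-Vocabulary: http://xmlns.com/foaf/0.1/

gets

    _:x foaf:knows _:y .

while a client that also accepts the trendy vocabulary gets the more
specific

    _:x new:knowsWell _:y .

and neither pays for triples - or for whole vocabularies, like postal
addresses - that it cannot use.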
> Please, let's stop this subject.  
>   
There's no need for a consensus on this ;-)
> I think I have expressed enough of my
> concern.  If you think you can find a general solution to the general use
> case I gave above, make your proposal.
>   
Reading your mail, I'm not sure where the general use case is.

Received on Friday, 28 July 2006 19:34:55 UTC