
RE: Semantic content negotiation (was Re: expectations of vocabulary)

From: Xiaoshu Wang <wangxiao@musc.edu>
Date: Sat, 29 Jul 2006 23:52:51 -0400
To: "'Reto Bachmann-Gmür'" <reto@gmuer.ch>
Cc: "'Semantic Web'" <semantic-web@w3.org>
Message-ID: <001a01c6b38b$a1633fa0$0a241780@bioxiao>

> continuing the discussion,  

O.K. But I will take it off the HCLSIG list.

> That's right, but not an issue for subgraphs as according to 
> RDF-Semantics a graph entails all of its subgraphs.

That is the problem.  Two translations of the same article are different from
two parts of the same article.

> > Again, take the following example, dereference a URI would 
> return an 
> > RDF of the following,
> >
> > <> n1:x1 n2:x2 .
> > n2:x2 n3:x3 n4:x4 .
> > n4:x4 n5:x5 ...
> > ...
> >
> > If each ontology/namespace has at least an alternative 
> > ontology/namespace, think about how you are going to make 
> the header 
> > to handle n^2 possibilities.

This is the general use case I was talking about, because in theory an RDF
description can draw on vocabularies from many different ontologies.

> Again, the situation is identical as for accept and 
> Accept-language, if you think the average semantic web 
> application will understand a massively higher amount of 
> vocabularies than the average traditional web application 
> understands mime-types then this has to be taken into account 
> when defining such an extension. 

Not really identical.  Think about those who specified the HTTP protocol:
what they imagined is multiple files lying on the server, with the HTTP
server picking the most appropriate one and returning it according to the
Accept header.  But even with one RDF document composed of four ontologies,
each of which has an alternative, there would be 2^4 = 16 versions of it.
This would put a huge burden on the ontology developer.  I don't think
anyone, myself included, would honor that.
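To make the combinatorics concrete, here is a toy sketch (my own, not from any spec): if a server pre-builds one file per combination of original/alternative vocabularies, the number of files doubles with every ontology that has an alternative.

```python
# Toy illustration: a server that stores pre-built representations needs
# one file per combination of {original, alternative} for each ontology
# used in the document.
from itertools import product

def representation_count(ontologies):
    """Number of pre-built files if every ontology has one alternative."""
    # Each ontology independently appears in original or alternative form.
    return len(list(product(*[("orig", "alt") for _ in ontologies])))

print(representation_count(["n1", "n2", "n3", "n4"]))  # 2**4 = 16
```

Four ontologies already mean sixteen server-side variants, which is the burden I am objecting to.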

> > If you have a limited usecase, like the Atom/RSS cases, go 
> ahead and 
> > specify your own convention.  But I don't think it is worthy 
> > consideration for more general cases.
> >   
> No it was just an example, if you look at the existing 
> ontologies you'll see that overlapping ontologies are more 
> the rule than the exception (try looking for calendaring, 
> geography, picture or news ontologies).

People who develop ontologies haven't really thought about how they are
going to be shared yet.  But this is an engineering issue (I would have a
lot to say about this but will skip it here).  However, overlapping
ontologies are a totally different and more difficult problem than yours,
which is alternative ontologies.

For alternative ontologies, let them fight.  If both survive, each ontology
will provide a mapping, or a third-party mapping ontology will appear.  I
don't think it is a good idea to put the burden on the ontology developer.
It is cheaper and more efficient that way too, right?  I.e., having o1, o2,
mapping_o1_o2, plus each description written in either o1 or o2, compared
to o1, o2, and every description written in both o1 and o2.

> > http://eg.com/foo
> >
> > _:x http://bar.com/newfoaf#knowsWell _:y .
> >
> > The agent takes this statement should further dereference the 
> > http://bar.com/newfoaf#knowsWell and probably returns back 
> a statement 
> > like,
> >
> > http://bar.com/newfoaf#knowsWell rdfs:subPropertyOf foaf:knows .
> >
> > It is not at http://eg.com/foo where the inference is done.
> >   
> Sorry, that's nonsense. Not only property URI's are not 
> necessarily dereferenceable and the possibly available graph 
> representation may or may not contain that statement - do you 
> know about any FOAF client behaving as you're suggesting it 
> should? I don't, and I know I wouldn't want to install it on 
> my mobile phone of limited resources.

Are you sure you are talking about RDF? The only things that are not
dereferenceable are literal values, because they are not URIs.  But a
literal can only be an object, never a subject or a property.

I don't know too much about FOAF.  But I do know FOAF does not deploy its
ontology at its namespace (is that why you said a property is not
necessarily dereferenceable?), and I think this is a very bad practice.
Because it assumes an agent has preexisting knowledge in order to work with
something, it makes the open world somewhat closed.  IMHO, it is bad, very
bad.

But if, after a few years of competition, two competing ontologies both
survive, they will provide mappings.  Even if they don't, there will be a
mapping ontology somewhere else, and the agent should know about it.  Even
on the client side, one can point to the mapping ontology from one's own
RDF.  But why should one have to write one's description in multiple
versions?
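The client-side inference I described earlier (dereference an unknown property URI, find an rdfs:subPropertyOf mapping, entail the familiar triple) can be sketched in plain Python.  This is a toy: `deref` is a hypothetical stand-in for an HTTP GET on the property URI, hard-coded with the statements from the example above.

```python
# Toy sketch of client-side inference via dereferencing property URIs.
SUBPROPERTY = "rdfs:subPropertyOf"

def deref(uri):
    # Hypothetical stand-in for fetching RDF from the property URI;
    # returns the mapping statements we would expect to find there.
    known = {
        "http://bar.com/newfoaf#knowsWell": [
            ("http://bar.com/newfoaf#knowsWell", SUBPROPERTY,
             "http://xmlns.com/foaf/0.1/knows"),
        ],
    }
    return known.get(uri, [])

def infer(triples):
    """Dereference each property and add triples entailed by
    rdfs:subPropertyOf statements found there."""
    inferred = list(triples)
    for s, p, o in triples:
        for sub, rel, sup in deref(p):
            if rel == SUBPROPERTY and sub == p:
                inferred.append((s, sup, o))
    return inferred

result = infer([("_:x", "http://bar.com/newfoaf#knowsWell", "_:y")])
# result now also contains the entailed foaf:knows triple
```

The point is that the inference happens at the client, not at http://eg.com/foo, so the publisher writes the description once.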

> Even if your solution (inference is the business of the 
> client, properties must be dereferenceable, client should do 
> this recursively till they cannot understand more) would 
> indeed be implemented it addresses the issue of unnecessarily 
> transferred triples only partially, what if the client is 
> only interested in social relation but cannot do anything 
> with the postal address which is part of the abstract notion 
> of a personal profile.

On the Semantic Web, you take what you can understand and ignore the rest.
But now you are getting back to the issue that Danny argued.  Let's not mix
these two up.

Problem 1: an RDF document can be written in multiple similar vocabularies.
Use Accept-Vocabulary to ask the server to return the statements written in
certain vocabularies but not the others.

My position on this problem (let's call it the alternative-vocabulary
problem): it is O.K. fundamentally, but I don't think it is practical.  It
puts too much burden on the ontology developer.  It is cheaper and easier
to do this sort of thing elsewhere.
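For what the negotiation itself would look like: since Accept-Vocabulary is only a proposed header, here is a sketch of how a server might parse it, assuming it reuses HTTP's quality-value syntax the way Accept-Language does.  The header value and vocabulary URIs are illustrative.

```python
# Sketch of parsing a hypothetical Accept-Vocabulary header, assuming
# it borrows HTTP's "value;q=0.8" quality-value convention.
def parse_accept_vocabulary(header):
    """Return [(vocabulary_uri, q)] sorted by descending preference."""
    prefs = []
    for part in header.split(","):
        fields = part.strip().split(";")
        uri = fields[0].strip()
        q = 1.0  # per HTTP convention, q defaults to 1.0
        for f in fields[1:]:
            name, _, value = f.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs.append((uri, q))
    return sorted(prefs, key=lambda p: -p[1])

header = "http://bar.com/newfoaf#;q=0.8, http://xmlns.com/foaf/0.1/"
print(parse_accept_vocabulary(header))
# [('http://xmlns.com/foaf/0.1/', 1.0), ('http://bar.com/newfoaf#', 0.8)]
```

Parsing is the easy part; my objection stands because the server still has to hold a representation for every combination of preferred vocabularies.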

Problem 2: let's call this the subgraph problem.  I.e., Accept-Vocabulary
asks the server to return only those subgraphs that the client requests.

My position on this problem: no.  It is fundamentally wrong, and we should
do it with a web service etc., using SPARQL.
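To illustrate why a query endpoint is the right tool for the subgraph problem: with SPARQL the client states exactly which triples it wants, rather than the server guessing from a header.  A toy stand-in in plain Python (not a real SPARQL engine; the vocabularies are illustrative), roughly what a CONSTRUCT query with a filter on the predicate's namespace would do:

```python
# Toy stand-in for subgraph selection at a query endpoint: return only
# the triples whose predicate falls in the namespace the client asked for.
def select_subgraph(triples, namespace):
    """Keep triples whose predicate URI starts with `namespace`."""
    return [(s, p, o) for s, p, o in triples if p.startswith(namespace)]

graph = [
    ("_:x", "http://xmlns.com/foaf/0.1/knows", "_:y"),
    ("_:x", "http://www.w3.org/2006/vcard/ns#postal-code", "29425"),
]
# A client interested only in social relations asks for the FOAF subgraph
# and never receives the postal address.
social = select_subgraph(graph, "http://xmlns.com/foaf/0.1/")
print(social)  # only the foaf:knows triple
```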

Received on Sunday, 30 July 2006 03:53:12 UTC
