
RE: URIQA thwarted by context problems? (NOT)

From: <Patrick.Stickler@nokia.com>
Date: Tue, 12 Oct 2004 11:45:46 +0300
Message-ID: <1E4A0AC134884349A21955574A90A7A56471B2@trebe051.ntc.nokia.com>
To: <r.newman@reading.ac.uk>, <pdawes@users.sourceforge.net>
Cc: <www-rdf-interest@w3.org>


> -----Original Message-----
> From: www-rdf-interest-request@w3.org
> [mailto:www-rdf-interest-request@w3.org]On Behalf Of ext 
> Richard Newman
> Sent: 11 October, 2004 17:23
> To: Phil Dawes
> Cc: RDF interest group
> Subject: Re: URIQA thwarted by context problems? (NOT)
> Replies in-line.
> -R
> On Oct 11, 2004, at 11:39, Phil Dawes wrote:
> > However I'm not sure that dc:title, rdfs:label, comment etc. are
> > good litmus tests for this, since the functionality applied by an
> > agent is usually to render the value in some way for a human to
> > read. This allows for a wide scope of error in the semantic
> > agreement (e.g. the property value might not be a commonly
> > expected/accepted label for the resource, but that doesn't matter
> > much because a human gets to interpret it).
> At some point, every agent has human interpretation of semantics 
> embedded in its functionality. Often this is display, but it doesn't 
> have to be. Display is only a special case because you can throw the 
> properties at the screen if you don't understand them, but otherwise 
> the problem is the same.
> If inference can shift the unknown terms towards the integrated 
> interpretation, then the agent can be said to be autonomous. If it 
> can't, then the agent can't use those terms, because it cannot 
> meaningfully relate them to its hard-coded functionality*.
> * Of course, the specific cases of ontology editors and generic
> viewers can relate _any_ term to their hard-coded functionality ---
> displaying properties and classes --- and therefore never have a
> problem! :D
> > E.g. somebody mints the term 'foo:Server', describing it as a
> > computer server. In actual fact the implicit context in which the
> > term is used (by software) means that it is also a unix server, a
> > server owned by the company, a server managed by a particular
> > team, a server used to host databases, etc.
> Ontology engineering is hard, and most people are bad at it (or
> rather, describe a specific instance in a naive way, because it
> works for that particular task --- e.g. BibTeX).
> > Having said that, the fact that my examples relied on imprecision
> > in the shared ontologies is heartening to me, since that implies
> > that if people restrict their 'trust' of ontological statements to
> > those directly referencing a 'high quality' (i.e. precise and well
> > understood/deployed) schema, there is a heightened chance of being
> > able to usefully utilise these new statements.
> > It might even be possible for the agent to present some metric of
> > precision to the user by counting the number of levels away from a
> > 'high quality' schema at which a term is defined.
> I think that there is a selection pressure on ontologies to be
> general and accurate, which will probably lead to "weak" ontologies,
> such as Dublin Core: no ranges, no domains, no symmetry, no
> inverses. This is not necessarily a bad thing --- I'd rather lose
> some entailments than get some flawed ones in there as well. We'll
> see what happens, I suppose!
> Some of the 'quality' aspects I think will be embodied in the
> hard-coded behaviour of agents.
Received on Tuesday, 12 October 2004 08:47:51 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 7 December 2009 10:52:10 GMT