Re: URIQA thwarted by context problems? (NOT)

Replies in-line.

-R

On Oct 11, 2004, at 11:39, Phil Dawes wrote:
> However, I'm not sure that dc:title, rdfs:label, comment, etc. are good
> litmus tests for this, since the functionality applied by an agent is
> usually to render the value in some way for a human to read. This
> allows for a wide scope of error in the semantic agreement (e.g. the
> property value might not be a commonly expected/accepted label for the
> resource, but that doesn't matter much because a human gets to
> interpret it).

At some point, every agent has human interpretation of semantics 
embedded in its functionality. Often this is display, but it doesn't 
have to be. Display is only a special case because you can throw 
properties at the screen even if you don't understand them; otherwise 
the problem is the same.

If inference can shift unknown terms towards terms whose interpretation 
is already integrated into the agent, then the agent can be said to act 
autonomously. If it can't, then the agent can't use those terms, 
because it cannot meaningfully relate them to its hard-coded 
functionality*.

* of course, the specific cases of ontology editors and generic viewers 
can relate _any_ term to their hard-coded functionality --- displaying 
properties and classes --- and therefore never have a problem! :D
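
To make the first case concrete, here is a minimal sketch (assuming 
rdflib; the foo:caption term and the example URIs are hypothetical). A 
single pass of the RDFS subproperty rule relates a term the agent has 
never seen to one it has hard-coded behaviour for:

    from rdflib import Graph, Namespace, Literal, RDFS
    from rdflib.namespace import DC

    FOO = Namespace("http://example.org/foo#")   # hypothetical vocabulary
    EX = Namespace("http://example.org/")        # hypothetical instances

    g = Graph()
    # The agent only understands dc:title; the data uses an unknown
    # term, but its schema relates that term to the known one:
    g.add((FOO.caption, RDFS.subPropertyOf, DC.title))
    g.add((EX.doc1, FOO.caption, Literal("Quarterly report")))

    # One pass of the RDFS subproperty rule (rdfs7):
    #   (p rdfs:subPropertyOf q) and (s p o)  =>  (s q o)
    for p, _, q in list(g.triples((None, RDFS.subPropertyOf, None))):
        for s, _, o in list(g.triples((None, p, None))):
            g.add((s, q, o))

    print(g.value(EX.doc1, DC.title))   # -> "Quarterly report"

(A real agent would iterate the rule to a fixpoint; one pass is enough 
for a one-level chain like this.)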

> E.g. somebody mints the term 'foo:Server', describing it as a computer
> server. In actual fact, the implicit context in which the term is used
> (by software) means that it is also a unix server, a server owned by
> the company, a server managed by a particular team, a server used to
> host databases, etc.

Ontology engineering is hard, and most people are bad at it (or 
rather, they describe a specific instance in a naive way because it 
works for that particular task --- BibTeX, for example).

> Having said that, the fact that my examples relied on imprecision in
> the shared ontologies is heartening to me, since that implies that if
> people restrict their 'trust' of ontological statements to those
> directly referencing a 'high quality' (i.e. precise and well
> understood/deployed) schema, there is a heightened chance of being
> able to usefully utilise these new statements.
> It might even be possible for the agent to present some metric of
> precision to the user by counting the number of levels away from a
> 'high quality' schema at which a term is defined.
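
That hop-count metric seems straightforward to prototype. A rough 
sketch (rdflib again; the foo: terms and the schema triples are 
hypothetical), walking up subPropertyOf/subClassOf links until a 
trusted namespace is reached:

    from collections import deque
    from rdflib import Graph, Namespace, RDFS

    FOO = Namespace("http://example.org/foo#")   # hypothetical schema
    DC = Namespace("http://purl.org/dc/elements/1.1/")

    schema = Graph()
    schema.add((FOO.caption, RDFS.subPropertyOf, FOO.label))
    schema.add((FOO.label, RDFS.subPropertyOf, DC.title))

    def hops_to_trusted(term, graph, trusted_ns):
        # Breadth-first search up subPropertyOf/subClassOf links,
        # counting hops until a term in the trusted schema is hit.
        seen, queue = {term}, deque([(term, 0)])
        while queue:
            node, dist = queue.popleft()
            if str(node).startswith(str(trusted_ns)):
                return dist
            for link in (RDFS.subPropertyOf, RDFS.subClassOf):
                for parent in graph.objects(node, link):
                    if parent not in seen:
                        seen.add(parent)
                        queue.append((parent, dist + 1))
        return None   # never reaches the trusted schema

    print(hops_to_trusted(FOO.caption, schema, DC))   # -> 2

Whether two hops really is "less precise" than one is, of course, 
exactly the open question.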

I think that there is a selection pressure on ontologies to be general 
and accurate (every extra axiom is another chance to be wrong in 
somebody's domain), which will probably lead to "weak" ontologies, such 
as Dublin Core: no ranges, no domains, no symmetry, no inverses. This 
is not necessarily a bad thing --- I'd rather lose some entailments 
than get some flawed ones in there as well. We'll see what happens, I 
suppose!
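
As an illustration of a flawed entailment (a toy example; the ex: terms 
are hypothetical): if a schema were to declare a domain for an 
author-like property, the standard RDFS domain rule would happily 
misclassify data that uses the property more loosely:

    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")   # hypothetical terms

    g = Graph()
    # A "strong" ontology commits to a domain axiom...
    g.add((EX.author, RDFS.domain, EX.Document))
    # ...and data from the wild uses the property more loosely:
    g.add((EX.photo1, EX.author, EX.alice))

    # The RDFS domain rule (rdfs2):
    #   (p rdfs:domain C) and (s p o)  =>  (s rdf:type C)
    for p, _, c in list(g.triples((None, RDFS.domain, None))):
        for s, _, _ in list(g.triples((None, p, None))):
            g.add((s, RDF.type, c))

    # The photo is now entailed to be a Document.
    print((EX.photo1, RDF.type, EX.Document) in g)   # -> True

A domain-free vocabulary like Dublin Core never licenses that 
conclusion, which is the trade I'm happy to make.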

Some of these 'quality' aspects, I think, will end up embodied in the 
hard-coded behaviour of agents.

Received on Monday, 11 October 2004 14:23:22 UTC