RE: URIQA thwarted by context problems? (NOT)

> -----Original Message-----
> From: ext Phil Dawes [mailto:pdawes@users.sourceforge.net]
> Sent: 08 October, 2004 20:59
> To: Stickler Patrick (Nokia-TP-MSW/Tampere)
> Cc: www-rdf-interest@w3.org
> Subject: URIQA thwarted by context problems?
> 
> 
> Hi Patrick,
> 
> I'm afraid that the more work I do with rdf, the more I'm having
> problems seeing URIQA working as a mechanism for bootstrapping the
> semantic web.
> 
> The main problem I think is that when discovering new information,
> people are always required to sort out context (a point made by Uche
> Ogbuji on the rdf-interest list recently).

I'm not sure I fully agreed with Uche's point, insofar as I understood
it. But read on...

> When identifying new terms, some mechanism has to exist to decide
> whether the author's definition of the term fits with its use in the
> instance data, and that that tallies with the context in which the
> system is attempting to use the data. To my mind this prohibits a
> system 'discovering' a new term without a human vetting and managing
> its use. 

I don't see where a human is essential in most cases.

E.g., if some agent encounters a completely new term

   http://example.com/foo/bar/bas

and it asks

   MGET /foo/bar/bas
   Host: example.com

and in the CBD provided it finds the statement

   http://example.com/foo/bar/bas
      rdfs:subPropertyOf
         dc:title .

and it knows how to interpret the dc:title property,
then it should be acceptable to treat any values of

   http://example.com/foo/bar/bas

exactly the same as any values of dc:title, and the agent
is then able to do something useful with the knowledge it
has encountered, even though at first it did not understand
all the terms used to express that knowledge.
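
To make that concrete, here is a minimal sketch of such an agent,
assuming the example.com authority supports URIQA's MGET method and
returns the CBD as RDF/XML; the use of the Python 'requests' and
'rdflib' libraries is merely one possible implementation choice, and
the titles_of helper is a hypothetical name:

   # Resolve an unknown property by asking its authority for a CBD,
   # then exploit an rdfs:subPropertyOf relation to a known property.
   import requests
   from rdflib import Graph, URIRef
   from rdflib.namespace import RDFS, DC

   unknown = URIRef("http://example.com/foo/bar/bas")

   # MGET is URIQA's (non-standard) HTTP method for requesting the
   # authoritative concise bounded description of a resource.
   resp = requests.request("MGET", str(unknown))
   cbd = Graph().parse(data=resp.text, format="xml")

   # If the CBD says the new term is a subproperty of dc:title, then
   # its values can be treated exactly like dc:title values.
   if (unknown, RDFS.subPropertyOf, DC.title) in cbd:
       def titles_of(graph, resource):
           # Yield dc:title values along with values of the new term.
           yield from graph.objects(resource, DC.title)
           yield from graph.objects(resource, unknown)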

Now, exactly where does context, or human intervention, come 
into play?

Can you provide an explicit example or use case which illustrates
the kind of problems you are seeing?

True, there may be "local" meaning and usage associated with 
the term

   http://example.com/foo/bar/bas

which some arbitrary agent may not be able to take full
advantage of. Fully dynamic interaction between arbitrary
semantic web agents will depend upon a certain number of
commonly used "interlingua" vocabularies to which proprietary,
local vocabularies are related, and the interchange of meaning
between arbitrary agents will not always be absolute.

But I see a lot of opportunity for useful, dynamic knowledge
discovery that will facilitate a lot of accurate behavior by
arbitrary agents.

I don't see how context is an inherent element in all such
use cases, even if it may be significant in some; and even where
context is significant, that doesn't mean that a context-free
knowledge access mechanism can provide no useful utility.

> Of course this doesn't prohibit the decentralisation of such
> context-management work - e.g. a third party could recommend a
> particular ontological mapping of terms based on an agreed context. I
> just don't see machines being able to do this work on an ad-hoc basis
> any time soon.
> 
> You've been doing a lot of work on trust/context etc.. in addition to
> URIQA, so I'd be interested to hear your views on this.

I see this "context-management" issue, insofar as I understand
what you mean, as similar to trust issues, in that one is seeking
to qualify knowledge in some way.

E.g., one can use the existing RDF machinery to define contexts
(application scope) and associate individual terms and/or entire
vocabularies of terms with particular contexts, such that an agent
is presumed to give regard to assertions employing those terms only
when operating within that context. One could also reify assertions
and qualify the statements for context. Fair enough. 
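
As a rough illustration of the second approach -- and only a sketch,
in which the ex:appliesInContext property and the ex:Cataloguing
context resource are hypothetical names, not anything defined by
URIQA -- one might qualify an assertion using standard RDF
reification:

   # Qualify a single statement with a context using RDF reification.
   from rdflib import Graph, Namespace, URIRef, BNode, Literal
   from rdflib.namespace import RDF, DC

   EX = Namespace("http://example.com/terms#")
   g = Graph()

   doc = URIRef("http://example.com/doc/1")
   g.add((doc, DC.title, Literal("Quarterly report")))

   # Reify the statement and record the context in which it applies.
   st = BNode()
   g.add((st, RDF.type, RDF.Statement))
   g.add((st, RDF.subject, doc))
   g.add((st, RDF.predicate, DC.title))
   g.add((st, RDF.object, Literal("Quarterly report")))
   g.add((st, EX.appliesInContext, EX.Cataloguing))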

However, note that the context is conveyed in the RDF statements about
the term, vocabulary, assertion, whatever -- and therefore, URIQA *can*
provide contextual information necessary for fully interpreting a
given term -- insofar as the authoritative description of that term
provides that contextual information. It thus becomes a best-practice
issue, for those publishing authoritative descriptions of terms, to
include whatever contextual information may be relevant to agents.
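
For instance, an authoritative description might look something like
the following, where rdfs:subPropertyOf and rdfs:comment are standard
RDFS, and the ex:definedForContext property and ex:AuthoringWorkflow
context are purely hypothetical names used for illustration:

   http://example.com/foo/bar/bas
      rdfs:subPropertyOf
         dc:title ;
      rdfs:comment
         "Working title assigned by the foo/bar authoring tools" ;
      ex:definedForContext
         ex:AuthoringWorkflow .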

So, in a way, I see such "problems" as an issue of methodology, not
as a shortcoming of the URIQA protocol. And I also see the fact that
URIQA does not explicitly address contextualization as reflecting a
proper and efficient division of functional layers in the application
stack.
 
In any case, perhaps you could elaborate on the particular problems
you are envisioning with the publication/interchange of context-independent
definitions of terms. Maybe I'm missing something. Or maybe I can then
offer some explicit examples of how URIQA can help solve such problems, in
conjunction with some best practices.

Cheers,

Patrick
