RE: URIQA thwarted by context problems? (NOT)

From: <Patrick.Stickler@nokia.com>
Date: Mon, 11 Oct 2004 13:07:21 +0300
Message-ID: <1E4A0AC134884349A21955574A90A7A50ADD2A@trebe051.ntc.nokia.com>
To: <pdawes@users.sourceforge.net>
Cc: <www-rdf-interest@w3.org>



> -----Original Message-----
> From: ext Phil Dawes [mailto:pdawes@users.sourceforge.net]
> Sent: 11 October, 2004 13:39
> To: Stickler Patrick (Nokia-TP-MSW/Tampere)
> Cc: www-rdf-interest@w3.org
> Subject: RE: URIQA thwarted by context problems? (NOT)
> 
> 
> Hi Patrick,
> 
> Patrick.Stickler@nokia.com writes:
>  >  [...]
>  > 
>  > I don't see where a human is essential in most cases.
>  > 
>  > E.g., if some agent encounters a completely new term
>  > 
>  >    http://example.com/foo/bar/bas
>  > 
>  > and it asks
>  > 
>  >    MGET /foo/bar/bas
>  >    Host: example.com
>  > 
>  > and in the CBD provided it finds the statement
>  > 
>  >    http://example.com/foo/bar/bas
 >  >       rdfs:subPropertyOf
>  >          dc:title .
>  > 
>  > and it knows how to interpret the dc:title property,
>  > then it should be acceptable to treat any values of
>  > 
>  >    http://example.com/foo/bar/bas
>  > 
>  > exactly the same as any values of dc:title, and the agent
>  > is then able to do something useful with the knowledge it
>  > has encountered, even though at first it did not understand
>  > all the terms used to express that knowledge.
>  > 
>  > Now, exactly where does context, or human intervention, come 
>  > into play?
>  > 
> 
> Agreed, this is cool :-).
> 
> However I'm not sure that dc:title, rdfs:label, comment etc.. are good
> litmus tests for this, since the functionality applied by an agent is
> usually to render the value in some way for a human to read. This
> allows for a wide scope of error in the semantic agreement (e.g. the
> property value might not be a commonly expected/accepted label for the
> resource, but that doesn't matter much because a human gets to
> interpret it).

The utility of any body of knowledge will be proportional to how closely
the communicating agents share a full understanding of the terms used to
express that knowledge. Wherever there is a "dilution" of shared understanding,
there will be a loss of utility. That is unavoidable.

And yes, in some cases that may result in awkward (albeit technically
correct) behavior by some agents.

Those responsible for the effectiveness of such agents will then need
to be diligent: when it is evident that their agent is frequently
encountering a new vocabulary which, while it intersects to a certain
degree with otherwise known/supported vocabularies, is not natively
supported, the effectiveness of the agent can be improved by
incorporating more specific native support for that new vocabulary.
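The kind of fallback described in the quoted example above -- treating values
of an unknown property like values of a known superproperty discovered via
MGET -- can be sketched in a few lines of Python. This is a sketch only: the
MGET call is stubbed with canned data, triples are plain tuples, and all
names are hypothetical.

```python
# Sketch of subproperty-based fallback: chase rdfs:subPropertyOf links
# in a term's CBD until a natively understood property is found.
# The MGET request is stubbed out; a real agent would fetch over HTTP.

RDFS_SUBPROPERTYOF = "rdfs:subPropertyOf"
DC_TITLE = "dc:title"

def mget(uri):
    """Stub for an MGET request returning the CBD of `uri` as triples."""
    return [("http://example.com/foo/bar/bas", RDFS_SUBPROPERTYOF, DC_TITLE)]

def effective_property(uri, known_properties):
    """Follow rdfs:subPropertyOf links from `uri` until a property the
    agent natively understands is found; return None otherwise."""
    seen = set()
    frontier = [uri]
    while frontier:
        prop = frontier.pop()
        if prop in known_properties:
            return prop
        if prop in seen:
            continue
        seen.add(prop)
        for s, p, o in mget(prop):
            if s == prop and p == RDFS_SUBPROPERTYOF:
                frontier.append(o)
    return None

print(effective_property("http://example.com/foo/bar/bas", {DC_TITLE}))
# dc:title
```

A production agent would of course issue a real MGET over HTTP and parse the
returned RDF; the point here is only the subPropertyOf chase.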
 
>  > Can you provide an explicit example, use case, whatever which
>  > illustrates the kind of problems you are seeing?
>  > 
>  > True, there may be "local" meaning and usage associated with 
>  > the term
>  > 
>  >    http://example.com/foo/bar/bas
>  > 
>  > which some arbitrary agent may not be able to take full
>  > advantage of -- and fully dynamic interaction between
>  > arbitrary semantic web agents will depend upon a certain
>  > number of commonly used "interlingua" vocabularies to
>  > which proprietary, local vocabularies are related, and
>  > interchange of meaning between arbitrary agents will not 
>  > always be absolute.
>  > 
 >  > But I see a lot of opportunity for useful, dynamic knowledge
 >  > discovery that will facilitate a lot of useful, accurate
>  > behavior by arbitrary agents.
>  > 
>  > I don't see how context is an inherent element in all such
>  > use cases, even if it may be significant in some; and even if
>  > context is significant, that doesn't mean that no useful utility
>  > can be provided by a context-free knowledge access mechanism. 
>  > 
> 
> Yes - Having given it more thought, I'm starting to come round to this
> (sort of) - see below.
> 
 >  > > Of course this doesn't prohibit the decentralisation of such
 >  > > context-management work - e.g. a third party could recommend a
 >  > > particular ontological mapping of terms based on an agreed
 >  > > context. I just don't see machines being able to do this work
 >  > > on an ad-hoc basis any time soon.
 >  > > 
 >  > > You've been doing a lot of work on trust/context etc.. in
 >  > > addition to URIQA, so I'd be interested to hear your views on
 >  > > this.
>  > 
 >  > I see this "context-management" issue, insofar as I understand
 >  > what you mean, to be similar to trust issues in that one is
 >  > seeking to qualify knowledge in some way.
>  > 
> 
> Exactly.
> 
 >  > E.g., one can use the existing RDF machinery to define contexts
 >  > (application scope) and associate individual terms and/or entire
 >  > vocabularies of terms with particular contexts, such that an agent
 >  > is presumed to only give regard to assertions employing those
 >  > terms when operating within that context. One could also reify
 >  > assertions and qualify the statements for context. Fair enough.
>  > 
> 
> Yes, although that sounds complicated. Maybe too complicated to be
> deployable by the masses.(?)

To do so manually, yes. But inferring statement-specific qualifications
from qualifications of entire graphs, and capturing them using
reification, could be a useful approach.

The whole issue of qualification of assertions is an area needing
a lot of long-term research and (especially) implementational
experience.

At the moment, reification is a reasonable way to do this (perhaps
the only way) and while some folks may not care for reification,
it cannot be excluded from a standardized form of description.
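As a rough illustration of what such inferred, reification-based
qualification might look like mechanically -- here, propagating a single
graph-level qualification (its source) down to every statement -- consider
this sketch. Triples are plain tuples, and ex:source is a made-up qualifier
property, not part of any standard vocabulary.

```python
# Sketch: propagate a graph-level qualification (here, a source URI)
# down to per-statement reifications using the standard RDF
# reification vocabulary (rdf:Statement, rdf:subject, etc.).
import itertools

def reify(triples, qualification):
    """For each (s, p, o), emit the four standard reification triples
    plus one triple attaching `qualification` to the statement node."""
    out = []
    counter = itertools.count(1)
    for s, p, o in triples:
        stmt = f"_:stmt{next(counter)}"   # blank node for the statement
        out += [
            (stmt, "rdf:type", "rdf:Statement"),
            (stmt, "rdf:subject", s),
            (stmt, "rdf:predicate", p),
            (stmt, "rdf:object", o),
            (stmt, "ex:source", qualification),  # hypothetical qualifier
        ]
    return out

quads = reify([("ex:doc1", "dc:title", "Report")],
              "http://example.com/graphs/g1")
print(len(quads))  # 5 triples per input statement
```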

 >  > However, note that the context is conveyed in the RDF statements
 >  > about the term, vocabulary, assertion, whatever -- and therefore,
 >  > URIQA *can* provide contextual information necessary for fully
 >  > interpreting a given term -- insofar as the authoritative
 >  > description of that term provides that contextual information.
 >  > It thus becomes a best-practice issue, for those publishing
 >  > authoritative descriptions of terms, to include such contextual
 >  > information which may be relevant to agents.
 >  > 
 >  > So, in a way, I see such "problems" as an issue of methodology,
 >  > not as a shortcoming of the URIQA protocol. And I also see the
 >  > fact that URIQA does not explicitly address contextualization to
 >  > reflect a proper and efficient division of functional layers in
 >  > the application stack.
 >  >  
 >  > In any case, perhaps you could elaborate on the particular
 >  > problems you are envisioning with the publication/interchange of
 >  > context-independent definitions of terms. Maybe I'm missing
 >  > something. Or maybe I can then offer some explicit examples of
 >  > how URIQA can help solve such problems, in conjunction with some
 >  > best practices.
> 
> Having started writing some examples, I quickly realized that the
> problem in all cases was lack of precision (or rather implicit context
> assumptions) in the original shared ontology. As you say, this is a
> best-practice problem rather than a technical problem with URIQA.
> 
> E.g. somebody mints the term 'foo:Server', describing it as a computer
> server. In actual fact the implicit context in which the term is used
> (by software) means that it is also a unix server, a server owned by
> the company, a server managed by a particular team, a server used to
> host databases etc..

Right. And while having the more precise knowledge could help an agent
perform better, it could still do some useful things based on the
anemic, but correct, knowledge that the computer is a foo:Server.

> These contextual clarifications are easy to identify in retrospect,
> but aren't always obvious to the author, and more importantly aren't
> at all obvious to others attempting to use the terms. (note that often
> the reason for using a term is so that it will be interpreted in some
> way by a software service, meaning that the user is more interested in
> the software's implicit definition of the term rather than the
> original author's).
> The problem is then compounded by ontological links between terms
> (e.g. bah:myprop rdfs:range foo:Server), and it was the automatic
> URIQA lookup and utilisation of these terms that I was concerned
> about.
> 
> Having said that, the fact that my examples relied on imprecision in
> the shared ontologies is heartening to me, since that implies that if
> people restrict their 'trust' of ontological statements to those
> directly referencing a 'high quality' (i.e. precise and well
> understood/deployed) schema, there is a heightened chance of being
> able to usefully utilise these new statements. 
> It might even be possible for the agent to present some metric of
> precision to the user by counting the number of levels away from a
> 'high quality' schema that a term is defined.

Exactly. It also means that when folks realize that their
descriptions are a bit on the thin side and could include some
additional meat, they can revise those descriptions; such
socially motivated practices operate independently of the basic
functionality URIQA provides for publishing those descriptions.
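Phil's suggested precision metric -- the number of definitional hops
separating a term from a "high quality" schema -- could be computed with a
simple breadth-first search over links such as rdfs:subClassOf or
rdfs:subPropertyOf. A sketch, with entirely hypothetical data:

```python
# Sketch: distance (in definitional hops) from a term to the nearest
# term in a trusted, "high quality" schema, via breadth-first search.
from collections import deque

def schema_distance(term, links, trusted_prefixes):
    """Return the number of hops from `term` to the nearest term whose
    URI starts with a trusted prefix, or None if unreachable.
    `links` maps each term to the terms it is defined in terms of."""
    if term.startswith(tuple(trusted_prefixes)):
        return 0
    seen = {term}
    queue = deque([(term, 0)])
    while queue:
        node, dist = queue.popleft()
        for target in links.get(node, []):
            if target.startswith(tuple(trusted_prefixes)):
                return dist + 1
            if target not in seen:
                seen.add(target)
                queue.append((target, dist + 1))
    return None

# Hypothetical data: bah:myprop is defined via foo:Server, which is in
# turn declared a subclass of a term in a trusted "dc:" schema.
links = {
    "bah:myprop": ["foo:Server"],
    "foo:Server": ["dc:PhysicalResource"],
}
print(schema_distance("bah:myprop", links, ["dc:"]))  # 2
```

An agent could surface this number to the user as the rough "quality
distance" Phil describes.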

And let us not forget third-party sources of knowledge.

While one of the goals, if not the key goal, of URIQA is to provide
a simple but effective protocol for obtaining authoritative descriptions,
it also defines the service interface for obtaining third-party
descriptions -- and this opens the door for knowledge brokers to
offer pay-for services providing high-quality, "meaty" descriptions
of resources. And a given agent can define a level of trust for
particular knowledge portals, and employ a combination of authoritative
and third-party sources to obtain the knowledge it needs, at a trust
level it is comfortable with.
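How an agent might combine authoritative and third-party sources under a
trust threshold can be sketched as follows (all sources, trust values, and
terms here are invented for illustration):

```python
# Sketch: merge descriptions of a resource from several sources,
# keeping only sources whose trust level meets the agent's threshold.

def gather(uri, sources, min_trust):
    """Merge triples about `uri` from (fetch_function, trust) pairs
    whose trust level is at least `min_trust`."""
    merged = set()
    for fetch, trust in sources:
        if trust >= min_trust:
            merged |= {t for t in fetch(uri) if t[0] == uri}
    return merged

# Hypothetical sources: an authoritative server and a paid broker.
authoritative = lambda uri: [(uri, "rdf:type", "foo:Server")]
broker = lambda uri: [(uri, "foo:managedBy", "ex:dbTeam")]
sources = [(authoritative, 1.0), (broker, 0.6)]

print(sorted(gather("ex:host1", sources, 0.5)))
```

Raising the threshold above 0.6 drops the broker's statements, leaving only
the authoritative description.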


> BTW, I've made the assumption that URIQA would be mostly used to
> lookup ontology terms. This is because (in my environment), URIs don't
> tend to fit with instance data in an obvious way, unless they refer to
> e.g. web based documents or email addresses. I think this is also the
> case with FOAF and DOAP, and would be interested in your views on
> this.

URIQA works the same, and equally well, for any resource, no matter
what kind it is. Choices about returning a CBD rather than some other
form of description certainly take into account the nature of descriptions
of certain types of resources, such as terms, but in general, URIQA
is agnostic about the type of resource described.

We use it to describe terms, vocabularies, schemas, documents, devices,
etc.


> BTW2, apologies for the sensational subject. It made me cringe when I
> read it back the next day.

Please don't apologise. These are very good questions and it has been
very beneficial to be able to cover them. I wish I had time to write
more about URIQA, particularly about the rationale behind it and
experience putting it to work. Challenging questions are a good
impetus to address a lot of these key issues.

Bring it on!  ;-)

Cheers,

Patrick
Received on Monday, 11 October 2004 10:17:42 GMT
