- From: Phil Dawes <pdawes@users.sf.net>
- Date: Mon, 11 Oct 2004 10:39:26 +0000
- To: <Patrick.Stickler@nokia.com>
- Cc: <www-rdf-interest@w3.org>
Hi Patrick,

Patrick.Stickler@nokia.com writes:

> [...]
>
> I don't see where a human is essential in most cases.
>
> E.g., if some agent encounters a completely new term
>
>    http://example.com/foo/bar/bas
>
> and it asks
>
>    MGET /foo/bar/bas
>    Host: example.com
>
> and in the CBD provided it finds the statement
>
>    http://example.com/foo/bar/bas
>       rdfs:subPropertyOf
>          dc:title .
>
> and it knows how to interpret the dc:title property,
> then it should be acceptable to treat any values of
>
>    http://example.com/foo/bar/bas
>
> exactly the same as any values of dc:title, and the agent
> is then able to do something useful with the knowledge it
> has encountered, even though at first it did not understand
> all the terms used to express that knowledge.
>
> Now, exactly where does context, or human intervention, come
> into play?

Agreed, this is cool :-). However, I'm not sure that dc:title,
rdfs:label, rdfs:comment etc. are good litmus tests for this, since
the functionality applied by an agent is usually to render the value
in some way for a human to read. This allows for a wide margin of
error in the semantic agreement (e.g. the property value might not be
a commonly expected/accepted label for the resource, but that doesn't
matter much because a human gets to interpret it).

> Can you provide an explicit example, use case, whatever which
> illustrates the kind of problems you are seeing?
>
> True, there may be "local" meaning and usage associated with
> the term
>
>    http://example.com/foo/bar/bas
>
> which some arbitrary agent may not be able to take full
> advantage of -- and fully dynamic interaction between
> arbitrary semantic web agents will depend upon a certain
> number of commonly used "interlingua" vocabularies to
> which proprietary, local vocabularies are related, and
> interchange of meaning between arbitrary agents will not
> always be absolute.
>
> But I see a lot of opportunity for useful, dynamic knowledge
> discovery that will facilitate a lot of useful, accurate
> behavior by arbitrary agents.
>
> I don't see how context is an inherent element in all such
> use cases, even if it may be significant in some; and even if
> context is significant, that doesn't mean that no useful utility
> can be provided by a context-free knowledge access mechanism.

Yes - having given it more thought, I'm starting to come round to this
(sort of) - see below.

> > Of course this doesn't prohibit the decentralisation of such
> > context-management work - e.g. a third party could recommend a
> > particular ontological mapping of terms based on an agreed context.
> > I just don't see machines being able to do this work on an ad-hoc
> > basis any time soon.
> >
> > You've been doing a lot of work on trust/context etc. in addition
> > to URIQA, so I'd be interested to hear your views on this.
>
> I see this "context-management" issue, insofar as I understand
> what you mean, to be similar to trust issues in that one is seeking
> to qualify knowledge in some way.

Exactly.

> E.g., one can use the existing RDF machinery to define contexts
> (application scope) and associate individual terms and/or entire
> vocabularies of terms with particular contexts, such that an agent
> is presumed to only give regard to assertions employing those terms
> when operating within that context. One could also reify assertions
> and qualify the statements for context. Fair enough.

Yes, although that sounds complicated. Maybe too complicated to be
deployable by the masses. (?)
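(Just to check I've followed the mechanics of your MGET example above,
here is roughly the agent behaviour I have in mind, as a Python
sketch. I'm assuming an rdflib-style API, and that the returned CBD
can also be fetched as an ordinary RDF document -- the fetch URL and
graph names are illustrative, not part of URIQA.)

    from rdflib import Graph, Namespace, URIRef

    RDFS = Namespace("http://www.w3.org/2000/01/rdf-schema#")
    DC = Namespace("http://purl.org/dc/elements/1.1/")

    term = URIRef("http://example.com/foo/bar/bas")

    # A real URIQA client would issue MGET /foo/bar/bas against
    # example.com; here the CBD is assumed to be retrievable as a
    # plain RDF document at an illustrative URL.
    cbd = Graph()
    cbd.parse("http://example.com/foo/bar/bas")

    # The agent's existing knowledge base, populated elsewhere.
    kb = Graph()

    # If the CBD declares the unknown term a subproperty of dc:title,
    # treat its values exactly as dc:title values.
    if (term, RDFS["subPropertyOf"], DC["title"]) in cbd:
        for subj, _pred, obj in kb.triples((None, term, None)):
            kb.add((subj, DC["title"], obj))

(The nice property is that a single subPropertyOf statement is enough
to make every existing use of the previously-unknown property useful.)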
> However, note that the context is conveyed in the RDF statements about
> the term, vocabulary, assertion, whatever -- and therefore, URIQA *can*
> provide contextual information necessary for fully interpreting a
> given term -- insofar as the authoritative description of that term
> provides that contextual information. It thus becomes a best-practice
> issue, for those publishing authoritative descriptions of terms, to
> include such contextual information which may be relevant to agents.
>
> So, in a way, I see such "problems" as an issue of methodology, not
> as a shortcoming of the URIQA protocol. And I also see the fact that
> URIQA does not explicitly address contextualization to reflect a proper
> and efficient division of functional layers in the application stack.
>
> In any case, perhaps you could elaborate on the particular problems
> you are envisioning with the publication/interchange of
> context-independent definitions of terms. Maybe I'm missing something.
> Or maybe I can then offer some explicit examples of how URIQA can help
> solve such problems, in conjunction with some best practices.

Having started writing some examples, I quickly realized that the
problem in all cases was lack of precision (or rather, implicit
context assumptions) in the original shared ontology. As you say, this
is a best-practice problem rather than a technical problem with URIQA.

E.g. somebody mints the term 'foo:Server', describing it as a computer
server. In actual fact, the implicit context in which the term is used
(by software) means that it is also a unix server, a server owned by
the company, a server managed by a particular team, a server used to
host databases, etc. These contextual clarifications are easy to
identify in retrospect, but aren't always obvious to the author, and
more importantly aren't at all obvious to others attempting to use the
terms. (Note that often the reason for using a term is so that it will
be interpreted in some way by a software service, meaning that the
user is more interested in the software's implicit definition of the
term than in the original author's.)

The problem is then compounded by ontological links between terms
(e.g. bah:myprop rdfs:range foo:Server), and it was the automatic
URIQA lookup and utilisation of these terms that I was concerned
about.

Having said that, the fact that my examples relied on imprecision in
the shared ontologies is heartening to me, since it implies that if
people restrict their 'trust' of ontological statements to those
directly referencing a 'high quality' (i.e. precise and well
understood/deployed) schema, there is a better chance of being able
to make good use of these new statements. It might even be possible
for the agent to present some metric of precision to the user by
counting the number of levels away from a 'high quality' schema at
which a term is defined (see the P.S. below for a sketch).

BTW, I've made the assumption that URIQA would mostly be used to look
up ontology terms. This is because (in my environment) URIs don't tend
to fit instance data in an obvious way, unless they refer to e.g.
web-based documents or email addresses. I think this is also the case
with FOAF and DOAP, and I'd be interested in your views on this.

BTW2, apologies for the sensational subject. It made me cringe when I
read it back the next day.

Cheers,

Phil
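P.S. In case it's useful, here is the kind of 'levels away from a
high-quality schema' metric I was imagining -- again only a rough
Python sketch against an rdflib-style API, and the trusted namespaces
and cut-off depth are illustrative assumptions, not a worked-out
proposal:

    from rdflib import Namespace

    RDFS = Namespace("http://www.w3.org/2000/01/rdf-schema#")

    # Illustrative stand-ins for 'high quality' schema namespaces.
    TRUSTED = ("http://purl.org/dc/elements/1.1/",
               "http://xmlns.com/foaf/0.1/")

    def schema_distance(graph, term, max_depth=5):
        """Count rdfs:subClassOf/rdfs:subPropertyOf hops from term
        up to a trusted schema; 0 means the term is defined there."""
        frontier, depth = {term}, 0
        while frontier and depth <= max_depth:
            if any(str(t).startswith(ns)
                   for t in frontier for ns in TRUSTED):
                return depth
            parents = set()
            for t in frontier:
                for p in (RDFS["subClassOf"], RDFS["subPropertyOf"]):
                    parents.update(o for _s, _p, o in
                                   graph.triples((t, p, None)))
            frontier, depth = parents, depth + 1
        return None  # no trusted ancestry within max_depth

A low number would suggest the term's meaning is anchored close to a
well-understood schema; None would flag it as a candidate for human
inspection before an agent acts on it.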
Received on Monday, 11 October 2004 09:39:44 UTC