
Re: Trust in statements (was BioRDF Brainstorming)

From: Adrian Walker <adriandwalker@gmail.com>
Date: Tue, 12 Feb 2008 20:03:12 -0500
Message-ID: <1e89d6a40802121703q26cf6b85m35cd035aa708403a@mail.gmail.com>
To: "Matt Williams" <matthew.williams@cancer.org.uk>
Cc: public-semweb-lifesci@w3.org, holger.stenzhorn@deri.org, p.roe@qut.edu.au, j.hogan@qut.edu.au
Hi Matt --

Another way of increasing trust is to provide explanations, in English,
automatically derived from the proofs that an agent carries out.

A serendipitous feature is that the explanations start out with headlines,
and then go progressively into finer detail. This aspect is particularly
useful for explanations over RDF, where there may be many more proof steps
than for similar deductions over n-ary predicates, n >> 3. The example [1]
illustrates this.
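As a rough illustration of why RDF proofs have more steps (the predicate and node names below are invented for this sketch, not taken from the demo): a single n-ary fact must be reified into several triples, so a deduction that consumed one fact now chains through several.

```python
# Sketch: one ternary fact vs. its reification as RDF triples.
# All names here (dose, :d1, :aspirin, ...) are illustrative assumptions.

# One n-ary fact (n = 3 arguments): a proof step can use it directly.
nary_fact = ("dose", ":aspirin", ":patient1", "100mg")

# The same information in RDF needs an intermediate node :d1, so a
# deduction that used one fact now walks through four triples --
# hence the longer proofs, and the value of headline-first explanations.
triples = [
    (":d1", "rdf:type", ":Dose"),
    (":d1", ":drug", ":aspirin"),
    (":d1", ":patient", ":patient1"),
    (":d1", ":amount", "100mg"),
]
```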

The explanations can be automatically hypertexted for ease of navigation.

The explanations can also be abductive, for cases where an expected answer
is not found.

This is what is done in the system online at the site below.

                                                        -- Adrian

[1]  www.reengineeringllc.com/demo_agents/RDFQueryLangComparison1.agent

Internet Business Logic
A Wiki and SOA Endpoint for Executable Open Vocabulary English
Online at www.reengineeringllc.com    Shared use is free

Adrian Walker
Reengineering


On 2/12/08, Matt Williams <matthew.williams@cancer.org.uk> wrote:
>
>
> Just a quick note that the 'trust' we place in an agent /could/ be
> described probabilistically, but could also be described logically. I'm
> assuming that the trust annotations are likely to be subjective
> probabilities (as we're unlikely to have enough data to generate
> objective probabilities for the degree of trust).
>
> If you ask people to annotate with probabilities, the next thing you
> might want to do is to define a set of common probabilities (10% to
> 90%, in 10% increments, for example).
>
> The alternative is that one could annotate a source, or agent, with a
> degree of belief chosen from some dictionary of options (probable,
> possible, doubtful, implausible, etc.).
>
> Although there are some formal differences, the two approaches end up as
> something very similar. There is of course a great deal of work on
> managing conflicting annotations and levels of belief in the literature.
>
> Matt
>
> --
> http://acl.icnet.uk/~mw
> http://adhominem.blogsome.com/
> +44 (0)7834 899570
>
>
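The two annotation schemes Matt sketches (fixed probability buckets vs. a qualitative belief dictionary) can be put side by side in a short sketch. The numeric value assigned to each label below is an illustrative assumption, not something from the thread:

```python
# Sketch of the two trust-annotation schemes from Matt's note.
# The label-to-probability mapping is an illustrative assumption.

# Scheme 1: a fixed set of common probabilities, 10% to 90% in 10% steps.
PROBABILITY_BUCKETS = [p / 100 for p in range(10, 100, 10)]

# Scheme 2: a dictionary of qualitative belief labels, each mapped
# (by assumption) to a representative subjective probability.
BELIEF_LABELS = {
    "probable": 0.8,
    "possible": 0.5,
    "doubtful": 0.2,
    "implausible": 0.1,
}

def nearest_bucket(p):
    """Snap a subjective probability to the nearest common bucket."""
    return min(PROBABILITY_BUCKETS, key=lambda b: abs(b - p))

def label_to_bucket(label):
    """Read a qualitative label as its nearest probability bucket,
    showing how the two approaches end up very similar."""
    return nearest_bucket(BELIEF_LABELS[label])
```

As Matt notes, despite formal differences the two approaches converge: once each label is given a representative probability, a qualitative annotation is just a coarse probabilistic one.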
Received on Wednesday, 13 February 2008 01:03:32 GMT
