Re: Trust in statements (was BioRDF Brainstorming)

On 13/02/2008, Matt Williams <matthew.williams@cancer.org.uk> wrote:
> Just a quick note that the 'trust' we place in an agent /could/ be
> described probabilistically, but could also be described logically. I'm
> assuming that the probabilities in the trust annotations are likely to be
> subjective probabilities (as we're unlikely to have enough data to
> generate objective probabilities for the degree of trust).

You can describe discrete probabilities logically; you just have to
think of them as an ordered set of values. I'm not sure whether
classical description logic goes well with continuous probabilities,
though.
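
For example (a rough sketch in Python, not tied to any DL reasoner;
the 10% increments are just an illustrative assumption), a fixed set
of discrete values is trivially well ordered:

    # Illustrative only: a fixed, ordered set of discrete probability values.
    LEVELS = [i / 10.0 for i in range(1, 10)]   # 0.1, 0.2, ..., 0.9

    def at_least(threshold):
        # Which discrete levels count as "at least this trustworthy"?
        return [p for p in LEVELS if p >= threshold]

    print(at_least(0.7))   # [0.7, 0.8, 0.9]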

> If you ask people to annotate with probabilities, the next thing you
> might want to do is to define a set of common probabilities (10 - 90, in
> 10% increments, for example).
>
> The alternative is that one could annotate a source, or agent, with our
> degree of belief, chosen from some dictionary of options (probable,
> possible, doubtful, implausible, etc.).

A system that uses raw numerical percentages is probably not optimal,
but a well-ordered set of degrees of belief would be good.
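
To make that concrete, here is a rough sketch of what such an
annotation could look like in RDF (using rdflib; the ex: vocabulary,
the level names and their ranks are made-up illustrations, not an
existing ontology):

    # Hypothetical sketch: annotate a source with a degree of belief drawn
    # from an ordered dictionary of options. The vocabulary is invented.
    from rdflib import Graph, Namespace, Literal, URIRef

    EX = Namespace("http://example.org/trust#")

    # Lowest to highest degree of belief.
    LEVELS = ["implausible", "doubtful", "possible", "probable"]

    g = Graph()
    source = URIRef("http://example.org/sources/some-annotation-source")
    g.add((source, EX.trustLevel, EX.probable))

    # Record the ordering explicitly so consumers can compare levels.
    for rank, name in enumerate(LEVELS):
        g.add((EX[name], EX.rank, Literal(rank)))

A consumer could then compare two sources by looking up the ex:rank of
their ex:trustLevel values, which only needs the ordering, not an
agreed numerical probability.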

> Although there are some formal differences, the two approaches end up as
> something very similar. There is of course a great deal of work on
> managing conflicting annotations and levels of belief in the literature.

Could you point me to a few sources on managing conflicts and levels
of belief? Do they cover managing conflicts among distributed
annotations, i.e. where the annotations are not all on a central
server from which a curator picks out the best one or manually fixes
ontology conflicts?

Peter

Received on Wednesday, 13 February 2008 01:11:10 UTC