Re: Working without being ambushed by Ambiguity

On 24/06/2013 22:01, Larry Masinter wrote:
> The semantic web equivalent, I'm thinking, is removing the assumption of trust. But I think there is a problem with the word 'trust'
>
>> In many cases, I think that trust is implied by the context of use, and that
>> this corresponds to the "99% of the time" that I can ignore trust
>
> How much of what you read on the internet do you believe without reservation? If the answer is 99%, you're extremely gullible and in danger. If it's 50%, I think you're still in trouble.

I don't see this as a counter to my "99%" - the key here is in the "context of 
use".  I don't expect to blindly use 99% of what's out there, it's just that by 
the time it gets to an application I write, the decision to use (i.e. trust) 
that data has already been made.

>
> So maybe we're using 'trust' in different ways.
>
> Consider defining trust in terms of transfer of belief. Party A trusts party B to the extent that if B utters statement S and A receives S, that A's belief state changes to include S. If A trust B perfectly, then A believes everything B says. If A doesn't trust B at all, then A ignores what B says, or doesn't believe it, in any case.
>
> That is,  "trust" is the factor that determines how utterances change belief.  Trust is individual, dependent on the origin (if A trusts B and C trusts B, the trust of A for B and of C for B are properties of A and C and not B), never total (no one really trusts someone else completely, people don't even trust their memory), rarely zero (usually someone's statements affect your belief). Trust might be negative (the fact that B says S leads A to believe not-S).

It's true, we are using the word "trust" in different ways.  In my 3-year 
involvement with the EU iTrust working group [1][2], I saw many different 
notions of trust described, but never that one.  There was no complete consensus 
about what trust actually meant, but many participants used the term as an 
indicator of how they expect some other party to behave, or how reliable they 
regard that party's pronouncements, in the absence of complete knowledge.  The 
practical work on trust tended to tie in quite closely with risk analysis.

[1] http://www.ninebynine.org/iTrust/iTrust-survey.html

[2] http://www.ninebynine.org/iTrust/iTrustSurvey.pdf

[[
Defining trust
• 23 different definitions found
  – Two economics papers used the same definition!
Common themes:
  – Subjective
  – Expectation or belief about another’s behaviour
  – Related to specific context
  – Risk of trusting behaviour
  – Basis for decision with incomplete information
  – Based on past evidence
]] -- (from [2])

>
> Perhaps you have a word you'd rather use than 'trust'.

No, I don't have a better word to offer.  But as far as I can see, trust 
comes down to decisions one makes in light of expectations of other parties' 
behaviour.  So in that respect, it involves belief (though not necessarily how 
utterances change belief).
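(As an aside, your belief-transfer definition above could be sketched as a toy 
model -- this is purely illustrative, with hypothetical names and a made-up 
additive update rule, not something anyone in this thread has proposed:)

```python
# Toy sketch of "trust as transfer of belief": each receiver holds a
# per-source trust factor in [-1, 1], and an utterance from a source
# shifts the receiver's confidence in that statement by the factor.
# All names and the update rule are hypothetical illustrations.

class Receiver:
    def __init__(self):
        self.trust = {}    # source -> trust factor in [-1, 1]
        self.belief = {}   # statement -> confidence in [-1, 1]

    def hears(self, source, statement):
        # "rarely zero": unknown sources get a small positive default
        t = self.trust.get(source, 0.1)
        c = self.belief.get(statement, 0.0) + t
        # clamp the updated confidence to [-1, 1]
        self.belief[statement] = max(-1.0, min(1.0, c))

a = Receiver()
a.trust["B"] = 1.0    # A trusts B perfectly: A believes what B says
a.trust["D"] = -0.5   # negative trust: D saying S pushes A toward not-S

a.hears("B", "sky-is-blue")
a.hears("D", "sky-is-green")
print(a.belief["sky-is-blue"])   # 1.0
print(a.belief["sky-is-green"])  # -0.5
```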

> For a semantic system to be world wide, it needs to function resiliently in the face of incorrect, false, malicious, sloppy, negligent, lazy sources.

Yes indeed.  But I see a separation of concerns here:  eventually, maybe our 
"semantic web" (the machine-articulated bits) will do all of this for us, but 
IMO that's a long way off, if indeed it's ever completely achievable.  In the 
meanwhile, people are still in the loop, making trusting decisions and resolving 
ambiguities.

>
> If you think you can close off ambiguity and trust and just assume them, then I think it's certain you're not building a web-scale system. Sure there are contexts where you can assume trust by the context, but those models don't scale.

I don't think that's (i.e. "close off ambiguity and trust and just assume them") 
what I've suggested.  Indeed, my original response explicitly suggested this 
wasn't good enough ("But I fear if we don't build on sound foundations
then sooner or later things will start to crumble.").

What I'm looking for here is a framework which provides incremental advance from 
the current situation (which corresponds to my "99% of the time") to one in 
which we have some basic tools to represent and process information that may be 
subject to differing interpretation or acceptance (due to trust, ambiguity or 
whatever).  In my view, some way to contextualize (otherwise processable) 
information is key to this (e.g. being able to explicitly label some claims as 
being based on a trusting decision).
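(To make that last point concrete, here is a minimal sketch of what I mean by 
contextualizing claims -- a hypothetical illustration only, with invented names, 
not a concrete proposal: each claim carries a context record, and an 
application filters on that record before using the data.)

```python
# Hypothetical sketch: claims carry a context record noting, e.g.,
# their source and whether acceptance rests on a trusting decision,
# so applications can filter before use.  All names are invented.

claims = [
    # (subject, predicate, object, context)
    ("ex:alice", "ex:worksFor", "ex:acme",
     {"source": "ex:acme-site", "trusted": True}),
    ("ex:alice", "ex:age", "29",
     {"source": "ex:random-blog", "trusted": False}),
]

def accepted(claims):
    """Yield only claims whose context records a trusting decision."""
    for s, p, o, ctx in claims:
        if ctx.get("trusted"):
            yield (s, p, o)

print(list(accepted(claims)))
# [('ex:alice', 'ex:worksFor', 'ex:acme')]
```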

#g
--

Received on Tuesday, 25 June 2013 09:57:17 UTC