Re: Digital Press Passes and Decentralized Public Key Infrastructures

+++1 to what Daniel says, with the exception of the notion that more
data points are inherently better than fewer. I'm not sure what the
right answer is, but if we can avoid making this simply an exercise in
pushing all the responsibility and liability onto the verifier, we will
have done a good thing. EULAs are a great example of pushing out a
whole lot of information that 'protects' the author while leaving the
consumer responsible for interpreting it. In practical terms, it ends
up being a lot of noise that may benefit the author but certainly
doesn't benefit the consumer. I think there is a middle ground we could
find that would better balance the needs of the participants.
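
To make that middle ground a bit more concrete, here is a rough sketch
of the kind of personal trust policy Daniel describes below, with
Annette's context point folded in. This is purely illustrative Python;
every name and number is my own invention, not anything from a spec:

    from dataclasses import dataclass

    @dataclass
    class CredibilitySignal:
        source: str    # who issued the signal
        context: str   # e.g. "lending", "childcare"
        score: float   # 0.0 (no credibility) .. 1.0 (full)

    def required_threshold(stakes: float) -> float:
        # Daniel's point: little background info to lend $10,
        # a lot more to lend $1000. The formula is arbitrary.
        return min(0.99, 0.5 + stakes / 2000.0)

    def permits(signals, context, stakes):
        # Annette's point: only signals matching the action's
        # context count toward the decision.
        relevant = [s.score for s in signals if s.context == context]
        if not relevant:
            return False
        return sum(relevant) / len(relevant) >= required_threshold(stakes)

    signals = [CredibilitySignal("my bank", "lending", 0.9),
               CredibilitySignal("my sister", "childcare", 0.95)]
    permits(signals, "lending", 10)    # True: low stakes, right context
    permits(signals, "lending", 1000)  # False: 0.9 < 0.99 threshold
    permits(signals, "childcare", 0)   # True: sister's signal applies

The particular formula doesn't matter; the point is that the policy
lives with the consumer of the information, not with a central list of
who is "in" and who is "out".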

On Fri, Jul 23, 2021, 2:42 PM Daniel Schwabe <dschwabe@gmail.com> wrote:

> Hi all,
> jumping in on the discussion, I generally agree with Bob Wyman’s point,
> and Alan Karp’s remark.
>
> As I argued in one of our calls, I think credibility signals should be
> used to allow some consumer of information to ultimately convert these
> signals into a belief that will *enable some action*. This is what I
> understand Adam to mean with his examples: “depositing money in my
> account”, “allowing my sister to take care of my children” - or
> “retweeting some information”, “lending money to someone”, “storming
> the Capitol”, etc.
> The action being considered typically includes parameters, e.g.: “the
> amount of money I’ll deposit”, “how long I will leave my children with
> my sister”, “how much money I will lend”. The degree of credibility I
> will require of the information I use to take these actions differs in
> each case - e.g., I may require little background info to lend $10, but
> a lot more to lend $1000.
> The action being considered may not necessarily be temporally close to
> the moment I consume the information, but the credibility signals are
> really only relevant when I take (or decline to take) the action. In
> other words, these signals remain “latent” until that moment.
>
> In many cases, I will attribute a (high) credibility score to
> people/organizations with which I share moral/ethical/personal values,
> often in contradiction to “facts” - i.e., claims made by some sources
> that I would otherwise accept as true, but don’t because they
> contradict my personal values. There is empirical evidence that people
> operate this way.
>
> So I don’t think we need an infrastructure that characterizes trust
> for each possible context - which is how I understood Dave Karger’s
> proposal, and which is clearly impractical. Rather, we should surface
> as many indicators as possible (including metadata such as provenance
> chains) to feed into each person’s decision process (i.e., applying
> one’s own trust policies) when taking some action *based* on that
> information. This way, each person builds their own “trust chains”,
> aligned with their values.
>
> Cheers
> Daniel
>
>
>
> On 22 Jul 2021, at 17:55, Alan Karp <alanhkarp@gmail.com> wrote:
>
> Trust is contextual.  I trust my bank with my money but not my children.
> I trust my sister with my children but not my money.
>
> --------------
> Alan Karp
>
>
> On Thu, Jul 22, 2021 at 1:47 PM Bob Wyman <bob@wyman.us> wrote:
>
>> Annette,
>> You wrote: "A list of who’s trusted and who isn’t would need to include
>> who is trusted _in_what_context_."
>> This reminded me of a recent discussion on StackExchange of "How is it
>> possible that [insert known crackpot] has articles published in
>> Peer-Reviewed Journals?"
>> <https://academia.stackexchange.com/questions/170795/how-is-it-possible-that-insert-known-crackpot-has-articles-published-in-peer-r>
>> Of course, the response provided by many was that we shouldn't be
>> surprised when someone is an expert in one context but a complete crackpot
>> in others. (A classic example might be Hollywood actors who are often asked
>> to expound on world affairs... Who imagines that that might be useful?)
>>
>> The reality is that we can't ever say with confidence that "X is
>> credible." Rather, the best we could ever say is that "When X speaks
>> about Y, X should probably be considered credible," and even then we'd
>> need to be careful to specify the time period during which we ascribe
>> credibility. As Buffy
>> <https://academia.stackexchange.com/users/75368/buffy> commented on
>> StackExchange: "someone who has done important work early on [in
>> their career] can become a crank later in life." And, we should consider
>> the "stopped clock" syndrome mentioned by Graham
>> <https://academia.stackexchange.com/users/43789/graham>: Some statements
>> may have been very credible at the moment that they were made even though
>> later evidence or paradigm shifts made them less credible. (Should one be
>> considered "credible" if what they said was once credible but now is no
>> longer credible?)
>>
>> bob wyman
>>
>> On Thu, Jul 22, 2021 at 3:49 PM Annette Greiner <amgreiner@lbl.gov>
>> wrote:
>>
>>> One important angle on this question is the context of a statement. A
>>> list of who’s trusted and who isn’t would need to include who is trusted
>>> _in_what_context_. For example, a physician who specializes in dermatology
>>> cannot prima facie be taken as an authority on heart transplants, nor
>>> vice versa. Part of the misinformation landscape we’ve seen of late is
>>> characterized by people getting credit for roles in which they have no
>>> expertise because they have credibility in some other high-profile
>>> role. It would be a serious error on our part to develop a mechanism
>>> by which people generate lists of those they consider trustworthy
>>> without reference to context.
>>> -Annette
>>>
>>> On Jul 21, 2021, at 9:21 PM, Bob Wyman <bob@wyman.us> wrote:
>>>
>>> The best answer to the question "Who decides who is in and who is out?"
>>> is probably "Who cares? Do whatever feels good." The important thing in
>>> building a curated list is to simply build it.
>>>
>>>
>>>
>

Received on Friday, 23 July 2021 22:24:25 UTC