Re: credibility networks (was Re: Is Alice, or her post, credible? (A really rough use case for credibility signals.))

*With all due fear of taking over a thread to do something that seems
uncomfortably close to campaigning...*

In the years since I first met Sandro and then joined this group, I've seen
what happened in this email thread happen a number of times. I've also seen
it happen in CredCo threads, MisinfoCon discussions, and other groups.

What happens is essentially this: Someone proposes a new thing, let's call
it a MacGuffin <https://en.wikipedia.org/wiki/MacGuffin>. The person who
proposes this MacGuffin explains it in great detail, yet is a bit hand-wavy
about some aspects, especially how it would be supported or how it would
gain adoption. That doesn't bother me at all. If we didn't have people
dreaming up new things, we wouldn't have anything new. But the reality is
that these are just ideas, not actual initiatives.

Then in Act II there is some discussion about the MacGuffin, talking about
the pros and the cons, etc.

Then there is a pause. This pause comes because the group is made up of
people who all have full-time jobs. People with jobs can't just drop
everything and put in the work needed to launch a new MacGuffin. Some can,
to some degree, which is what Sandro has done with TrustLamp. He would be
the first to tell you, I think, how hard that is.

After the pause comes Act III, in which some people who are in the group
realize that much of the MacGuffin is a lot like what they are already
doing, and so they promote their own thing. Greg, you played that part this
time. I've played that part many times in the past.

After that, the play is over, everyone goes home. And then after a while it
happens again.

My suggestion (and the reason I'm running for chair) is that we reverse
the order of this play. Rather than 1. propose an idea, 2. evaluate it,
3. look at related existing initiatives, we do this: 1. Look at existing
initiatives, 2. Evaluate them, 3. (With luck) Propose new ideas that fill
an existing and as-yet unserved need.

To evaluate the current initiatives effectively, I would propose that first
we come up with some guidelines. To do that, we start with the documents we
have and that we have all agreed to. Then we turn those into a framework
for evaluation. Once we agree to that framework, we publish it. That gives
this group the relatively quick win of publishing something that can be
used by anyone as they are looking at existing initiatives, or are thinking
about starting something new. It essentially puts this group in the middle
of many conversations happening about disinformation. It will help everyone
to clarify what can actually help, whom it helps, how much it helps, and
how much downside there may be.

Once we have that document, we can then decide if we want to meet regularly
and evaluate initiatives based on that document, or create a new group to
do that, or examine the whole landscape and figure out if there's something
that would be appropriate for this W3C group to try to do next.

I say all this not to suggest that the original idea lacks merit, or that
any idea discussed here (including my own trust.txt) is great or sucks.
I'm just saying it would help the world, and each of us individually, if
we could evaluate ideas using a common vocabulary.

Thank you for reading.

-Scott Yates
Founder
JournalList.net, caretaker of the trust.txt framework
202-742-6842
Short Video Explanation of trust.txt <https://youtu.be/lunOBapQxpU>


On Wed, Aug 18, 2021 at 11:19 AM Greg Mcverry <jgregmcverry@gmail.com>
wrote:

> This document was also discussed at the verifiable credentials meeting
> this week:
> https://docs.google.com/presentation/d/1jn9DjM-wlZT1B9moBP23qhiB2FZc_H8oqXIZN222a9U/edit#slide=id.ge4a5a0fed4_0_18
>
> I know there is a lot of crossover with the Verifiable Credentials group
> here, but I think if we are developing trust signals directed at the
> author, we should develop our spec to align with VC.
>
> On Wed, Aug 18, 2021 at 12:10 PM Greg Mcverry <jgregmcverry@gmail.com>
> wrote:
>
>> We have been playing with the concept of vouch over in the indieweb
>> world: https://indieweb.org/Vouch
>>
>> It's a different stack, since it's based on webmentions, but the workflow
>> is pretty much the same.
>>
>> The goal is to create semi-private posts for community members who are
>> vouched for by others, and to build a trust network.
>>
>> XFN is pretty defunct, but I use rel="muse" on my poetry follower list as
>> a trust signal:
>>
>> https://indieweb.org/XFN
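>>
>> Rough sketch of how software might pick up a rel value like that as a
>> trust edge. Purely hypothetical Python, nothing the XFN or webmention
>> specs define, and the example URL is made up:
>>
>>     # Collect links carrying a given XFN rel value (e.g. "muse"),
>>     # treating each one as an outgoing trust edge.
>>     from html.parser import HTMLParser
>>
>>     class RelLinkCollector(HTMLParser):
>>         def __init__(self, wanted_rel):
>>             super().__init__()
>>             self.wanted_rel = wanted_rel
>>             self.links = []
>>
>>         def handle_starttag(self, tag, attrs):
>>             if tag != "a":
>>                 return
>>             attrs = dict(attrs)
>>             # rel holds space-separated values, e.g. rel="muse met"
>>             rels = (attrs.get("rel") or "").split()
>>             if self.wanted_rel in rels and "href" in attrs:
>>                 self.links.append(attrs["href"])
>>
>>     collector = RelLinkCollector("muse")
>>     collector.feed('<a href="https://example.com/poet" rel="muse">poet</a>')
>>     print(collector.links)  # ['https://example.com/poet']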
>>
>>
>>
>> On Wed, Aug 18, 2021 at 11:35 AM David Karger <karger@mit.edu> wrote:
>>
>>> We've been working for a few years on this kind of trust network.  I
>>> recognize the subject-dependence of trust, but I think that trying to work
>>> that into systems being developed now is too ambitious.  Right now the
>>> value of a trust network can be demonstrated more effectively by starting
>>> with a simpler system that works in terms of generic credibility rather
>>> than subject-specific credibility.  What you want are people who know
>>> what they know
>>> and don't claim to know more.   Yes, you'll lose out on your friend who
>>> knows everything about global warming but is anti-vax, but I think there
>>> are enough generally trustworthy individuals to drive a network of
>>> assessments.
>>> On 8/18/2021 9:46 AM, connie im dialog wrote:
>>>
>>> As an additional thought, perhaps to bridge the exchange between Annette
>>> and Bob, and Sandro: one aspect I see missing in the scenario below is the
>>> underlying knowledge/perspective framework or approach that ties signals
>>> together; it could be understood as a schema or rubric.  This is a
>>> different way of tying signals together than trust networks, and probably
>>> underlies those relationships.
>>>
>>> What I mean by this is: all of the signals proposed are meant to be
>>> understood as potential indications of credibility, but they only gain
>>> meaning when some of them are brought together in a specific interpretive
>>> framework.  Implicit in the development of many of the currently proposed
>>> signals is belief, or trust, in a scientific method of evidence and
>>> evaluation of claims using methods such as verifiability. It's also tied to
>>> things like expertise and the development of professions.
>>>
>>> This framework of knowledge is different from a moral order that trusts
>>> inherited wisdom, or tradition, for example.  (I'm going to sidestep the
>>> elites for now, since the power dynamic depends on what kind of elite one
>>> is.) Just because they are different doesn't mean that they can't in fact
>>> share one or more signals, but I think the dominance of certain signals
>>> over others varies.  And because we aren't always consistent, we may hold
>>> both of these frameworks, or more, depending on context or topic.
>>>
>>> So I guess I see Bob's suggestion as much in line with a number of
>>> crowdsourced-wisdom projects, which can be valuable.  When you think of
>>> historical or even current examples, such as genocide reporting, it's
>>> critical to include as many on-the-ground reports as possible, even as
>>> those claims also need to be validated as much as possible. In these
>>> contexts, there are many indications of what makes for a credible witness
>>> report, and that isn't the same as expertise.
>>>
>>> But in some cases, on some topics, you can't go with just any crowd
>>> <https://wearecommons.us/crowd-wisdom-public-wisdom-regarding-misinformation-at-large/>.
>>> That is, at least, if you hold to a scientific method of evaluation and
>>> validation.  Like Annette, I have no problem with deferring to expertise
>>> understood in this framework, and I think it's even worth being explicit
>>> about the theoretical framework: claim X works if you believe or agree
>>> with approach Y.
>>>
>>> When something is complicated, or new to me, my inclination is to agree
>>> with Sandro, but to add a little more: if he tells me someone is good at
>>> something, I'll likely think that someone is good, but what's driving this
>>> is trust, from experience, in his knowledge about certain things at
>>> certain times on certain topics (back to the framework or approach).
>>>
>>> Thoughts?
>>>
>>> One article that I recently came across -- I've just started working
>>> through it -- seems related: "Beyond subjective and objective in
>>> statistics" by Andrew Gelman and Christian Hennig, with a number of
>>> responses, including one by L.A. Paul. Sharing in case it's of interest:
>>> https://www.lapaul.org/papers/objectSubjectPerspectives.pdf
>>>
>>> --connie
>>>
>>> On Tue, Aug 17, 2021 at 10:53 PM Sandro Hawke <sandro@hawke.org> wrote:
>>>
>>>> It seems to me we can unify these views using credibility networks. We
>>>> can let anybody say anything about anything, as long as we propagate that
>>>> content only along credibility-network links. I'll simplify a bit here,
>>>> saying a "good" source is one which should be believed, or one which has
>>>> interesting and non-harmful content.
>>>>
>>>> So let me see content from sources I've personally assessed as "good",
>>>> and also from sources my software predicts will be "good".  If I say
>>>> Clarence is good, and Clarence says Darcy is good, and Darcy says Edward is
>>>> good, then show me Edward's content, sure.
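>>>>
>>>> In rough Python, that simplified rule might look like the toy sketch
>>>> below (hypothetical, not trustlamp's actual logic; the says_good map
>>>> and hop limit are made up for illustration):
>>>>
>>>>     # Show content only from sources reachable from "me" along "good"
>>>>     # edges, within a few hops.
>>>>     from collections import deque
>>>>
>>>>     def visible_sources(me, says_good, max_hops=3):
>>>>         # says_good maps a source to the set of sources it calls "good"
>>>>         seen, queue = {me}, deque([(me, 0)])
>>>>         while queue:
>>>>             source, hops = queue.popleft()
>>>>             if hops == max_hops:
>>>>                 continue
>>>>             for vouched in says_good.get(source, ()):
>>>>                 if vouched not in seen:
>>>>                     seen.add(vouched)
>>>>                     queue.append((vouched, hops + 1))
>>>>         return seen
>>>>
>>>>     says_good = {"me": {"Clarence"}, "Clarence": {"Darcy"},
>>>>                  "Darcy": {"Edward"}}
>>>>     visible_sources("me", says_good)
>>>>     # -> {"me", "Clarence", "Darcy", "Edward"}: Edward's content shows.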
>>>>
>>>> On the other hand, if there is no one in my network vouching for Edward
>>>> in any way, I'm not going to see his content. Essentially, total strangers
>>>> -- people with whom I have no positive connection, direct or indirect --
>>>> are blocked by default. I'm talking here about content appearing in search
>>>> results, news feeds, comments, annotations, etc.  If I ask for something
>>>> specifically by URL, that's a different matter. Whoever gave me that URL is
>>>> essentially vouching for the content. If they give a link to bad content, I
>>>> can push back.
>>>>
>>>> This general approach subsumes the trust-the-elites model. If someone
>>>> says they trust only pulitzer.org, then they'll get an old-media/elite
>>>> view of the available content.  If they say they trust only
>>>> infowars.com, they'll get a very different view.
>>>>
>>>> My hope is most people have an assortment of sources they find credible
>>>> and the software can help them flag where the sources disagree.
>>>>
>>>> (This is what I was prototyping in trustlamp. Many details remain to be
>>>> solved.)
>>>>
>>>>     -- Sandro
>>>>
>>>>
>>>>
>>>> On 8/17/21 8:46 PM, Annette Greiner wrote:
>>>>
>>>> I don’t think I have the solution, but I offered my comment to help
>>>> better define what would be a reasonable solution. Another way to think
>>>> about it is that the signal should not be game-able. As for what you refer
>>>> to as “elites” and “hierarchies”,  I have no problem with harnessing
>>>> expertise to fight misinformation. Turning up the volume does not improve
>>>> the signal/noise ratio.
>>>> -Annette
>>>>
>>>> On Aug 17, 2021, at 2:44 PM, Bob Wyman <bob@wyman.us> wrote:
>>>>
>>>> On Tue, Aug 17, 2021 at 4:37 PM Annette Greiner <amgreiner@lbl.gov>
>>>> wrote:
>>>>
>>>>> I don’t think this is a wise approach at all.
>>>>>
>>>> Can you propose an alternative that does not simply formalize the
>>>> status of existing elites and thus strengthen hierarchies in public
>>>> discourse? For instance, the existing Credibility Signals
>>>> <https://credweb.org/reviewed-signals/> (date-first-archived,
>>>> awards-won, ...) would seem to provide useful information about only a tiny
>>>> portion of the many speakers on the Web. By focusing on the output of
>>>> awards-granting organizations, while not providing signals usable by
>>>> others, they empower that one group of speakers (those who grant awards)
>>>> over the rest of us. Can you propose a mechanism that allows my voice, or
>>>> yours, to have some influence in establishing credibility?
>>>>
>>>> We are seeing now that fraudsters and misinformation dealers are able
>>>>> to gain traction because there is so little barrier to their reaching high
>>>>> numbers of readers.
>>>>>
>>>> Today, the "bad" folk are able to speak without fear of rebuttal.
>>>> Neither the fact-checking organizations nor the platforms for speech seem
>>>> to have either the resources needed, or the motivation required, to
>>>> usefully remark on the credibility of more than an infinitesimal portion of
>>>> public speech. How can we possibly counterbalance the bad-speakers without
>>>> enabling others to rebut their statements?
>>>>
>>>> In any case, the methods I sketched concerning Alice's statements would
>>>> empower formal fact-checkers as well as individuals. For instance, a
>>>> "climate fact-checking" organization would be able to do a Google search
>>>> for "hydrogen 'only water-vapor
>>>> <https://www.google.com/search?q=hydrogen+%22only+water-vapor%22>',"
>>>> and then, after minimal checking, annotate each of the hundreds of such
>>>> statements with a common, well-formed rebuttal that would be easily
>>>> accessed by readers. Organizations could also set up prospective searches,
>>>> such as a Google Alert, that would notify them of new instances of false
>>>> claims and enable rapid response to their proliferation. I think this would
>>>> be useful. Do you disagree?
>>>>
>>>> Any real solution must not make it just as easy to spread
>>>>> misinformation as good information.
>>>>>
>>>> I have rarely seen a method for preventing bad things that doesn't also
>>>> prevent some good. The reality is that the most useful response to bad
>>>> speech is more speech. Given more speech, we can discover methods to assist
>>>> in the process of separating the good from the bad. But, if we don't
>>>> provide the means to make alternative claims, there is little we can do
>>>> with the resulting silence. False claims will stand if not rebutted.
>>>>
>>>> It must yield a signal with much much less noise than the currently
>>>>> available signals.
>>>>>
>>>> What "currently available signals?" Other than platform provided
>>>> moderation and censorship, what is there?
>>>>
>>>> Increasing the level of he-said/she-said doesn’t help determine what is
>>>>> reliable information. Adding to the massive amounts of junk is not the
>>>>> answer.
>>>>> -Annette
>>>>>
>>>>> On Aug 16, 2021, at 11:52 AM, Bob Wyman <bob@wyman.us> wrote:
>>>>>
>>>>> The thrust of my post is that we should dramatically enlarge the
>>>>> universe of those who make such claims to include all users of the
>>>>> Internet. The result of enabling every user of the Web to produce and
>>>>> discover credibility signals will be massive amounts of junk, but also a
>>>>> great many signals that you'll be able to use to filter, analyze, and
>>>>> reason about claims and the subjects of claims.
>>>>>
>>>>>
>>>>
>>>>
>>>
>>> --
>>> connie moon sehat
>>> connieimdialog@gmail.com
>>> https://linkedin.com/in/connieatwork
>>> PGP Key ID: 0x95DFB60E
>>>
>>>
>>
>> --
>> J. Gregory McVerry, PhD
>> Assistant Professor
>> Southern Connecticut State University
>> twitter: jgmac1106
>>
>>
>>
>>
>
> --
> J. Gregory McVerry, PhD
> Assistant Professor
> Southern Connecticut State University
> twitter: jgmac1106
>
>
>
>

Received on Wednesday, 18 August 2021 22:20:52 UTC