credibility networks (was Re: Is Alice, or her post, credible? (A really rough use case for credibility signals.))

It seems to me we can unify these views using credibility networks. We 
can let anybody say anything about anything, as long as we propagate 
that content only along credibility-network links. I'll simplify a bit 
here and say a "good" source is one that should be believed, or one 
that has interesting and non-harmful content.

So let me see content from sources I've personally assessed as "good", 
and also from sources my software predicts will be "good". If I say 
Clarence is good, and Clarence says Darcy is good, and Darcy says Edward 
is good, then show me Edward's content, sure.
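The chain above is just graph reachability over "X says Y is good" links. Here is a minimal sketch of that idea in Python -- not trustlamp's actual code, and the function and variable names are purely illustrative:

```python
from collections import deque

def trusted_sources(me, trust_edges):
    """Return every source reachable from `me` along "X says Y is good" links.

    `trust_edges` maps each source to the set of sources it vouches for.
    A plain breadth-first search collects the transitive closure.
    """
    seen = {me}
    queue = deque([me])
    while queue:
        source = queue.popleft()
        for vouched in trust_edges.get(source, ()):
            if vouched not in seen:
                seen.add(vouched)
                queue.append(vouched)
    return seen

# The chain from the example: me -> Clarence -> Darcy -> Edward.
edges = {"me": {"Clarence"}, "Clarence": {"Darcy"}, "Darcy": {"Edward"}}
reachable = trusted_sources("me", edges)
# Edward is reachable, so his content would be shown; a source with no
# chain of vouching back to me would be absent from this set.
```

A real system would likely want to attenuate trust with chain length rather than treat all reachable sources equally, but reachability is the core of the filter described here.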

On the other hand, if no one in my network vouches for Edward in any 
way, I'm not going to see his content. Essentially, total strangers -- 
people with whom I have no positive connection, direct or indirect -- 
are blocked by default. I'm talking here about content appearing in 
search results, news feeds, comments, annotations, etc. If I ask for 
something specifically by URL, that's a different matter: whoever gave 
me that URL is essentially vouching for the content. If they link to 
bad content, I can push back.

This general approach subsumes the trust-the-elites model. Someone who 
says they trust only established, old-media institutions will get an 
old-media/elite view of the available content; someone who trusts only 
a very different set of sources will get a very different view.

My hope is that most people have an assortment of sources they find 
credible, and that the software can help them flag where those sources 
disagree.
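Flagging disagreement can be as simple as grouping each claim's assessments by source and surfacing the claims where trusted sources' verdicts differ. A small sketch, again with illustrative names rather than anything from trustlamp:

```python
def disputed_claims(assessments):
    """Return claims on which trusted sources disagree.

    `assessments` maps each claim to a dict of {source: verdict}.
    A claim is disputed when its sources give more than one distinct verdict.
    """
    return [claim for claim, verdicts in assessments.items()
            if len(set(verdicts.values())) > 1]

assessments = {
    "claim-1": {"Clarence": "true", "Darcy": "true"},
    "claim-2": {"Clarence": "true", "Edward": "false"},
}
print(disputed_claims(assessments))  # → ['claim-2']
```

The interesting design questions -- how verdicts are expressed, and whether a disagreement between a closely trusted source and a distantly vouched-for one counts the same -- are left open here, as they are in the email.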

(This is what I was prototyping in trustlamp. Many details remain to be 
worked out.)

     -- Sandro

On 8/17/21 8:46 PM, Annette Greiner wrote:
> I don’t think I have the solution, but I offered my comment to help 
> better define what would be a reasonable solution. Another way to 
> think about it is that the signal should not be game-able. As for what 
> you refer to as “elites” and “hierarchies”,  I have no problem with 
> harnessing expertise to fight misinformation. Turning up the volume 
> does not improve the signal/noise ratio.
> -Annette
>> On Aug 17, 2021, at 2:44 PM, Bob Wyman wrote:
>> On Tue, Aug 17, 2021 at 4:37 PM Annette Greiner wrote:
>>     I don’t think this is a wise approach at all.
>> Can you propose an alternative that does not simply formalize the 
>> status of existing elites and thus strengthen hierarchies in public 
>> discourse? For instance, the existing Credibility Signals 
>> (date-first-archived, awards-won, ...) would seem to provide useful 
>> information about only a 
>> tiny portion of the many speakers on the Web. By focusing on the 
>> output of awards-granting organizations, while not providing signals 
>> usable by others, they empower that one group of speakers (those who 
>> grant awards) over the rest of us. Can you propose a mechanism that 
>> allows my voice, or yours, to have some influence in establishing 
>> credibility?
>>     We are seeing now that fraudsters and misinformation dealers are
>>     able to gain traction because there is so little barrier to their
>>     reaching high numbers of readers.
>> Today, the "bad" folk are able to speak without fear of rebuttal. 
>> Neither the fact-checking organizations nor the platforms for speech 
>> seem to have either the resources needed, or the motivation required, 
>> to usefully remark on the credibility of more than an infinitesimal 
>> portion of public speech. How can we possibly counterbalance the 
>> bad-speakers without enabling others to rebut their statements?
>> In any case, the methods I sketched concerning Alice's statements 
>> would empower formal fact checkers as well as individuals. For 
>> instance, a "climate fact-checking" organization would be able to do 
>> a Google search for "hydrogen 'only water-vapor'," 
>> and then, after minimal checking, annotate each of the hundreds of 
>> such statements with a common, well formed rebuttal that would be 
>> easily accessed by readers. Organizations could also set up 
>> prospective searches, such as a Google Alert, that would notify them 
>> of new instances of false claims and enable rapid response to their 
>> proliferation. I think this would be useful. Do you disagree?
>>     Any real solution must not make it just as easy to spread
>>     misinformation as good information.
>> I have rarely seen a method for preventing bad things that doesn't 
>> also prevent some good. The reality is that the most useful response 
>> to bad speech is more speech. Given more speech, we can discover 
>> methods to assist in the process of separating the good from the bad. 
>> But, if we don't provide the means to make alternative claims, there 
>> is little we can do with the resulting silence. False claims will 
>> stand if not rebutted.
>>     It must yield a signal with much much less noise than the
>>     currently available signals.
>> What "currently available signals"? Other than platform-provided 
>> moderation and censorship, what is there?
>>     Increasing the level of he-said/she-said doesn’t help determine
>>     what is reliable information. Adding to the massive amounts of
>>     junk is not the answer.
>>     -Annette
>>>     On Aug 16, 2021, at 11:52 AM, Bob Wyman wrote:
>>>     The thrust of my post is that we should dramatically enlarge the
>>>     universe of those who make such claims to include all users of
>>>     the Internet. The result of enabling every user of the Web to
>>>     produce and discover credibility signals will be massive amounts
>>>     of junk, but also a great many signals that you'll be able to
>>>     use to filter, analyze, and reason about claims and the subjects
>>>     of claims.

Received on Wednesday, 18 August 2021 02:53:08 UTC