Re: Could the Wisdom of Crowds Help Fix Social Media’s Trust Problem?

 From my perspective, the two most important points in this article 
<https://www.wired.com/story/could-wisdom-of-crowds-help-fix-social-media-trust-problem/> 
are that:

 1. viewpoint bias should be balanced among the reviewers, and
 2. the rating scale should be continuous (a rough sketch of how both 
    properties might be honored follows below).
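
Here is a minimal sketch, in Python, of an aggregator that honors both 
points; the viewpoint labels and the two-stage mean are my own 
assumptions, not anything from the article:

    # Sketch: aggregate continuous ratings so that no single viewpoint
    # group dominates the result. The grouping and the two-stage
    # averaging are illustrative assumptions only.
    from collections import defaultdict

    def balanced_rating(ratings):
        """ratings: list of (viewpoint_label, score), score in [0.0, 1.0]."""
        by_group = defaultdict(list)
        for viewpoint, score in ratings:
            by_group[viewpoint].append(score)
        # Average within each viewpoint group first, then across groups,
        # so each viewpoint carries equal weight regardless of its size.
        group_means = [sum(s) / len(s) for s in by_group.values()]
        return sum(group_means) / len(group_means)

    # Two "left" ratings average to 0.85, balanced against one "right"
    # rating of 0.4, giving (0.85 + 0.4) / 2 = 0.625.
    print(balanced_rating([("left", 0.9), ("left", 0.8), ("right", 0.4)]))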

Among the points Surowiecki has made 
<https://en.wikipedia.org/wiki/The_Wisdom_of_Crowds#Five_elements_required_to_form_a_wise_crowd> 
about crowd wisdom is that each person's decision must be made 
independently of anyone else's, which is difficult to imagine for 
important issues.

He has also asserted that expertise is "spectacularly narrow" 
<https://ambur.net/crowdwisdom.pdf>.

If anyone knows of a plan addressing those requirements, I'll be happy 
to render it in StratML format.

Owen

On 9/5/2021 3:38 PM, Bob Wyman wrote:
>
>     "The paper’s main conclusion is straightforward: Social media
>     platforms like Facebook and Twitter could use a crowd-based system
>     to dramatically and cheaply scale up their fact-checking
>     operations without sacrificing accuracy. (The laypeople in the
>     study were paid $9 per hour, which translated to a cost of about
>     $.90 per article.)"
>
> Given this unsurprising result, it seems to me that social media 
> platforms' content assessment efforts would benefit greatly if it were 
> possible for any layperson to attach to online content an annotation 
> containing credibility signals or a more in-depth ClaimReview-like 
> assessment. Of course, if such a capability existed, social media 
> platforms could get access to the laypersons' assessments for free, 
> rather than paying roughly $.90 per article...
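>
> For concreteness, such an annotation might carry fields like these (a 
> sketch rendered as a Python dict; the property names follow 
> schema.org's ClaimReview type, but every value below is invented):
>
>     # Sketch of a ClaimReview-style annotation as a Python dict.
>     # Property names follow schema.org/ClaimReview; all values are
>     # hypothetical.
>     annotation = {
>         "@context": "https://schema.org",
>         "@type": "ClaimReview",
>         "claimReviewed": "Example claim text goes here",
>         "itemReviewed": {
>             "@type": "Claim",
>             "appearance": "https://example.org/post/456",  # hypothetical
>         },
>         "reviewRating": {
>             "@type": "Rating",
>             "ratingValue": 0.25,  # continuous scale, 0..1
>             "worstRating": 0,
>             "bestRating": 1,
>         },
>         "author": {"@type": "Person", "name": "A. Layperson"},
>     }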
>
> The way this might work would be to have a panel of trusted 
> professional content assessors who were directed to do detailed, 
> careful assessments of content that the lay community was flagging as 
> controversial. Assuming that the professionals' assessments were 
> "trusted," the assessments of individuals would then be scored 
> according to their tendency to agree with the professionals. In cases 
> where professionals didn't have the bandwidth to assess content, but 
> there were layperson assessments created by those who had a track 
> record of high agreement with the professionals, the layperson 
> assessments might be at least temporarily trusted without prior 
> confirmation by a professional reassessment. In this way, the scope of 
> material assessed could be increased to a scale much larger than the 
> professional fact-checkers could reasonably handle on their own.
>
> Of course, one nice thing about such a process is that it might lead 
> to publishing "scores" for at least some of the lay assessors, and 
> those scores could themselves serve as Credibility Signals. 
> High-scoring lay assessors might then monetize their reputations by 
> using them to obtain employment as paid, professional assessors... 
> There is a whole ecosystem of impacts that could arise from enabling 
> broad community-based assessment of claims and credibility.
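>
> A back-of-the-envelope version of that agreement scoring might look 
> like this (my own sketch; the 1-minus-absolute-difference measure is 
> an assumption, not anything standard):
>
>     # Sketch: score each lay assessor by the average closeness of
>     # their continuous ratings to the trusted professional rating for
>     # the same item. The agreement measure is illustrative only.
>     def assessor_scores(lay, professional):
>         """lay: {assessor: {item: rating}}; professional: {item: rating};
>         all ratings are floats in [0.0, 1.0]."""
>         scores = {}
>         for assessor, ratings in lay.items():
>             shared = [item for item in ratings if item in professional]
>             if not shared:
>                 continue  # no professionally assessed items to compare
>             closeness = [1.0 - abs(ratings[i] - professional[i])
>                          for i in shared]
>             scores[assessor] = sum(closeness) / len(closeness)
>         return scores
>
>     pros = {"article-1": 0.2, "article-2": 0.9}
>     lay = {"alice": {"article-1": 0.25, "article-2": 0.8,
>                      "article-3": 0.5}}
>     print(assessor_scores(lay, pros))  # {'alice': 0.925} (approximately)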
>
> bob wyman
>
>
> On Sun, Sep 5, 2021 at 2:53 PM Daniel Schwabe <dschwabe@gmail.com 
> <mailto:dschwabe@gmail.com>> wrote:
>
>     Hi all,
>     Just came across this interesting study, at least partially
>     relevant to our discussion:
>     https://www.wired.com/story/could-wisdom-of-crowds-help-fix-social-media-trust-problem/
>
>     It seems to agree with a study we did some years back, where we
>     showed that a non-expert crowd could match experts in assessing
>     fraudulent behavior in online marketplaces.
>
>     Cheers
>     D
>

Received on Saturday, 18 September 2021 03:42:11 UTC