- From: Bob Wyman <bob@wyman.us>
- Date: Sun, 5 Sep 2021 15:38:20 -0400
- To: Daniel Schwabe <dschwabe@gmail.com>
- Cc: Credible Web CG <public-credibility@w3.org>
- Message-ID: <CAA1s49U-MRhh8fJK+3Yd9_=AHE92HsEt7NydP0zobM260JJDhA@mail.gmail.com>
> "The paper’s main conclusion is straightforward: Social media platforms
> like Facebook and Twitter could use a crowd-based system to dramatically
> and cheaply scale up their fact-checking operations without sacrificing
> accuracy. (The laypeople in the study were paid $9 per hour, which
> translated to a cost of about $.90 per article.)"

Given this unsurprising result, it seems to me that social media platforms' content assessment efforts would benefit greatly if it were possible for any layperson to associate an annotation containing credibility signals, or a more in-depth ClaimReview-like assessment, with online content. Of course, if such a capability existed, social media platforms could get access to the laypersons' assessments for free, rather than spending $9 each...

The way this might work would be to have a panel of trusted professional content assessors who are directed to do detailed, careful assessments of content that the lay community flags as controversial. Assuming that the professionals' assessments are "trusted," the assessments of individual laypersons would then be scored according to their tendency to agree with the professionals. In cases where the professionals don't have the bandwidth to assess content, but there are layperson assessments created by those with a track record of high agreement with the professionals, the layperson assessments might be at least temporarily trusted without prior confirmation by a professional reassessment. In this way, the scope of material that is assessed could be increased to a scale much larger than the professional fact checkers could reasonably handle on their own.

Of course, one nice thing about such a process is that it might lead to publishing "scores" for at least some of the lay assessors, and those scores could be published as Credibility Signals. High-scoring lay assessors might then monetize their reputations by using them to obtain employment as paid, professional assessors...
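For concreteness, the scoring-and-provisional-trust scheme described above might be sketched roughly as follows. This is only an illustrative toy, not any deployed system; the function names, the 0.8 agreement threshold, and the minimum-overlap requirement are all assumptions I am making up for the example:

```python
from collections import defaultdict

AGREEMENT_THRESHOLD = 0.8  # assumed cutoff for provisionally trusting a lay assessor
MIN_SHARED_ITEMS = 5       # assumed minimum overlap with professional assessments

def agreement_score(lay_ratings, pro_ratings):
    """Fraction of items on which a lay assessor matched the professional verdict.

    Both arguments map item-id -> verdict. Returns None when the assessor has
    too little overlap with the professionals to be scored yet.
    """
    shared = [item for item in lay_ratings if item in pro_ratings]
    if len(shared) < MIN_SHARED_ITEMS:
        return None
    matches = sum(1 for item in shared if lay_ratings[item] == pro_ratings[item])
    return matches / len(shared)

def provisional_verdict(item, lay_ratings_by_assessor, pro_ratings):
    """For an item the professionals have not reviewed, accept a majority verdict
    counted only from lay assessors whose track record of agreement with the
    professionals is high enough."""
    votes = defaultdict(int)
    for assessor, ratings in lay_ratings_by_assessor.items():
        score = agreement_score(ratings, pro_ratings)
        if score is not None and score >= AGREEMENT_THRESHOLD and item in ratings:
            votes[ratings[item]] += 1
    return max(votes, key=votes.get) if votes else None
```

For example, if one assessor has matched the professionals on all five items they both rated and a second assessor has matched on none, only the first assessor's vote counts toward a provisional verdict on a sixth, unreviewed item. The resulting per-assessor scores are exactly the kind of thing that could be published as Credibility Signals.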
There is a whole ecosystem of impacts that could arise from enabling broad, community-based assessment of claims and credibility.

bob wyman

On Sun, Sep 5, 2021 at 2:53 PM Daniel Schwabe <dschwabe@gmail.com> wrote:

> Hi all,
> Just came across this interesting study, at least partially relevant to our
> discussion -
> https://www.wired.com/story/could-wisdom-of-crowds-help-fix-social-media-trust-problem/
>
> It seems to agree with a study we did some years back, where we showed that
> a non-expert crowd could match experts in assessing fraudulent behavior in
> online marketplaces.
>
> Cheers
> D
Received on Sunday, 5 September 2021 19:38:45 UTC