bluesky -- reputation: modeling and back-propagation of error

I've recently been chatting with the folks at Bluesky and there seem to
be lots of interesting ideas in that group.

A couple of threads join together here. Firstly, recent talk of agents
naturally leads to agent life cycles and towards working out whether an
agent can simply read from the web or also write to it. One aspect of
writing is access control lists. Another is whether you trust an agent
to write at all. Some time ago we folded the trust and reputation
community group into this one, though we've not done a whole lot of
work in that area.

I found that this blog post contains some more modern ideas, and food
for thought. There is a lot to digest, but perhaps some good pointers
to practical techniques for determining whether an agent is trusted to
write to a read/write space:

https://hackernoon.com/blueskyprint-tki3z63

*Reputation: modeling and back-propagation of error*

Granular reputation in a decentralized space is a key problem and may
require some novel solutions. The end result must be an endpoint that
provides a reputation score for a user or message. To get there,
possible innovations might include:

- a global blockchain of credibility-staking assertions of direct
knowledge, i.e. 'I saw this', 'I know who reported it', etc. (see the
first sketch after this list)
- local credibility models, including Havelaar immediate-web
calculations and Iris circle analysis for external links
- encouragement of signal-rich protocol and UI features beyond 'likes',
e.g. shared bookmarks
- live random 'juries' to anchor a source of truth, with strong
back-propagation (learn from the Aragon implementation)
- manual recognition of anchors for source of truth, e.g. organizations
like Snopes and DBpedia (these could be customized)
- measurement of human-or-not, geolocation, and other simple, provable
assertions to anchor credibility
- a Wikipedia-like community of moderators with reputation scores,
especially valuable for multiple-language content streams; moderators
would be encouraged to debate the quality of sources
- retroactive trust propagation: after the truth of an issue is
established, retroactively adjust the credibility of sources of false
reports (the Khashoggi killing is a good example); see the
back-propagation sketch below
- 'undercover hoaxes': intentional misinformation, and tracking of the
response to it, may be valuable for evaluating arbiters. Obviously this
must be done carefully so as not to cause harm, perhaps in cooperation
with third parties.
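
To make the first idea above more concrete, here is a minimal Python
sketch of what a credibility-staking assertion record might look like
before being anchored on a shared ledger. The field names and the
'kind' vocabulary are invented for illustration; this is not an
existing protocol.

    # Minimal sketch of a credibility-staking assertion record.
    # Field names and the 'kind' vocabulary are illustrative only.
    from dataclasses import dataclass, asdict
    import hashlib
    import json
    import time

    @dataclass(frozen=True)
    class StakedAssertion:
        claim_id: str    # hash or URI of the message being vouched for
        asserter: str    # identity (e.g. a DID) of the asserting party
        kind: str        # 'i-saw-this', 'i-know-the-reporter', ...
        stake: float     # reputation the asserter puts at risk
        timestamp: float

        def digest(self) -> str:
            # Content-address the record so a ledger need only carry
            # the hash, not the full payload.
            payload = json.dumps(asdict(self), sort_keys=True)
            return hashlib.sha256(payload.encode()).hexdigest()

    a = StakedAssertion(claim_id='msg-123',
                        asserter='did:example:alice',
                        kind='i-saw-this', stake=5.0,
                        timestamp=time.time())
    print(a.digest())

If a claim is later falsified, the stake gives the system something
concrete to slash.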

All of the above would feed into AI models for determining reputation.
Automated model retraining could include rules-based adjustments to
connection strengths based on high-cost manual determinations.
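
As one hedged reading of 'back-propagation of error' and the
rules-based adjustments just mentioned: once a jury or trusted anchor
establishes the truth of a claim, the credibility of each party who
asserted on it could be corrected, with the correction attenuated as it
propagates back to their endorsers. The constants and data structures
below are invented for illustration.

    # Sketch: retroactive trust propagation once a ground truth is
    # established. 'credibility' maps actor -> score in [0, 1];
    # 'endorsed_by' maps an actor to those who vouched for them.
    LEARNING_RATE = 0.2  # how hard a determination corrects a score
    DECAY = 0.5          # attenuation per hop of the correction

    def propagate_error(actor, error, credibility, endorsed_by,
                        depth=0):
        # error > 0 for confirmed reports, < 0 for falsified ones.
        if abs(error) < 1e-3 or depth > 5:
            return
        new = credibility.get(actor, 0.5) + LEARNING_RATE * error
        credibility[actor] = min(1.0, max(0.0, new))
        for endorser in endorsed_by.get(actor, []):
            propagate_error(endorser, DECAY * error,
                            credibility, endorsed_by, depth + 1)

    # A jury rules a report false: the source loses 0.2 credibility
    # and its endorser, one hop back, loses 0.1.
    credibility = {'source': 0.8, 'booster': 0.7}
    endorsed_by = {'source': ['booster']}
    propagate_error('source', -1.0, credibility, endorsed_by)
    print({k: round(v, 2) for k, v in credibility.items()})
    # -> {'source': 0.6, 'booster': 0.6}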

In general, models should include the notion of a 'first-hand observer'
vs a 'reporter at n hops' from a real-world truth, and should in some
cases model the existence of a real-world truth for simple statements,
which can inform the credibility of judges of more complex areas.
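
One simple way to model that distinction, sketched below with an
invented discount constant, is to weight a report's evidential value by
the reporter's credibility, discounted geometrically per hop from the
first-hand observation.

    # Sketch: a report's evidential weight decays per hop away from
    # the first-hand observation. HOP_DISCOUNT is illustrative only.
    HOP_DISCOUNT = 0.6

    def report_weight(reporter_credibility: float,
                      hops: int) -> float:
        # hops == 0 is a first-hand observer; each retelling discounts.
        return reporter_credibility * (HOP_DISCOUNT ** hops)

    print(report_weight(0.9, 0))  # first-hand observer: 0.9
    print(report_weight(0.9, 2))  # two hops out: about 0.32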

In addition, funding should be allocated for a hotline for cases in which
individuals are in immediate physical danger.

Received on Friday, 4 June 2021 19:55:29 UTC