Re: creating a decentralized web of trust

On 25 July 2015 at 21:34, Melvin Carvalho <melvincarvalho@gmail.com> wrote:

> I've been working lately on creating an identity provider based on the
> github API
>
> In weaving the web, timbl wrote: "The trust engine is the most powerful
> sort of agent on the Semantic Web" and I'm trying to look for ideas on how
> to create such a thing.  Note also that this group incorporated the web of
> trust group some time back.  I think when reading and writing to the web
> it's going to be increasingly important to know whether or not you can
> trust someone with write access.
>
> So, Github provides a number of social signals:
>
> - followers
> - date joined
> - link to email/homepage
> - repositories you are a member of
> - project contributions
> - how many of your projects are starred
> - how frequently you have worked
>

So far I have managed to get the followers data, and am now getting the star
data.

I was thinking of maybe using Slashdot's rating system to allow people to
rate each other, and then provide a context for each rating.  They use the
following system:

- Normal: The default setting attached to every comment when you have
  moderation privileges.
- Offtopic: A comment which has nothing to do with the story it's linked to
  (song lyrics, obscene ascii art, etc).
- Flamebait: Comments whose sole purpose is to insult and enrage.
- Troll: A Troll is similar to Flamebait, but slightly more refined. This is
  a prank comment intended to provoke indignant (or just confused) responses.
- Redundant: Redundant posts add no new information; they take up space with
  information either in the original post, the attached links, or lots of
  previous comments.
- Insightful: An Insightful comment makes you think, or puts a new spin on a
  given story. Examples: an analogy you hadn't thought of, or a telling
  counterexample.
- Interesting: If you believe a comment to be Interesting (and on-topic), it
  is.
- Informative: Informative comments add new information to explain the
  circumstances hinted at by a particular story, fill in "The Other Side" of
  an argument, etc.
- Funny: Choose "Funny" if you think the comment is *actually* funny, not
  just because it seems intended to be.
- Overrated: Sometimes comments are disproportionately up-moderated; this
  probably means several moderators saw it at nearly the same time, and
  their cumulative scores exaggerated its merit. (Example: A knock-knock
  joke at +5, Funny.) Such a comment is Overrated.
- Underrated: Likewise, some comments get smashed lower than they might
  deserve. Choosing "Underrated" means you think it should be read by more
  people.

Does that seem reasonable?
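
As a rough sketch, one way a rating with context could be represented
(field names and the example identities are my own assumptions, not part of
any existing implementation):

```python
from dataclasses import dataclass

# Slashdot-style moderation labels, taken from the list above
LABELS = {
    "Normal", "Offtopic", "Flamebait", "Troll", "Redundant",
    "Insightful", "Interesting", "Informative", "Funny",
    "Overrated", "Underrated",
}


@dataclass
class Rating:
    rater: str    # identity of the person giving the rating
    subject: str  # identity of the person being rated
    label: str    # context for the rating, one of LABELS

    def __post_init__(self):
        # reject labels outside the agreed vocabulary
        if self.label not in LABELS:
            raise ValueError(f"unknown label: {self.label}")


# e.g. one (hypothetical) identity rating another as Insightful
r = Rating(rater="https://example.org/alice",
           subject="https://example.org/bob",
           label="Insightful")
```

A fixed vocabulary like this would keep ratings comparable across raters, so
a trust engine could aggregate them later.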



>
> And a few more.  I am looking to see how to combine these facts to get a
> signal score between 0% - 100% as a rough rating, which I can then publish.
>
> My algorithm is quite basic so far, and only a starting point.
>
> I multiply the #followers * 3 up to a maximum of 30 followers.  e.g.
>
> http://gitpay.org/torvalds -- 90%
> http://gitpay.org/stratus -- 9 followers = 27%
>
> I am looking for ideas on how to improve this algorithm, or maybe find a
> set of algorithms people can choose from to get out a trust score (however
> I am sceptical people will have time to code them).
>
> The other problem I see is that you could have a great reputation on
> Twitter, but only 1-2 followers on Github, which would then not be
> indicative of overall trust.
>
> One question I've been thinking about is "should older accounts be trusted
> more than new ones?"
>
> Would be interested if there were any thoughts on this.
>
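
The follower heuristic quoted above (3% per follower, capped at 30
followers) can be sketched as follows; the exact follower counts used in
the usage lines are assumptions for illustration:

```python
def follower_score(followers: int) -> int:
    """Rough trust signal: 3% per follower, capped at 30 followers (90%)."""
    return min(followers, 30) * 3


# matches the examples above: stratus with 9 followers scores 27%,
# and any account over the 30-follower cap (e.g. torvalds) scores 90%
print(follower_score(9))      # 27
print(follower_score(10000))  # 90
```

This caps out at 90% rather than 100%, which leaves headroom for other
signals (stars, account age, contributions) to be blended in later.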

Received on Sunday, 2 August 2015 17:05:13 UTC