Re: exploring a crowdsourced certification trust metric with svg/javascript animation

* Dan Brickley <danbri@danbri.org> [2010-04-14 17:54+0200]
> On 14 Apr 2010, at 17:09, Dan Connolly <connolly@w3.org> wrote:
> >I wrote a little piece of code to simulate growth of
> >a social network; [...]

> I like this approach, and expect this data-driven exploration (even
> if simulated data) could help get reputation-based, distributed
> trust out of the eternal "someday pile".
> 
> However, in everyday business, sheer unadulterated evil (spam,
> fraud, etc., i.e. generally clear-cut mischief) is one huge problem
> amongst many. Mailing lists, blogs, etc. can also suffer from
> over-enthusiastic participants whose contributions aren't quite
> right (in volume, tone or theme) for that forum. I am curious how
> far into that space we can go, and whether a common design can
> handle the spam problem and also help with fuzzier questions of
> authority, trust and interestingness. How far do you see it going in
> that direction? Is this just for 'evil people', or also 'foolish
> actions'?

I have always envisioned this working something like:

Each contribution to the community could receive an up- or
downvote from anyone in the community; a contribution could be
anything from an email message to a blog comment to a wiki or
spec edit.

Users would accrue reputation points based on these votes, and
the amount of influence anyone has would be proportional to their
existing karma within the community (so a +1 from a spammer would
be basically worthless, or possibly even negative).

If influence is proportional to existing karma, we'd need to
seed things somehow; maybe in the context of W3C it could stem
from a single person (timbl, or some overall community manager),
or W3C staff or chairs could each be assigned 100 karma points.

Then they just need to upvote a few people to start distributing
karma among all contributors. Once we have a bit of reputation
data, we can start using it to do things like block spam and
discover the more interesting contributions to high-volume forums.
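
Concretely, I imagine something like this rough sketch (Python;
the account names and constants are all made up just to illustrate
the karma-weighted-vote idea, not a description of any real W3C
system):

    # Hypothetical sketch of karma-weighted voting; the account
    # names and constants are invented for illustration.

    SEED_KARMA = 100   # starting karma for trusted seed accounts
    VOTE_SHARE = 0.05  # fraction of the voter's karma a vote carries

    karma = {}         # account name -> current karma score

    def seed(accounts, points=SEED_KARMA):
        """Give the trusted seed accounts their starting karma."""
        for a in accounts:
            karma[a] = points

    def vote(voter, author, up=True):
        """Apply an up/downvote to the author of a contribution.

        The vote's weight is proportional to the voter's existing
        karma, so a vote from an account with no reputation (e.g. a
        fresh spammer account) carries essentially no weight.
        """
        weight = karma.get(voter, 0) * VOTE_SHARE
        karma[author] = karma.get(author, 0) + (weight if up else -weight)

    # Seed a couple of accounts, then let karma spread via votes:
    seed(["timbl", "chair1"])
    vote("timbl", "alice")        # alice gains 100 * 0.05 = 5 karma
    vote("spammer", "spammer2")   # no effect: the voter has 0 karma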

I have used a number of sites that allow community ratings like
this (e.g. slashdot, reddit), but their underlying workings have
always been a bit of a mystery to me. I wonder if anyone has
published a comparison of various algorithms used by sites like
advogato, slashdot et al.

oh, taking a look at the advogato trust metric just now, I see
it uses trusted seeds as well:

    The computation of the trust metric is performed relative to a
    "seed" of trusted accounts. At the time of this writing (22 Feb
    2000), the seed consists of raph, miguel, federico, and alan.
    -- http://www.advogato.org/trust-metric.html
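
Just to make "relative to a seed" concrete, here's a toy
breadth-first walk over a certification graph. This is not
Advogato's actual metric (which is based on network flow); it only
illustrates trust being computed outward from a handful of seed
accounts:

    # Toy seed-relative trust: an account is trusted only if it can
    # be reached from the seed through "certifies" edges, within a
    # limited number of hops. Illustration only; Advogato's real
    # metric is a network-flow computation over such a graph.

    from collections import deque

    def trusted(certs, seeds, max_hops=4):
        """certs maps each account to the set of accounts it certifies."""
        accepted = set(seeds)
        queue = deque((s, 0) for s in seeds)
        while queue:
            account, hops = queue.popleft()
            if hops >= max_hops:
                continue
            for peer in certs.get(account, ()):
                if peer not in accepted:
                    accepted.add(peer)
                    queue.append((peer, hops + 1))
        return accepted

    # A spammer ring that only certifies itself is never reached:
    certs = {
        "raph":   {"miguel", "alan"},
        "miguel": {"federico"},
        "spam1":  {"spam2"},
        "spam2":  {"spam1"},
    }
    print(trusted(certs, seeds={"raph"}))
    # -> {'raph', 'miguel', 'alan', 'federico'} (spam1/spam2 excluded)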

I try to keep track of related things on this page
http://impressive.net/people/gerald/2005/08/reputation.html
but the more I get into it, the more I realize how little I know.

e.g. "How badly designed reputation systems create in-game mafias"
http://boingboing.net/2009/10/07/how-badly-designed-r.html
says it's a bad idea to publish negative karma scores, which
wouldn't have occurred to me.

and apparently there's an entire book about this stuff
http://buildingreputation.com/doku.php

(I like to think a good algorithm could minimize the amount of
damage that could be done by something like the Sims mafia.)

Maybe we can just start building systems at W3C that allow
up/downvotes, then play with various algorithms to see what
works, and observe what patterns emerge over time.

(and when nogoodniks come along and try to game the system, we
change the game, i.e. tweak the algorithm ;)
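
For instance, reusing the hypothetical vote()/karma sketch from
earlier in this message: a ring of brand-new accounts upvoting
each other gets nowhere, because a vote's weight comes from the
voter's existing karma, and "changing the game" can be as blunt as
retuning a constant or revoking karma from an account caught
gaming the system:

    # Continues the hypothetical sketch above: sock-puppet accounts
    # with no karma can upvote each other all day without gaining
    # any influence.
    for _ in range(1000):
        vote("sock1", "sock2")
        vote("sock2", "sock1")
    assert karma.get("sock1", 0) == 0 and karma.get("sock2", 0) == 0

    # And "tweaking the algorithm" might just mean adjusting
    # VOTE_SHARE, or zeroing out an account caught gaming the system.
    karma["sock1"] = 0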

-- 
Gerald Oskoboiny     http://www.w3.org/People/Gerald/
World Wide Web Consortium (W3C)    http://www.w3.org/
tel:+1-604-906-1232             mailto:gerald@w3.org

Received on Wednesday, 14 April 2010 22:51:32 UTC