Re: Distributed architecture and social justice / at risk individuals

Christopher Allan Webber writes:

> Evan Prodromou writes:
>
>>>   2. not all of us are sysadmins - I can set up a VPS, but being able to
>>>      set one up securely is a profession all on its own.
>> Users can have hosting options; it doesn't have to be one or the other. 
>> There are trade-offs.
>>>   3. lack of filtering tools - no ability to reduce line noise from
>>>      people spamming.  The service may as well actually be offline if
>>>      you have to sift through large volumes of putrid hate speech before
>>>      you can read anything from your friends and loved ones
>> There's no reason that the social service the user uses can't 
>> incorporate filtering tools. For self-hosting people, they can use a 
>> third-party spam filter. Akismet is one that works for blog comments in 
>> this same topology; E14N runs one called spamicity.info for pump.io and 
>> GNU Social users.
>>
>> That said, fine-grained tools don't exist now for pump.io or GNU social. 
>> I think that having the option to use them is pretty important!
>
> So this came up in my talk at LibrePlanet: someone asked about it in
> the Q&A, and there was some hallway talk afterwards.  Previously I
> thought this would happen only at a layer outside of the protocol, but
> others pointed out that the very tooling we're building now could help
> with anti-abuse, probably without adjusting anything other than adding
> new vocabulary and specifying its side effects.  For instance, you
> could federate information about known abusive users to help mitigate
> harassment between servers.
>
> I'm not sure whether or not it belongs in the core vocabulary, but if
> it's well thought through I think I would like it.  The main thing is
> that it may be hard to get the implementation right; maybe it should be
> implemented as a vocabulary extension while we test it, and then folded
> into the mainline vocabulary.
>
> But as Jessica Tallon has pointed out, even outside of the
> anti-harassment concerns, we probably want the technical primitives for
> this anyhow.  Consider that multiple users should probably have
> permission to edit the same collection; we need some way to do access
> control for that, assuming we want to support such a feature.  Adding
> verbs for block/mute on top of this is pretty reasonable, and not hard
> to picture in terms of implementation.  Perhaps we should have all of
> those in the core vocabulary?
>
> I know we don't have any user stories for this, but I wish we had one.
> Maybe it would be nice to try to think one through?  (I don't think I
> should be the one to submit it, but I'd like to get help on it; it might
> be helpful to get a user story from someone <way to say "who's not a
> white dude and is more at risk here">?)
>
>  - Chris
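
To make the block/mute idea above a bit more concrete, here is a very
rough sketch of the server-side side effect a hypothetical Block verb
could have (the names used here, "Block", "Unblock", "editors", are
made up for illustration; nothing like this is specified anywhere yet):

    # Hypothetical sketch: a server records Block activities and then
    # uses that record both to drop deliveries from blocked actors and
    # for access control on a shared collection.

    blocks = {}  # actor id -> set of actor ids that actor has blocked

    def handle_activity(activity):
        """Apply the side effect of a hypothetical Block/Unblock verb."""
        actor, target = activity["actor"], activity["object"]
        if activity["verb"] == "Block":
            blocks.setdefault(actor, set()).add(target)
        elif activity["verb"] == "Unblock":
            blocks.get(actor, set()).discard(target)

    def may_deliver(sender, recipient):
        """Don't deliver activities from actors the recipient blocked."""
        return sender not in blocks.get(recipient, set())

    def may_edit(actor, collection):
        """Shared-collection access control: only listed editors edit."""
        return actor in collection.get("editors", ())

The shared-collection access control Jessica mentions seems to fall
out of the same primitives, whether this ends up in core or as an
extension.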

One other route for this is to do client-side filtering, using machine
learning and Bayesian filters to help users sort out which posts they
want to read and which look like abuse.
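
As a very rough sketch of what I mean (untested, and the training data
and tokenization are the hard part), a client could keep a small naive
Bayes model over posts the user has labelled as abusive or fine, and
collapse anything that scores as abuse:

    import math
    from collections import Counter

    class AbuseFilter:
        """Tiny naive Bayes classifier a client could train locally on
        posts the user has labelled as 'abuse' or 'ok'."""

        def __init__(self):
            self.word_counts = {"abuse": Counter(), "ok": Counter()}
            self.doc_counts = {"abuse": 0, "ok": 0}

        def train(self, text, label):
            self.doc_counts[label] += 1
            self.word_counts[label].update(text.lower().split())

        def score(self, text, label):
            # log P(label) + sum of log P(word | label), with Laplace
            # smoothing so unseen words don't zero everything out
            total = sum(self.doc_counts.values())
            vocab = set(self.word_counts["abuse"]) | set(self.word_counts["ok"])
            counts = self.word_counts[label]
            n = sum(counts.values())
            s = math.log((self.doc_counts[label] + 1) / (total + 2))
            for word in text.lower().split():
                s += math.log((counts[word] + 1) / (n + len(vocab) + 1))
            return s

        def looks_abusive(self, text):
            return self.score(text, "abuse") > self.score(text, "ok")

The nice property is that the labels and the model never leave the
user's client, so it needs nothing from the protocol beyond the content
the client already receives.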

Here are some indications that this might be feasible:

 - Anti-troll machine learning
   http://www.technologyreview.com/view/536621/how-a-troll-spotting-algorithm-learned-its-anti-antisocial-trade/
   and associated paper
   http://arxiv.org/abs/1504.00680
   http://arxiv.org/pdf/1504.00680v1.pdf
 - Collaborative filtering with privacy
   http://www.cs.berkeley.edu/~jfc/%27mender/IEEESP02.pdf
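
On the collaborative filtering side, even a crude version of the idea
might help: weight other people's abuse flags by how often their past
flags agreed with your own, so you mostly inherit judgments from people
whose sense of "abuse" matches yours.  Something like this (purely
illustrative, and ignoring the privacy machinery the paper is actually
about):

    def flag_weight(my_labels, their_labels):
        """Agreement rate on posts both of us labelled (1 = abusive)."""
        shared = set(my_labels) & set(their_labels)
        if not shared:
            return 0.0
        agreed = sum(1 for p in shared if my_labels[p] == their_labels[p])
        return agreed / len(shared)

    def predicted_abusive(post, my_labels, peers, threshold=0.5):
        """Weighted vote over peers' flags for a post I haven't seen."""
        total = score = 0.0
        for their_labels in peers:
            w = flag_weight(my_labels, their_labels)
            if post in their_labels:
                total += w
                score += w * their_labels[post]
        return total > 0 and score / total > threshold

The paper above is about doing this kind of aggregation without
revealing individual users' ratings, which seems like exactly the
property we'd want here.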

Probably worth researching...
 - Chris

Received on Sunday, 12 April 2015 23:16:17 UTC