W3C home > Mailing lists > Public > public-socialweb@w3.org > March 2015

Re: Distributed architecture and social justice / at risk individuals

From: Bassetti, Ann <ann.bassetti@boeing.com>
Date: Sat, 21 Mar 2015 22:43:22 +0000
To: Christopher Allan Webber <cwebber@dustycloud.org>, Evan Prodromou <evan@e14n.com>
CC: "public-socialweb@w3.org" <public-socialweb@w3.org>
Message-ID: <20150321224321.5628046.81569.24644@boeing.com>
Chris -- rather than wait for a more 'politically correct' person, I think it'd be valuable if you'd write the story!

Thanks for thinking about this! -- Ann

Ann Bassetti
From: Christopher Allan Webber
Sent: Saturday, March 21, 2015 9:33 AM
To: Evan Prodromou
Cc: public-socialweb@w3.org
Subject: Re: Distributed architecture and social justice / at risk individuals

Evan Prodromou writes:

>> 2. not all of us are sysadmins - I can set up a VPS, but being able to
>> set one up securely is a profession all on its own.
> Users can have hosting options; it doesn't have to be one or the other.
> There are trade-offs.
>> 3. lack of filtering tools - no ability to reduce line noise from
>> people spamming. The service may as well actually be offline if
>> you have to sift through large volumes of putrid hate speech before
>> you can read anything from your friends and loved ones
> There's no reason that the social service the user uses can't
> incorporate filtering tools. For self-hosting people, they can use a
> third-party spam filter. Akismet is one that works for blog comments in
> this same topology; E14N runs one called spamicity.info for pump.io and
> GNU social users.
> That said, fine-grained tools don't exist now for pump.io or GNU social.
> I think that having the option to use them is pretty important!

So this came up in my talk at LibrePlanet: someone asked about it in the
Q&A, and there was some hallway discussion afterwards. Previously I
thought this would happen only at a layer outside of the protocol, but
others pointed out that the very tooling we're building now could help
with anti-abuse, probably without adjustments to anything other than
adding new vocabulary and specifying its side effects. For instance,
you could federate information about known abusive users to help
mitigate harassment between servers.
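As a rough illustration of what federating that kind of information might look like, here is a minimal Python sketch. The "Flag" verb, the actor and object URLs, and the summary text are all illustrative assumptions, not terms from any settled vocabulary:

```python
# Sketch of a hypothetical moderation activity one server might send to
# its peers. Every identifier below is made up for illustration.
import json

flag_activity = {
    "@context": "http://www.w3.org/ns/activitystreams",
    "type": "Flag",                                       # assumed verb
    "actor": "https://example.social/users/moderator",    # hypothetical
    "object": "https://other.example/users/abusive-user", # hypothetical
    "summary": "Reported for targeted harassment",
}

# A receiving server could deserialize this and feed it into its own
# moderation queue rather than acting on it automatically.
payload = json.dumps(flag_activity)
received = json.loads(payload)
print(received["type"])  # -> Flag
```

The key design point is the one in the paragraph above: nothing new is needed at the protocol layer, only a vocabulary term and a specification of its side effects on the receiving end.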

I'm not sure whether or not it belongs in the core vocabulary, but if
well thought through, I think I'd like it. The main concern is that it
may be hard to get the implementation right; maybe it should start as a
vocabulary extension while we test it, and then be merged into the
mainline.

But as Jessica Tallon has pointed out, even outside of the
anti-harassment concerns, we probably want the technical primitives for
this anyhow. Consider that multiple users should probably have
permission to edit the same collection; assuming we want to support
that feature, we need some way to do access control for it. Adding
verbs for block/mute on top of this is pretty reasonable, and not hard
to picture in terms of implementation. Perhaps we should have all of
those in the core vocabulary?
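The primitives described above could be sketched like this, assuming hypothetical in-memory block lists and per-collection editor sets (none of these data structures or IDs come from an existing spec):

```python
# Sketch: server-side checks combining a per-user block list with
# per-collection edit permissions. Purely illustrative assumptions.

# Which remote actors each local user has blocked (hypothetical data).
blocks = {"alice": {"https://spam.example/users/troll"}}

# Which users may edit each shared collection (hypothetical data).
collection_editors = {
    "https://example.social/collections/photos": {"alice", "bob"},
}

def is_blocked(recipient: str, sender_id: str) -> bool:
    """Drop inbound activities from actors the recipient has blocked."""
    return sender_id in blocks.get(recipient, set())

def may_edit(user: str, collection_id: str) -> bool:
    """Only listed editors may modify a shared collection."""
    return user in collection_editors.get(collection_id, set())

print(is_blocked("alice", "https://spam.example/users/troll"))        # True
print(may_edit("bob", "https://example.social/collections/photos"))   # True
print(may_edit("eve", "https://example.social/collections/photos"))   # False
```

The point of the sketch is that blocking and collection-level access control are the same shape of check: a set membership test keyed on an actor ID, which is why block/mute verbs come almost for free once shared-collection permissions exist.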

I know we don't have any user stories for this, but I wish we did.
Maybe it would be nice to try to think one through? (I don't think I
should be the one to submit it, but I'd like to help with it; it might
be helpful to get a user story from someone <way to say "who's not a
white dude and is more at risk here">?)

- Chris
Received on Saturday, 21 March 2015 22:44:03 UTC
