- From: Sandro Hawke <sandro@w3.org>
- Date: Fri, 22 Jan 2021 13:54:41 -0500
- To: Christopher Guess <cguess@gmail.com>
- Cc: Credible Web CG <public-credibility@w3.org>
- Message-ID: <5bf0d346-90fe-bbe4-7a0d-06befbe745e9@w3.org>
This topic is quite relevant and current for the SocialCG, as Sebastian said. I suggest people interested in cross-platform social media moderation attend their meeting tomorrow <https://www.w3.org/wiki/SocialCG#Next_meeting>. Members of CredWeb are welcome to attend, I'm told. The meeting uses a platform called BBB, which you may want to get familiar with beforehand.

Relatedly, folks might want to check out eunomia <https://eunomia.social>, a project that includes modifying Mastodon for better handling of misinformation. Sebastian and I were at a talk they gave a couple of days ago.

- Sandro

On 1/22/21 12:54 PM, Christopher Guess wrote:
> Hello everyone, it’s been a while since I last commented on this
> channel, but now that the tone is settling down a bit on the
> fact-checking side, I wanted to say a few words and share a thought or
> two in response to the ideas on this thread.
>
> First, on moderation: the first thing to remember, as we’ve been
> reminded here, is that the W3C is a global organization, so any
> discussion of what is acceptable to moderate should be viewed in a
> global context. This of course presents difficulties, because morality
> and cultural standards vary wildly between countries, regions, and
> communities.
>
> It’s been suggested that a user-based voting and self-regulation
> protocol could be a remedy here, but what’s being proposed, to my
> ears, sounds exactly like the system Parler implemented. In their
> system, any flagged post had five random accounts assigned to vote on
> whether it was appropriate. As we’ve seen, this did not work out for
> them in the long run. Instead, it tends to leave the most active users
> (in my experience, the most radical ones) as the lone voices of
> “reason” in the forums.
> Even Reddit, which at least has a somewhat heavier but still
> distributed hand, eventually had to step in and shut down the most
> vile subreddits because the moderators were condoning the actions of
> the users.
>
> Second: when it comes to protocols over platforms, I have to ask, if I
> were working at a social media organization: how does adopting a
> protocol in any way limit my liability? Agreeing on standards to share
> information does nothing to prevent someone in a country without
> Section 230 from suing me for allowing the information onto my system
> in the first place. Though I am not a lawyer, I imagine that saying,
> “Well, someone else said it was OK,” is almost certainly not going to
> hold up in a UK or German court. Given the lack of a liability shield,
> I can’t imagine any for-profit (non-Fediverse) social network giving
> up its information via a global protocol unless it gets something out
> of it.
>
> OK, so what do we do about this? The honest answer, from my
> perspective, is that I find more problems with a standards-based
> approach than solutions. In the end we are at best preaching to the
> choir, and at worst screaming into the void. The people who use
> platforms that would follow such standards are the least likely to
> actually need the moderation in the first place. I can’t imagine
> StormFront, or the successor to Parler or Gab, caring even a little
> about a white paper or about what Twitter does. If anything, it gives
> them more followers. The real way forward, as I see it, is beyond the
> scope of this chain, but it involves sociologists, economists, and a
> severe change to First Amendment interpretation in the United States.
>
> Instead, because this group does care, perhaps we scope this down and
> bite off a smaller piece of the pie? While the W3C’s scope is global,
> perhaps this group can focus locally. Instead of claiming to be a
> panacea for all moderation issues, focus on just getting the Mastodon
> system on board.
> The system already shares data by default and gives the runners of
> each instance full moderation control. Essentially, by putting in a
> sharable moderation system, we’re piggybacking on what has already
> been built, standardizing it, and expanding on it. It may not be the
> perfect system, but it’s a starting point, and it 1) already has
> buy-in from programmers, 2) is actively in use at scale, and 3) is
> open source, so the whole process can happen in the open, without the
> smoke and mirrors of dealing with the large tech companies.
>
> We make it a point not to even mention that we want to be an example
> to the large social media orgs, or part of a wider solution; instead,
> we’re partnering with groups we share values with to do just a bit of
> good in the world. If it works, perhaps we can move forward from
> there, but even getting some solution into the Mastodon protocol,
> with standards written for that single use case, would be a huge leap
> forward.
>
> Thanks for reading, and I hope you all stay safe, sane, and have a
> wonderful weekend.
>
> -Chris
>
> -Christopher Guess
> cguess@gmail.com
> US/WhatsApp/Signal: +1 262.893.1037
> PGP: AAE7 5171 0D81 B45B – https://keybase.io/cguess
>
> On Jan 22, 2021, 10:51 AM -0500, Tom Jones
> <thomasclinganjones@gmail.com>, wrote:
>> Question - I assumed that this group was responsible for CredMan - is
>> that correct, or does that live somewhere else?
>>
>> Be the change you want to see in the world ..tom
>>
>> On Fri, Jan 22, 2021 at 7:26 AM Dan Brickley <danbri@google.com
>> <mailto:danbri@google.com>> wrote:
>>
>> On Fri, 22 Jan 2021 at 14:54, Sandro Hawke <sandro@w3.org
>> <mailto:sandro@w3.org>> wrote:
>>
>> On 1/21/21 8:53 PM, Bob Wyman wrote:
>>>
>>> I could go on at length, but first I’d like to ask whether you
>>> think this kind of protocol-based solution, as an alternative and
>>> complement to platform-based systems or standards, is something
>>> that could or should be explored in this group. Is this the right
>>> context in which to explore and develop such protocol-based
>>> approaches?
>>
>> I think that’s more or less the group’s mission.
>>
>> The problem is, we don’t have people participating in the group who
>> are building such systems. It’s generally a mistake to try to create
>> a standard without participation from people developing viable
>> products that will use the standard. I’ve helped people make that
>> mistake several times in the past, and it’s not good. It’s somewhat
>> related to the “architecture astronaut” problem.
>> <https://www.joelonsoftware.com/2001/04/21/dont-let-architecture-astronauts-scare-you/>
>>
>> I am, myself, building such a system. Unfortunately, I don’t
>> currently know anyone else who is. I also don’t know whether it can
>> become a viable product. Until there are several other people
>> independently building this stuff, I don’t see a way for
>> standards-type work to proceed.
>>
>> That sounds about right.
>>
>> I still believe a big part of the difficulty here is that online
>> credibility is something of an arms race: those seeking to be
>> recognized as credible will pay close attention to any putative
>> standard or protocol, which makes developing such things
>> collaboratively, in the open, problematic.
>>
>> The CG has at times been an interesting forum for discussion,
>> though, and some good has come out of that.
>> Maybe there’s value in re-starting meetings like that.
>>
>> Even just as a meeting place for folks who want to find like-minded
>> collaborators, a community group has value...
>>
>> All the best,
>>
>> Dan
>>
>> Most recently, I was imagining us having presentations by folks
>> developing credibility products, and maybe coming up with a review
>> process. In particular, I was thinking about how we could push every
>> project on the “why should people trust you?” question. A proper
>> architecture (like CAI) can answer this question in a way that
>> closed apps can’t. Crunchbase has 500+ companies with the keyword
>> “credibility”, 9000+ with the keyword “trust”, and 59 with the
>> keyword “misinformation”. [I haven’t gone through the 59. Clearly
>> some, like Snopes and Blackbird, are about combating misinfo;
>> others, like Natalist, are just making reference to misinformation
>> in their target market.]
>>
>> Is there a story that would get, say, 20 of those 59 interested in
>> interoperating? I’ve only talked to a few of them, and I wasn’t able
>> to think of a serious argument for how their business would benefit
>> from going open-data. It might be worth trying some more.
>>
>> -- Sandro
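[Editor's note: for readers unfamiliar with the Mastodon data-sharing Chris mentions, here is a minimal sketch, not from the thread itself, of the kind of moderation signal Mastodon instances already federate. ActivityStreams 2.0 defines a "Flag" activity type, which Mastodon uses to forward user reports to the instance hosting the reported account. The instance hostnames and IDs below are hypothetical.]

```python
# Sketch of an ActivityStreams 2.0 "Flag" activity, the report-forwarding
# message Mastodon federates between instances. All hostnames and object
# IDs are hypothetical placeholders.
import json

flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    # The reporting instance's actor (hypothetical)
    "actor": "https://example-instance.social/actor",
    # Free-text reason supplied by the reporting user
    "content": "Spreads medical misinformation",
    # The reported account and one of its posts (hypothetical)
    "object": [
        "https://other-instance.social/users/badactor",
        "https://other-instance.social/users/badactor/statuses/1234",
    ],
}

# Serialize for delivery to the remote instance's inbox. The receiving
# instance retains full moderation control: it may act on the Flag or
# ignore it, per its own policy -- which is the property Chris's
# "sharable moderation system" would build on.
payload = json.dumps(flag_activity, indent=2)
print(payload)
```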
Received on Friday, 22 January 2021 18:54:44 UTC