
Re: Why has CredWeb been silent when it is now needed more than ever?

From: Sebastian Lasse <mail@sebastianlasse.de>
Date: Fri, 22 Jan 2021 12:47:36 +0100
Message-Id: <CC6BF4B0-BA72-455F-AE70-222F3063C411@sebastianlasse.de>
Cc: public-credibility@w3.org
To: Annette Greiner <amgreiner@lbl.gov>
Hey Annette,

Full ack that it is now needed more than ever!
I just joined here today.
I would also be happy if the group becomes active again.

I wanted to let you know that we have built a “Policy Special Interest Group” within the Social Web Community Group.
While its initial focus was interoperability, interconnection, and data portability under the EU Digital Markets Act,

> to develop a worldwide standard for vetting social media posts

would have been a good addition to its scope.
Meanwhile, in the ActivityPub ecosystem, Mastodon seems to be working on this with substantial EU funding:
https://eunomia.social/EUNOMIA-D3.2-Architecture-SUBMITTED.pdf
(I became aware of it by accident, through a colleague at ORF [the Austrian public broadcaster].)
> - whether social media companies would be inclined to follow a recommendation, or would prefer to make their own guidance anyway
Given that Mastodon does not even cooperate with any other ActivityPub implementer here, I have my doubts :(
Also: “companies” are not the ones with the problem; open protocols are.
When Twitter recently banned 70,000 QAnon accounts, the misinformation rate dropped by 73%, according to The Washington Post.
Good for Dorsey, but if the QAnon crowd now joins the fediverse, the problem becomes ours.

I would like to invite everyone to the next meeting, this coming Saturday:
https://socialhub.activitypub.rocks/t/2021-01-23-socialcg-meeting-new-fediverse-users/1305
The above is the official ActivityPub forum.


> On Jan 21, 2021, at 11:42 PM, Annette Greiner <amgreiner@lbl.gov> wrote:
> I can think of a few factors in response to Bob’s question, though I think it’s worth considering again what we can do, and I do have one idea. This is certainly a fair place for discussion of issues and their potential solution.
> What I think has happened with the group is that we completed the specific tasks that we had set ourselves, and many of us found work to address the issue with other organizations and projects. I think at least some of us experienced a degree of burnout over time as well. It’s probably worth pointing out, too, that the credibility issues arising since the November 2020 elections are qualitatively the same as existed before, and I believe the presidential campaign in the U.S. even saw a decrease in fake news sharing compared to four years ago. What has intensified of late is the polarization around conspiracy theories and the level of physical threat based on them. Those strike me as political issues more than technological ones, though I agree that we should still be looking for technological ways to limit sharing of fake news and help end users discern fact from fiction. Another important point is that the W3C is a worldwide entity, so for us to attempt to impose or expand limits on the jurisdiction of any specific country’s government is far beyond our charter. 
> That said, we can certainly incubate ideas that might find their way to becoming W3C recommendations. One idea that I’ve been ruminating on is attempting to develop a worldwide standard for vetting social media posts. At present, the W3C doesn’t have the right participants to develop a recommendation, but I do think that many member organizations could nominate people with the appropriate background, and we could invite experts who have been looking at the ethical and social issues that are at play. If there were a standard for vetting posts, social media companies could perhaps breathe a sigh of relief, because they would no longer have to develop their own guidance, and they could point to a standard to explain how choices are made. Their risk in making those choices would be diminished if the competing platforms followed the same standard. End users would have a clearer understanding of what would be acceptable on platforms that embrace the standard, and they would also thereby gain at least some measure of assurance of credibility (or at least flagging of questionable content). Questions that come to mind at this stage are
> - whether we could recruit the right group of people to deliver a reasonable recommendation
> - whether social media companies would be inclined to follow a recommendation, or would prefer to make their own guidance anyway
> - whether it makes more sense to develop a scale rather than a monolithic recommendation, and let platforms advertise the level to which they strive
> - how to ensure a recommendation that avoids undue censorship but also enables removal of dangerous content.
> This group seems like a place to at least begin thinking about such a recommendation.
> -Annette
>> On Jan 17, 2021, at 12:32 PM, Bob Wyman <bob@wyman.us> wrote:
>> If "the mission of the W3C Credible Web Community Group is to help shift the Web toward more trustworthy content without increasing censorship or social division" then why, during a period when issues with web credibility have never been more urgent, nor more broadly discussed, has this group remained silent?
>> In just the United States, since the November 2020 elections, we've seen the web exploited to distribute lies and assertions that contributed both to creating and amplifying social divisions which have weakened the foundations of the US system of government and that helped to motivate and justify a shocking attack on the US Capitol and Congress. Since the election, we've seen a growing chorus calling for private companies and "algorithms" to engage in censorship which would achieve through private government that which our public government is, and should be, constitutionally prohibited from imposing. And, we have seen private companies act in response to those calls... Through all this, CredWeb has been silent...
>> Why isn't this mailing list ablaze with calls to action, analyses of the problem, and proposals to address it? Is it the opinion of this group's members that all that can be done has been done? If so, do you really believe that there is nothing more that can be offered by technological means to "shift the web toward more trustworthy content?" Would discussion of these issues and their potential solutions be welcomed here?
>> If this is not the forum for the discussion of issues related to credibility of web content, then what is the correct forum for such discussions?
>> bob wyman

Received on Friday, 22 January 2021 11:49:27 UTC
