Re: Why has CredWeb been silent when it is now needed more than ever?

Annette,
Thank you for your response and for the evident effort you put into it. I
do appreciate that this group has accomplished the specific tasks assigned
to it, and I'm not surprised to hear of some "burnout." Standards
specification is hard. But I'm also pleased to see that you agree that it
may still be appropriate to incubate new ideas and thus new tasks.

Please forgive me if I am parsing your words too closely, but I am
concerned that your focus seems to be on addressing a need for standards
that social media platform providers could use to direct their own internal
content evaluation and moderation efforts. While such standards might have
value, an alternate view, articulated by Mike Masnick of Techdirt
<https://www.techdirt.com/>, is that we should seek to rely on "Protocols
Not Platforms
<https://knightcolumbia.org/content/protocols-not-platforms-a-technological-approach-to-free-speech>."
(Note: While Masnick's paper was written in 2019, it probably resonates
well with those who remember the principles that once drove the early
design of the Internet and the Web.) According to the "Protocols Not Platforms"
view, the focus should not be on perfecting the ability of the platform
providers to moderate content, but rather on the development of protocols
that would allow a distribution and democratization of the content
assessment, moderation, and curation functions. Among other things, such
protocols might, as Tom Jones <thomasclinganjones@gmail.com> suggested in
his response, build on the "cred API to enable users to post creds that
could be discovered, or at least selected, by actions from relying
parties." By developing appropriate protocols, we may, in fact, be able to
ensure that the role of platforms in content moderation is at least
minimized if not, in many cases, eliminated. We may be able to ensure that
the platforms' content moderation systems are much more mechanical and much
less subjective.

My personal preference would be for a set of protocols that made it
relatively easy for any user to publish broadly discoverable statements
concerning the credibility, accuracy, or quality of any addressable
information resource. Given such a protocol, I think consumers of
information should then be provided with a means to individually select
which of a multitude of voices they trust and to have their selections
influence what information is presented to them or how it is presented.
Consumers should also be able to see aggregations of the statements made by
all speakers (e.g., "100K say this is True, 10K say it is False."). Of
course, users should be able to delegate the curation of lists of trusted
speakers to trusted curators. One might even choose to delegate to a
platform the task of selecting trusted voices (e.g., to a
"Facebook default-adult" list of trusted voices). But one should also be
able to select one or more alternative trusted-voice curators (e.g., a
"Science Facts Verification Society") or to do the work of voice curation
themselves. Ideally, we'd see the development of a variety of automated
systems for doing voice rating and correlation. Thus, a tool might allow
one to say: "In the absence of ratings from my primary trusted voice, use
ratings from voices that most often tend to agree with those I trust."
(This would be something like a "Web of Trust
<https://en.wikipedia.org/wiki/Web_of_trust>" for establishing credibility
rather than for WOT's traditional use in authentication.)
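
To make the shape of this concrete, here is a rough sketch, in Python, of
what a relying party might do with such published statements. Everything in
it is hypothetical: the "CredStatement" record, its field names, and the
agreement-score fallback are placeholders meant only to illustrate the idea,
not a proposed format or API.

    # Hypothetical sketch only; names and thresholds are illustrative.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class CredStatement:
        speaker: str    # stable identifier for the voice making the statement
        resource: str   # address (e.g., URL) of the resource being rated
        verdict: str    # e.g., "true", "false", "misleading"

    def summarize(statements, resource):
        """Aggregate every speaker's verdict for one resource,
        e.g., {"true": 100000, "false": 10000}."""
        return Counter(s.verdict for s in statements if s.resource == resource)

    def verdict_for(statements, resource, trusted, agreement_with_trusted):
        """Pick the verdict one particular consumer should see.

        'trusted' is the set of voices this consumer (or a curator they
        delegate to) trusts. 'agreement_with_trusted' is a per-speaker
        score of how often that speaker has agreed with the trusted set;
        it is used only as a web-of-trust-style fallback when no trusted
        voice has rated the resource."""
        relevant = [s for s in statements if s.resource == resource]
        primary = [s for s in relevant if s.speaker in trusted]
        if primary:
            return Counter(s.verdict for s in primary).most_common(1)[0][0]
        # Fall back to voices that usually agree with the trusted set.
        correlated = [s for s in relevant
                      if agreement_with_trusted.get(s.speaker, 0.0) > 0.8]
        if correlated:
            return Counter(s.verdict for s in correlated).most_common(1)[0][0]
        return None  # no usable signal; present the resource unannotated

The point of the sketch is only that, once statements are broadly
discoverable, the aggregation, trusted-voice filtering, and fallback logic
can all run on the relying party's side rather than inside any platform.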

Such a system would allow the platform providers to focus their content
moderation on enforcing platform-specific terms of use while leaving
broader issues, such as the moderation of opinion, to users' individual
decisions. This would, I think, be a good thing since it would avoid the
risk, complained of by many, that platforms will have an inappropriate
influence over the content of speech.

Of course, any solution to one problem creates another. If users are given
control over how their content will be moderated, then some may tend to
make decisions that reinforce "content bubbles." Some users, for instance,
would probably be quite happy to subscribe to voices that would suppress
the opinions expressed by adherents of some specific political philosophy
or by members of one or another ethnic group. This is certainly an issue
that should be carefully considered; however, I think it best to focus
first on how the needs of the "good" may be best addressed before allowing
the preferences of the "bad" to block progress.

We should also be concerned that facilitating the aggregation of an
individual's statements may create what some would consider to be a privacy
concern. If it is easy to discover and analyze hundreds of statements that
I have made, it will be fairly easy to develop a pretty comprehensive
knowledge of my opinions and preferences, both stated and unstated. But
this may be unavoidable, since largely the same conclusions might easily be
drawn from other evidence of my online activity.

I could go on at length, but first I'd like to ask if you think that this
kind of protocol-based solution, as an alternative and complement to
platform-based systems or standards, is something that could or should be
explored in this group. Is this the right context in which to explore and
develop such protocol-based approaches?

bob wyman


On Thu, Jan 21, 2021 at 5:42 PM Annette Greiner <amgreiner@lbl.gov> wrote:

> I can think of a few factors in response to Bob’s question, though I think
> it’s worth considering again what we can do, and I do have one idea. This
> is certainly a fair place for discussion of issues and their potential
> solution.
>
> What I think has happened with the group is that we completed the specific
> tasks that we had set ourselves, and many of us found work to address the
> issue with other organizations and projects. I think at least some of us
> experienced a degree of burnout over time as well. It’s probably worth
> pointing out, too, that the credibility issues arising since the November
> 2020 elections are qualitatively the same as existed before, and I believe
> the presidential campaign in the U.S. even saw a decrease in fake news
> sharing compared to four years ago. What has intensified of late is the
> polarization around conspiracy theories and the level of physical threat
> based on them. Those strike me as political issues more than technological
> ones, though I agree that we should still be looking for technological ways
> to limit sharing of fake news and help end users discern fact from fiction.
> Another important point is that the W3C is a worldwide entity, so for us to
> attempt to impose or expand limits on the jurisdiction of any specific
> country’s government is far beyond our charter.
>
> That said, we can certainly incubate ideas that might find their way to
> becoming W3C recommendations. One idea that I’ve been ruminating is
> attempting to develop a worldwide standard for vetting social media posts.
> At present, the W3C doesn’t have the right participants to develop a
> recommendation, but I do think that many member organizations could
> nominate people with the appropriate background, and we could invite
> experts who have been looking at the ethical and social issues that are at
> play. If there were a standard for vetting posts, social media companies
> could perhaps breathe a sigh of relief, because they would no longer have
> to develop their own guidance, and they could point to a standard to
> explain how choices are made. Their risk in making those choices would be
> diminished if the competing platforms followed the same standard. End users
> would have a clearer understanding of what would be acceptable on platforms
> that embrace the standard, and they would also thereby gain at least some
> measure of assurance of credibility (or at least flagging of questionable
> content). Questions that come to mind at this stage are
> - whether we could recruit the right group of people to deliver a
> reasonable recommendation
> - whether social media companies would be inclined to follow a
> recommendation, or would prefer to make their own guidance anyway
> - whether it makes more sense to develop a scale rather than a monolithic
> recommendation, and let platforms advertise the level to which they strive
> - how to ensure a recommendation that avoids undue censorship but also
> enables removal of dangerous content.
>
> This group seems like a place to at least begin thinking about such a
> recommendation.
> -Annette
>
> On Jan 17, 2021, at 12:32 PM, Bob Wyman <bob@wyman.us> wrote:
>
> If "the mission of the W3C Credible Web Community Group is to help *shift
> the Web toward more trustworthy content without increasing censorship or
> social division*" then why, during a period when issues with web
> credibility have never been more urgent, nor more broadly discussed, has
> this group remained silent?
>
> In just the United States, since the November 2020 elections, we've seen
> the web exploited to distribute lies and assertions that contributed both
> to creating and amplifying social divisions which have weakened the
> foundations of the US system of government and that helped to motivate and
> justify a shocking attack on the US Capitol and Congress. Since the
> election, we've seen a growing chorus calling for private companies and
> "algorithms" to engage in censorship which would achieve through private
> government that which our public government is, and should be,
> constitutionally prohibited from imposing. And, we have seen private
> companies act in response to those calls... Through all this, CredWeb has
> been silent...
>
> Why isn't this mailing list ablaze with calls to action, analyses of the
> problem, and proposals to address it? Is it the opinion of this group's
> members that all that can be done has been done? If so, do you really
> believe that there is nothing more that can be offered by technological
> means to "shift the web toward more trustworthy content?" Would discussion
> of these issues and their potential solutions be welcomed here?
>
> If this is not the forum for the discussion of issues related to
> credibility of web content, then what is the correct forum for such
> discussions?
>
> bob wyman
>

Received on Friday, 22 January 2021 01:54:16 UTC