Who can signal and would signals of "controversy" be useful?

In his statement of candidacy
<https://lists.w3.org/Archives/Public/public-credibility/2021Jul/0043.html>,
Drew Wallace mentions a desire to prepare a listing of 5-20 "Endorsed
Credibility Signals." He proposes an initial set, including:

   - Age of Website
   - Claim of "Personal Contact" with author
   - Verified Physical Address
   - Presence of Corrections Policy Statement

One interesting attribute of this set is that its members are all
"verifiable" in one way or another. Also, other than the "Claim of Personal
Contact," they are signals whose truth might often be objectively, or even
mechanically, determined. (A sketch of one such mechanical check follows the
questions below.) I wonder:

   - Who would be considered authorized to make statements about the
   objectively verifiable claims? Who can, or should, create these signals?
   - What consideration has been given to non-verifiable, often subjective,
   signals, such as star ratings, text comments, etc., that might be
   associated with some content by, for instance, users of annotation
   systems?
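
To illustrate what "mechanically determined" might mean in practice, here
is a minimal sketch of a check for the "Age of Website" signal. It assumes
the third-party python-whois package; the function name and handling are my
own invention, not anything proposed by the group.

    # Sketch: mechanically deriving an "Age of Website" signal from WHOIS
    # data. Assumes the third-party python-whois package
    # (pip install python-whois). Illustrative only.
    from datetime import datetime, timezone

    import whois  # provided by the python-whois package

    def website_age_days(domain: str) -> float:
        """Return the domain's approximate age in days, per WHOIS."""
        record = whois.whois(domain)
        created = record.creation_date
        if created is None:
            raise ValueError(f"no WHOIS creation date for {domain}")
        # Some registrars return a list of dates; take the earliest.
        if isinstance(created, list):
            created = min(created)
        if created.tzinfo is None:
            created = created.replace(tzinfo=timezone.utc)
        return (datetime.now(timezone.utc) - created).total_seconds() / 86400

    if __name__ == "__main__":
        print(f"w3.org: ~{website_age_days('w3.org') / 365:.1f} years old")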

It seems to me that even the best and most useful signals will have little
value if the number of signal generators is limited. The scale of online
publishing and claims-making is so great that we can't expect any
reasonably small number of signallers to evaluate more than a tiny fraction
of all the resources whose credibility might be questioned. On the other
hand, if we allow anyone to create these signals, and thus increase the
likelihood that a useful number of resources are marked with signals, we'll
have to address a number of additional issues:

   - How do we establish the credibility of a signaller who creates a
   specific kind of signal at any particular time? Are there good proposals
   circulating on how to do that?
   - Given that the various signals probably present different challenges
   for establishing signaller credibility, should the means for establishing
   the credibility of signallers be dealt with as part of describing the
   individual signals?
   - Are there algorithmic means that can be used to resolve or inform the
   evaluation of issues that arise from conflicting signals? (For example,
   if you can verify a physical address but I cannot, how do we resolve that
   conflict? Might it be simply because we made our verification attempts at
   different times or used different verification resources? Should signals
   have a temporal scope within which they are considered valid? Should
   signals carry "proof" or "evidence" of their correctness? A rough
   data-model sketch of these ideas follows this list.)
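
To make the temporal-scope and evidence questions concrete, here is a rough
sketch of what a signal record might carry. The field names and the
conflict rule are hypothetical, invented for illustration; nothing here
reflects an actual group proposal.

    # Hypothetical signal record with a temporal scope and attached
    # evidence. All names are invented for illustration.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class CredibilitySignal:
        target_url: str           # the resource being signalled about
        signal_type: str          # e.g., "verified-physical-address"
        value: bool               # the signaller's determination
        signaller: str            # identity (or key) of the signaller
        asserted_at: datetime     # when the verification was performed
        valid_until: Optional[datetime] = None  # temporal scope
        evidence: list[str] = field(default_factory=list)  # proof URLs/hashes

    def conflicts(a: CredibilitySignal, b: CredibilitySignal) -> bool:
        """Signals conflict only if they disagree about the same claim
        on the same resource and their validity windows overlap."""
        if (a.target_url, a.signal_type) != (b.target_url, b.signal_type):
            return False
        if a.value == b.value:
            return False
        a_end = a.valid_until or datetime.max
        b_end = b.valid_until or datetime.max
        return a.asserted_at <= b_end and b.asserted_at <= a_end

Under a model like this, your "address verified" and my "address not
verified" signals need not conflict at all if their validity windows don't
overlap.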

I realize that more subjective signals are more difficult to use. Something
like a star rating, a thumbs-up/down flag, or a text comment is going to be
hard to evaluate. On the other hand, it seems to me that quantities of
these things can have value even if individual instances of such signals
have little or none. For instance, if I see a plausible-sounding web page
that has 10,000 "False" flags and 12,000 "True" flags, that isn't a strong
indication of the credibility of the page, but it does indicate to me that
there is substantial controversy concerning the page's content. Seeing that
controversy, I might engage in heightened scrutiny of the claims made, etc.
In other words, a means to extract a signal that merely implies that
credibility is more than usually questionable may be just as valuable to me
as a signal that directly supports or denies credibility.
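
As a back-of-the-envelope illustration, the "substantial controversy"
reading of such flag counts can itself be computed. The normalized entropy
measure below is just one plausible choice, not a proposal, and a real
measure would also have to weight the total volume of flags.

    # One plausible "controversy" measure: the Shannon entropy of the
    # True/False vote split. Near 1.0 means deeply divided; near 0.0
    # means near-consensus. Illustrative only.
    import math

    def controversy(true_flags: int, false_flags: int) -> float:
        total = true_flags + false_flags
        if total == 0:
            return 0.0  # no signals, no evidence of controversy
        p = true_flags / total
        if p in (0.0, 1.0):
            return 0.0  # unanimous flags, no controversy
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    # The example from the text: 12,000 "True" vs. 10,000 "False" flags.
    print(controversy(12_000, 10_000))  # ~0.99: strongly contested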

What would be effective signals of challenged credibility? If an annotation
system such as Hypothes.is <https://web.hypothes.is/>, supported creating
signals as structured annotations, and thus ensured that just about anyone
could signal on just about any public resource, how could we turn that mass
of low-reliability signals into something more useful?
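
For concreteness, a "structured annotation" carrying a signal might look
something like the following, loosely modeled on the W3C Web Annotation
Data Model <https://www.w3.org/TR/annotation-model/>. The
"CredibilitySignal" body type and its fields are hypothetical; neither
Hypothes.is nor the Web Annotation vocabulary defines anything like them
today.

    # Sketch of a structured annotation carrying a credibility signal.
    # The body type and field names are hypothetical.
    import json

    signal_annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "target": "https://example.com/some-contested-article",
        "creator": "https://example.com/users/some-signaller",
        "created": "2021-07-30T00:00:00Z",
        "body": {
            "type": "CredibilitySignal",        # hypothetical body type
            "signal": "content-accuracy-flag",  # hypothetical signal name
            "value": "false",                   # the signaller's judgment
            "evidence": ["https://example.com/fact-check/123"],
        },
    }

    print(json.dumps(signal_annotation, indent=2))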

bob wyman
