Re: Rango WoN Re: Public consultation on EU digital principles

On Tue, 17 Aug 2021 at 02:14, Henry Story <> wrote:

> > I would much rather have my own due diligence robot that hid out of sight
> > except to warn me when something smelled like trouble.
> Human beings are very good at evaluating contextual information.
> That is what we were evolved to do. The Web has nearly no such
> information,
> other than that we arrived at a link by following another link.

I think we are agreeing that richer contextual data is better than less
(or none), and that directing users' attention to the relevant contextual
data is a design problem.

Maybe a good UI would start with strong conventions about what information
should be surfaced in different situations.

I imagine this design would be subject to evolutionary pressure from
chaotic growth in application semantics. At some point the complexity of
the data will mean top-down heuristics won't be able to compete with
bottom-up machine learning approaches. Because people are individuals, it
seems natural to approach the application as an assembly of personal
software agents that each represent one of the user's identities (the
collection of knowledge they have in a particular socio-economic context).
Maybe not, that's just where my head went first. Under the user's control
and direction, I had imagined each agent collaborating with their community
to form a decentralised institution based on collective credential sharing
(like a collaborative spam filter, but generalised for any shared value
semantic). The UI of that application would aggregate the collective
intelligence of the agents to direct the user's attention to details (such
as incongruities or irregularities) that seem worthy of attention. A kind
of cognitive prosthetic that directs the user's attention to the most
interesting and relevant parts of a vast quantity of contextual data.
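To make the "collaborative spam filter, but generalised" idea concrete, here is a toy sketch of trust-weighted flag aggregation. The peer names, trust weights, and threshold are all hypothetical illustrations, not part of any real credential scheme:

```python
from collections import defaultdict

# Hypothetical trust weights a user's agent assigns to community peers.
# Names and values are illustrative only.
PEER_TRUST = {"alice": 0.9, "bob": 0.6, "mallory": 0.1}

def aggregate_flags(flag_reports, threshold=1.0):
    """Generalised collaborative spam filter: sum trust-weighted peer
    flags per item and surface the items whose score crosses a threshold."""
    scores = defaultdict(float)
    for peer, item in flag_reports:
        scores[item] += PEER_TRUST.get(peer, 0.0)
    return [item for item, score in scores.items() if score >= threshold]

reports = [("alice", "sketchy-offer"),
           ("bob", "sketchy-offer"),
           ("mallory", "rival-shop")]
print(aggregate_flags(reports))  # ['sketchy-offer']
```

The point of the sketch is that a low-trust peer flooding reports (mallory) cannot push an item over the threshold, while two moderately trusted peers agreeing can.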

> It is the "just a bootstrapping" part I have a problem with.

Fair enough, thanks for pulling me up. "just" is a four-letter word that I
use to hide the foolishness of an idea from myself.

> We now need a way to ground it in real legal and social
> institutions. We need to allow those institutions of knowledge
> to stand out from the others, to allow long term games of trust
> to emerge:

Yes, but I don't think that's exactly what the long game is.

> game theoretical dynamics change dramatically if one
> is playing only one game or engaged in long term with others.

I have concerns about the erosion of trust in public institutions, perhaps
caused by silicon golems relentlessly optimising their control over
narratives for profit (and defeating legitimate institutions in asymmetric
propaganda wars, quite possibly by accident). I'm excited by the idea that
situating free agents in a large graph of VCs will enable them to change
how narratives are controlled and managed, based on weight of evidence
rather than volume of the signal. This might reduce people's need for
"trust fiats" (because they have a better ability to make good decisions on
a prima-facie basis) but it could also help create more objectively
trustworthy institutions with more compelling qualifications.

Of course the existing institutions are huge reservoirs of value; they hold
most of the current social value even if they are slowly leaking. The best
mechanism for allowing legitimate institutions to stand out might be to
have them very well represented in deep credential graphs so their true
role in society is obvious. But until these large VC graphs exist, we need
to find a pragmatic way to leverage the existing value in legitimate
institutions. The web is bootstrapped already, but the verifiable web is not.

Chris Gough



Received on Tuesday, 17 August 2021 23:59:53 UTC