On 18/08/2021 00:59, Chris Gough wrote:

On Tue, 17 Aug 2021 at 02:14, Henry Story <henry.story@gmail.com> wrote:

> I would much rather have my own due diligence robot that hid out of sight
> except to warn me when something smelled like trouble.

> Human beings are very good at evaluating contextual information.
> That is what we were evolved to do. The Web has nearly no such information,
> other than that we arrived at a link by following another link.

I think we are agreeing that richer contextual data is better than less (or none), and that directing users' attention to relevant contextual data is a design problem.

Maybe a good UI would start with strong conventions about what information should be surfaced under different situations.

I imagine this design would be subject to evolutionary pressure from chaotic growth in application semantics. At some point the complexity of the data will mean top-down heuristics won't be able to compete with bottom-up machine-learning approaches.

Because people are individuals, it seems natural to approach the application as an assembly of personal software agents, each representing one of the user's identities (the collection of knowledge they have in a particular socio-economic context). Maybe not; that's just where my head went first. Under the user's control and direction, I had imagined each agent collaborating with its community to form a decentralised institution based on collective credential sharing (like a collaborative spam filter, but generalised for any shared value semantic).

The UI of that application would aggregate the collective intelligence of the agents to direct the user's attention to details (such as incongruities or irregularities) that seem worthy of it: a kind of cognitive prosthetic that surfaces the most interesting and relevant parts of a vast quantity of contextual data.
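To make the "collaborative spam filter, but generalised" idea concrete, here is a minimal sketch. All names, the signal range, and the reputation-weighting scheme are my assumptions for illustration; nothing here comes from an existing system. Each agent reports a signal about an item, weighted by the credential-backed reputation it holds in its community, and the pooled score decides whether to surface the item to the user.

```python
# Hypothetical sketch: agents pool weighted judgements about an item,
# generalising a collaborative spam filter to any shared value semantic.
# Signals range from -1.0 (looks like trouble) to 1.0 (looks fine);
# weights stand in for credential-backed reputation. All assumptions.

def aggregate_attention(reports):
    """reports: list of (agent_weight, signal) pairs; returns the weighted mean."""
    total_weight = sum(w for w, _ in reports)
    if total_weight == 0:
        return 0.0
    return sum(w * s for w, s in reports) / total_weight

# Three agents judge a link; the heavier-weighted agents dominate.
score = aggregate_attention([(5.0, -1.0), (1.0, 1.0), (2.0, -0.5)])

# Surface the item to the user only when the pooled signal is clearly negative.
flag = score < -0.25
```

In this toy version the "due diligence robot" stays out of sight (`flag` is false) unless the community's weighted evidence tips negative, which matches the behaviour Henry asked for upthread.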

> It is the "just a bootstrapping" part I have a problem with.
Fair enough, thanks for pulling me up. "just" is a four-letter word that I use to hide the foolishness of an idea from myself.
> We now need a way to ground it in real legal and social
> institutions. We need to allow those institutions of knowledge
> to stand out from the others, to allow long term games of trust
> to emerge:

Yes, but I don't think that's exactly what the long game is.
> game theoretical dynamics change dramatically if one
> is playing only one game or engaged in long term with others.

I have concerns about the erosion of trust in public institutions, perhaps caused by silicon golems relentlessly optimising their control over narratives for profit (and defeating legitimate institutions in asymmetric propaganda wars, quite possibly by accident). I'm excited by the idea that situating free agents in a large graph of VCs will enable them to change how narratives are controlled and managed, based on weight of evidence rather than volume of signal. This might reduce people's need for "trust fiats" (because they have a better ability to make good decisions on a prima facie basis), but it could also help create more objectively trustworthy institutions with more compelling qualifications.

Of course the existing institutions are huge reservoirs of value; they hold most of the current social value, even if they are slowly leaking it. The best mechanism for allowing legitimate institutions to stand out might be to have them very well represented in deep credential graphs, so their true role in society is obvious. But until these large VC graphs exist, we need to find a pragmatic way to leverage the existing value in legitimate institutions. The web is bootstrapped already, but the verifiable web is not.

Chris Gough


X.509 (2016) introduced the four-cornered trust model to replace the previous three-cornered one. This provides symmetry and balance: subjects purchase PKCs from CAs, and RPs purchase insurance policies from Trust Brokers. The latter indemnify the RPs, in return for an annual premium, against losses suffered from using a fraudulent or poorly administered TLS web site. OS- and browser-based root-CA trust lists will no longer be used, since these offer no indemnities.

It is too early to tell whether this business model is feasible. It will take an existing legitimate institution to invest in offering this service to end users (probably for free or at very low cost initially) and to capture enough market share to be profitable in the long run. I remember when CAs first started in the 1990s: the UK Post Office offered trustworthy PKCs from its CA for a fee (with full indemnities to the subject), whilst Verisign offered PKCs from its CA for free. The PO soon went out of business, and once Verisign had sufficient users it started to charge for its PKCs and make a profit. Is any organisation willing to invest in this model for RPs of PKCs, VCs, or both?

Kind regards