Re: [EXT] Current solutions to prove an issuer is who they claim they are

On Sun, Jan 26, 2025 at 9:50 PM David Chadwick
<d.w.chadwick@truetrust.co.uk> wrote:
> The reason decentralised trust did not work for the majority of people is that it is too complex for them to understand. Sure, an expert minority used it to good effect, and there were PGP parties at the IETF where you could register as trustworthy people you did not know but who others in the room did know. But it was not scalable to global proportions nor usable by your average mum and dad. Centralised trust based on CAs is not perfect, but it has worked to global proportions without your mum and dad understanding it nor even aware that they are relying on it and using it daily.

There are multiple reasons why PGP didn’t scale, and complexity is
just one of them. Another significant factor was the lack of widespread
adoption - after the initial enthusiasm of the 1990s, many users
realized they simply didn’t need it. This was compounded by poor
tooling, inadequate user interfaces, and a lack of integration into
mainstream systems. Limited end-user education and awareness also
played a role.

However, today’s world is different. I wouldn’t be so sure that the
average 20-year-old today would struggle with the concept of managing
multiple identities. In fact, the future of identity management will
likely involve automation, AI, and other technologies that handle
privacy and security on behalf of users. With these advancements, end
users wouldn’t need to be aware of what’s happening “behind the
scenes.”

What’s essential is avoiding a one-size-fits-all mindset when it comes
to trust ecosystems. There isn’t a single approach that can address
all scenarios. Instead, multiple trust frameworks will need to
coexist, with the appropriate framework determined by the specific use
case. For example:

* Verifying credentials to access a private residential zone might
rely on community-based verification.
* Authenticating a movie ticket could be handled with a fixed list of
issuers or even simple public-key verification (e.g., if the cinema
uses its own well-known issuer key).
* Verifying age for legal purposes might require integration with a
centralized government registry or an approved list of authorities.
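To make the movie-ticket case concrete, here is a minimal sketch of
that kind of verification (all names and payload fields are
hypothetical, and Ed25519 via Node’s built-in crypto module is just one
possible choice): a verifier that already knows the cinema’s public key
needs nothing beyond a signature check - no registry lookup at all.

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// Hypothetical cinema issuer key pair; in practice the cinema would
// publish its public key at some well-known location.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The ticket is just signed bytes; the payload shape is made up here.
const ticket = Buffer.from(JSON.stringify({ seat: "12C", show: "19:30" }));
const signature = sign(null, ticket, privateKey);

// Verification requires only the cinema's public key.
const ok = verify(null, ticket, publicKey, signature);
console.log(ok); // true
```

Of course, a real deployment would also need key distribution and
revocation, but the point is that the trust decision can be this local
when the use case allows it.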

The key is flexibility and interoperability, ensuring that different
frameworks can coexist and adapt to diverse needs.

> So both mechanisms are useful and both will be needed. Experts will build their own decentralised trust registries akin to the PGP model, and the GUI will be different, but the underlying concepts will be same. And I suggest that most people wont use it. Centralised trust registries will be used by the majority, and probably they wont be aware of them.

Not just one or the other, but all - organic diversity. I don’t share
your skepticism; the times have changed, and new challenges have
arisen. Malicious companies already use (and will increasingly use)
smart agents to harvest our data, while we’ll rely on smart agents to
protect us. Most users will simply set their desired level of privacy,
and everything else - including identity management - must happen
automatically, regardless of whether it’s centralized or
decentralized. The key is not to close ourselves off to one approach
but to begin exploring, evaluating, and documenting all meaningful
methods tailored to specific use cases. This will empower everyone to
choose the right solution for their needs. And I’m confident that many
are paying attention to privacy protection today.

Best regards,
Filip Kolarik
https://github.com/filip26
https://bsky.app/profile/filipk.bsky.social

>
> Scalability of computing systems is a research topic in its own right. Scalability of trust is one branch of this, and what works for you and me wont necessarily scale to global proportions.
>
> Kind regards
>
> David
>
> On 27/01/2025 05:17, Manu Sporny wrote:
>
> On Sat, Jan 25, 2025 at 12:14 PM Merul Dhiman <me@merul.org> wrote:
>
> I believe scams like this are less about integrity of data and more about human nature, it's more about the psychology of the victim and how these scammers prey on their weaknesses.
>
> On Sat, Jan 25, 2025 at 12:52 PM Filip Kolarik <filip26@gmail.com> wrote:
>
> there is no purely technology-based solution that can
> completely prevent scams relying on false claims of authenticity,
> identity, or emotional exploitation. These tactics tap into human
> psychology, making them difficult to counter with tools alone.
>
> On Sat, Jan 25, 2025 at 5:08 PM David Chadwick
> <d.w.chadwick@truetrust.co.uk> wrote:
>
> Surely you remember PGP and its model for decentralised trust. Unfortunately it did not work. Why Johnny Cant Encrypt is a good read for those new to this topic.
>
> Hmm, there seems to be some miscommunication going on. Let me try again:
>
> I acknowledge that there are aspects to the Fake Brad Pitt attack that
> are largely psychological and outside the realm of what we can address
> with technology. I also acknowledge that if we do our job well here
> that the attacks will just move to other weaker areas, and that's a
> win... because those weaker areas might eventually go away.
>
> For example, some of the more seasoned among us might remember when we
> read our credit card numbers out loud over the phone to retailers...
> the rarest vintages among us might remember the *Ka-chunk, ka-chunk*
> of a credit card imprint machine, which would copy all the information
> needed to pull money out of our bank account onto a piece of paper
> that would then be bandied about by a minimum wage employee with no
> security training. Those are historically weak attack surfaces that
> have been almost eradicated due to newer, more secure technology
> practices coupled with strong motivations (fees and fines) for doing
> things in the older, less secure way.
>
> "Why Johnny Can't Encrypt" is a good historical document (it's more
> than 20 years old now); there are some lessons in there, no doubt. It
> analyzed a system (PGP 5.0) that was released over 27 years ago. It's
> probably safe to say that A LOT has happened in security UX and
> practices since then.
>
> For example, Signal, happened... and it showed how you can have strong
> privacy preserving cryptography, some level of verified communication,
> all while not exposing individuals to any crypto-mumbo-jumbo. People
> DO maintain their own trust lists in Signal. Sure, it's not
> bulletproof, but it is an example of how far we've come from the "Why
> Johnny Can't Encrypt" days. I don't buy that citation as a reason why
> decentralization can't work when we have plenty of modern
> counter-examples.
>
> I'll also note that there are extremes here -- at one end, fully
> decentralized trust and at the other, fully centralized trust. I don't
> think anyone is arguing for any particular extreme. It sounds like
> most of us are saying: There will be a spectrum of trust registry
> solutions and there will not be a "one size fits all" solution or
> approach. So, we should expect multiple solutions in the market and we
> have to ensure that the technology we're building is capable of
> pulling from the full spectrum of solutions.
>
> What I was trying to convey was: If we don't make sure an individual
> (or systems operator) has ultimate control over who to trust and who
> not to trust (by either specifying it directly, or relying on one or
> more trust registries for their ecosystem), we're going to find
> ourselves in the Certificate Authority mess we're in today... with
> organizations seeking rents while not delivering  meaningful value...
> and with prices so artificially high that it is not possible for an
> individual to reasonably assert their identity online.
>
> -- manu
>

Received on Sunday, 26 January 2025 21:51:57 UTC