Re: Human readable credentials?

> > Human-friendly representations are derived from the machine-friendly
> > ones, keeping them in sync
>
> That is most certainly one way to approach the problem.  But it is not the
> only one.  For example, I can show you how to deterministically produce a
> machine readable representation from a human-friendly one.
>

Agreed. I wasn't claiming otherwise. What makes either approach work is that
the correspondence between the two representations is guaranteed.
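
To make that concrete, here is a minimal sketch (TypeScript; the field and
function names are purely illustrative, not from any VC vocabulary) of the
derived-representation direction: the human-friendly text is a pure function
of the machine-readable claims, so the two cannot drift apart as long as
consumers derive the text rather than keep a second, separately edited copy.

```typescript
// Hypothetical machine-readable claims; the field names are illustrative only.
interface DegreeClaims {
  holderName: string;
  degree: string;
  institution: string;
  awardedOn: string; // ISO 8601 date
}

// Derive the human-friendly text deterministically from the claims.
// Because this is a pure function of the claims, the two representations
// cannot diverge as long as every consumer derives the text rather than
// storing an independently edited copy.
function renderHumanReadable(claims: DegreeClaims): string {
  return `${claims.holderName} was awarded a ${claims.degree} ` +
         `by ${claims.institution} on ${claims.awardedOn}.`;
}

const claims: DegreeClaims = {
  holderName: "Pat Example",
  degree: "BSc in Computing",
  institution: "Example University",
  awardedOn: "2020-06-01",
};

console.log(renderHumanReadable(claims));
```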


> > Putting a B64 encoded block into a valid credential is not introducing a
> > lot of new risk if the processing entity for that chunk is still software.
>
> Sorry but I have to disagree with you on that one.  If one processor knows
> how to decode that B64 block and present it and another processor does not
> – which is perfectly acceptable since I can have custom contexts in my VC’s
> – then you have the same situation you have pointed out.
>

You are correct. I was assuming it was the *same* software interpreting
both parts.
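
That assumption is doing the work. A sketch of the "same software" case
(TypeScript; the humanReadableB64 field and the derivation rule are
hypothetical): because a single code path reads the claims and decodes the
embedded block, it can refuse to trust either part if they disagree.

```typescript
// Sketch of the "same software" case: one processor reads the machine-readable
// claims AND decodes the embedded base64 block, so it can reject the credential
// if the two parts have diverged. The humanReadableB64 field and the derivation
// rule are hypothetical, not taken from any VC vocabulary.

interface Credential {
  claims: { holderName: string; degree: string };
  humanReadableB64: string; // base64-encoded human-friendly text
}

function deriveText(claims: Credential["claims"]): string {
  return `${claims.holderName} holds a ${claims.degree}.`;
}

// Single code path: decode the block, re-derive the text, and compare.
function processCredential(cred: Credential): string {
  const embedded = Buffer.from(cred.humanReadableB64, "base64").toString("utf8");
  const derived = deriveText(cred.claims);
  if (embedded !== derived) {
    throw new Error("Embedded human-readable text does not match the claims");
  }
  return embedded;
}

const cred: Credential = {
  claims: { holderName: "Pat Example", degree: "BSc in Computing" },
  humanReadableB64: Buffer
    .from("Pat Example holds a BSc in Computing.")
    .toString("base64"),
};

console.log(processCredential(cred)); // prints the checked human-readable text
```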


> > The problem arises when there are two potentially divergent
> > representations, and the two processing entities are disjoint. *That* is an
> > exploitable gap.
>
> That is **ONLY** an exploitable gap **IF** the two representations are
> physically separate from each other **AND** not signed/sealed together.
> However, if the two are signed as a single entity, then you can’t modify
> one w/o invalidating the other, thus preventing any form of exploit.
>

This seems to contradict your previous statement, "If one processor knows
how to decode... and another processor does not...". Physical separation
isn't what creates the gap, as you yourself pointed out. The problem is that
the two representations are processed by different, independent entities.
That can happen even when both representations sit in the same physical
file and are signed as a unit, as in your custom-contexts example.
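
Here is a sketch of that gap (TypeScript; field names hypothetical, and
assume the signature over the whole object has already been verified): the
wallet renders only the embedded block while the verifier acts only on the
machine-readable claims, so the single signature does nothing to make the
two representations agree.

```typescript
// Sketch of the gap: both representations live in the same signed document
// (assume the signature over the whole object has already been verified),
// but they are consumed by disjoint processors that never compare them.
// Field names are hypothetical.

interface Credential {
  claims: { amountDue: number };  // what software acts on
  humanReadableB64: string;       // what a person is shown
}

// Processor A: a wallet that only renders the embedded human-friendly block.
function walletDisplay(cred: Credential): string {
  return Buffer.from(cred.humanReadableB64, "base64").toString("utf8");
}

// Processor B: a verifier that only acts on the machine-readable claims.
function verifierEnforce(cred: Credential): number {
  return cred.claims.amountDue;
}

// The issuer signed both parts as a single unit, yet they say different
// things. Neither processor notices, because neither compares the two.
const cred: Credential = {
  claims: { amountDue: 1000 },
  humanReadableB64: Buffer.from("Amount due: 10").toString("base64"),
};

console.log(walletDisplay(cred));   // the person sees: "Amount due: 10"
console.log(verifierEnforce(cred)); // the software enforces: 1000
```

Signing the pair prevents post-issuance tampering, but it does nothing to
make the two representations agree; only deriving one from the other, or
checking them against each other as in the earlier sketch, closes the gap.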


Received on Monday, 8 June 2020 20:38:31 UTC