Re: Utah State-Endorsed Digital Identity (SEDI) legislation

Steffen,

You are unintentionally proving my point 😅

Let’s start by specifying who, exactly, needs to trust something.

In any setting, at any time, trust is always established by a *verifier*: a
verifier evaluates claims by *verifying signatures* made by other actors.

*NOTE:* Any actor can be in the role of a verifier in any setting.

Second invariant: you cannot guarantee that a remote request came from any
specific actor without cryptography — *never*. Therefore “where the
credential came from” does not add trust by itself; the verifier only gains
trust from what can be *verified* cryptographically.
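
As a concrete illustration (a minimal sketch; all the names are mine,
and I am assuming Ed25519 via Node's crypto module): the verifier's
trust decision reduces to a signature check against a key it already
anchors on, and the claimed origin of the request never enters the
computation.

    // Minimal sketch: a verifier accepts a claim only if the signature
    // verifies against a key the verifier has chosen to trust.
    import { verify, KeyObject } from "node:crypto";

    function acceptClaim(
      claim: Buffer,          // the exact bytes the signer committed to
      signature: Buffer,      // signature presented alongside the claim
      trustedKey: KeyObject   // key the verifier anchors its trust on
    ): boolean {
      // For Ed25519, node:crypto takes null as the algorithm parameter.
      return verify(null, claim, trustedKey, signature);
    }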

Given that, what is actually needed from legislation?

First: a directive defining what counts as a *legally binding trust anchor*.
For example: any signature with a capability that can prove access to a
physical ID in a way that is later verifiable by anyone. This does *not*
require a (Q)TSP. selx.xyz is an example of this approach. Also, it should
be obvious, but I’ll state it explicitly: there is *no need for PID
disclosure at all* here.
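
To make that concrete, a hedged sketch (the names and shape are my
assumptions): the "capability" is modeled as the GovID chip signing the
user's public signing key, so anyone can later verify the link without
any PID being disclosed.

    // Hypothetical shape of a legally binding trust anchor: the chip in
    // the physical ID attests to the user's signing key. No PID is
    // involved, only keys and signatures.
    import { verify, KeyObject } from "node:crypto";

    interface Capability {
      userSigningKey: Buffer;   // public key the user signs with
      chipAttestation: Buffer;  // chip's signature over userSigningKey
    }

    function isLegallyAnchored(
      cap: Capability,
      chipKey: KeyObject        // chip key, itself certified by the state
    ): boolean {
      return verify(null, cap.userSigningKey, chipKey, cap.chipAttestation);
    }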

Second: a directive stating for example that *NIST-recommended algorithms
and key lengths*, valid at the time of signature, must be used.
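
That directive is naturally expressed as data the verifier consults at
verification time, not as a list frozen into statute. A sketch, with
the policy shape being my assumption:

    // The "recommended at signing time" rule as an executable check.
    interface AlgorithmPolicy {
      validFrom: Date;
      validUntil: Date | null;  // null = still recommended today
      allowed: Set<string>;     // e.g. "ed25519", "ecdsa-p256"
    }

    function algorithmAcceptable(
      algorithm: string,
      signedAt: Date,
      policies: AlgorithmPolicy[]
    ): boolean {
      return policies.some(
        (p) =>
          p.allowed.has(algorithm) &&
          signedAt >= p.validFrom &&
          (p.validUntil === null || signedAt <= p.validUntil)
      );
    }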

For “verifiable wallet trust” from the POV of the wallet user (the
individual), a similar approach works: “a recommended implementation from
an academic or technical standardization body must be used,” and the fact
that specific source code is being used can itself be verified
cryptographically.
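
For instance (a sketch under my own assumptions about how a release
would be published): the standardization body signs the hash of each
endorsed release, and anyone can check that the code in front of them
is that release.

    // Checking that specific source code is in use: hash the code that
    // was actually loaded and verify the body's signature over that hash.
    import { createHash, verify, KeyObject } from "node:crypto";

    function isEndorsedBuild(
      codeBytes: Buffer,         // the wallet code actually loaded
      releaseSignature: Buffer,  // body's signature over the release hash
      sdoKey: KeyObject          // standardization body's public key
    ): boolean {
      const digest = createHash("sha256").update(codeBytes).digest();
      return verify(null, digest, sdoKey, releaseSignature);
    }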

Even with these laws, “trust” still does not magically exist. Trust only
happens when a verifier uses these requirements as conditions for *code
execution* (i.e., accept/reject decisions).
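
In code terms (reusing the hypothetical checks sketched above), the
legal requirements only become trust at the moment they gate a branch:

    // "Trust" materializes only as an accept/reject branch in the verifier.
    function verifierGate(checks: {
      signatureOk: boolean;   // acceptClaim(...)
      anchored: boolean;      // isLegallyAnchored(...)
      algorithmOk: boolean;   // algorithmAcceptable(...)
    }): "accept" | "reject" {
      return checks.signatureOk && checks.anchored && checks.algorithmOk
        ? "accept"
        : "reject";
    }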

So the role of legislation is to define *TRUST ANCHORS* — what must be
verified — and those anchors should be dynamic, not static absolutes: they
must be continuously improved by technologists.

Something like the following is not necessary, as it does not even
“prove” any trust in real time to anyone in a verifiable way:

> QTSP becomes certified by an independent CAB, accredited by an
> independent accreditation body, supervised by a democratically
> legitimated independent body, and operating based on European Standards
> developed by an independent SDO under European law — everything provable
> by independent courts.

There is no meaningful difference between which website, app, or
operating system runs the algorithms: you cannot prove which remote
computer ran what code, only that it has access to some secret value.

Another invariant: I can, at best, verify what code is being run *on my
computer, right now*.

So wallet trust, which is at best verified on a local machine (and is
not perfect even there), could, instead of relying on select approved
bodies, be handled as *signed code execution* for standardized "wallet
code".

Now let’s run a scenario. Tell me what part needs certified (centralized,
or a set of centralized) private-sector actors for trust:

I provide an online ERP service.

Two of my users have edited a contract in a real-time editor, and now want
a legally binding digital signature.

During onboarding, I call a standardized, open-source frontend API with
signed execution that is widely available on any machine.

It collects GovID info from the user via the MRZ, then verifies the data
by NFC-scanning the chip in the GovID, reaching cryptographic
verification that the claims in the GovID are correct and certified by a
government body. This is done once per device via the OS, with the OS
implementations following an *open* W3C standard, so the user never has
to repeat it on that device.
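
Abstracting the chip step (this is a sketch; the real protocol is the
ICAO-style chip verification, and these names are mine): the data read
over NFC verifies against a government document-signer key.

    // Sketch: verifying that the chip's claims are certified by a
    // government body, modeled here as one signature over the data groups.
    import { createHash, verify, KeyObject } from "node:crypto";

    interface ChipRead {
      dataGroups: Buffer;        // canonical encoding of the chip data
      documentSignature: Buffer; // government signature over its hash
    }

    function chipClaimsVerified(
      read: ChipRead,
      docSignerKey: KeyObject    // from the published verification material
    ): boolean {
      const digest = createHash("sha256").update(read.dataGroups).digest();
      return verify(null, digest, docSignerKey, read.documentSignature);
    }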

Then I input the contract data to the API. It prompts for user
verification; the user authorizes with biometrics or a PIN, whatever
frontend user-verification method applies. The details get signed and
sent to my platform’s third-party queue. Once both parties have signed
and I have verified the signatures, I deliver the contract to the other
parties, who can verify the signatures as well. The signed contract
stays in the user’s custody, because it is their liability, and they
get to choose who to disclose it to, and under what conditions, for
backup for example.
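
The queueing step itself is just two signature checks over the same
bytes (a sketch with hypothetical names; delivery and custody handling
are outside the snippet):

    // The platform forwards the contract only after both parties'
    // signatures over the identical contract bytes verify.
    import { verify, KeyObject } from "node:crypto";

    interface PartySignature {
      key: KeyObject;    // party's anchored public signing key
      signature: Buffer; // signature over the contract bytes
    }

    function bothPartiesSigned(
      contract: Buffer,
      a: PartySignature,
      b: PartySignature
    ): boolean {
      return (
        verify(null, contract, a.key, a.signature) &&
        verify(null, contract, b.key, b.signature)
      );
    }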

Same with age verification or anything similar.

There is only one centralized trust anchor here: the verification material
for physical IDs (for example held on an EU blockchain or similar).

Any platform or app can run that kind of queueing flow; there could even
be a web standard for "entering into mutual agreements".

So: stop doing more than necessary, stop centralizing more than
necessary, and stop creating weird auditing groups and extra bureaucracy
when this can be handled with cryptography and decentralized primitives
that move with the individual.

Now you might ask: what if they lose their physical ID? Doesn’t matter —
they get a new one. The signatures remain verifiable.

What if it is stolen? Same as with credit cards: you can kill it. That
event gets timestamped; signatures up to that point in time remain valid,
etc.
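
In other words (a sketch; a real deployment would need a trusted
timestamp on the signature itself): revocation is a timestamped event,
and validity is a simple comparison against it.

    // Signatures made before the revocation instant keep verifying.
    function signatureStillValid(
      signedAt: Date,          // trusted timestamp of the signature
      revokedAt: Date | null   // null = the ID was never revoked
    ): boolean {
      return revokedAt === null || signedAt < revokedAt;
    }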

That is just one example. The key goal should be: *decentralize trust
anchors as much as possible and let active groups of technologists set
the standards/requirements*.

Do not extend the legislation aspect beyond what is necessary.

Regards,
Jori

