Re: Utah State-Endorsed Digital Identity (SEDI) legislation

Steffen,

1)  A verifier is always determined by the conditions of applicable use
case. Means those conditions define if something accepted or nor or in case
of legally binding signature: The law defines what`s legally binding and no
verifier can´t change those rules if we want to have legally binding
signatures

I'm pretty sure nothing I said contradicts that statement. I was just
emphasizing that the existence of a law does not, by itself, add any trust
to a transaction if compliance with that law is not verifiable by the
verifier. And cryptography is really the only way to make it verifiable.
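
To make that concrete, here is a minimal sketch (in Python, using the
"cryptography" package; the claim contents and key handling are my own
illustrative assumptions, not any eIDAS-mandated format): the only thing
the verifier's code can actually act on is a signature check against key
material it already trusts. A law cited inside the claim is just bytes and
adds nothing the verifier can check.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signer = Ed25519PrivateKey.generate()       # whoever makes the claim
trusted_public_key = signer.public_key()    # material the verifier already trusts

claim = b'{"document": "contract-42", "signed_under": "some signature law"}'
signature = signer.sign(claim)

def verifier_accepts(public_key, claim, signature):
    # Accept only what verifies cryptographically; the law named inside the
    # claim never enters this decision.
    try:
        public_key.verify(signature, claim)
        return True
    except InvalidSignature:
        return False

print(verifier_accepts(trusted_public_key, claim, signature))           # True
print(verifier_accepts(trusted_public_key, b"tampered claim", signature))  # False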

2)  origin can be made evident in several ways

Yeah, none of which are verifiable, and all of which can be spoofed at any
time by anyone. I did not read the article you attached, because any
suggestion that goes beyond a cryptographic agreement is not worth
considering; if the article describes a real mechanism, that mechanism uses
cryptography. For example, fetching a key via DNS and using that public key
to verify the origin's signature: it does not prove that the origin is doing
anything right, it just proves that I navigated to some origin's resource.
That might be auditable by everyone, and trustworthy, but it is also
unnecessary. It essentially means the trust comes from some centralized
registry stating "here is a signed value you can trust", not from the actual
signed value, and the registry could still, in theory, present an emulated
value instead of a real one.
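
As a sketch of what that DNS-style origin check amounts to (helper and
record names are hypothetical; a real deployment might publish the key in
a DNS TXT record or a .well-known URL), the verification proves control of
whatever key the registry publishes for that name, and nothing about the
origin's conduct:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

# Stand-in for DNS / a central registry: name -> raw 32-byte Ed25519 public key.
PUBLISHED_KEYS = {}

def origin_signature_valid(name, payload, signature):
    # All this proves: the signer controls the key the registry publishes
    # for "name". The registry could publish any key it likes.
    key = Ed25519PublicKey.from_public_bytes(PUBLISHED_KEYS[name])
    try:
        key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

origin_key = Ed25519PrivateKey.generate()
PUBLISHED_KEYS["example.org"] = origin_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

payload = b"some resource served by example.org"
print(origin_signature_valid("example.org", payload, origin_key.sign(payload)))  # True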

3)  See Section II eIDAS and Art. 22. QTSP (which reference to ETSI TS 119
612 which defines the technical trust anchor) is independent from trust
anchor as the trust anchor is the LOTL. In case of ToE (e.g. PID provider)
it`s ETSI TS 119 602)

I mean, it is cool that the law covers the role it should, but that doesn't
mean the role is covered correctly.

My definition of "correctly": setting only the absolute minimum invariants
required to realistically achieve the goals a law has.

This might be true for the law in question, but that makes it even more
concerning :-p

AI SUMMARY OF WHAT A TRUST ANCHOR IS IN THE CONTEXT OF EUDI:

In the EUDI / eIDAS world, the practical "trust anchor" is *not* a QTSP
itself but the official, signed registry data that lets verifiers discover
and validate who and what is trusted. Member States publish *Trusted Lists*
of qualified trust service providers (QTSPs) and their services, and the
European Commission publishes the EU *List of the Lists* (LOTL) that points
to those national lists and provides the material needed to validate them
under eIDAS Article 22. A verifier can therefore bootstrap trust from the
list plus its signing certificates and metadata, then verify signatures and
status on that basis. For the EUDI Wallet ecosystem, the same pattern is
generalized via ETSI's *Lists of Trusted Entities* (LoTE) data model (ETSI
TS 119 602), intended to support publishing approval/status lists for
things like PID providers, wallet providers, wallet relying-party access
certificate providers, and public-sector EAA issuers, i.e. more "who is
approved" registries that function as verifier-consumable trust anchors.

Why does there need to be a list of entities doing the right things, when
there could just be real-time verifiability of any entity doing the right
thing: not a certification process, but something self-certifying?
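
For concreteness, a minimal sketch of the two models being contrasted
(illustrative data shapes only, not the real LOTL / ETSI TS 119 612 /
TS 119 602 formats): in the list-based model the verifier trusts a signed
registry entry saying an issuer is approved; in a self-certifying model the
issuer carries a root endorsement with every credential and no registry
lookup is needed. Either way the verifier's trust bottoms out in the same
signature checks.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verifies(public_key, signature, data):
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

root = Ed25519PrivateKey.generate()      # e.g. a Commission / Member State key
issuer = Ed25519PrivateKey.generate()    # e.g. a PID provider
issuer_pub = issuer.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

# (a) list-based: the root signs a registry entry naming the approved issuer;
#     the verifier has to fetch and trust that registry.
registry_entry = b"approved-issuer:" + issuer_pub
registry_signature = root.sign(registry_entry)

# (b) self-certifying: the root endorses the issuer key directly; the issuer
#     attaches that endorsement to every credential, no registry fetch needed.
endorsement = root.sign(issuer_pub)

credential = b"over-18: true"
credential_signature = issuer.sign(credential)

assert verifies(root.public_key(), registry_signature, registry_entry)   # model (a)
assert verifies(root.public_key(), endorsement, issuer_pub)              # model (b)
assert verifies(issuer.public_key(), credential_signature, credential)   # both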


4) PID disclosure needed if you want to have legally binding signatures in
Europe as legally binding requires non-repudiable link to signatory -
impossible with identification in beginning. But yes it´s not necessary to
use your PID you can even do it using other mechanism (see Art. 24 (1)) and
yes you could have pseudonymous certificate for privacy preserving
signature.

I don't completely follow what you mean here, but my point was that beyond
the parties that already hold the PID by default, the individual and the
trust anchor, everyone else can do just fine with only pseudonyms. If that
is how it works, then great.
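
A rough sketch of what I mean by "pseudonyms are enough for everyone else"
(purely illustrative, not the Art. 24(1) or pseudonymous-certificate
mechanics): the individual signs with a per-context key, relying parties
verify against that key only, and only the trust anchor keeps the mapping
back to the PID.

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

pid = "PID-FI-1234567"                        # known to the individual and the trust anchor
pseudonym_key = Ed25519PrivateKey.generate()  # what relying parties actually see

pseudonym = hashlib.sha256(
    pseudonym_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
).hexdigest()

# Held only by the trust anchor, disclosed only under due legal process.
trust_anchor_escrow = {pseudonym: pid}

contract = b"employment contract v3"
signature = pseudonym_key.sign(contract)      # the verifier checks this, never the PID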

5) Already exists called ETSI TS 119 312 or SOGIS as NIST does not apply in
Europe. And algorithm valid at time of signature is not sufficient
especially not if you need to proof the signature in 30 years to fulfill
legal requirements e.g. CFR Part 21, EASA Part 11 - means preservation of
signature needed.

Yeah, the signature-strength preservation part makes sense. How it is
handled is another question: is it up to the liable party to keep the
evidence alive through multiple methods available on the market, or is it a
central registry managing every contract ever, one that could compromise
the entire legal system if leaked :-p
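
A simplified sketch of the liable-party option (in the spirit of evidence
records and archive timestamps, not the actual RFC 4998 or ETSI
preservation formats): the holder periodically re-seals the whole evidence
bundle with a currently strong hash and a fresh timestamp, so the proof
does not depend on the original algorithm staying unbroken for 30 years.

import hashlib
import json
import time

def reseal(evidence, hash_name):
    # Wrap the existing evidence in a new layer hashed with a current algorithm.
    # In a real system a timestamping authority would sign this digest.
    blob = json.dumps(evidence, sort_keys=True).encode()
    return {
        "inner": evidence,
        "hash_alg": hash_name,
        "digest": hashlib.new(hash_name, blob).hexdigest(),
        "sealed_at": int(time.time()),
    }

evidence = {
    "document_hash": hashlib.sha256(b"the signed contract").hexdigest(),
    "signature": "base64-signature-made-in-2026",
}
evidence = reseal(evidence, "sha256")    # today
evidence = reseal(evidence, "sha3_256")  # years later, with a newer algorithm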

Also, I was not suggesting that the EU does nothing... I was saying it is
doing too much.

6) Many text on QTSP but recommend to look in certification requirements in
ETSI EN 319 401 and QTSP specific standards like ETSI EN 319 411-1 in case
issuing certificates. It`s difference if you run something in notebook or
QTSP runs something in proven secure environment with proven governance &
processess and e.g. use certified QSCD acc. CEN EN 419 241.

AI SUMMARY OF WHAT A PROVEN SECURE ENVIRONMENT IS:

In the EUDI/QTSP world, a *“proven secure environment”* means the trust
service (e.g., issuing certificates or producing qualified signatures) is
operated inside an independently assessed, tightly controlled environment
that meets formal security-and-process requirements: documented governance
(roles, separation of duties, change control, incident handling), strong
physical/logical access controls, audit logging and monitoring, and
regulated key-management so private keys are generated/held/used under
controlled conditions—typically aligned to ETSI policy/security
requirements for trust service providers and CAs (like *ETSI EN 319 401*
and *ETSI EN 319 411-1*) and often backed by certified crypto hardware /
*QSCDs* evaluated against protection profiles such as *CEN EN 419 241*—so
the assurance is about a governed, audited operational regime, not about
trusting arbitrary software someone ran on an unmanaged machine.

See how a secret value used for cryptographic proof is the actual source of
trust here as well; it is not as if they are running any software that
regular devices cannot run.

Also, you still cannot actually verify whether something was even run on
"certified crypto hardware" without clear identification in advance:
 > cryptography is one and only with clear identification in advance you
have non-repudiable proof of origin

And this once again creates a security liability compared to
decentralization: rotating a hardware-bound value can be quite hard, so
once it is guessed the whole system is, once again, compromised.

What makes you think that an end-user-device secure enclave is any less
secure? It is more secure, because it is the same level of security scoped
to the liability holder, not to every liability ever.
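
A minimal illustration of the point above: from the verifier's side, the
check is byte-for-byte the same whether the key lived in a certified QSCD,
a phone's secure enclave, or a laptop process. What gets proven is
possession of the private key, not the environment it sat in.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verifier_check(public_key, signature, data):
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

data = b"qualified signature payload"
for environment in ("certified QSCD", "phone secure enclave", "laptop process"):
    key = Ed25519PrivateKey.generate()   # stand-in for wherever the key actually lives
    signature = key.sign(data)
    # Identical code path and identical result in every case:
    assert verifier_check(key.public_key(), signature, data), environment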

7)  "There is only one centralized trust anchor here: the verification
material for physical IDs (for example held on an EU blockchain or
similar)."

Nope, there`s not only one trust anchor on an EU Blockchain - even in case
of EBSI. See
https://hub.ebsi.eu/vc-framework/trust-model/issuer-trust-model-v3. Also
the LOTL is distributed trust anchor see ETSI TS 119 612


Well, that's a bit sad, but what I was arguing is that a blockchain,
audited like a PROVEN SECURE ENVIRONMENT, could instead be the one
centralized registry holding the material required for verifying claims
whose trust is derived from a physical ID, as in my example...

8)  "what if they lose their physical ID? Doesn’t matter — they get a new
one. The signatures remain verifiable What if it is stolen? Same as with
credit cards: you can kill it. That event gets timestamped; signatures up
to that point in time remain valid, etc" Yes because we need to
differentiate between PID/eID, QEAA and QES ;-)

Differentiation and separation of concerns is a great practice, and I
advocate for it. As a separation of concerns, though, QEAA and QES could be
a global technical standard with governance-specific trust anchors...
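
As a sketch of the "kill it and timestamp it" rule from the quoted point
(plain datetimes standing in for qualified timestamps and a real status
mechanism): a signature stays acceptable if its trusted signing time
predates the moment the credential was killed.

from datetime import datetime, timezone

# credential id -> the moment the holder killed it (absent = still live)
revocations = {
    "physical-id-9876": datetime(2026, 3, 1, tzinfo=timezone.utc),
}

def signature_still_valid(credential_id, signed_at):
    revoked_at = revocations.get(credential_id)
    return revoked_at is None or signed_at < revoked_at

print(signature_still_valid("physical-id-9876",
                            datetime(2026, 2, 14, tzinfo=timezone.utc)))  # True
print(signature_still_valid("physical-id-9876",
                            datetime(2026, 3, 2, tzinfo=timezone.utc)))   # False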

9)  We already have decentralized trust anchors as also TL de facto
decentralized and active SDO setting standards. But if we
want to use the technology in regulated environments it might be helpful to
meet the law as well to avoid liability and other
legal risks.

You have decentralized the legally binding trust anchor to a selected group
declared on a List of Lists, which is not decentralization at all...

> So exciting times and thanks everybody for this great discussion

I'm also having a great time getting to think deeply against solid
counterarguments!
I'm learning a lot 😊

Regards,
Jori




la 14.2.2026 klo 18.06 Steffen Schwalm (Steffen.Schwalm@msg.group)
kirjoitti:

> Jori,
>
> Maybe it was intentionally ;-)
>
> Beside this:
>
>
>    1. A verifier is always determined by the conditions of applicable use
>    case. Means those conditions define if something accepted or nor or in case
>    of legally binding signature: The law defines what`s legally binding and no
>    verifier can´t change those rules if we want to have legally binding
>    signatures
>
>
>
>    2.
>
>    "where the credential came from” does not add trust by itself; the
>    verifier only gains trust from what can be *verified*
>     cryptographically."
>
>    Not necessarily as origin can be made evident in several ways -
>    cryptography is one and only with clear identification in advance you have
>    non-repudiable proof of origin - in eIDAS with QSeal acc. Section 5 eIDAS
>    resp. ETSI EN319 411-1 (@Dr. Detlef Hühnlein
>    <detlef.huehnlein@ecsec.de> please correct me if I`m wrong.
>
>    Details:
>    https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/ElekSignatur/esig_pdf.pdf?__blob=publicationFile
>
>
>
>    3.
>
>     directive defining what counts as a *legally binding trust anchor*.
>
>    See Section II eIDAS and Art. 22. QTSP (which reference to ETSI TS 119
>    612 which defines the technical trust anchor) is independent from trust
>    anchor as the trust anchor is the LOTL. In case of ToE (e.g. PID provider)
>    it`s ETSI TS 119 602)
>
>
>
>    4. PID disclosure needed if you want to have legally binding
>    signatures in Europe as legally binding requires non-repudiable link to
>    signatory - impossible with identification in beginning. But yes it´s not
>    necessary to use your PID you can even do it using other mechanism (see
>    Art. 24 (1)) and yes you could have pseudonymous certificate for privacy
>    preserving signature.
>
>
>
>    5.
>
>    "a directive stating for example that *NIST-recommended algorithms and
>    key lengths*, valid at the time of signature, must be used."
>
>    Already exists called ETSI TS 119 312 or SOGIS as NIST does not apply
>    in Europe. And algorithm valid at time of signature is not sufficient
>    especially not if you need to proof the signature in 30 years to fulfill
>    legal requirements e.g. CFR Part 21, EASA Part 11 - means preservation of
>    signature needed.
>
>
>
>    6. Many text on QTSP but recommend to look in certification
>    requirements in ETSI EN 319 401 and QTSP specific standards like ETSI EN
>    319 411-1 in case issuing certificates. It`s difference if you run
>    something in notebook or QTSP runs something in proven secure environment
>    with proven governance & processess and e.g. use certified QSCD acc. CEN EN
>    419 241.
>
>
> May you please explain how you achieve similar security level of QSCD
> certified acc, CC-PP in CEN EN 419 241 ony your computer?
>
>
>    7.
>
>    "There is only one centralized trust anchor here: the verification
>    material for physical IDs (for example held on an EU blockchain or
>    similar)."
>
>
> Nope, there`s not only one trust anchor on an EU Blockchain - even in case
> of EBSI. See https://hub.ebsi.eu/vc-framework/trust-model/issuer-trust-model-v3.
> Also the LOTL is distributed trust anchor see ETSI TS 119 612
>
>
>    8.
>
>    "what if they lose their physical ID? Doesn’t matter — they get a new
>    one. The signatures remain verifiable What if it is stolen? Same as with
>    credit cards: you can kill it. That event gets timestamped; signatures up
>    to that point in time remain valid, etc"
>    Yes because we need to differentiate between PID/eID, QEAA and QES ;-)
>
>
> 9) "decentralize trust anchors as much as possible and let active groups
> of technologists set the standards / requiremtns"
>
> We already have decentralized trust anchors as also TL de facto
> decentralized and active SDO setting standards. But if we
> want to use the technology in regulated environments it might be helpful
> to meet the law as well to avoid liability and other
> legal risks
>
>
>
> Best
> Steffen
>
> ------------------------------
> *Von:* Jori Lehtinen <lehtinenjori03@gmail.com>
> *Gesendet:* Samstag, 14. Februar 2026 16:30
> *Bis:* Steffen Schwalm <Steffen.Schwalm@msg.group>
> *Cc:* Christopher Allen <ChristopherA@lifewithalacrity.com>; Manu Sporny <
> msporny@digitalbazaar.com>; public-credentials@w3.org <
> public-credentials@w3.org>
> *Betreff:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>
> *Caution:* This email originated from outside of the organization.
> Despite an upstream security check of attachments and links by Microsoft
> Defender for Office, a residual risk always remains. Only open attachments
> and links from known and trusted senders.
>
> Steffen,
>
> You are unintentionally proving my point 😅
>
> Let’s start by specifying who, exactly, needs to trust something.
>
> In any setting, at any time, trust is always established by a *verifier*:
> a verifier evaluates claims by *verifying signatures* made by other
> actors.
>
> *NOTE:* Any actor can be in the role of a verifier in any setting.
>
> Second invariant: you cannot guarantee that a remote request came from any
> specific actor without cryptography — *never*. Therefore “where the
> credential came from” does not add trust by itself; the verifier only gains
> trust from what can be *verified* cryptographically.
>
> Given that, what is actually needed from legislation?
>
> First: a directive defining what counts as a *legally binding trust
> anchor*. For example: any signature with a capability that can prove
> access to a physical ID in a way that is later verifiable by anyone. This
> does *not* require a (Q)TSP. selx.xyz is an example of this approach.
> Also, it should be obvious, but I’ll state it explicitly: there is *no
> need for PID disclosure at all* here.
>
> Second: a directive stating for example that *NIST-recommended algorithms
> and key lengths*, valid at the time of signature, must be used.
>
> For “verifiable wallet trust” from the POV of the wallet user (the
> individual), a similar approach works: “a recommended implementation from
> an academic or technical standardization body must be used,” and the fact
> that specific source code is being used can itself be verified
> cryptographically.
>
> Even with these laws, “trust” still does not magically exist. Trust only
> happens when a verifier uses these requirements as conditions for *code
> execution* (i.e., accept/reject decisions).
>
> So the role of legislation is to define *TRUST ANCHORS* — what must be
> verified — and those anchors should be dynamic, not static absolutes: they
> must be continuously improved by technologists.
>
> Something like this the following is not necessary, as it does not even
> “prove” any trust in real time to anyone in a verifiable way:
>
> > QTSP becomes QTSP becomes certified by an independent CAB, accredited by
> an independent accreditation body, supervised by a democratically
> legitimated independent body, and operating based on European Standards
> developed by an independent SDO under European law — everything provable by
> independent courts.
>
>  There is no meaningful difference between which website, app, or
> operating system runs the algorithms. You cannot prove what remote computer
> ran what code, just that they have access to some secret value.
>
> Another invariant: I can, at best, verify what code is being run *on my
> computer, right now*.
>
> So wallet trust, which is at best verified on a local machine (is not
> perfect there either), could — instead of relying on selective approved
> bodies — be handled as *signed code execution *for standardized "wallet
> code".
>
> Now let’s run a scenario. Tell me what part needs certified (centralized,
> or a set of centralized) private-sector actors for trust:
>
> I provide an online ERP service.
>
> Two of my users have edited a contract in a real-time editor, and now want
> a legally binding digital signature.
>
> During onboarding, I call a standardized, open-source frontend API with
> signed execution that is widely available on any machine.
>
> It collects GovID info from the user via MRZ, then verifies the data by
> NFC scanning the chip in the GovID, reaching cryptographic verification
> that the claims in the GovID are correct and certified by a government
> body. This is done once per device via the OS, and the OS implementations
> follow an OPEN W3C standard — also the user doesn’t have to repeat this
> beyond once per device.
>
> Then I input the contract data to the API. It prompts user verification;
> the user uses biometrics or a PIN — whatever method of frontend user
> verification. The details get signed and sent to my platform’s third-party
> queue. Once both parties have signed and I have verified the signatures, I
> deliver the contracts to the other parties. They can verify the signatures
> as well. And the signed contract stays in the user’s custody, because it is
> their liability — and they get to choose who to disclose it to, and under
> what conditions, for backup for example.
>
> Same with age verification or anything similar.
>
> There is only one centralized trust anchor here: the verification material
> for physical IDs (for example held on an EU blockchain or similar).
>
> Any platform or app can do that kind of queueing flow, there could even be
> a web standard for "giving up mutual agreements"
>
> So: stop doing more than necessary, and stop centralizing more than
> necessary, or creating weird auditing groups and extra bureaucracy when
> this can be handled with cryptography and decentralized primitives that
> move with the individual.
>
> Now you might ask: what if they lose their physical ID? Doesn’t matter —
> they get a new one. The signatures remain verifiable.
>
> What if it is stolen? Same as with credit cards: you can kill it. That
> event gets timestamped; signatures up to that point in time remain valid,
> etc.
>
> That is just one example. The key goal should be: *decentralize trust
> anchors as much as possible and let active groups of technologists set the
> standards / requiremtns*
>
> Do not extend the legislation aspect beyond what is nessecary.
>
> Regards,
> Jori
>

Received on Saturday, 14 February 2026 18:11:23 UTC