Re: Utah State-Endorsed Digital Identity (SEDI) legislation

Hi Steve,

Thanks for the detailed breakdown. I want to clarify that my mention of
Sirraya One was purely illustrative — to explain the mechanism of proving
DID ownership via challenge-response and registry linkage. It was never
intended as a product pitch, nor does it require anyone to vet or adopt our
implementation. The goal was to show that simple, cryptographically
grounded approaches can work in practice, even at scale.
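For illustration, that challenge–response flow can be sketched in a few lines. This is a minimal sketch, not Sirraya One's implementation: the registry here is an in-memory dict, all names are hypothetical, and an HMAC stands in for the asymmetric signature a real DID method would use, so it runs on the Python standard library alone.

```python
import hashlib
import hmac
import secrets

# Hypothetical in-memory registry mapping a DID to its key material.
# A real deployment resolves the DID document from a registry or ledger
# and verifies an asymmetric signature (e.g. Ed25519); the HMAC below is
# a symmetric stand-in so the sketch stays standard-library only.
REGISTRY: dict[str, bytes] = {}

def issue_challenge() -> bytes:
    # The verifier sends a fresh nonce so old responses cannot be replayed.
    return secrets.token_bytes(32)

def respond(key: bytes, challenge: bytes) -> bytes:
    # The holder answers the challenge with the key bound to their DID.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(did: str, challenge: bytes, response: bytes) -> bool:
    # The verifier resolves the DID via the registry and re-derives the
    # expected response; a match proves control of the registered key.
    key = REGISTRY.get(did)
    if key is None:
        return False
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Round trip: register, challenge, respond, verify.
key = secrets.token_bytes(32)
REGISTRY["did:example:holder"] = key
nonce = issue_challenge()
assert verify("did:example:holder", nonce, respond(key, nonce))
# A response bound to one nonce does not verify against another.
assert not verify("did:example:holder", issue_challenge(), respond(key, nonce))
```

The point of the sketch is the shape of the mechanism: a fresh nonce, a proof bound to that nonce, and a registry lookup — nothing more is needed to establish DID ownership.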

I also want to emphasize a broader point: the world is not limited to EU,
US, or UN-centric approaches. India alone represents billions of internet
users in highly diverse conditions, and our implementations are tested
under real-world constraints — high scale, fragmented connectivity, and
heterogeneous devices. Lessons from these environments often differ from
theoretical or lab-driven models.

Finally, while trade cases like the ones you describe are more complex, the
fundamental principle holds: trust can be embedded in the cryptographic
links between DID, VCs, and registries. Multi-party verification is an
extension of the same mechanism, not a separate conceptual challenge.
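To make that concrete, here is a minimal sketch of the same mechanism extended to credentials (all names are hypothetical, and an HMAC again stands in for the asymmetric issuer signature a real VC would carry, keeping the example standard-library only): an issuer binds claims to its DID, and any number of parties independently re-verify that cryptographic link through the registry.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical registry: DID -> key material. Real systems resolve DID
# documents and verify asymmetric signatures; with this symmetric stand-in
# the signer and verifiers share the key, which a real deployment avoids.
REGISTRY = {"did:example:issuer": secrets.token_bytes(32)}

def sign_vc(issuer_did: str, claims: dict) -> dict:
    # The issuer binds the claims to its DID; trust rides on this link.
    payload = json.dumps({"issuer": issuer_did, "claims": claims},
                         sort_keys=True).encode()
    proof = hmac.new(REGISTRY[issuer_did], payload, hashlib.sha256).hexdigest()
    return {"issuer": issuer_did, "claims": claims, "proof": proof}

def verify_vc(vc: dict) -> bool:
    # Any party resolves the issuer DID via the registry and re-checks the
    # proof; no out-of-band trust in the holder is needed.
    key = REGISTRY.get(vc["issuer"])
    if key is None:
        return False
    payload = json.dumps({"issuer": vc["issuer"], "claims": vc["claims"]},
                         sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, vc["proof"])

vc = sign_vc("did:example:issuer", {"role": "exporter"})
# Multi-party verification is the same check run independently by each party.
assert all(verify_vc(vc) for _ in ("customs", "bank", "buyer"))
# Tampered claims break the cryptographic link and fail for every party.
assert not verify_vc(dict(vc, claims={"role": "importer"}))
```

Multi-party verification adds no new primitive: each party runs the identical registry lookup and proof check on its own.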

I’m happy to continue discussing use cases and mechanisms, but my intention
here is to focus on practical, field-tested approaches, not organizational
validation or alignment.


Regards,

Amir


On Mon, 16 Feb 2026 at 2:13 PM, Steffen Schwalm <Steffen.Schwalm@msg.group>
wrote:

> CEN EN 419 241 is not a law but the technical specification according to
> which the QSCD is built and implemented
>
> That means you still lack a valid explanation, grounded in the standard,
> of how your proposed attack would work
>
> Best
> Steffen
>
>
> ------------------------------
> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
> *Sent:* Monday, 16 February 2026 09:38
> *To:* Steffen Schwalm <Steffen.Schwalm@msg.group>
>
> *Cc:* NIKOLAOS FOTIOY <fotiou@aueb.gr>; Joe Andrieu <joe@legreq.com>;
> Kyle Den Hartog <kyle@pryvit.tech>; Adrian Gropper <agropper@healthurl.com>;
> Manu Sporny <msporny@digitalbazaar.com>; Filip Kolarik <filip26@gmail.com>;
> public-credentials <public-credentials@w3.org>
> *Subject:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>
> *Caution:* This email originated from outside of the organization.
> Despite an upstream security check of attachments and links by Microsoft
> Defender for Office, a residual risk always remains. Only open attachments
> and links from known and trusted senders.
> Law's requirements ARE NOT A LOGICAL STOP OF BEHAVIOUR.
>
> LAW IS NOT A COMPONENT FOR TECHNICAL THREAT MODELLING.
>
> But I think I have shown enough. Anyone can do with it what they want.
> Next I will work on a solution that actually fulfills the goals the EU
> Legislation has👍
>
> On Mon 16 Feb 2026 at 10:35, Steffen Schwalm (Steffen.Schwalm@msg.group)
> wrote:
>
> They can't extract the keys without notice. See CEN EN 419241
> ------------------------------
> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
> *Sent:* Monday, 16 February 2026 09:30
> *To:* Steffen Schwalm <Steffen.Schwalm@msg.group>
> *Cc:* NIKOLAOS FOTIOY <fotiou@aueb.gr>; Joe Andrieu <joe@legreq.com>;
> Kyle Den Hartog <kyle@pryvit.tech>; Adrian Gropper <agropper@healthurl.com>;
> Manu Sporny <msporny@digitalbazaar.com>; Filip Kolarik <filip26@gmail.com>;
> public-credentials <public-credentials@w3.org>
> *Subject:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>
>
> They can't extract the keys used for signing material. The signature is
> only produced over certain presented data, but that is the software layer.
> They can use other software to interact with the hardware.
>
> On Mon 16 Feb 2026 at 10:27, Jori Lehtinen (lehtinenjori03@gmail.com) wrote:
>
> I actually spent last night reading about it, how it works and what
> components there are. There is nothing stopping the (Q)TSP from using the
> hardware in their custody...
>
> On Mon 16 Feb 2026 at 10:26, Steffen Schwalm (Steffen.Schwalm@msg.group)
> wrote:
>
> Jori,
>
> Could you please explain, alongside CEN EN 419 241, how the EUDI is
> "allowing a remote signing flow that allows a potentially malicious actor
> within the (Q)TSP to use the privateKey representing you (not extract...
> use) to sign and fabricate any history they want that would remain
> verifiable in court, while making the local QSCD (Qualified Signature
> Creation Device) require a weird certificate"
>
> The QSCD contains verifiable hardware and software, bound not only to keys
> you control.
>
> It would make the discussion much easier if you could show where exactly
> in the QSCD you see the issue.
>
> Thx
>
>
> ------------------------------
> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
> *Sent:* Monday, 16 February 2026 09:22
> *To:* NIKOLAOS FOTIOY <fotiou@aueb.gr>
> *Cc:* Joe Andrieu <joe@legreq.com>; Kyle Den Hartog <kyle@pryvit.tech>;
> Adrian Gropper <agropper@healthurl.com>; Manu Sporny <
> msporny@digitalbazaar.com>; Steffen Schwalm <Steffen.Schwalm@msg.group>;
> Filip Kolarik <filip26@gmail.com>; public-credentials <
> public-credentials@w3.org>
> *Subject:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>
> Nikos,
>
> Does the EUDI protect the user when it allows a remote signing flow in
> which a potentially malicious actor within the (Q)TSP can use the
> privateKey representing you (not extract it... use it) to sign and
> fabricate any history they want, and that history would remain verifiable
> in court? Meanwhile the local QSCD (Qualified Signature Creation Device)
> requires a weird certificate instead of verifiable software behaviour
> bound only to an item you control by default, and that path probably won't
> have a convenient API widely available (a HUGE ASSUMPTION ON AVAILABILITY),
> so remote signing becomes the default. What is worse, you as an individual
> cannot contribute to the durability of the items required to verify your
> claims.
>
>
> --------------------------------------------------------------------------------------------------
>
> Model GPT 5.2 Extended Thinking + Web Search
> AI-refined answer below; source here:
> https://chatgpt.com/share/6992d17b-8af4-8009-abad-c4b6d66e5909
>
> What you are also missing is that you as a user are in the role of a
> verifier.
>
> Help me strengthen this response with references to the EU legislation:
>
> > More dangerous is the fact that your advocacy creates a false sense of
> > security, literally telling people something is secure when it is not.
> > Seriously, your email here is a dangerous recommendation. For anyone
> > reading, please DO NOT think that approved browser lists actually prevent
> > "unapproved" browser access.
> >
> > The truism that you can't trust the client is not just a web phenomenon
> > or my opinion; it's a deep cybersecurity principle. You might want to
> > argue with me, but I suggest you do some research before arguing against
> > the combined wisdom of 50+ years of cybersecurity experience.
> >
> > Seriously, search for "cybersecurity can't trust the client" and you'll
> > see a wealth of diverse opinions explaining in various terms why you
> > actually can't trust the client in cyberspace.
>
> All boils down to who you want to protect. EUDI tries to protect the user.
> Lists of trusted software is fundamental when you are trying to protect
> the user. Government officials are recommended to use the Signal App and
> not any app claiming to use the OTR protocol. The Tor project recommends
> users to use the Tor browser and explicitly states "Using Tor with other
> browsers is dangerous and not recommended”. The EUDI DOES NOT try to
> protect the verifiers. Verifiers do not learn which wallet the user is
> using and the EUDI ARF explicitly prohibits this (see in Annex 2 of ARF "A
> Wallet Unit SHALL present a WUA only to a PID Provider or Attestation
> Provider, as part of the issuance process of a PID or a key-bound
> attestation, and not to a Relying Party or any other entity.”) Best, Nikos
>
> ---> Nikos, Does the EUDI protect the user by allowing a remote signing
> flow that allows a potentially malicious actor within the (Q)TSP use the
> privateKey representing you (not extract... use) to sign and fabricate any
> history they want that would remain verifiable in court, while making the
> local QSCD (Qualified Signature Creation Device) require a weird
> certificate instead of verifiable software behaviour with information only
> bound to an item you control by default that probably won't have a
> convenient API widely available (HUGE ASSUMPTION ON THE AVAILABILITY)
> resulting in remote signing being the default, and what is worse is that
> you as an individual cannot contribute to the durability of items required
> to verify your claims. What you are also missing is that you as a user are
> in the role of a verifier. Regards, Jori
> To elaborate: what you are also missing is that systems that respect the
> user also give you the role of a verifier whenever there are things you
> need to trust.
>
> Here’s a tighter, more “EU-text-backed” version you can send. I kept your
> core argument and anchored the contentious parts to the actual *eIDAS /
> eIDAS2* wording, plus the EUDI ARF requirement you cited.
> ------------------------------
>
> Nikos,
>
> I agree with you on the *goal* (protecting the user), but I think you’re
> mixing two different security questions:
>
>    1. *Protecting the user from malicious software / phishing / spoofed
>    wallets*
>    2. *Protecting other parties (and the user-as-a-party) from claims that
>    are “legally true” even if the signing environment was compromised or
>    misused*
>
> The European Digital Identity Wallet framework is explicitly user-centric:
> it is meant to provide secure access to services “*while having full
> control over their data*,” and the wallet must let the user
> request/obtain/store/present data “*under the sole control of the user*.”
> It also bakes in anti-tracking requirements (no transaction
> tracking/linking/correlation unless explicitly authorised by the user).
>
> So yes: *lists/certification are a reasonable tool for user protection* in
> that model (the EU even mandates publication of a list of *certified* EUDI
> Wallets).
> And the ARF goes further in privacy terms: it says a Wallet Unit must
> release a WUA only to a PID Provider or Attestation Provider—*not to a
> Relying Party or any other entity*.
>
> But none of that answers the *other* question: *what protects against
> misuse of signing capability—especially in remote signing models—when the
> verifier (and the user) cannot independently validate how the key was used?*
>
> eIDAS is extremely clear about the invariant it wants: advanced signatures
> must be created using signature-creation data that the signatory can use “*under
> his sole control*.”
> And Annex II requires that signature-creation data “*can be reliably
> protected by the legitimate signatory against use by others*.”
>
> Now the important bit: the amended eIDAS text explicitly contemplates
> *remote* qualified signature creation devices and services. It states
> that generating/managing signature-creation data (and even duplicating it
> for backup) is carried out *on behalf of the signatory* and *by a
> qualified trust service provider* providing a qualified service for
> managing a *remote* QSCD.
> That is a legal/assurance model that fundamentally depends on the QTSP’s
> controls being correct—because neither the relying party nor the signatory
> can typically *cryptographically* verify the QTSP’s internal enforcement
> in the way they can verify a signature.
>
> Yes, the regulation piles on supervision (audits at least every 24 months,
> supervisory powers, breach notification “*no later than 24 hours*,” etc.).
> But that’s *compliance assurance*, not *cryptographic verifiability*.
>
> Finally, the part I think you’re missing:
>
> *In systems that “respect the user,” the user is not only
> protected—they’re also put in the role of a verifier whenever something
> must be trusted.* The eIDAS2 wallet text actually reflects that: Member
> States must provide validation mechanisms that allow *users* to verify
> the authenticity/validity of relying parties, and the wallet must provide
> logs/dashboards so the user can see who they interacted with.
>
> So when we talk about “trusted software lists,” we should be precise: they
> can help protect the *user* from bad clients—but they do not magically
> make the client “trustworthy,” and they do not resolve the
> verifier/user-verifier problem in remote signing trust models.
>
> Regards,
> Jori
>
> ------------------------------------------------------------------------------------------------------
>
> There is no end-to-end verifiable way to ensure the integrity of a remote
> signing service. Audits don't help when they can write the history.
>
> Upon reading the AI-refined answer, I think it is obvious that the *current
> implementations break EU LAW!!*
>
> The "you cannot trust the client" principle applies here!
>
> The individual's device is their server, relying on trusted behaviour from
> a "client" which there is no technically valid way to ever truly
> guarantee, as demonstrated by the cURL discussion.
>
> Regards,
> Jori
>
>
> On Mon 16 Feb 2026 at 9:42, NIKOLAOS FOTIOY (fotiou@aueb.gr) wrote:
>
>
> >
> > More dangerous is the fact that your advocacy creates a false sense of
> security, literally telling people something is secure when it is not.
> Seriously, your email here is a dangerous recommendation. For anyone
> reading, please DO NOT think that approved browser lists actually prevent
> "unapproved" browser access.
> >
> > The truism that you can't trust the client is not just a web phenomenon
> or my opinion; it's a deep cybersecurity principle. You might want to argue
> with me, but I suggest you do some research before arguing against the
> combined wisdom of 50+ years of cybersecurity experience.
> >
> > Seriously, search for "cybersecurity can't trust the client" and you'll
> see a wealth of diverse opinions explaining in various terms why you
> actually can't trust the client in cyberspace.
> >
> >
>
> All boils down to who you want to protect. EUDI tries to protect the user.
> Lists of trusted software is fundamental when you are trying to protect the
> user.  Government officials are recommended to use the Signal App and not
> any app claiming to use the OTR protocol. The Tor project recommends users
> to use the Tor browser and explicitly states "Using Tor with other browsers
> is dangerous and not recommended”.
>
> The EUDI DOES NOT try to protect the verifiers. Verifiers do not learn
> which wallet the user is using and the EUDI ARF explicitly prohibits this
> (see in Annex 2 of ARF "A Wallet Unit SHALL present a WUA only to a PID
> Provider or Attestation Provider, as part of the issuance process of a PID
> or a key-bound attestation, and not to a Relying Party or any other
> entity.”)
>
> Best,
> Nikos
>
>

Received on Monday, 16 February 2026 08:48:40 UTC