Re: Utah State-Endorsed Digital Identity (SEDI) legislation

Nikos,

Does the EUDI really protect the user when it allows a remote signing flow
in which a potentially malicious actor inside the (Q)TSP can use the
private key that represents you (not extract it, merely use it) to sign and
fabricate any history they want, a history that would remain verifiable in
court? Meanwhile, the local QSCD (Qualified Signature Creation Device)
requires a device certificate instead of verifiable software behaviour
bound only to an item you control by default, and such devices will
probably lack a convenient, widely available API (a HUGE assumption about
availability), so remote signing becomes the default. What is worse, you as
an individual cannot contribute to the durability of the artefacts required
to verify your claims.
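To make the core objection concrete, here is a minimal Python sketch. HMAC stands in for a qualified signature purely for illustration (real QES uses asymmetric schemes, but the property at issue is the same): verification proves that *the key* was used, never *who* invoked it or whether the signatory authorized the use.

```python
import hashlib
import hmac

# Toy stand-in for the signing key held inside a remote QSCD.
# In the remote model, sign() runs on QTSP infrastructure, not yours.
key = b"signatory-private-key-held-inside-the-remote-QSCD"

def sign(message: bytes) -> bytes:
    """Produce a 'signature' over message with the signatory's key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    """What a court or relying party can check: key usage, nothing more."""
    return hmac.compare_digest(sign(message), signature)

# The signatory signs a real document...
legit = sign(b"contract the user actually approved")
# ...but an insider with USE (not extraction) of the key can sign anything:
forged = sign(b"backdated declaration the user never saw")

assert verify(b"contract the user actually approved", legit)
assert verify(b"backdated declaration the user never saw", forged)
# Both verify identically; the math cannot distinguish authorized use
# from insider misuse.
```

The point of the sketch: "sole control" is a legal assertion about the QTSP's processes, not a property the verifier can check from the signature itself.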

--------------------------------------------------------------------------------------------------

Model: GPT 5.2 Extended Thinking + Web Search
AI-refined answer below; source here:
https://chatgpt.com/share/6992d17b-8af4-8009-abad-c4b6d66e5909

What you are also missing is that you, as a user, are also in the role of a
verifier.

Help me strengthen this response with references to the EU legislation.
(The prompt included my draft question above and Nikos's email, quoted in
full at the bottom of this message.)
To elaborate: systems that respect the user also give the user the role of
a verifier whenever there is something they need to trust.

Here’s a tighter, more “EU-text-backed” version you can send. I kept your
core argument and anchored the contentious parts to the actual *eIDAS /
eIDAS2* wording, plus the EUDI ARF requirement you cited.
------------------------------

Nikos,

I agree with you on the *goal* (protecting the user), but I think you’re
mixing two different security questions:

   1. *Protecting the user from malicious software / phishing / spoofed
   wallets*

   2. *Protecting other parties (and the user-as-a-party) from claims that
   are “legally true” even if the signing environment was compromised or
   misused*

The European Digital Identity Wallet framework is explicitly user-centric:
it is meant to provide secure access to services “*while having full
control over their data*,” and the wallet must let the user
request/obtain/store/present data “*under the sole control of the user*.”
It also bakes in anti-tracking requirements (no transaction
tracking/linking/correlation unless explicitly authorised by the user).

So yes: *lists/certification are a reasonable tool for user protection* in
that model (the EU even mandates publication of a list of *certified* EUDI
Wallets).
And the ARF goes further in privacy terms: it says a Wallet Unit must
release a WUA only to a PID Provider or Attestation Provider—*not to a
Relying Party or any other entity*.
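A hypothetical sketch of what that ARF rule means in practice (the field names below are invented for illustration, not the actual EUDI data model): the relying party receives the attestation and its key-bound proof, but never the WUA, so it cannot apply a trusted-software list of its own.

```python
# WUA: describes the wallet software; shown only to PID/Attestation
# Providers during issuance (per the ARF requirement quoted above).
wallet_unit_attestation = {
    "wallet_provider": "ExampleWallet GmbH",   # illustrative value
    "wallet_version": "3.1.4",                 # illustrative value
}

# What a verifier (Relying Party) actually receives at presentation time.
presentation_to_relying_party = {
    "attestation": {"age_over_18": True},
    "proof": "<key-bound signature over the attestation>",
    # NOTE: no WUA fields here -- the Relying Party cannot learn which
    # wallet produced this presentation.
}

# The verifier can check the claim, but not the client software behind it.
assert presentation_to_relying_party["attestation"]["age_over_18"] is True
assert "wallet_provider" not in presentation_to_relying_party
```

This is exactly the asymmetry in the argument: the trusted-software-list protection exists on the issuance side, not on the verifier side.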

But none of that answers the *other* question: *what protects against
misuse of signing capability—especially in remote signing models—when the
verifier (and the user) cannot independently validate how the key was used?*

eIDAS is extremely clear about the invariant it wants: advanced signatures
must be created using signature-creation data that the signatory can
use “*under
his sole control*.”
And Annex II requires that signature-creation data “*can be reliably
protected by the legitimate signatory against use by others*.”

Now the important bit: the amended eIDAS text explicitly contemplates
*remote* qualified signature creation devices and services. It states that
generating/managing signature-creation data (and even duplicating it for
backup) is carried out *on behalf of the signatory* and *by a qualified
trust service provider* providing a qualified service for managing a
*remote* QSCD.
That is a legal/assurance model that fundamentally depends on the QTSP’s
controls being correct—because neither the relying party nor the signatory
can typically *cryptographically* verify the QTSP’s internal enforcement in
the way they can verify a signature.

Yes, the regulation piles on supervision (audits at least every 24 months,
supervisory powers, breach notification “*no later than 24 hours*,” etc.).
But that’s *compliance assurance*, not *cryptographic verifiability*.

Finally, the part I think you’re missing:

*In systems that “respect the user,” the user is not only protected—they’re
also put in the role of a verifier whenever something must be trusted.* The
eIDAS2 wallet text actually reflects that: Member States must provide
validation mechanisms that allow *users* to verify the
authenticity/validity of relying parties, and the wallet must provide
logs/dashboards so the user can see who they interacted with.
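As a sketch of that user-as-verifier mechanism (the registry contents and function names here are invented for illustration, not the actual eIDAS2 interfaces): the wallet can check a relying party against a published register, and what it is entitled to request, before releasing anything.

```python
# Illustrative relying-party register, as a simple dict.
# eIDAS2 envisions Member-State validation mechanisms; this models only
# the user-side check, not the real registry format.
rp_registry = {
    "rp-001": {"name": "Example Bank", "entitled_to": {"age_over_18"}},
}

def user_verifies_relying_party(rp_id: str, requested_attrs: set) -> bool:
    """Wallet-side check run on the USER's behalf before any disclosure."""
    entry = rp_registry.get(rp_id)
    if entry is None:
        return False  # unknown relying party: refuse outright
    # Refuse over-asking: only registered entitlements may be requested.
    return requested_attrs <= entry["entitled_to"]

assert user_verifies_relying_party("rp-001", {"age_over_18"})
assert not user_verifies_relying_party("rp-001", {"home_address"})
assert not user_verifies_relying_party("rp-999", {"age_over_18"})
```

Note that this is list-based trust again, only with the user doing the verifying; it does nothing for the remote-signing problem above.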

So when we talk about “trusted software lists,” we should be precise: they
can help protect the *user* from bad clients—but they do not magically make
the client “trustworthy,” and they do not resolve the
verifier/user-verifier problem in remote signing trust models.

Regards,
Jori
------------------------------------------------------------------------------------------------------

There is no end-to-end verifiable way to ensure the integrity of a remote
signing service. Audits don't help when the operator can write the history.
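A small stdlib sketch of why (the log structure here is an illustrative assumption, not any QTSP's actual design): a hash-chained signing log looks perfectly self-consistent to an auditor, and an operator who holds the only copy can rewrite entries and recompute the chain. Tampering is detectable only against a checkpoint held *outside* the operator.

```python
import hashlib

def chain(entries: list) -> bytes:
    """Hash-chain a signing log: the head commits to all prior entries."""
    head = b"\x00" * 32
    for entry in entries:
        head = hashlib.sha256(head + entry).digest()
    return head

log = [b"sig#1 user-approved", b"sig#2 user-approved"]
checkpoint = chain(log)  # imagine this published OUTSIDE the operator

# An operator who controls the only copy of the log can rewrite it and
# recompute a perfectly consistent chain over the forged history:
rewritten = [b"sig#1 user-approved", b"sig#2 FABRICATED"]
rewritten_head = chain(rewritten)

# The forgery is detectable ONLY by comparing against the external copy.
assert chain(log) == checkpoint
assert rewritten_head != checkpoint
# Without an externally held checkpoint, an auditor inspecting the
# rewritten log sees a chain that validates end to end.
```

That is the gap between compliance assurance and cryptographic verifiability: without external anchoring, the audit trail is only as trustworthy as the party that writes it.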

Upon reading the AI-refined answer, I think it is obvious that the *current
implementations break EU law!*

The "you cannot trust the client" principle applies here!

The individual's device is the server here, relying on trusted behaviour
from a "client" (the remote service) that there is no technically valid way
to ever truly guarantee, as demonstrated by the cURL discussion.

Regards,
Jori


On Mon, 16 Feb 2026 at 09:42, NIKOLAOS FOTIOU (fotiou@aueb.gr) wrote:

>
> >
> > More dangerous is the fact that your advocacy creates a false sense of
> security, literally telling people something is secure when it is not.
> Seriously, your email here is a dangerous recommendation. For anyone
> reading, please DO NOT think that approved browser lists actually prevent
> "unapproved" browser access.
> >
> > The truism that you can't trust the client is not just a web phenomenon
> or my opinion; it's a deep cybersecurity principle. You might want to argue
> with me, but I suggest you do some research before arguing against the
> combined wisdom of 50+ years of cybersecurity experience.
> >
> > Seriously, search for "cybersecurity can't trust the client" and you'll
> see a wealth of diverse opinions explaining in various terms why you
> actually can't trust the client in cyberspace.
> >
> >
>
> All boils down to who you want to protect. EUDI tries to protect the user.
> Lists of trusted software is fundamental when you are trying to protect the
> user.  Government officials are recommended to use the Signal App and not
> any app claiming to use the OTR protocol. The Tor project recommends users
> to use the Tor browser and explicitly states "Using Tor with other browsers
> is dangerous and not recommended”.
>
> The EUDI DOES NOT try to protect the verifiers. Verifiers do not learn
> which wallet the user is using and the EUDI ARF explicitly prohibits this
> (see in Annex 2 of ARF "A Wallet Unit SHALL present a WUA only to a PID
> Provider or Attestation Provider, as part of the issuance process of a PID
> or a key-bound attestation, and not to a Relying Party or any other
> entity.”)
>
> Best,
> Nikos

Received on Monday, 16 February 2026 08:22:51 UTC