Re: Utah State-Endorsed Digital Identity (SEDI) legislation

I'm being way too harsh there; the assurance is high, just not as high as it
could be. Please do not interpret it word for word 🙏

On Mon, 16 Feb 2026 at 12:08, Jori Lehtinen (lehtinenjori03@gmail.com) wrote:

> Does the physical hardware structure specifically prevent usage unless it
> receives data that plausibly only the user would know, with *TLS at the
> hardware level*? And if I guess the right data and send it, does the
> signature go through?
>
> If either the data transmitted from a user device, though encrypted, is
> not decrypted only at the hardware / firmware level,
>
> or
>
> I can spoof, replay, or hijack the SAD even in theory,
>
> then the assurance of *sole control* is weak, and never logically true.
>
> There are gaps in the decentralized model, but they are scoped to an
> individual's mistake of losing a physical item.
>
> Again, I do not care if this exists, as long as other models with
> higher assurance are accepted.
>
> On Mon, 16 Feb 2026 at 12:00, Steffen Schwalm (Steffen.Schwalm@msg.group)
> wrote:
>
>> Nope, see Sections 7.4 and 7.5 of ETSI EN 319 401, and Sections 4.2.1,
>> 6.4.2, 6.4.5, 6.5.2, 6.5.5, 6.5.7, and 6.8.4 of ETSI EN 319 411-1.
>>
>>
>> ------------------------------
>> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
>> *Sent:* Monday, 16 February 2026 10:55
>> *To:* Steffen Schwalm <Steffen.Schwalm@msg.group>
>> *Cc:* Amir Hameed <amsaalegal@gmail.com>; NIKOLAOS FOTIOY <fotiou@aueb.gr>;
>> Joe Andrieu <joe@legreq.com>; Kyle Den Hartog <kyle@pryvit.tech>; Adrian
>> Gropper <agropper@healthurl.com>; Manu Sporny <msporny@digitalbazaar.com>;
>> Filip Kolarik <filip26@gmail.com>; public-credentials <
>> public-credentials@w3.org>
>> *Subject:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>>
>>
>> *Caution:* This email originated from outside of the organization.
>> Despite an upstream security check of attachments and links by Microsoft
>> Defender for Office, a residual risk always remains. Only open attachments
>> and links from known and trusted senders.
>> Yeah, you are right: the definition of a QTSP in ETSI EN 319 401 Section
>> 7 and ETSI EN 319 411-1 does not have access. You are right, what I said
>> was wrong. Any individual with access to the very same physical hardware
>> the QTSP uses has write access to everything.
>>
>> On Mon, 16 Feb 2026 at 11:52, Steffen Schwalm (Steffen.Schwalm@msg.group)
>> wrote:
>>
>> The QTSP has no write access to everything, sorry. See ETSI EN 319 401
>> Section 7 and ETSI EN 319 411-1.
>>
>>
>> ------------------------------
>> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
>> *Sent:* Monday, 16 February 2026 10:50
>> *To:* Steffen Schwalm <Steffen.Schwalm@msg.group>
>> *Cc:* Amir Hameed <amsaalegal@gmail.com>; NIKOLAOS FOTIOY <fotiou@aueb.gr>;
>> Joe Andrieu <joe@legreq.com>; Kyle Den Hartog <kyle@pryvit.tech>; Adrian
>> Gropper <agropper@healthurl.com>; Manu Sporny <msporny@digitalbazaar.com>;
>> Filip Kolarik <filip26@gmail.com>; public-credentials <
>> public-credentials@w3.org>
>> *Subject:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>>
>> The key thing here is the QTSP environment has write access to everything
>> that is supposed to be used as auditing material.
>>
>> This is an invariant you cannot deny, they would not be able to execute
>> any operations if they could not execute all operations any similar
>> hardware is capable of.
>>
>> On Mon, 16 Feb 2026 at 11:48, Jori Lehtinen (lehtinenjori03@gmail.com)
>> wrote:
>>
>> Model used in the following two reports: ChatGPT 5.2 Extended Thinking +
>> Web Search.
>> Full ChatGPT conversation, including messages prior to the two reports:
>> https://chatgpt.com/share/69921d1f-c49c-8009-8df6-43267f8f818b
>>
>> --------------------------------------------------------------------------------------------------
>>
>> Below is a threat model you can paste into the thread. It’s written to
>> stay *neutral* and to cite *EU-recognized legal text + referenced
>> standards* (i.e., the stuff that actually matters “in the eyes of EU
>> legislation”).
>> ------------------------------
>> Threat model: malicious (Q)TSP / insider in remote QES (remote QSCD /
>> server signing)
>>
>> Scope
>>
>> We model a *Qualified Electronic Signature (QES)* created using a *remote
>> Qualified Signature Creation Device (remote QSCD)* operated by a *
>> Qualified Trust Service Provider ((Q)TSP/QTSP)*, where signing is
>> triggered via a web portal / remote API and protected (in theory) by strong
>> user authentication and *Signature Activation Data (SAD)*. Remote
>> signing is explicitly contemplated by the eIDAS framework provided it
>> achieves an equivalent security level and keeps signing “under the sole
>> control of the signatory.”
>>
>> Security property the framework is trying to guarantee
>>
>> eIDAS ties legitimacy of advanced/qualified signing to a “sole control”
>> concept:
>>
>>    - Advanced signature creation data must be usable by the signatory
>>      *under their sole control* (high confidence).
>>    - QSCD requirements include that signature-creation data can be
>>      *reliably protected against use by others*. Remote signing is allowed
>>      *if* those properties are preserved by the remote QSCD + procedures.
>>
>> Assets
>>
>>    1. *Signature-creation data* (private key material, typically
>>       non-exportable inside the QSCD/HSM)
>>    2. *SAD / activation evidence* used to authorize each signing operation
>>       (what proves “the user meant it”)
>>    3. *Audit logs / event history* (portal logs, signing records,
>>       timestamps, etc.)
>>    4. *Qualified certificate + validation material* (public key, chain,
>>       revocation status, trust anchors)
>>
>> Trust boundaries (who must be trusted, vs what can be verified)
>>
>>    - A relying party can cryptographically verify “this signature matches
>>      this certificate.”
>>    - But in remote signing, the relying party generally *cannot
>>      cryptographically verify* whether *SAD was genuinely user-controlled*
>>      vs. manufactured/abused inside the QTSP boundary; that becomes an
>>      *assurance/compliance* question. This is exactly why the framework
>>      leans heavily on certification + supervision + liability controls.
>>
>> ------------------------------
>> Adversary
>>
>> *Malicious QTSP*, or an insider / compromised operator inside the QTSP
>> environment, with the ability to:
>>
>>    - Run or modify the signing portal / authorization service,
>>    - Call the signing interface that the remote QSCD/HSM exposes,
>>    - Access or rewrite internal logs,
>>    - Potentially issue/replace certificates (depending on how roles are
>>      deployed in that QTSP).
>>
>> This is the “evil root operator” model—strong, but realistic to analyze
>> because the whole remote model concentrates power.
>> ------------------------------
>> Attack A: “Sign without the human” (unauthorized use of the signing key)
>>
>> *Goal:* produce a perfectly valid QES over arbitrary data *without the
>> signatory’s consent*, by causing the remote QSCD to sign.
>>
>> *Mechanism (high-level):*
>>
>>    1. The QTSP (or attacker inside it) submits signing requests to the
>>       remote QSCD/HSM interface.
>>    2. The only intended “hard stop” is that the QSCD should require *SAD*
>>       (and whatever authentication ceremony produces it) for each signing
>>       operation. Remote signing standards explicitly define SAD-style
>>       activation.
>>    3. If the attacker can *bypass* SAD enforcement *or* can *mint/obtain
>>       SAD* without the user (because the SAD issuance/validation is within
>>       the same compromised administrative domain), they can generate
>>       signatures that are:
>>       - cryptographically valid,
>>       - certificate-valid,
>>       - and externally indistinguishable from legitimate signatures.
>>
>> *Why this matters legally/assurance-wise:*
>> This attack—if possible—directly contradicts the “sole control” and
>> “protected against use by others” requirements the regulation associates
>> with advanced/QES and QSCDs.
>>
>> *What the framework uses to prevent/deter it (not “magic,” but the actual
>> levers):*
>>
>>    - *QSCD certification / evaluation against recognized standards.* The
>>      EU has an implementing decision that lists standards for QSCD
>>      security assessment (commonly referenced in practice around the CEN
>>      419 241 family).
>>    - *Standardized activation protocols.* The EU has an implementing
>>      regulation listing reference standards for remote QSCD services,
>>      including *ETSI TS 119 431-1* (signature activation protocol).
>>    - *Policy/security requirements for server signing components.* ETSI
>>      TS 119 432 is assessed in EU interoperability contexts (CAMSS),
>>      reflecting its relevance to regulated remote signing system design.
>>
>> *Residual risk (the key point):*
>> Even if those standards are followed, the remote model still creates a
>> structural dependency: outsiders verify the signature, but must *trust* that
>> the QTSP-operated activation path really enforced user control. That’s
>> fundamentally harder to make *end-to-end cryptographically
>> self-authenticating* than a signer-controlled device model.
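>> The structural point can be made concrete with a deliberately simplified
>> sketch (standard-library Python only; every name is hypothetical, and an
>> HMAC merely stands in for a real signature scheme, not any actual QSCD
>> API). It shows why SAD issuance and SAD enforcement living in the same
>> administrative domain defeats external verification of sole control:

```python
# Toy model: if the component that issues SAD sits in the same trust
# domain as the component that enforces it, an insider can mint SAD and
# the resulting signature verifies exactly like a user-authorized one.
import hashlib
import hmac
import os

SAD_ISSUER_KEY = os.urandom(32)  # held inside the provider's domain
SIGNING_KEY = os.urandom(32)     # "non-exportable" key inside the HSM


def issue_sad(doc_hash: bytes) -> bytes:
    """Normally produced only after a user authentication ceremony."""
    return hmac.new(SAD_ISSUER_KEY, doc_hash, hashlib.sha256).digest()


def hsm_sign(doc_hash: bytes, sad: bytes) -> bytes:
    """The HSM checks SAD, then signs (HMAC stands in for a signature)."""
    if not hmac.compare_digest(sad, issue_sad(doc_hash)):
        raise PermissionError("SAD check failed")
    return hmac.new(SIGNING_KEY, doc_hash, hashlib.sha256).digest()


def verify(doc_hash: bytes, sig: bytes) -> bool:
    """All a relying party can check: the signature, nothing about SAD."""
    expected = hmac.new(SIGNING_KEY, doc_hash, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)


# Legitimate flow: user authenticates, SAD is issued, HSM signs.
doc = hashlib.sha256(b"contract the user approved").digest()
assert verify(doc, hsm_sign(doc, issue_sad(doc)))

# Insider flow: no user anywhere, yet the output verifies identically,
# because issue_sad() is callable from inside the same trust domain.
forged = hashlib.sha256(b"contract the user never saw").digest()
assert verify(forged, hsm_sign(forged, issue_sad(forged)))
```

>> verify() cannot distinguish the two flows; only the provider's internal
>> records could, and those records are exactly what Attack B targets.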
>> ------------------------------
>> Attack B: “Rewrite history” (log fabrication / selective disclosure)
>>
>> *Goal:* make a false narrative of what happened (or didn’t happen)
>> appear consistent and “audit-ready.”
>>
>> *Mechanism:*
>>
>>    1. Attacker generates unauthorized signatures (Attack A) and/or
>>       selectively signs only some events.
>>    2. Attacker rewrites portal logs / signing transaction records to
>>       match the story they want (or to remove evidence).
>>    3. If challenged, they present internally consistent records.
>>
>> *Why detection is hard:*
>> Because the *signature* validates, disputes collapse onto *process
>> evidence* (“was SAD actually user-controlled at that moment?”), which is
>> largely inside the QTSP boundary.
>>
>> *What the framework does about it (again: governance tools):*
>>
>>    - Mandatory breach notification obligations (including within *24
>>      hours* after awareness for significant impact), which is intended to
>>      force disclosure when integrity is compromised.
>>    - Recurring audits (at least *every 24 months*) by a conformity
>>      assessment body, plus supervisory powers to intervene.
>>
>> These controls are meaningful, but they are not the same thing as a
>> cryptographic impossibility of log rewriting.
>> ------------------------------
>> Attack C: “Validation-material control plane abuse” (trust/verification
>> material)
>>
>> This is about where verifiers get the data needed to validate signatures,
>> and who can manipulate that ecosystem.
>>
>> Where verification material is hosted in today’s eIDAS model
>>
>>    - *Qualified certificates* (public key + identity bindings) are issued
>>      by a CA/QTSP and are typically carried with the signature container
>>      and/or retrievable via CA infrastructure.
>>    - *Revocation status* is provided via CA endpoints (CRLs/OCSP), also
>>      within the trust-service ecosystem.
>>    - *Trust anchors for “qualified status”* are distributed via *Trusted
>>      Lists (TL)* per Member State and the EU “List of the Lists” (LOTL)
>>      mechanism, standardized for machine processing.
>>
>> So: the crypto verification is “mathematical,” but the *authorization of
>> who counts as qualified* is a directory/governance control plane.
>> ------------------------------
>> Why this threat model is relevant to the “trusted software list” debate
>>
>> Even if the wallet ecosystem tries to protect users via certified wallet
>> software, the EUDI ARF simultaneously contains privacy rules that avoid
>> revealing wallet-unit attestation to relying parties (e.g., WUA only shown
>> to PID/attestation providers during issuance, not to relying parties).
>> So relying parties often won’t get a strong software-attestation handle
>> anyway—and your core concern stays: remote signing safety depends on what
>> happens in a domain the user and verifier can’t directly measure.
>> ------------------------------
>> Minimal, standards-aligned statement of the concern
>>
>> Remote QES can be compliant *on paper* because it is anchored in *certification
>> + supervision + liability*, but a “malicious QTSP” threat model
>> highlights a structural limit: *a valid signature cannot, by itself,
>> prove the signatory’s activation/intent if the activation ceremony is
>> administered inside the same domain that can sign and rewrite logs*. The
>> framework mitigates via audited controls and standardized activation
>> protocols, not by giving verifiers an end-to-end cryptographic proof of
>> user-controlled activation.
>> ------------------------------
>>
>> If you want to sharpen the knife further (without becoming unfair), the
>> next move is to phrase a single crisp question back to Steffen/Nikos:
>>
>> *“Which specific evaluated boundary guarantees that SAD cannot be
>> generated or replayed by the QTSP itself, and what is the verifiable
>> artifact a relying party (or the user) can retain to prove that boundary
>> was enforced for a given signature?”*
>> That question forces the discussion onto *mechanisms and artifacts*, not
>> vibes.
>>
>> ------------------------------------------------------------------------------------------------------------
>> Report: Why “low-assurance” client allowlists + remote signing can
>> undercut eIDAS/EUDI’s stated goals
>>
>> 1) Scope and claim being tested
>>
>> This report examines a specific tension in the EU’s trust stack:
>>
>>    - *Policy goal (high level):* legally reliable, cross-border digital
>>      trust that is *user-centric* and *secure*.
>>    - *Implementation pattern (practical):* (a) *trusted software /
>>      certification / allowlists* and (b) *remote signing via
>>      QTSP-operated infrastructure*, defended as “protecting the user”.
>>
>> The core question: *If the system’s threat model includes client
>> impersonation and insider misuse, do “lists + audits + certification”
>> provide the kind of assurance the legislation is trying to achieve, or do
>> they create a dangerous illusion of assurance?*
>> ------------------------------
>> 2) What eIDAS actually demands (the invariants that matter)
>>
>> A. “Sole control” is not optional
>>
>> eIDAS defines an *advanced electronic signature* as one created using
>> signature-creation data that the signatory can use *“under his sole
>> control.”*
>>
>> Likewise, the QSCD requirements in *Annex II* include that
>> signature-creation data *can be reliably protected by the legitimate
>> signatory against use by others.*
>>
>> These are not “nice-to-haves”; they’re the *mechanical* trust claims
>> that justify legal effect.
>>
>> B. Remote signing is explicitly contemplated—*but it must still satisfy
>> “sole control”*
>>
>> The consolidated eIDAS text explicitly talks about *remote* qualified
>> signature creation and the need to ensure the signatory remains in sole
>> control, even when things happen “remotely.”
>>
>> That matters because it sets up the exact failure mode you’re arguing
>> about:
>>
>> Remote signing is *permitted*, but it creates a sharp question: *how
>> does anyone (including the user) validate “sole control” in a way that
>> isn’t just “trust the provider + the auditors”?*
>>
>> ------------------------------
>> 3) What the EU stack uses as assurance mechanisms (and where “low
>> assurance” sneaks in)
>>
>> A. Governance assurance: supervision, incident notification, periodic
>> assessment
>>
>> The model relies heavily on:
>>
>>    - *Security obligations on QTSPs* (risk management, incident handling,
>>      etc.), and *notification duties* for breaches.
>>    - *Conformity assessment* and “qualified” status backed by standards
>>      and auditing.
>>
>> This is real assurance—*but it’s organizational / procedural assurance*.
>>
>> B. Standards acknowledgement: the system is “standards-based,” including
>> remote-signing standards
>>
>> The Commission’s implementing rules enumerate technical standards that
>> are recognized for qualified services, including standards that cover *remote
>> QSCD / remote signature creation device management services* and related
>> protocols.
>>
>> This is key to your argument: *the EU doesn’t merely tolerate remote
>> signing; it standardizes around it.*
>>
>> C. Wallet privacy design (relevant because it shows the user is treated
>> as an active trust boundary)
>>
>> The EUDI ARF high-level requirements include mechanisms to reduce
>> correlation—e.g., per-relying-party presentation behaviors for
>> PIDs/attestations/WUAs.
>>
>> This supports your framing that the *user is part of the verification
>> perimeter*, not just a passive subject.
>> ------------------------------
>> 4) Threat model focused on the disputed risk
>>
>> Assets
>>
>>    1. *A legally-effective signature* (QES / AdES) tied to an identity
>>       and a transaction.
>>    2. *Evidence of user intent/consent* for that signature event.
>>    3. *The long-term verifiability story* (what can be shown later to
>>       auditors/courts/users).
>>
>> Adversaries (the uncomfortable but necessary ones)
>>
>>    - *Client impersonators* (software that looks like an “approved”
>>      client).
>>    - *Compromised wallet device / malware* (steals session context,
>>      coerces signing).
>>    - *QTSP insider / compromised operator plane* (can trigger signing
>>      operations using legitimate infrastructure).
>>    - *Governance failure* (slow detection, incomplete logs, audit gaps).
>>
>> Attack class you’re pointing at (high level, non-operational)
>>
>> Remote signing can fail *without key extraction*:
>>
>>    - The private key stays inside certified hardware, *but the system
>>      still performs signatures whenever the service’s software path
>>      authorizes it*.
>>    - If an insider or compromise can cause the service to authorize “one
>>      more signing event,” you get a signature that is *cryptographically
>>      valid* and *legally meaningful*—even if the user never intended it.
>>
>> That is precisely the kind of failure that “you can’t trust the client”
>> warns about, except here the “client” might be:
>>
>>    - the wallet runtime asking for a remote signature,
>>    - or the internal service component asserting that the user authorized
>>      the signing.
>>
>> ------------------------------
>> 5) The contradiction: where “lists + audits” don’t meet the legislative
>> *goal-shaped* security claim
>>
>> Here’s the clean logical separation:
>>
>> (1) The legislation’s invariant is *cryptographic*: “sole control”
>>
>> eIDAS defines “sole control” as part of what makes an advanced signature
>> advanced.
>> Annex II requires protection against use by others.
>>
>> Those read like *technical* guarantees, not merely “we investigated and
>> think it’s fine.”
>>
>> (2) The remote-signing reality makes “sole control” mostly
>> *non-verifiable* to external parties
>>
>> A relying party can verify a signature mathematically.
>> But it generally *cannot verify* (from the signature alone) that:
>>
>>    - the user saw the document,
>>    - the user approved *that* exact payload,
>>    - the signature activation event wasn’t coerced or fabricated
>>      upstream.
>>
>> So “sole control” becomes:
>>
>>    - *a compliance claim* (“the provider’s process enforces it”), backed
>>      by certification and audits, not
>>    - *an end-to-end verifiable property* (provable by artifacts the user
>>      and relying party can independently validate).
>>
>> That’s the crux: *the law’s trust claim is technical, but the proof is
>> largely institutional.*
>>
>> (3) Trusted-software lists are a *category error* if used as a security
>> boundary
>>
>> Allowlisting “approved” wallets/browsers can reduce *accidental* harm
>> and raise baseline hygiene.
>> But as a security boundary, it fails against impersonation and
>> modification—because “approved client behavior” is not a physical law, it’s
>> a hope with a certificate stapled to it.
>>
>> In other words: allowlists help with “keep honest people honest”
>> problems; they do not stop a determined adversary, and they can *increase
>> systemic risk* by producing a false sense of security (“if it’s on the
>> list, it must be safe”).
>>
>> (4) This undermines the user-centric story *in the exact place it
>> matters most: legal intent*
>>
>> The EU wants citizens to safely rely on digital trust mechanisms at
>> population scale.
>> If remote signing + allowlisting results in signatures that are:
>>
>>    - cryptographically valid,
>>    - institutionally certified,
>>    - but *not independently attributable to user intent* in a way the
>>      user can later defend without trusting the provider’s internal
>>      story,
>>
>> then the system is drifting away from the *spirit* of “sole control”
>> even if it remains inside the *letter* of standards-based compliance.
>>
>> That’s how you get “secure on paper, spooky in court.”
>> ------------------------------
>> 6) What would make it align better (without banning remote signing
>> outright)
>>
>> These are design patterns that convert “institutional assurance” into *verifiable
>> assurance*:
>>
>>    1. *User-held authorization artifact* for every remote signature.
>>       The user should receive a receipt they can store independently: a
>>       signed record binding:
>>       - what was signed (hash),
>>       - when,
>>       - under what policy,
>>       - using what credential,
>>       - and ideally a user-controlled confirmation factor.
>>    2. *User-controlled, non-exportable second factor that is outside the
>>       QTSP plane.*
>>       Example shape: the QTSP can’t trigger signing unless a
>>       user-controlled authenticator approves a transaction-bound
>>       challenge. This shifts “sole control” from “trust us” toward
>>       “prove it.”
>>    3. *Transparency / append-only logging that the user can independently
>>       monitor.*
>>       If the only audit trail lives inside the same organization that can
>>       misbehave, your audit trail is a bedtime story.
>>    4. *Narrow remote signing to cases where it is demonstrably
>>       necessary.*
>>       Treat remote signing as an exception path with stronger,
>>       user-verifiable artifacts—not the default path.
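>> As a toy illustration of patterns 1 and 3 (a sketch only, standard-library
>> Python, all field names hypothetical): each remote-signature event yields
>> a receipt the user keeps, and receipts are hash-chained so a provider that
>> later rewrites history contradicts the copies the user holds:

```python
# Sketch: user-held receipts (pattern 1) linked into a hash chain
# (pattern 3). Tampering with any stored entry breaks the chain that
# the user can re-verify from their own copies.
import hashlib
import json


def receipt(prev_hash: str, doc_hash: str, policy: str, credential: str) -> dict:
    """A record binding what/when/policy/credential to the chain."""
    body = {
        "prev": prev_hash,        # hash-chain link to the previous entry
        "doc_sha256": doc_hash,   # what was signed
        "ts": 1760000000,         # when (fixed here for reproducibility)
        "policy": policy,
        "credential": credential,
    }
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body


def verify_chain(entries: list) -> bool:
    """Anyone holding the receipts can detect insertion/removal/rewrites."""
    prev = "genesis"
    for e in entries:
        body = dict(e)
        body.pop("entry_hash")
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or recomputed != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True


log = []
r1 = receipt("genesis", hashlib.sha256(b"doc-1").hexdigest(), "qes-default", "cert-42")
log.append(r1)
r2 = receipt(r1["entry_hash"], hashlib.sha256(b"doc-2").hexdigest(), "qes-default", "cert-42")
log.append(r2)
assert verify_chain(log)

# A provider that rewrites history breaks the chain the user holds:
log[0]["doc_sha256"] = hashlib.sha256(b"doc-1-forged").hexdigest()
assert not verify_chain(log)
```

>> In a real deployment the entries would additionally be signed, and the
>> log head published somewhere the provider cannot silently rewrite; the
>> hash chain alone only shows the detection mechanism.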
>>
>> ------------------------------
>> 7) Bottom line (the argument you can safely make in the thread)
>>
>>    - eIDAS defines advanced signatures using a *technical* invariant:
>>      signature creation under the signatory’s *sole control*.
>>    - The framework explicitly supports remote signing, but that makes
>>      “sole control” *hard to validate end-to-end* without relying on
>>      institutional controls.
>>    - The recognized assurance stack leans heavily on *standards +
>>      certification + audits*, including for remote QSCD management
>>      services.
>>    - Those mechanisms can be good governance, but they are *not
>>      cryptographic hard stops*—and therefore can conflict with the
>>      *goal-shaped* promise that the user truly remains in control,
>>      especially against insider/compromise threat classes.
>>    - The EUDI ARF’s privacy requirements reinforce that the user is an
>>      active trust boundary—so treating the user as *not a verifier* is
>>      inconsistent with the overall architecture direction.
>>
>> That’s a coherent, EU-text-anchored critique: *the system risks
>> substituting “compliance assurance” for “verifiable assurance” right where
>> the law’s trust claim is strongest.*
>>
>> (And yes, reality is annoying like that. Cryptography is the math of
>> mistrust; regulation is the sociology of mistrust. When they disagree,
>> someone eventually gets a very expensive lawyer.)
>>
>> ----------------------------------------------------------------------------------------
>>
>> There is this term called *zero-trust*, where trust is established only
>> from what can be logically verified.
>>
>> If the European Union is fine with a systems architecture that has logical
>> gaps where individual protections cannot be cryptographically verified,
>> then someone let me know and I won't push this issue any further.
>>
>> I would still like to take the time to craft and present a parallel model
>> with better individual protection that could co-exist with the current
>> models, but allow a better market for trust services and respect for
>> individuals' digital sovereignty.
>>
>> On Mon, 16 Feb 2026 at 11:01, Steffen Schwalm (Steffen.Schwalm@msg.group)
>> wrote:
>>
>> The CEN EN is not a law. So again, please explain your attack alongside
>> the standard, as you assume a behaviour which is not comprehensible to
>> anyone who knows the standard.
>>
>> Thx
>> ------------------------------
>> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
>> *Sent:* Monday, 16 February 2026 09:59
>> *To:* Amir Hameed <amsaalegal@gmail.com>
>> *Cc:* Steffen Schwalm <Steffen.Schwalm@msg.group>; NIKOLAOS FOTIOY <
>> fotiou@aueb.gr>; Joe Andrieu <joe@legreq.com>; Kyle Den Hartog
>> <kyle@pryvit.tech>; Adrian Gropper <agropper@healthurl.com>; Manu Sporny
>> <msporny@digitalbazaar.com>; Filip Kolarik <filip26@gmail.com>;
>> public-credentials <public-credentials@w3.org>
>> *Subject:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>>
>>
>> And my point still stands about the law's requirements not being a hard
>> stop for logical behaviour; a specification is not a hard stop for
>> logically possible behaviour either. If the text does not address physical
>> reality correctly, then the specification is based on false realities.
>>
>> On Mon, 16 Feb 2026 at 10:48, Amir Hameed (amsaalegal@gmail.com) wrote:
>>
>> Hi Steve,
>>
>> Thanks for the detailed breakdown. I want to clarify that my mention of
>> Sirraya One was purely illustrative — to explain the mechanism of proving
>> DID ownership via challenge-response and registry linkage. It was never
>> intended as a product pitch, nor does it require anyone to vet or adopt our
>> implementation. The goal was to show that simple, cryptographically
>> grounded approaches can work in practice, even at scale.
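>> For readers unfamiliar with that mechanism, here is a minimal sketch of
>> the challenge-response flow (standard-library Python; an HMAC over a
>> pre-registered key stands in for the asymmetric signature a real DID
>> method would use, and all names are illustrative, not Sirraya One's API):

```python
# Toy challenge-response proof of DID key control: the verifier issues a
# fresh nonce, the holder answers with a proof bound to that nonce, and
# replayed answers fail against any new nonce.
import hashlib
import hmac
import os
import secrets


class Registry:
    """Maps a DID to its (stand-in) verification material."""

    def __init__(self):
        self.keys = {}

    def register(self, did: str, key: bytes) -> None:
        self.keys[did] = key

    def verify(self, did: str, nonce: bytes, response: bytes) -> bool:
        expected = hmac.new(self.keys[did], nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)


registry = Registry()
holder_key = os.urandom(32)
registry.register("did:example:alice", holder_key)

# Verifier side: a fresh nonce per ceremony prevents replay.
nonce = secrets.token_bytes(16)

# Holder side: proves control of the registered key for THIS nonce.
response = hmac.new(holder_key, nonce, hashlib.sha256).digest()
assert registry.verify("did:example:alice", nonce, response)

# Replaying the same response against a new nonce fails.
assert not registry.verify("did:example:alice", secrets.token_bytes(16), response)
```

>> The nonce binding is what makes the proof live rather than replayable;
>> a real DID method replaces the shared HMAC key with a public key the
>> registry can check without ever holding the holder's secret.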
>>
>> I also want to emphasize a broader point: the world is not limited to EU,
>> US, or UN-centric approaches. India alone represents billions of internet
>> users in highly diverse conditions, and our implementations are tested
>> under real-world constraints — high scale, fragmented connectivity, and
>> heterogeneous devices. Lessons from these environments often differ from
>> theoretical or lab-driven models.
>>
>> Finally, while trade cases like the ones you describe are more complex,
>> the fundamental principle holds: trust can be embedded in the cryptographic
>> links between DID, VCs, and registries. Multi-party verification is an
>> extension of the same mechanism, not a separate conceptual challenge.
>>
>> I’m happy to continue discussing use cases and mechanisms, but my
>> intention here is to focus on practical, field-tested approaches, not
>> organizational validation or alignment.
>>
>>
>> Regards,
>>
>> Amir
>>
>>
>> On Mon, 16 Feb 2026 at 2:13 PM, Steffen Schwalm <Steffen.Schwalm@msg.group>
>> wrote:
>>
>> CEN EN 419 241 is no law but the technical specification according to
>> which the QSCD is built and implemented.
>>
>> That means you still lack a valid explanation, alongside the standard, of
>> how your proposed attack would work.
>>
>> Best
>> Steffen
>>
>>
>> ------------------------------
>> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
>> *Sent:* Monday, 16 February 2026 09:38
>> *To:* Steffen Schwalm <Steffen.Schwalm@msg.group>
>>
>> *Cc:* NIKOLAOS FOTIOY <fotiou@aueb.gr>; Joe Andrieu <joe@legreq.com>;
>> Kyle Den Hartog <kyle@pryvit.tech>; Adrian Gropper <
>> agropper@healthurl.com>; Manu Sporny <msporny@digitalbazaar.com>; Filip
>> Kolarik <filip26@gmail.com>; public-credentials <
>> public-credentials@w3.org>
>> *Subject:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>>
>> The law's requirements ARE NOT A LOGICAL STOP ON BEHAVIOUR.
>>
>> LAW IS NOT A COMPONENT FOR TECHNICAL THREAT MODELLING.
>>
>> But I think I have shown enough. Anyone can do with it what they want.
>> Next I will work on a solution that actually fulfills the goals the EU
>> Legislation has👍
>>
>> On Mon, 16 Feb 2026 at 10:35, Steffen Schwalm (Steffen.Schwalm@msg.group)
>> wrote:
>>
>> They can't extract the keys without notice. See CEN EN 419 241.
>> ------------------------------
>> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
>> *Sent:* Monday, 16 February 2026 09:30
>> *To:* Steffen Schwalm <Steffen.Schwalm@msg.group>
>> *Cc:* NIKOLAOS FOTIOY <fotiou@aueb.gr>; Joe Andrieu <joe@legreq.com>;
>> Kyle Den Hartog <kyle@pryvit.tech>; Adrian Gropper <
>> agropper@healthurl.com>; Manu Sporny <msporny@digitalbazaar.com>; Filip
>> Kolarik <filip26@gmail.com>; public-credentials <
>> public-credentials@w3.org>
>> *Subject:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>>
>>
>> They can't extract the keys used for signing. The signature is only
>> produced upon certain data being presented, but that is the software
>> layer. They can use other software to interact with the hardware.
>>
>> On Mon, 16 Feb 2026 at 10:27, Jori Lehtinen (lehtinenjori03@gmail.com)
>> wrote:
>>
>> I actually spent last night reading about it, how it works, and what
>> components there are. There is nothing stopping the (Q)TSP from using the
>> hardware in their custody...
>>
>> On Mon, 16 Feb 2026 at 10:26, Steffen Schwalm (Steffen.Schwalm@msg.group)
>> wrote:
>>
>> Jori,
>>
>> May you please explain, alongside CEN EN 419 241, how EUDI is "allowing a
>> remote signing flow that allows a potentially malicious actor within the
>> (Q)TSP use the privateKey representing you (not extract... use) to sign
>> and fabricate any history they want that would remain verifiable in court,
>> while making the local QSCD (Qualified Signature Creation Device) require
>> a weird certificate"
>>
>> The QSCD contains verifiable hardware and software, bound not only to
>> keys you control.
>>
>> It would make it much easier to discuss if you could show where exactly
>> in the QSCD you see the issue.
>>
>> Thx
>>
>>
>> ------------------------------
>> *From:* Jori Lehtinen <lehtinenjori03@gmail.com>
>> *Sent:* Monday, 16 February 2026 09:22
>> *To:* NIKOLAOS FOTIOY <fotiou@aueb.gr>
>> *Cc:* Joe Andrieu <joe@legreq.com>; Kyle Den Hartog <kyle@pryvit.tech>;
>> Adrian Gropper <agropper@healthurl.com>; Manu Sporny <
>> msporny@digitalbazaar.com>; Steffen Schwalm <Steffen.Schwalm@msg.group>;
>> Filip Kolarik <filip26@gmail.com>; public-credentials <
>> public-credentials@w3.org>
>> *Betreff:* Re: Utah State-Endorsed Digital Identity (SEDI) legislation
>>
>> Nikos,
>>
>> Does the EUDI protect the user when it allows a remote signing flow that
>> lets a potentially malicious actor within the (Q)TSP use the privateKey
>> representing you (not extract it... use it) to sign and fabricate any
>> history they want, history that would remain verifiable in court? At the
>> same time, the local QSCD (Qualified Signature Creation Device) requires
>> a weird certificate instead of verifiable software behaviour with
>> information bound only to an item you control by default. That item
>> probably won't have a convenient API widely available (HUGE ASSUMPTION ON
>> THE AVAILABILITY), resulting in remote signing being the default. What is
>> worse, you as an individual cannot contribute to the durability of the
>> items required to verify your claims.
>>
>>
>> --------------------------------------------------------------------------------------------------
>>
>> Model: GPT 5.2 Extended Thinking + Web Search
>> AI-refined answer below; source here:
>> https://chatgpt.com/share/6992d17b-8af4-8009-abad-c4b6d66e5909
>>
>> What you are also missing is that you as a user are in the role of a
>> verifier.
>>
>> Help me strengthen this response with references to the EU legislation
>> (I pasted in the full exchange with Nikos, which is quoted in full
>> further down in this thread):
>> To elaborate: what you are also missing is that systems that respect the
>> user also give the user the role of a verifier whenever there are things
>> they need to trust.
>>
>> Here’s a tighter, more “EU-text-backed” version you can send. I kept your
>> core argument and anchored the contentious parts to the actual *eIDAS /
>> eIDAS2* wording, plus the EUDI ARF requirement you cited.
>> ------------------------------
>>
>> Nikos,
>>
>> I agree with you on the *goal* (protecting the user), but I think you’re
>> mixing two different security questions:
>>
>>    1. *Protecting the user from malicious software / phishing / spoofed
>>    wallets*
>>
>>    2. *Protecting other parties (and the user-as-a-party) from claims that
>>    are “legally true” even if the signing environment was compromised or
>>    misused*
>>
>> The European Digital Identity Wallet framework is explicitly
>> user-centric: it is meant to provide secure access to services “*while
>> having full control over their data*,” and the wallet must let the user
>> request/obtain/store/present data “*under the sole control of the user*.”
>> It also bakes in anti-tracking requirements (no transaction
>> tracking/linking/correlation unless explicitly authorised by the user).
>>
>> So yes: *lists/certification are a reasonable tool for user protection* in
>> that model (the EU even mandates publication of a list of *certified* EUDI
>> Wallets).
>> And the ARF goes further in privacy terms: it says a Wallet Unit must
>> release a WUA only to a PID Provider or Attestation Provider—*not to a
>> Relying Party or any other entity*.
>>
>> But none of that answers the *other* question: *what protects against
>> misuse of signing capability—especially in remote signing models—when the
>> verifier (and the user) cannot independently validate how the key was used?*
>>
>> eIDAS is extremely clear about the invariant it wants: advanced
>> signatures must be created using signature-creation data that the signatory
>> can use “*under his sole control*.”
>> And Annex II requires that signature-creation data “*can be reliably
>> protected by the legitimate signatory against use by others*.”
>>
>> Now the important bit: the amended eIDAS text explicitly contemplates
>> *remote* qualified signature creation devices and services. It states
>> that generating/managing signature-creation data (and even duplicating it
>> for backup) is carried out *on behalf of the signatory* and *by a
>> qualified trust service provider* providing a qualified service for
>> managing a *remote* QSCD.
>> That is a legal/assurance model that fundamentally depends on the QTSP’s
>> controls being correct—because neither the relying party nor the signatory
>> can typically *cryptographically* verify the QTSP’s internal enforcement
>> in the way they can verify a signature.
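>> The gap can be made concrete with a toy sketch (Python stdlib HMAC
>> standing in for a qualified signature; all names hypothetical):
>> verification proves *which key* signed, never *who invoked* it.

```python
import hashlib
import hmac

# Hypothetical model of remote signing: the provider, not the user,
# holds the signature-creation data for the remote QSCD.
qtsp_held_key = b"signatory-key-held-by-provider"

def remote_sign(data: bytes) -> bytes:
    # Runs inside the provider's infrastructure. Whether the signatory
    # actually authorised this call is a policy/audit claim here,
    # not something the output signature encodes.
    return hmac.new(qtsp_held_key, data, hashlib.sha256).digest()

def verify(data: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(remote_sign(data), sig)

legit = remote_sign(b"contract the signatory authorised")
insider = remote_sign(b"contract the signatory never saw")

# Both signatures verify identically; verification alone cannot
# distinguish authorised use from insider misuse of the key.
assert verify(b"contract the signatory authorised", legit)
assert verify(b"contract the signatory never saw", insider)
```

>> In other words, "sole control" in this model is enforced by the
>> provider's internal controls, which the relying party checks via audit
>> reports, not via anything in the signature itself.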
>>
>> Yes, the regulation piles on supervision (audits at least every 24
>> months, supervisory powers, breach notification “*no later than 24 hours*,”
>> etc.).
>> But that’s *compliance assurance*, not *cryptographic verifiability*.
>>
>> Finally, the part I think you’re missing:
>>
>> *In systems that “respect the user,” the user is not only
>> protected—they’re also put in the role of a verifier whenever something
>> must be trusted.* The eIDAS2 wallet text actually reflects that: Member
>> States must provide validation mechanisms that allow *users* to verify
>> the authenticity/validity of relying parties, and the wallet must provide
>> logs/dashboards so the user can see who they interacted with.
>>
>> So when we talk about “trusted software lists,” we should be precise:
>> they can help protect the *user* from bad clients—but they do not
>> magically make the client “trustworthy,” and they do not resolve the
>> verifier/user-verifier problem in remote signing trust models.
>>
>> Regards,
>> Jori
>>
>> ------------------------------------------------------------------------------------------------------
>>
>> There is no end-to-end verifiable way to ensure the integrity of a remote
>> signing service. Audits don't help when the audited party can write the
>> history.
>>
>> Upon reading the AI-refined answer, I think it is obvious that the
>> *current implementations break EU LAW!!*
>>
>> The "you cannot trust the client" principle applies here!
>>
>> The individual's device is the server, relying on trusted behaviour from
>> a "client" whose behaviour no technically valid mechanism can ever truly
>> guarantee, as demonstrated by the cURL discussion.
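>> A minimal illustration of the principle (hypothetical names; any real
>> attestation scheme is more elaborate, but the server still only ever sees
>> bytes that any client can reproduce):

```python
# Toy illustration of "you can't trust the client": a server-side
# "approved software list" check on a self-reported identity string.
# Any program can send the same bytes as the approved client.

APPROVED_WALLETS = {"ApprovedWallet/1.0"}  # hypothetical allow-list

def server_check(request: dict) -> bool:
    # The server can only inspect what arrives on the wire; the
    # claimed software identity is just data supplied by the client.
    return request.get("wallet") in APPROVED_WALLETS

honest = {"wallet": "ApprovedWallet/1.0", "payload": "real presentation"}
imposter = {"wallet": "ApprovedWallet/1.0", "payload": "forged presentation"}

# The server cannot distinguish the two from the wire data alone.
assert server_check(honest)
assert server_check(imposter)
```

>> Allow-lists of client software can steer honest users toward good
>> clients, but they cannot stop a dishonest client from claiming to be one
>> of them.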
>>
>> Regards,
>> Jori
>>
>>
>> ma 16.2.2026 klo 9.42 NIKOLAOS FOTIOY (fotiou@aueb.gr) kirjoitti:
>>
>>
>> >
>> > More dangerous is the fact that your advocacy creates a false sense of
>> security, literally telling people something is secure when it is not.
>> Seriously, your email here is a dangerous recommendation. For anyone
>> reading, please DO NOT think that approved browser lists actually prevent
>> "unapproved" browser access.
>> >
>> > The truism that you can't trust the client is not just a web phenomenon
>> or my opinion; it's a deep cybersecurity principle. You might want to argue
>> with me, but I suggest you do some research before arguing against the
>> combined wisdom of 50+ years of cybersecurity experience.
>> >
>> > Seriously, search for "cybersecurity can't trust the client" and you'll
>> see a wealth of diverse opinions explaining in various terms why you
>> actually can't trust the client in cyberspace.
>> >
>> >
>>
>> All boils down to who you want to protect. EUDI tries to protect the
>> user. Lists of trusted software are fundamental when you are trying to
>> protect the user. Government officials are recommended to use the Signal
>> App and not any app claiming to use the OTR protocol. The Tor project
>> recommends users to use the Tor browser and explicitly states "Using Tor
>> with other browsers is dangerous and not recommended”.
>>
>> The EUDI DOES NOT try to protect the verifiers. Verifiers do not learn
>> which wallet the user is using and the EUDI ARF explicitly prohibits this
>> (see in Annex 2 of ARF "A Wallet Unit SHALL present a WUA only to a PID
>> Provider or Attestation Provider, as part of the issuance process of a PID
>> or a key-bound attestation, and not to a Relying Party or any other
>> entity.”)
>>
>> Best,
>> Nikos
>>
>>

Received on Monday, 16 February 2026 10:10:16 UTC