RE: Utah State-Endorsed Digital Identity (SEDI) legislation

exactly
________________________________
From: Amir Hameed <amsaalegal@gmail.com>
Sent: Wednesday, 18 February 2026 00:02
To: Steffen Schwalm <Steffen.Schwalm@msg.group>
Cc: Jori Lehtinen <lehtinenjori03@gmail.com>; Joe Andrieu <joe@legreq.com>; NIKOLAOS FOTIOY <fotiou@aueb.gr>; Kyle Den Hartog <kyle@pryvit.tech>; Adrian Gropper <agropper@healthurl.com>; Manu Sporny <msporny@digitalbazaar.com>; Filip Kolarik <filip26@gmail.com>; public-credentials <public-credentials@w3.org>
Subject: Re: Utah State-Endorsed Digital Identity (SEDI) legislation



Steffen,

Thank you for the thoughtful engagement. I want to explicitly acknowledge that the concerns you raise are real sources of friction — particularly around key management standards, DID method fragmentation, interoperability, and infrastructure complexity.

My position is not to dismiss these issues, but to suggest that solutions likely exist if we take one step further toward synthesis rather than comparison. From my exploration, it seems possible to satisfy the operational and regulatory constraints you describe while still preserving the architectural principles motivating this work.

To clarify my perspective: I am approaching this primarily as a builder, not as a defender of nation states or of any specific standard or specification. Everyone here is understandably invested, but my focus is on what can be made to work in practice across contexts.

Considering your points, I believe the challenge is less about choosing between models and more about designing a wrapper or integration layer that:

• Accounts for real-world governance and compliance needs
• Preserves existing PKI / eIDAS / VC work efforts
• Reduces ecosystem fragmentation
• Restores meaningful user control
• Maintains healthy, ethical, privacy-preserving properties

Perhaps a constructive next step would be to summarize where we agree and where tensions remain, and then invite broader community input to help converge on approaches that are both useful and usable — for small and large actors alike.

Best,
Amir

On Tue, 17 Feb 2026 at 09:25, Steffen Schwalm <Steffen.Schwalm@msg.group> wrote:
Hi Amir,

It seems we are getting closer to the heart of the subject:


"4. Problems When VC Subject is NOT a DID

When subject identifiers are:

  *   Provider-issued IDs

  *   Database references

  *   Email addresses

  *   Certificate-bound identities

You typically get:

  *   Centralized dependency

  *   Identifier lock-in

  *   Difficult cross-domain portability

  *   Correlation risks controlled by issuers

  *   Complicated key lifecycle management"

  *   As we need to bind a DID to a legal identity (e.g. the PID in the case of eIDAS) to meet legal requirements, we will have those conditions anyway, because a DID is not an identity
  *   As long as the identity is clearly defined, I see no issue with cross-domain portability, correlation risks, or key management. Key management in particular is not a problem: in the case of eID, keys are typically managed in the secure environment of the issuer (a governmental authority), and for attestations of attributes the keys are managed in HSMs or similar
  *   Centralized dependency: yes, but since a DID is not an identity you may have it anyway
  *   Identifier lock-in --> depends on the kind of identity


"A DID provides:

Cryptographic control without central authority

The subject proves control via keys, not registry permission.

  *   No control of key length or fundamental cryptographic standards

Built-in key lifecycle and rotation

Unlike static identifiers:

  *   Keys can rotate

  *   Methods define update and recovery semantics

  *   No need to reissue credentials purely due to key rollover


  *   We have the same in classical PKI with HSMs, but based on standardized cryptographic and security measures

Classic PKI certificates struggle here because:

Certificate expiry does not equal identity expiry


  *   Depends on the identity: if your attestation of attributes or PID expires, the related certificate (or equivalent) expires as well

  *   Revocation overhead is significant

     *   Experience from implementation: running StatusLists for VCs with a DID or without a DID is pretty much the same
  *   Heavy CA dependency creates bottlenecks"

     *   Which ones?

"This is similar to how different PKI hierarchies still use the same X.509 verification mathematics."


  *   No: X.509 certificates are easily provable independently of the use case
  *   One DID method is not necessarily compatible or interoperable with another, and no standard on interoperability exists (besides the one CEN is developing)


"Decentralized resolution model

DID → DID Document → Keys and endpoints enables:

  *   Dynamic verification material discovery

  *   No hard coupling to issuer database"

  *   No: if the VC is issued by a centralized issuer, it is hard-bound to the issuer's database anyway
  *   We should clearly differentiate between the identifier and the issuer of the VC


Separation of concerns

DID handles control and authentication layer

  *   Depends: in the case of a QTSP issuing a QEAA in Europe, the authentication is issued by the QTSP, so the DID is meaningless



"Without DIDs, each ecosystem invents:

  *   Custom subject identifier logic

  *   Custom key binding logic

  *   Custom resolution and discovery logic"

  *   Looking at the 238 DID methods, roughly one per ecosystem, we have pretty much the same thing for DIDs, just in an unstandardized way
  *   Key binding and subject-identifier logic, as well as resolution and discovery logic, are defined for eIDAS in European ETSI standards, independently of the ecosystem and without DIDs

What remains:


Key advantages of DIDs:

  *   Controlled by the user, but binding to a legal identity is possible
  *   Enable scalable, distributed trust registries (trusted issuer/verifier registries)
  *   Combine identity and transaction
  *   Easier linking of objects/AI agents to legal identities


Main issues with DIDs:

  *   No standardized requirements on key management, key length, algorithms, etc.
  *   Separate requirements on infrastructure are needed (WebPKI, DLT, etc.)
  *   238 registered methods without interoperability
  *   Binding to an identity is needed anyway --> so binding to a centralized authority
  *   A revocation mechanism equivalent to PKI is needed

Best
Steffen

PS: Don't get me wrong, there is a reason why I promote DIDs in eIDAS, but I guess we should focus on the subjects where they bring actual advantages and avoid those where they only add complexity.


________________________________
From: Amir Hameed <amsaalegal@gmail.com>
Sent: Tuesday, 17 February 2026 19:37
To: Jori Lehtinen <lehtinenjori03@gmail.com>
Cc: Steffen Schwalm <Steffen.Schwalm@msg.group>; Joe Andrieu <joe@legreq.com>; NIKOLAOS FOTIOY <fotiou@aueb.gr>; Kyle Den Hartog <kyle@pryvit.tech>; Adrian Gropper <agropper@healthurl.com>; Manu Sporny <msporny@digitalbazaar.com>; Filip Kolarik <filip26@gmail.com>; public-credentials <public-credentials@w3.org>
Subject: Re: Utah State-Endorsed Digital Identity (SEDI) legislation



Steffen,

Important points — especially the insistence on separating identifier, identity, and legal recognition. I think we largely agree, but the apparent disagreement sits in the role and necessity of DIDs.

1. "DID is not identity" — Fully Aligned

Yes.

A DID by itself is:

  *   An identifier

  *   Cryptographically controllable

  *   Semantically neutral

No claims, no attributes, no legal meaning.

Your distinction holds:

  *   DID alone → Identifier

  *   DID + VC → Contextual / functional identity

Identity is constructed via attestations, not embedded in the identifier.



2. VC Does NOT Require a DID — Correct

A Verifiable Credential fundamentally needs:

  *   Subject reference

  *   Claims

  *   Issuer signature

That subject reference can be anything:

  *   DID

  *   URI

  *   Account number

  *   Certificate DN

  *   Pairwise pseudonym

So yes — DID is optional at the VC layer.
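
To make that concrete, here is a minimal Python sketch of a credential-shaped structure (loosely following the W3C VC data model; the issuer URL and claim values are invented, and no proof is attached). Nothing about the structure changes when the subject reference is a DID, a URI, or an email address:

# Illustrative only: invented issuer URL and claim values, no proof attached.
def minimal_credential(subject_id: str) -> dict:
    return {
        "@context": ["https://www.w3.org/ns/credentials/v2"],
        "type": ["VerifiableCredential"],
        "issuer": "https://issuer.example/registry",
        "credentialSubject": {"id": subject_id, "role": "licensed-notary"},
        # the issuer's "proof" (signature) would be appended here
    }

# Any of these work as the subject reference; nothing else about the VC changes.
for subject in ("did:example:abc123",
                "https://issuer.example/accounts/42",
                "mailto:holder@example.org"):
    print(minimal_credential(subject)["credentialSubject"]["id"])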



3. Then Why Introduce DIDs at All?

This is the crucial clarification.

DIDs are not introduced because VCs cannot function without them. They are introduced because DIDs solve systemic architectural problems that emerge at scale.



4. Problems When VC Subject is NOT a DID

When subject identifiers are:

  *   Provider-issued IDs

  *   Database references

  *   Email addresses

  *   Certificate-bound identities

You typically get:

  *   Centralized dependency

  *   Identifier lock-in

  *   Difficult cross-domain portability

  *   Correlation risks controlled by issuers

  *   Complicated key lifecycle management



5. What DIDs Add (Beyond "Just Identifier")

A DID provides:

Cryptographic control without central authority

The subject proves control via keys, not registry permission.

Built-in key lifecycle and rotation

Unlike static identifiers:

  *   Keys can rotate

  *   Methods define update and recovery semantics

  *   No need to reissue credentials purely due to key rollover
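
As a rough Python sketch of that point (illustrative only, with a plain dict standing in for a real method's update mechanism and invented key strings): the credential binds to the DID, while the key that rotates lives in the DID Document:

# Illustrative only: the VC binds to the DID; the rotating key lives in the
# DID Document. The multibase strings are invented placeholders.
did = "did:example:456"
did_document = {
    "id": did,
    "verificationMethod": [
        {"id": did + "#key-1", "publicKeyMultibase": "z6Mk-old-placeholder"},
    ],
}
credential_subject = {"id": did, "degree": "BSc"}  # the credential references the DID, not #key-1

# Rotation: the method's update operation replaces the verification material.
did_document["verificationMethod"] = [
    {"id": did + "#key-2", "publicKeyMultibase": "z6Mk-new-placeholder"},
]

# The credential itself is untouched; a verifier simply resolves the DID again
# and picks up #key-2. A static, certificate-bound subject identifier would
# have forced reissuance at this point.
assert credential_subject["id"] == did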

Classic PKI certificates struggle here because:

  *   Certificate expiry does not equal identity expiry

  *   Revocation overhead is significant

  *   Heavy CA dependency creates bottlenecks

Decentralized resolution model

DID → DID Document → Keys and endpoints enables:

  *   Dynamic verification material discovery

  *   No hard coupling to issuer database

Separation of concerns

  *   DID handles control and authentication layer

  *   VC handles claims and semantics

  *   Trust Framework handles policy and legal meaning

Interoperability across ecosystems

Even with many DID methods, the verification model remains uniform:

  *   Resolve DID

  *   Obtain verification keys

  *   Verify proof
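
A minimal Python sketch of those three steps, assuming a toy in-memory "resolver" and Ed25519 keys from the cryptography package; the did:example identifier and document layout are illustrative, not any particular method's real resolution protocol:

# Illustrative only: a uniform verification flow over a toy in-memory "resolver".
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# A toy key pair standing in for the subject's key material.
subject_key = ed25519.Ed25519PrivateKey.generate()
subject_pub = subject_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Step 1: "resolution" -- here just a lookup from DID to DID Document.
DID_DOCUMENTS = {
    "did:example:123": {
        "id": "did:example:123",
        "verificationMethod": [{
            "id": "did:example:123#key-1",
            "type": "Ed25519VerificationKey2020",
            "publicKeyRaw": subject_pub,  # simplified key encoding for the sketch
        }],
    },
}

def verify_proof(did: str, message: bytes, signature: bytes) -> bool:
    # Step 2: obtain verification material from the resolved document.
    method = DID_DOCUMENTS[did]["verificationMethod"][0]
    key = ed25519.Ed25519PublicKey.from_public_bytes(method["publicKeyRaw"])
    # Step 3: verify the proof; this step is the same for every DID method.
    try:
        key.verify(signature, message)
        return True
    except Exception:
        return False

message = b"credential payload"
print(verify_proof("did:example:123", message, subject_key.sign(message)))  # True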

Without DIDs, each ecosystem invents:

  *   Custom subject identifier logic

  *   Custom key binding logic

  *   Custom resolution and discovery logic



6. "238 DID Methods Means No Interoperability"

Important nuance.

Interoperability happens at three layers:

Layer                         Interoperability Status
DID Syntax                    Standardized
DID Resolution                Method-specific
Cryptographic Verification    Uniform

Yes, resolution differs across methods.

But once resolved:

  *   Public keys

  *   Verification relationships

  *   Proof verification

all remain method-agnostic.

This is similar to how different PKI hierarchies still use the same X.509 verification mathematics.



7. "Why DID if Identity and Transaction Are Separated?"

Because DID is not about combining identity and transaction.

It is about:

  *   Stable subject reference

  *   User-controlled cryptographic anchor

  *   Decentralized key management

  *   Cross-domain portability

Even if:

  *   VC contains only contextual claims

  *   Identity remains minimal

  *   No correlation is desired

DIDs still provide:

  *   Pairwise DIDs for relationship-specific identifiers

  *   Disposable DIDs for ephemeral contexts

  *   Context-specific identifiers that limit linkability

This actually improves privacy, rather than harming it.
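
As a simplified Python sketch of the pairwise idea: one fresh key pair, and therefore one identifier, per relying party. The did:pairwise prefix and hex encoding are invented for the example; a real wallet would use a registered method such as did:key with its multicodec/multibase encoding:

# Illustrative only: one fresh key pair (and thus one identifier) per relying
# party, so identifiers cannot be correlated across contexts.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

class PairwiseWallet:
    def __init__(self):
        self._keys = {}  # relying party -> private key for that relationship

    def did_for(self, relying_party: str) -> str:
        # A new key pair per relationship; reusing it elsewhere would
        # reintroduce exactly the correlation this pattern avoids.
        if relying_party not in self._keys:
            self._keys[relying_party] = ed25519.Ed25519PrivateKey.generate()
        pub = self._keys[relying_party].public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        return "did:pairwise:" + pub.hex()

wallet = PairwiseWallet()
print(wallet.did_for("bank.example"))    # identifier seen only by the bank
print(wallet.did_for("clinic.example"))  # unrelated identifier for the clinic
print(wallet.did_for("bank.example"))    # stable within the same relationship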



8. Your Primitive Model — Strong Observation

You wrote:

Identifier
Credential
Signature
Trust Framework

Architecturally, this is correct.

DID is simply a specialized identifier class optimized for:

  *   Cryptographic control

  *   Decentralized resolution

  *   Key lifecycle management

VC is indeed:

  *   Claims container plus signature

Registry is:

  *   Trust framework implementation

No disagreement here.



9. When DID is Genuinely Optional

If a system:

  *   Uses issuer-controlled subject IDs

  *   Has centralized key management

  *   Does not need portability across domains

  *   Does not need subject-controlled rotation

Then DID may add unnecessary complexity.

This is a perfectly valid design choice.



10. When DID Becomes Valuable

If goals include:

  *   User control over identifiers

  *   Ecosystem portability

  *   Reduced central dependency

  *   Flexible key lifecycle management

  *   Pairwise privacy strategies

Then DID becomes structurally advantageous.


DIDs are not mandatory for verifiable credentials.

They are architectural enablers for:

  *   Decentralization

  *   Portability across ecosystems

  *   User control

  *   Long-term key agility

The decision is therefore not:

"Do VCs require DIDs?"

but rather:

"What properties do we want our identifier layer to guarantee?"


Best,
Amir

On Sun, 15 Feb 2026 at 22:04, Amir Hameed <amsaalegal@gmail.com> wrote:

Hi Steve,

From an implementation perspective, the linkage problem is largely solvable without introducing new primitives.

A DID is already a cryptographic construct derived from open, well-understood mathematics and protocols. Once generated, it can be registered or referenced within a government or organizational trust registry.

Verification then becomes straightforward:

• During issuance, the authority records a verifiable association with the DID

• When interaction is required, ownership is proven via challenge–response

• The holder demonstrates control of the private key

• The signed challenge is validated against the registry’s stored identifier
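
For concreteness, a minimal Python sketch of this issuance-then-challenge flow, using toy Ed25519 keys and an in-memory registry; the AuthorityRegistry class and did:example identifier are invented for the illustration, not any product's actual API:

# Illustrative only: in-memory registry, toy Ed25519 keys, invented names.
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

class AuthorityRegistry:
    def __init__(self):
        self._entries = {}  # DID -> raw public key recorded at issuance

    def register(self, did: str, public_key_raw: bytes) -> None:
        self._entries[did] = public_key_raw

    def verify_control(self, did: str, challenge: bytes, signature: bytes) -> bool:
        # Validate the signed challenge against the key associated with the
        # DID when the credential was issued.
        key = ed25519.Ed25519PublicKey.from_public_bytes(self._entries[did])
        try:
            key.verify(signature, challenge)
            return True
        except Exception:
            return False

# Issuance: the subject generates the key pair, the authority records the association.
holder_key = ed25519.Ed25519PrivateKey.generate()
holder_did = "did:example:holder-1"
registry = AuthorityRegistry()
registry.register(holder_did, holder_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw))

# Interaction: the verifier issues a fresh challenge, the holder signs it.
challenge = os.urandom(32)
print(registry.verify_control(holder_did, challenge, holder_key.sign(challenge)))  # True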

In our Sirraya One implementation, the government (or authority) maintains the authoritative registry reference, while the subject retains control of the DID and associated VCs/capabilities.

This preserves:

✔ Subject-controlled identifiers

✔ Authority-asserted relationships

✔ Cryptographic verification of control

✔ No dependency on verifier pre-authorization

Which aligns well with the data-centric trust model you described.


Regards,


On Mon, 16 Feb 2026 at 11:23 AM, Jori Lehtinen <lehtinenjori03@gmail.com> wrote:
> Perhaps the productive path forward is:

How do we co-evolve:

• Technical architecture

• Governance frameworks

• Legal enforceability

• Usability at population scale

rather than treating sequencing as a blocker?


It is exactly that!

The bottleneck is not the technologists here, so presentation formats that involve legal text seem absolutely necessary to get past the bottleneck at this point.

Good takes tho Amir!

Regards,

Jori


On Mon, 16 Feb 2026 at 7:44 AM, Amir Hameed <amsaalegal@gmail.com> wrote:

Hi Jori,

I think we’re largely aligned, and I’ve held a similar position throughout.

Legal formalization is clearly necessary for legal trust — that’s an invariant in any system interacting with regulation, liability, and institutional acceptance.

Where I hesitate is the inverse argument:

that a method is “non-viable” simply because it is not yet legally formalized.

Legal frameworks evolve. Architectures shouldn’t be dismissed solely because formal recognition trails implementation — especially when those same methods can be formalized once maturity, interoperability, and assurance models stabilize.

From a technical standpoint, there seems to be broad consensus:

• Cryptographic verifiability

• Privacy preservation

• Lifecycle & revocation models

• Interoperability

• Security guarantees

The friction appears when non-technical constructs risk being elevated into hard technical requirements.

Speaking from experience working in India — an environment with billions of internet users, heterogeneous devices, intermittent connectivity, and real inclusion constraints — identity engineering is less theoretical and far more operational.

We’re building and deploying systems under:

• Scale pressures

• Infrastructure variability

• Usability constraints

• Security and fraud realities

In these “real trenches,” SSI-style models are not abstract ideals; they are practical tools for improving resilience, reducing central points of failure, and enabling privacy-preserving verification.

Legal recognition is essential — but so is allowing technical progress to inform what ultimately gets formalized.

Perhaps the productive path forward is:

How do we co-evolve:

• Technical architecture

• Governance frameworks

• Legal enforceability

• Usability at population scale

rather than treating sequencing as a blocker?


Best regards,

Amir Hameed Mir


On Mon, 16 Feb 2026 at 11:06 AM, Jori Lehtinen <lehtinenjori03@gmail.com> wrote:
I also agree and have agreed all the time.

The legal formalization is an obvious requirement for legal trust.

An invariant!

It is not helpful to say methods do not work because they are not legally formalized.

When they can be legally formalized any time….

Everyone already agrees about the technical requirements.

We have been disagreeing about including parts that have no technical basis as requirements…

I will now focus on formalizing what I think will yield the most in terms of:


• Deployment speed

• Governance design

• Privacy guarantees

• Recovery & lifecycle management

• Real-world adoption



On Mon, 16 Feb 2026 at 7:21 AM, Steffen Schwalm <Steffen.Schwalm@msg.group> wrote:

"So perhaps the question is not SSI vs government systems, but:

How do we accelerate implementation while aligning decentralization, usability, and regulatory trust?

The meaningful debate is about:

• Deployment speed

• Governance design

• Privacy guarantees

• Recovery & lifecycle management

• Real-world adoption



Trust, ultimately, is something we build into the system’s mechanics — not something we merely assert."

  *   Exactly


________________________________
From: Amir Hameed <amsaalegal@gmail.com>
Sent: Monday, 16 February 2026 06:08
To: Steffen Schwalm <Steffen.Schwalm@msg.group>
Cc: Joe Andrieu <joe@legreq.com>; NIKOLAOS FOTIOY <fotiou@aueb.gr>; Kyle Den Hartog <kyle@pryvit.tech>; Adrian Gropper <agropper@healthurl.com>; Manu Sporny <msporny@digitalbazaar.com>; Filip Kolarik <filip26@gmail.com>; public-credentials <public-credentials@w3.org>

Subject: Re: Utah State-Endorsed Digital Identity (SEDI) legislation



Hi all,


I’m not sure we’re framing the discussion in the most productive way.


Self-sovereign identity (SSI) is no longer purely conceptual — elements of it already exist in our day-to-day digital interactions. The core shift is not simply about what identity model we choose, but about user control, privacy, and how trust is established.


A web-of-trust perspective reminds us that trust is not something we can centrally declare or predefine. It emerges from verifiable interactions, cryptographic proofs, and governance frameworks — rather than being assumed by default.


Decentralized identifiers (DIDs), for example, allow identity to function more like a network address anchored in cryptography instead of a record anchored solely in an institution. This introduces properties such as portability, reduced correlation, and resistance to single points of control. These characteristics are not theoretical; they are already being implemented and tested in real systems.


That said, government-led frameworks like EUDI or SEDI play an important role in:

• Legal recognition

• Interoperability at scale

• Liability and assurance models

• Cross-border acceptance


So perhaps the question is not SSI vs government systems, but:


How do we accelerate implementation while aligning decentralization, usability, and regulatory trust?


The meaningful debate is about:

• Deployment speed

• Governance design

• Privacy guarantees

• Recovery & lifecycle management

• Real-world adoption


Trust, ultimately, is something we build into the system’s mechanics — not something we merely assert.


Regards,

Amir Hameed Mir

Founder of Sirraya Labs


On Mon, 16 Feb 2026 at 10:26 AM, Steffen Schwalm <Steffen.Schwalm@msg.group> wrote:
Joe,


  1.  A trusted issuer registry shows that somebody is allowed to issue something and is trustworthy because they are in the registry

  *   Controls on trusted issuers work in parallel, e.g.
      - the requirement to report security breaches within 24 hours
      - the possibility for the supervisory body (SB) to start an investigation or a new conformity assessment by a CAB
  *   Conformity assessment based on provable international standards
  *   Clear liability for the QTSP

2) This means there is nothing dangerous in Nikos's arguments: it is pretty much similar to the root stores of browsers, but in the hands of trustworthy authorities

3) There is no centralized but rather a distributed system, as we have > 250 QTSPs, 31 TLs, and n CABs

So I recommend that we discuss this alongside the actual regulation and the eIDAS technical framework.

Best
Steffen


________________________________
From: Joe Andrieu <joe@legreq.com>
Sent: Monday, 16 February 2026 05:45
To: NIKOLAOS FOTIOY <fotiou@aueb.gr>
Cc: Kyle Den Hartog <kyle@pryvit.tech>; Adrian Gropper <agropper@healthurl.com>; Manu Sporny <msporny@digitalbazaar.com>; Steffen Schwalm <Steffen.Schwalm@msg.group>; Filip Kolarik <filip26@gmail.com>; public-credentials <public-credentials@w3.org>
Subject: Re: Utah State-Endorsed Digital Identity (SEDI) legislation



On Sun, Feb 15, 2026 at 1:15 PM NIKOLAOS FOTIOY <fotiou@aueb.gr> wrote:
> “[browsers] don't have to "prove their code is secure” before engaging with a website during a regulated activity”.

This is not true. Browsers have done this implicitly, and many web sites trust "well-known" browsers. If you try to access a web page with an "unknown" or old browser, you are denied access. Try, for example, "curl https://www.aa.com/".


This is a wonderful example of how we are talking past each other.  In a previous email you also suggested curl  https://www.us.emb-japan.go.jp/itpr_en/travel_and_visa.htm


And, in fact, a simple curl request to that URL fails. Fascinating. That surprised me.

However, if you install curl-impersonate, those two URLs open up like a jack-in-the-box on the third turn of the crank.

curl_ff98 https://www.us.emb-japan.go.jp/itpr_en/travel_and_visa.html

curl_ff98 https://www.aa.com


So, I acknowledge that you are correct. Apparently, both American Airlines and the Japanese Embassy "maintain a list" of approved browsers. A useless list, but the filtering is happening.

Unfortunately, they lack a way to prevent impersonating browsers from accessing content.

The thing is, you *can* maintain a list of approved browsers, but it's not actually going to stop bad actors. At most, it will keep naive actors from taking advantage of your system.

Making matters worse, every browser with a developer mode allows current users nearly unlimited access to the browser. It's trivially hackable. The idea of a secure client is simply unrealistic.

The situation is much like the truism that "Locks don't keep thieves out; locks keep honest people honest." The only thing that "approved" lists achieve is preventing a bunch of potentially legitimate requests in exchange for the false hope that you are preventing malicious activity. You aren't actually preventing non-standard browsers from accessing your site. You're only preventing non-criminals from accessing your services in their preferred way. You think you're making things better, but you're actually preventing innovation in the client's processing context. I know. I've been fighting the Web's ineffective security-by-obfuscation for decades.

More dangerous is the fact that your advocacy creates a false sense of security, literally telling people something is secure when it is not. Seriously, your email here is a dangerous recommendation. For anyone reading, please DO NOT think that approved browser lists actually prevent "unapproved" browser access.

The truism that you can't trust the client is not just a web phenomenon or my opinion; it's a deep cybersecurity principle. You might want to argue with me, but I suggest you do some research before arguing against the combined wisdom of 50+ years of cybersecurity experience.

Seriously, search for "cybersecurity can't trust the client" and you'll see a wealth of diverse opinions explaining in various terms why you actually can't trust the client in cyberspace.

And what we're seeing in the EUDI is the false belief that you can somehow trust "certain" clients, leading to a security architecture that centralizes power in the name of security without actually creating a more secure system.

You may not agree that the bad things that many of us fear are bad. That's fine. Differences in values are fine reasons for differences in policy. However, you cannot legitimately assert that depending on secure clients is effective security. It isn't.

Since the system is ineffective and many people see real harm in its explicit centrality, several of us would love to see the EU shift away from this harmful and inevitably insecure approach.

-j

--
Joe Andrieu
President
joe@legreq.com
+1(805)705-8651
________________________________
Legendary Requirements
https://legreq.com




Received on Wednesday, 18 February 2026 07:28:37 UTC