Re: Discussion of did:cel and did:webvh -- this Thursday, Jan 15 9:00 Pacific / 18:00 Central Europe

Hi!

I just watched the recording, interesting stuff.

A couple of points.

As a web service provider, I mainly care about easy access to a durable,
verifiable, and preferably Sybil-resistant identifier that I can bind
meaning to within my service.

As a zero-knowledge web platform provider, I mainly care about the **user
agent's** easy access to a durable and peer-verifiable identifier that
comes paired with a deterministic raw key for symmetric encryption, and
either the ability to directly sign a challenge and export a public JWK I
can use for verification, or, less optimally, a second raw key for
symmetric verification.
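The credential triple described above can be sketched as a single derivation from one high-entropy secret. All names here are illustrative, not from any spec; the symmetric verify key stands in for the "less optimal" case, and HKDF info labels keep the three outputs cryptographically independent.

```typescript
import { hkdfSync } from "node:crypto";

interface UserAgentCredential {
  opaqueId: string;      // locates a resource; no meaning outside the user agent
  verifyKey: Buffer;     // symmetric stand-in for a sign/verify key pair
  encryptionKey: Buffer; // unlocks encrypted state; never leaves the client
}

// Deterministic: the same secret always yields the same triple, and the
// distinct "info" labels make the three outputs independent of each other.
function deriveCredential(secret: Buffer): UserAgentCredential {
  const derive = (info: string) =>
    Buffer.from(hkdfSync("sha256", secret, Buffer.alloc(0), info, 32));
  return {
    opaqueId: derive("opaque-id").toString("base64url"),
    verifyKey: derive("verify-key"),
    encryptionKey: derive("encryption-key"),
  };
}
```

Determinism is what makes the identifier durable: any user agent holding the same secret can re-derive the same triple with no network involved.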

Why? Because this lets me safely construct application state from encrypted
data found locally or restored from cloud backup.

First, I get a high-entropy, non-semantic identifier that I can use to
locate a resource, even though it has no meaning outside the user agent.
Second, I can prove access to a key related to that identifier before
receiving cloud state or connecting to a real-time forwarding unit.
Third, I can encrypt and decrypt snapshots or deltas of the resource with
actors connected to the resource's base point.
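Steps two and three above can be sketched with symmetric primitives standing in for whatever the credential actually provides. This is a minimal sketch under my own assumptions, not any spec's protocol: an HMAC over a server challenge for proof of access, and AES-256-GCM for snapshots.

```typescript
import { createHmac, createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Step two: prove access to a key tied to the identifier before the server
// releases any state. The server sends a random challenge; the client
// returns an HMAC tag over it.
function proveAccess(verifyKey: Buffer, challenge: Buffer): Buffer {
  return createHmac("sha256", verifyKey).update(challenge).digest();
}

// Step three: encrypt a snapshot (or delta) of the resource.
function encryptSnapshot(encryptionKey: Buffer, snapshot: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", encryptionKey, iv);
  const ciphertext = Buffer.concat([cipher.update(snapshot), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptSnapshot(
  encryptionKey: Buffer,
  box: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", encryptionKey, box.iv);
  decipher.setAuthTag(box.tag); // authenticated: tampering fails decryption
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]);
}
```

The server only ever sees the opaque identifier, the challenge tag, and ciphertext; the snapshot contents stay on the client.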

If I start by binding these user-agent-discoverable credentials to an
account that is just another opaque blob with a high-entropy identifier,
the process is: first access the credentials on the user agent, then create
another set of (OpaqueIdentifier, Sign/Verify Key or Key Pair,
EncryptionKey), and store that set so it is discoverable only via a
user-agent-bound credential. The result is a stable, backed-up identity for
a user or account without my knowing anything about the user, or even which
files contain the credential, let alone being able to read them in any
meaningful way.
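One way the wrapping step could look, assuming AES-256-GCM as the wrapping primitive (my choice, not something from the discussion): the account credential set is random, encrypted under the user-agent credential's key, and stored as an opaque blob the service cannot read.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// A fresh (OpaqueIdentifier, verify key, encryption key) set for the account.
function newAccountCredential() {
  return {
    opaqueId: randomBytes(32).toString("base64url"),
    verifyKey: randomBytes(32).toString("base64url"),
    encryptionKey: randomBytes(32).toString("base64url"),
  };
}

// Wrap the account credential so it is discoverable only by holders of the
// user-agent-bound key; to the service it is just ciphertext filed under an
// opaque identifier.
function wrap(userAgentKey: Buffer, credential: object) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", userAgentKey, iv);
  const plaintext = Buffer.from(JSON.stringify(credential));
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function unwrap(userAgentKey: Buffer, box: { iv: Buffer; ciphertext: Buffer; tag: Buffer }) {
  const decipher = createDecipheriv("aes-256-gcm", userAgentKey, box.iv);
  decipher.setAuthTag(box.tag);
  return JSON.parse(
    Buffer.concat([decipher.update(box.ciphertext), decipher.final()]).toString(),
  );
}
```

Losing the user-agent credential only loses the pointer; any other user-agent credential that was granted the same wrapping key can still recover the account set.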

With that stable identity, I can build a distributed state system where
actor software is given that identity and all replicas can later verify it,
achieving full end-to-end encryption and decentralized access control with
eventual consistency.

While this does not directly relate to DID methods in general, I think that
if the goal is a private and decentralized web, the focus has to be on
making one thing possible: a client-side discoverable credential that
somehow includes (OpaqueIdentifier, Sign/Verify Key or Key Pair,
EncryptionKey).

I am currently writing a draft spec and planning a minimal implementation
that demonstrates this using WebAuthn as the initial source of
discoverability. The result would be the ability to fetch and decrypt
application state, as well as a model where peers keep each other
accountable by making local state authoritative, verifiable, and eventually
consistent.

However you decide to move forward, I personally think that client-side
discoverability of identifiers and keys is very important and should work
even without a network. navigator.credentials is a reasonable place for
this. WebAuthn is quite good, but it lacks flexibility in what you can sign
in a way that can be cleanly verified with the public key. I would prefer
signing just a nonce/challenge instead of the entire WebAuthn payload, for
example, and therefore plan to use symmetric material from the PRF
extension.

It is not a problem here if a server sees the symmetric material: it is
pseudo-random, does not unlock the encrypted state, and only allows the
server to filter out traffic that cannot prove access to the resource.
Still, having a public key option at that first step would be ideal. In any
case, cross-platform keys are now backed up via Apple and Google keychains,
so a decentralized alternative for cross-platform WebAuthn credentials
would be very welcome.
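The reason a server seeing the symmetric material is tolerable can be made concrete: one PRF output can be split into two independent keys, so the MAC key a server may observe reveals nothing about the encryption key. In this sketch, `prfOutput` is a stand-in for what the WebAuthn PRF extension would return through navigator.credentials; the labels are my own.

```typescript
import { hkdfSync, createHmac, timingSafeEqual } from "node:crypto";

// Split the PRF output into two keys via HKDF; learning one gives no
// information about the other.
function splitPrfOutput(prfOutput: Buffer) {
  const derive = (info: string) =>
    Buffer.from(hkdfSync("sha256", prfOutput, Buffer.alloc(0), info, 32));
  return {
    macKey: derive("mac-key"),        // may be shown to the server
    encryptionKey: derive("enc-key"), // never leaves the user agent
  };
}

// Client side: sign just the server's nonce/challenge, nothing else.
function signChallenge(macKey: Buffer, challenge: Buffer): Buffer {
  return createHmac("sha256", macKey).update(challenge).digest();
}

// Server side: check the tag before releasing ciphertext or forwarding
// traffic. Constant-time comparison avoids a timing side channel.
function verifyChallenge(macKey: Buffer, challenge: Buffer, tag: Buffer): boolean {
  return timingSafeEqual(signChallenge(macKey, challenge), tag);
}
```

This is exactly the filtering role described above: the server can reject traffic that cannot produce a valid tag, while the encryption key stays client-side.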

That is what I wanted to share. I hope it sparked some new ideas. Thank you.

PS. Sharing resources would happen by discovering peers through another
trusted channel, either by directly messaging a link or via public
profiles, and then running a key + URL exchange protocol there.

Also, if we could start by standardizing how to create applications with
zero-knowledge state, we could then move on to defining a zero-knowledge
resource sharing protocol similar in spirit to OAuth. On top of
standardized web objects like iCalendar, this could make the web fully
private and decentralized without giving up features users are used to.
Except, of course, when a service actually needs to process plain data for
automation or similar purposes. In that case, it would behave like a peer
or third party, making it easier to honor the principle of exposing the
least amount of data for the least amount of time, with explicit user
permission.

Finally, when it comes to key rotation, I am not planning to include it for
now. In most scenarios, the impact of a compromise is scoped to a single
resource or, in the worst case, a single account. I do think CRDT-based
rotation would be possible, as well as garbage collection using an
ACK-style method where updates up to a certain point are acknowledged by
all known non-revoked members of a given resource before causing
information loss.
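The ACK-style garbage collection I have in mind could look roughly like this: deltas up to a sequence number are pruned only once every known, non-revoked member has acknowledged them. The data shapes are illustrative, not part of any draft.

```typescript
interface ResourceLog {
  deltas: Map<number, string>; // sequence number -> delta payload
  acks: Map<string, number>;   // member id -> highest seq acknowledged
  revoked: Set<string>;        // members excluded from the quorum
}

// The safe pruning point is the minimum acknowledgement across all
// non-revoked members; a revoked member can no longer block collection.
function safeCheckpoint(log: ResourceLog): number {
  let min = Infinity;
  for (const [member, seq] of log.acks) {
    if (!log.revoked.has(member)) min = Math.min(min, seq);
  }
  return min === Infinity ? -1 : min;
}

// Drop everything at or below the checkpoint. No information is lost,
// because every active member has confirmed it received those deltas.
function collect(log: ResourceLog): void {
  const cp = safeCheckpoint(log);
  for (const seq of Array.from(log.deltas.keys())) {
    if (seq <= cp) log.deltas.delete(seq);
  }
}
```

A slow but non-revoked member simply pins the checkpoint in place, which is the "before causing information loss" property above.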


Received on Thursday, 15 January 2026 23:54:02 UTC