Re: Surveying names for trust states above and below a single VC or DID.

I've already said a good deal on the GitHub thread, but I will restate my
perspective here; maybe it will come across more clearly.

There are two concepts we are trying to standardize, or improve the
standardization of....

1. Establishing an Identity ( DID / id_token )
2. Establishing attributes or capabilities of an Identity ( VC / ZCAPs /
access_token )

In the OIDC world, we rely on a trusted issuer to provide an id_token and
access_token to entities, and holding those tokens grants access to systems.
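To make that concrete, here is a minimal sketch of the RP-side token check,
assuming the `jose` npm package and made-up issuer / client values... note
that every check below still passes if the OP's private key has leaked; that
is exactly the trust we are placing in the OP.

```typescript
import { createRemoteJWKSet, jwtVerify } from "jose";

// The jwks_uri would normally come from the OP's discovery document at
// https://<issuer>/.well-known/openid-configuration (the issuer here is made up).
const jwks = createRemoteJWKSet(
  new URL("https://accounts.example.com/.well-known/jwks.json")
);

async function checkIdToken(idToken: string) {
  // Signature, issuer, and audience checks... all anchored in the OP's keys.
  const { payload } = await jwtVerify(idToken, jwks, {
    issuer: "https://accounts.example.com",
    audience: "my-client-id",
  });
  // sub, email, etc. are only as trustworthy as the OP's opsec.
  return payload;
}
```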

When we use OIDC we are trusting:

1. The OpenID Provider (OP)... Google / Okta / Microsoft ... these providers
hold private keys they use to sign the tokens... if those keys are
compromised, we can be impersonated...

https://news.ycombinator.com/item?id=23362149

"I found I could request JWTs for any Email ID from Apple and when the
signature of these tokens was verified using Apple’s public key, they
showed as valid. This means an attacker could forge a JWT by linking any
Email ID to it and gaining access to the victim’s account."

2. We are also trusting OIDC itself; in a way this is like trusting a DID
Method... we are trusting a standardized way of establishing identity... and
the case above is what happens when the standard is not implemented
correctly.... I am not trying to pick on Apple or the OIDF... there are
standards and there are implementations, and we need to be able to trust both.

3. We are trusting the relying party (RP); here are some of the things we
are trusting the RP to do:
https://infosec.mozilla.org/guidelines/iam/openid_connect.html#session-handling


When using DIDs and VCs, we are making some similar trust assumptions.

1. Instead of trusting the OP, we are trusting the DID Controller to have
good opsec.
2. Instead of trusting OIDC and TLS, we are trusting the DID Method and any
ledger / crypto associated with it.
3. We still trust relying or requesting parties to handle our data
correctly. (A sketch of the first two assumptions follows this list.)
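Here is a minimal sketch of where the trust moves; the types below are
hypothetical, loosely modeled on DID resolution, and not taken from any
particular library.

```typescript
// Simplified shapes; a real DID document has more structure than this.
interface VerificationMethod {
  id: string;
  type: string;
  controller: string;
}

interface DidDocument {
  id: string;
  verificationMethod?: VerificationMethod[];
}

// (2) Trusting the DID Method: `resolve` encapsulates whatever ledger /
// crypto the method relies on, much as TLS + discovery did for an OP.
type Resolver = (did: string) => Promise<DidDocument>;

async function keysFor(did: string, resolve: Resolver) {
  const doc = await resolve(did);
  // (1) Trusting the DID Controller: these keys are only as good as the
  // controller's opsec... a leaked key here is the Apple incident again.
  return doc.verificationMethod ?? [];
}
```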

Regarding material on either side of the VC / id_token...

There is material collected by an issuer that is used to cross-check
credential data / support authentication....

There is material collected by a verifier that is used to cross-check the
credential data / support authentication...

When someone gets a passport issued, they may be required to present
multiple documents and papers... This DIF spec defines a format for making a
presentation of such material...
https://identity.foundation/presentation-exchange/ ... this might be the
first step in obtaining a credential... in other words... authenticating
and providing / presenting documents and credentials is a step that comes
before receiving a credential.
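For flavor, here is an abridged presentation definition in the Presentation
Exchange shape... the ids, purposes, and passport scenario are made up for
illustration, and real definitions carry more detail than this.

```typescript
// An issuer could publish something like this to say "show me these
// documents before I issue you a passport credential" (illustrative only).
const presentationDefinition = {
  id: "passport-issuance-pre-checks", // made-up identifier
  input_descriptors: [
    {
      id: "birth_certificate",
      purpose: "We need evidence of your date and place of birth.",
      constraints: {
        fields: [{ path: ["$.credentialSubject.birthDate"] }],
      },
    },
    {
      id: "proof_of_address",
      purpose: "We need evidence of your current residence.",
      constraints: {
        fields: [{ path: ["$.credentialSubject.address"] }],
      },
    },
  ],
};
```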

I'm not actually sure how the no-fly list works, but I assume it's a list
that is checked regardless of whether the traveler has a valid driver's
license... so this is information a verifier uses to decide if the
credentials provided are sufficient for a holder to proceed..... Another
example is tainted bitcoins, which may at one time have been held by a dark
market.... when the DOJ auctions them off, they sell for less, in part
because every time you use them, you will trigger the flags associated with
suspicious activity (if the vendor uses Chainalysis or similar...). You
can read more about this topic here:
https://www.bitcoininsider.org/article/81896/theres-no-such-thing-tainted-bitcoins

The bitcoin will transfer (if the signature is valid). The digital passport
will verify (if the proof is valid)... but what the verifier does in the
way of additional processing beyond that is up to them... There are a lot
of cases where asking for additional checks makes sense.... like when you
are first interacting with a party, or when you have not seen them in a
long time...
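A minimal sketch of that separation... `verifyProof` stands in for whatever
suite-specific cryptographic verification applies (it is hypothetical, as
are the deny-lists, which play the role of the no-fly list and the
Chainalysis flags):

```typescript
interface SimpleCredential {
  issuer: string;
  credentialSubject: { id: string };
}

async function accept(
  vc: SimpleCredential,
  verifyProof: (vc: SimpleCredential) => Promise<boolean>,
  deniedSubjects: Set<string>,
  deniedIssuers: Set<string>
): Promise<boolean> {
  // The "passport verifies" part: pure cryptography and data checks.
  if (!(await verifyProof(vc))) return false;
  // Additional processing is up to the verifier:
  if (deniedSubjects.has(vc.credentialSubject.id)) return false; // no-fly analog
  if (deniedIssuers.has(vc.issuer)) return false; // compromised-issuer analog
  return true;
}
```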

I personally don't like the words "Verifiable Credential" :) mostly because
"verify" is used, across at least three VC representations, to describe all
of these scenarios (a sketch of checks 1-9 as a pipeline follows the list):

1. The data is in the expected format ( validation )
2. The data contains all the required fields ( validation )
3. The data contains no unexpected fields ( sanitization / validation )
4. The data contains a signature and a key identifier (the data was signed)
5. The signature is verifiable (the key identifier can be dereferenced to
public key bytes)
6. The signature is verified (the data was signed by the key produced in
step 5)
7. The signing key is "active" / "not revoked" (the key is "current" / not
on a key revocation list, such as on a PGP key server)
8. The data issuance date is in the past (the credential is not forward
dated)
9. The data expiration date has not passed (the credential has not expired)
10. The credential subject is not in some deny-list (untrusted subject....
like a suspected terrorist)
11. The credential issuer is not in some deny-list ( untrusted issuer....
like a suspected compromised issuer... like Apple before they patched the
zero-day above ; )
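Here is the promised sketch of checks 1-9 as an ordered pipeline... every
helper below is hypothetical; the point is just that each step can fail
independently, which is why an unqualified "verify" is ambiguous.

```typescript
interface VerifyResult {
  ok: boolean;
  failedStep?: number; // which of checks 1-9 failed
}

interface VerifyHelpers {
  conformsToDataModel: (vc: any) => boolean;                      // checks 1-3
  dereferenceKey: (kid: string) => Promise<Uint8Array | null>;    // check 5
  checkSignature: (vc: any, key: Uint8Array) => Promise<boolean>; // check 6
  keyIsActive: (kid: string) => Promise<boolean>;                 // check 7
}

async function verifyVc(vc: any, h: VerifyHelpers): Promise<VerifyResult> {
  if (!h.conformsToDataModel(vc)) return { ok: false, failedStep: 1 };
  const kid = vc?.proof?.verificationMethod; // check 4: signature + key id present
  if (!vc?.proof || !kid) return { ok: false, failedStep: 4 };
  const key = await h.dereferenceKey(kid);
  if (!key) return { ok: false, failedStep: 5 };
  if (!(await h.checkSignature(vc, key))) return { ok: false, failedStep: 6 };
  if (!(await h.keyIsActive(kid))) return { ok: false, failedStep: 7 };
  const now = Date.now();
  if (!vc.issuanceDate || Date.parse(vc.issuanceDate) > now)
    return { ok: false, failedStep: 8 };
  if (vc.expirationDate && Date.parse(vc.expirationDate) < now)
    return { ok: false, failedStep: 9 };
  return { ok: true }; // 10 and 11 (deny-lists) are verifier policy, not "verify"
}
```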

The VC Spec really focused on 1-9... It correctly left 10 and 11 to the
verifiers to decide... however, not all VC representations chose to define
1-9 fully... so what "verify" means... turned out not to be as helpful as
we might have hoped.... especially when applied to the general concept of a
"VC" as opposed to an "Ed25519Signature2018 Linked Data Proof", an "EdDSA
VC JWT", or a Hyperledger Indy CL Signature Proof....

The VC Data Model didn't have OIDC to define key lookups... and it didn't
have a single representation, so data sanitization is handled differently
for each representation (or not handled)... it didn't rely on a single
suite of registered algorithms, like IANA JOSE... so all the crypto is
represented differently.... and because it didn't have the DID spec or
OIDC.... the concept of "revoked" was really tough to define... there were
no DID Documents or well-known JWKs.... there were no key servers (PGP was
strangely not supported formally)....
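A sketch of why "dereference the key identifier" was so hard to pin down...
OIDC had one answer (a jwks_uri from discovery); the VC Data Model had
several. The shapes and endpoints below are simplified assumptions, not any
spec's normative algorithm.

```typescript
interface MinimalDidDocument {
  verificationMethod?: { id: string }[];
}

async function dereferenceKey(
  keyId: string,
  // Method-specific resolution: a ledger lookup, an HTTPS fetch, etc.
  resolveDid: (did: string) => Promise<MinimalDidDocument>
): Promise<unknown> {
  if (keyId.startsWith("did:")) {
    // DID world: resolve the DID document, then select the verification
    // method by fragment (e.g. did:example:123#key-1).
    const [did] = keyId.split("#");
    const doc = await resolveDid(did);
    return doc.verificationMethod?.find((vm) => vm.id === keyId);
  }
  // OIDC-style world: a well-known JWKS endpoint under an HTTPS issuer.
  const res = await fetch(new URL("/.well-known/jwks.json", keyId));
  return res.json();
}
```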

These issues, coupled with two semi-well-defined representations (JWT and
JSON-LD) and a pseudo-defined ZKP / LD representation... make it very hard
to understand what "verify a VC" actually means.

But IMO this is what it means:

1. The VC matches the format described in the VC Spec.
2. The VC has a proof.
3. The VC Proof can be checked now, and maybe for points in the past as
well (this depends on your VC and on your issuer / subject identifier
choices).
4. The VC has date material for issuance and expiration, and even if the
signature is valid, if the dates are wrong the VC does not verify.

I prefer to separate data model conformance and cryptographic checks from
business logic / credential processing logic...

This is probably a good time for me to stop rambling and say that, despite
all its issues, the VC Spec accomplished a lot, and the DID Spec will cover
many of the obvious gaps in the VC Spec, particularly around the revocation
of keys and thereby the revocation of credentials.

OS



On Mon, Sep 14, 2020 at 4:25 PM Christopher Allen <
ChristopherA@lifewithalacrity.com> wrote:

> On Mon, Sep 14, 2020 at 12:56 PM Orie Steele wrote on a GitHub DID-WG
> issue "Re: [w3c/did-core] need to clarify revocation vs. rotation (#386)
> <https://github.com/w3c/did-core/issues/386#issuecomment-692279254>":
>
>>
>>    1. "verification" is not just does the signatures match.... its what
>>    is the trust context for this... how old is this, how good is the opsec of
>>    the issuer, etc....
>>
>> This raises a problem for me which is that we don't have good language
> for DIDs and VCs in their intermediate states, above and below, and in
> particular between conforming to the data model and "verifiable" and then
> continuing onward toward satisfying a complex trust context.
>
> * Clearly one desirable state is "Verifiable" — but doesn't that mean it
> is not verified yet? Clearly in VCs that is true, if for no other reason
> than that the spec has no required trust model. So let's set that as the
> middle — "Verifiable" is some level of conformity where you have sufficient
> data and proofs such that you can say the VC (or DID) can be verified later.
>
> * What are the states below this level, including both error states
> (invalid, revoked, missing information) and intermediate states in which
> the data is valid but you don't understand the proof (or one of the
> proofs)? Or things like understanding or not understanding all the
> context, but having enough to know you have what you need? What are these
> "pre-verifiable" states called?
>
> * What are the states above the "verifiable" level, including when other
> referenced DIDs or VCs also need to be fetched before the DID or VC can
> be fully passed to a trust model for final approval? What is it actually
> called when you've confirmed everything (all the linked data outside of
> the DID or VC) is verified, but you've not checked things like out-of-band
> revocation? What is it called when you've not passed it through a trust
> model? What is the ultimate result called, when you've done all the work,
> and the trust model at the end says "OK"?
>
> I'd really like to see some clarity here, because others I work with who
> don't have 5+ years of socializing on VC and DID issues get very confused,
> since our current major platforms use different language for these states.
> And the insiders who do have that socialization are making assumptions
> about others' similar words that may not be correct.
>
> For now, can we start with a survey? Please share what YOU call these
> intermediate states above and below a "Verifiable Claim" specifically, and
> also whether they are different from the same states above and below a DID?
>
> In particular, I'd love for Sam to say what they are for KERI, someone from
> Sovrin, someone from DIF, and someone from Digital Bazaar.
>
> Thanks!
>
> — Christopher Allen
>
>
>

-- 
*ORIE STEELE*
Chief Technical Officer
www.transmute.industries

