On 21 Nov 2015, at 16:41, Dave Longley <dlongley@digitalbazaar.com> wrote:

On 11/21/2015 11:30 AM, Steven Rowat wrote:
On 11/21/15 7:31 AM, Dave Longley wrote:
On 11/21/2015 02:11 AM, Anders Rundgren wrote:
I'm interested in hearing what's available and what's cooking:
http://indiewebcamp.com/NASCAR_problem

Just the core (and links), no TL;DR BS please.

There's a very simple demo here:

https://authorization.io


Interesting. But I'm not sure it functioned as intended in my browser.
Some steps had a full graphical UI, whereas others, confusingly, printed
raw code on screen.

Yes, the demo is very rough. I sent it primarily for Anders with very
little information, per his request.


I.e., in part of step two, at "Issuer Dashboard" what's between the
'++++++' below appeared instead of buttons that I was expecting. This
happened at two other places; but a GUI was fully functional before and
after, in other steps. Is this the way it's supposed to function at
present? [I'm using Firefox 43 on Mac OS 10.6.8]

If you'd like to try something a bit more fleshed out (but still rough
in places and untested on Mac + Firefox), you can take a look at the
demo we presented on the Credentials CG call a little while back. I
recommend using Chrome.

1. Go to https://demo-idp.identus.org. Sign up for a fake/demo account
there.

2. Go to https://demo-issuer.identus.org. Log in and issue yourself
some credentials.

3. Go to https://demo-consumer.identus.org. Go through the various
demos, presenting credentials from step #2.

This demo has only been tested on Chrome on Windows and Linux, and
Firefox on Linux. Using Ubuntu requires no special configuration, but I
know that if you're using Debian you have to click a box to allow 3rd
party sites to access localStorage via an iframe ... so maybe there is
something that is similarly required for Firefox.

The demo shows three of the players in a potential future Identity
Credentials ecosystem: Identity Providers, Issuers, and Consumers. As a
credential holder, you are in the middle of those players.

You use an IdP to store, provide, and manage your credentials.
Authentication with your IdP is just username+password at the moment,
but this could be anything in the future. The same authentication that
occurs at the issuer (credential-based authentication) can also be used
once you have a decentralized identity.

Issuers issue credentials to you that you store at your IdP ... where
the issuer makes a browser API call to request storage (a potential
target for standardization at W3C). Credentials are digitally signed so
that consumers only need to trust issuers in the system, not your IdP.
This gives you the agility to choose/change IdPs as you see fit -- or to
use whatever IdP you want (run your own, etc).

Consumers consume credentials. They may ask for them using a browser API
call (again, target for standardization). The browser will figure out
who your IdP is so you can provide the credentials.

The browser API is polyfilled using this library:

https://github.com/digitalbazaar/credentials-polyfill

It is meant to be an extension to the Credential Management API that
Mike West has been working on:

https://w3c.github.io/webappsec-credential-management/

The Credentials CG has been in communication with him to ensure its
design allows for these extensions.

In addition to the browser API, another piece is required to polyfill
what the browser must do. The browser needs to be able to discover a
user's IdP; this is polyfilled by the authorization.io website. Part of
the Credentials CG work is to make it possible for people to be in full
control of their own identifiers. This is what the "Decentralized
Identifier" (or DID) is about. This is basically an identifier that you
can claim cryptographic ownership over (by generating a public/private
keypair and making a claim).

I have a feeling that the missing piece here is the resource on which
the user is making the request. I prefer to think of resources needing
authentication, rather than whole sites, as different resources may
require different levels of authentication. We get this on social
networks all the time: some posts require one to be friends with
someone, others family, and others are public. Many people don't have
too many accounts, as each one is a lot of work to maintain. We should
envisage cases that are more complex.

Our use case makes the hyperdata structure of the web site very visible,
so it may well be that the same client needs access to

  (1) http://shop.example/drinks/beer/buy     <- proof of age
  (2) http://shop.example/profile/123/        <- proof of account creation
  (3) http://shop.example/bike/bmwk1200rs/buy <- proof of driver's licence
  (4) http://shop.example/xxx/kalashnikov     <- proof of participation in a well regulated militia

The client may have a policy of only authenticating when needed, and
with the right credentials, as it wants to make sure that prices are
not being tailored upwards to match its profile.

So we take it as a given that, following RESTful architecture, it is
resources that require authentication. Hence we make it easy for any
resource to describe the types of agents and individuals that need
access. As shown in the diagram on the Web Access Control page, each
resource may point to a description showing what types of agents have
access.



http://www.w3.org/wiki/WebAccessControl

In that diagram, the <2013/card> document is open for anyone to read,
and the </2013/protected> resource is only visible to members of a
specific group.
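As a sketch, the ACL descriptions for those two resources might look
like this in Turtle (the acl: vocabulary is the one from the
WebAccessControl wiki; the group URI is made up):

```turtle
@prefix acl:  <http://www.w3.org/ns/auth/acl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Anyone may read the public card document.
[] a acl:Authorization ;
   acl:accessTo <2013/card> ;
   acl:agentClass foaf:Agent ;          # foaf:Agent means "everyone"
   acl:mode acl:Read .

# Only members of a specific group may read the protected resource.
[] a acl:Authorization ;
   acl:accessTo </2013/protected> ;
   acl:agentClass </groups/team#us> ;   # hypothetical group resource
   acl:mode acl:Read .
```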

At present we have only considered WebID authentication, as that was
good enough for our current needs: a few small teams, where we need to
explore many other things including UI frameworks, LDP, etc...

But of course the description of the agents that get access to a
resource could be much richer. The class of agents could be described as
the class of people who can prove they are over 18, with the site
accepting certain actors as verifiers of such proof. There may be a well
established list of such actors for each country. In the case of driving
licences it is clear that, for most countries in the world, the list of
agencies able to prove the ability to drive is quite restricted. So each
country could list its agencies, and list the agencies in other
countries that fulfill the same role.

A client that comes across such a resource would then 

1. receive a 401 response
2. follow the link to the Web ACL resource
3. discover that it needs to show proof of driving ability, and what the relevant
   authorities for such claims are (the site may point to a few, but the client
   may also compare that with its trust anchors to find an overlap). For example,
   in the US opening a bank account may be good enough to get a rifle, but in
   Switzerland it requires being active in the militia.
4. check the Credentials Wallet for any such claim
  a. if such a claim exists, ask the user (or follow a policy under the user's control for such situations)
  b. if such a claim does not exist, alert the user, and provide a means of acquiring the credentials. This may require either just going to the authority and getting the existing credentials, or doing a full driving course and passing the tests.
  c. one may want to discover on the side whether the sales requirements are actually legally sufficient for one's own authorities. I may be able to buy a gun in the US, but not be allowed to use those credentials in France.

5. if the user provided the credential, use it in the next attempt to accomplish the action
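The steps above can be sketched as client logic. Everything here is
mocked: the Link-header discovery, the ACL shape, and the wallet are all
assumptions for illustration, not an existing API.

```javascript
// Sketch of the 401-driven credential flow; the server and wallet are mocked.
const server = {
  fetch(url, credential) {
    if (url === 'http://shop.example/bike/bmwk1200rs/buy' && !credential) {
      // Protected resource: refuse, but link to its access control description.
      return { status: 401,
               headers: { link: '<http://shop.example/acl/buy-bike>; rel="acl"' } };
    }
    return { status: 200 };
  },
  acl: {
    'http://shop.example/acl/buy-bike': {
      requires: 'DriversLicence',
      verifiers: ['https://dmv.example/']   // hypothetical trusted issuer
    }
  }
};

const wallet = [
  { type: 'DriversLicence', issuer: 'https://dmv.example/', id: 'cred:1' }
];

function attempt(url) {
  const res = server.fetch(url);                          // 1. request -> 401
  if (res.status !== 401) return res;
  const aclUrl = res.headers.link.match(/<([^>]+)>/)[1];  // 2. follow the ACL link
  const acl = server.acl[aclUrl];                         // 3. discover requirements
  const cred = wallet.find(c =>                           // 4. check the wallet
    c.type === acl.requires && acl.verifiers.includes(c.issuer));
  if (!cred) throw new Error('user must acquire a ' + acl.requires);
  return server.fetch(url, cred);                         // 5. retry with credential
}
```

The point of the sketch is only the control flow: the resource, not the
site, names the class of agents allowed in, and the client works out
which of its credentials fit.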

So to summarise: to solve the NASCAR problem the client needs to 

1) have some way to access the list of credentials of the user,
  and their properties
2) know which credentials are usable for the resource
3) be able to discover how to create appropriate credentials

The Web Access Control "API" provides the minimum needed to answer 2)
and 3). With this it then becomes possible for the client to retrieve
the right set of credentials.

Because there are so many credential possibilities, and so many
attributes that may need to be verified, it is clear that this has
to be built up from the beginning in an extensible manner.

Btw, issuers themselves can have WebIDs, and I developed the notion
of an institutional web of trust in 2011 at the eID conference
in Switzerland:

   http://bblfish.net/blog/2012/04/30/



The concept is similar to getting a bitcoin wallet, but without the
unnecessary complexities of the blockchain or mining, etc. Once you have
this identifier, which is a URL with the new scheme "did", you can
associate credentials with it. You can fetch that URL and get back
public information about the identity, such as the IdP that is presently
associated with it. The IdP acts as an agent for storing/getting
credentials for that identifier.

It should be noted that credentials can be associated with any URL, for
example an HTTPS one. So the system is designed to work with these as
well -- and the technology could be standardized in steps over time. The
first "baby" step could involve registration of your IdP with the
browser rather than with a decentralized network. Of course, when
registering with the decentralized network, you get better portability
characteristics and so forth. The decentralized network and the
technology for it aren't built out yet; they are polyfilled, also by
authorization.io.

Hopefully this explains some of the background and the concepts that are
being experimented with here.

One more quick note about how the authentication works. When credentials
are selected at your IdP, they are wrapped up as an "Identity" (an
identity being a collection of identity credentials that assert various
claims about the identity). This identity and the target domain (the
consumer's) are digitally signed at authorization.io (which would be the
browser in the future; it is just a polyfill). The digital signature
includes an identifier, a URL, for the public key used.

If the URL has a "did" scheme, the related DID document is retrieved
from the decentralized network (polyfilled by authorization.io). Based
on some cryptography (and a ledger), the consumer can trust the
authenticity of the DID document and can look for the public key therein
-- and then check the signature. More details about this decentralized
network are slowly being worked out here:

http://opencreds.org/specs/source/webdht/

If the URL has an "HTTPS" scheme, the CA system can be piggy-backed off
of, using the WebID protocol for authentication. Essentially, the public
key URL can be dereferenced, returning Linked Data that asserts the
owner of the key, which is a WebID, another HTTPS URL. That WebID URL
can be dereferenced to get more Linked Data that will assert that the
public key is truly owned by that WebID. The trust here, again,
leverages the existing CA system.
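The two verification paths can be sketched as a dispatch on the key
URL's scheme. The lookup tables below are stand-ins for the network
fetches described above (DID-document retrieval and Linked Data
dereferencing); every URL in them is made up.

```javascript
// Stand-in for the decentralized network: DID -> DID document.
const didNetwork = {
  'did:example:alice': { publicKeys: ['did:example:alice#key-1'] }
};
// Stand-in for dereferenceable Linked Data documents on the web.
const linkedData = {
  'https://alice.example/keys/1':    { owner: 'https://alice.example/card#me' },
  'https://alice.example/card#me':   { key: 'https://alice.example/keys/1' }
};

// Decide whether the named public key is genuinely bound to an identity.
function keyIsOwned(keyUrl) {
  if (keyUrl.startsWith('did:')) {
    // DID path: fetch the DID document from the (polyfilled) network
    // and check that it lists this key.
    const doc = didNetwork[keyUrl.split('#')[0]];
    return Boolean(doc && doc.publicKeys.includes(keyUrl));
  }
  if (keyUrl.startsWith('https:')) {
    // WebID path: dereference the key to find its owner (a WebID), then
    // dereference the WebID to confirm it claims the key back.
    const keyDoc = linkedData[keyUrl];
    const webid = keyDoc && linkedData[keyDoc.owner];
    return Boolean(webid && webid.key === keyUrl);
  }
  return false;
}
```

A real verifier would of course also check the signature against the key
and the TLS/CA chain for the HTTPS fetches; the sketch only shows the
ownership chase.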


-- 
Dave Longley
CTO
Digital Bazaar, Inc.