RE: Openid Sequence diagram -was: Position Paper for W3C Workshop on Identity

My point is that websso is expensive. It's expensive because it's working
around inadequate browser UI. It's yet more expensive if it's assured. For
the assurance known as audience controls (which limit which RP can get an
IDP's assertion), one has the "reverse identifier" case - akin to webid, in
that it similarly confirms that the identifier controls a file at an
endpoint. The IDP basically has to establish that the RP site has a magic
file (with certain metadata) present at the https URI, nominally each time
an assertion is released. If the file is missing, has the wrong elements, or
the DNS record is "unavailable", or the server cert is not trusted, or the
cert is revoked, the RP is no longer a viable audience - and the IDP is not
SUPPOSED to release the assertion. (Thank Yahoo for this, as they assured
openid2.)
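
A rough Python sketch of that per-assertion audience check may make the
cost concrete. The URL, the required XRDS element set, and the helper name
here are mine, for illustration only:

    # Hypothetical sketch of the IDP-side audience check described above.
    import ssl
    import urllib.request
    import xml.etree.ElementTree as ET

    # Assumed: the RP metadata must contain at least an XRDS <Service>
    # element (the namespace is the real XRDS 2.0 one; the rest is invented).
    REQUIRED = {"{xri://$xrd*($v*2.0)}Service"}

    def rp_is_viable_audience(rp_xrds_url):
        ctx = ssl.create_default_context()  # trusted server cert required;
                                            # revocation checking would add
                                            # OCSP/CRL round trips on top
        try:
            with urllib.request.urlopen(rp_xrds_url, context=ctx,
                                        timeout=5) as resp:
                tree = ET.parse(resp)
        except (OSError, ET.ParseError):
            # missing file, bad TLS, DNS "unavailable": not a viable audience
            return False
        return REQUIRED <= {el.tag for el in tree.iter()}

    # Nominally run before *each* assertion release:
    #   if not rp_is_viable_audience("https://rp.example/xrds"): refuse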

So, perhaps we draw SIMILARITIES to openid by saying we use some of the same
core techniques, and have the same vulnerabilities, limits, and dependency
on "public" infrastructure.

I think the main point on not citing semweb (being all scary) is not the
query language, or even the ontology schemas (even I could get your sparql
query). It's the issue that we/you both in fact touched on in the paper:
that we are anticipating putting an RDF parser in the browser, so it can
render the user's profile page when one looks at the current client cert.
This has to be about AS SCARY as it gets. (So, you may want to remove that
bit.) It's directly re-living stuff from browser lore, from 10 years ago -
which was apparently REALLY unpleasant. It's specifically the case that
Harry H discussed in his briefing.

Could we imagine not saying anything on that part? I mean... it's directly
fanning the flames of an old fire.


-----Original Message-----
From: Henry Story [mailto:henry.story@bblfish.net] 
Sent: Thursday, April 21, 2011 8:05 AM
To: peter williams
Cc: public-xg-webid@w3.org
Subject: Openid Sequence diagram -was: Position Paper for W3C Workshop on
Identity


On 21 Apr 2011, at 15:32, peter williams wrote:

> I don't find the openid argument at all convincing (even if it's 100
> handoffs).

At all? Surely that is a pose. :^)

> It's not a relevant factor. I have every intention of accepting openid
> (from Google), via the Microsoft ACS bridge, for a year (to see if
> anyone cares about using Google IDs in real estate, understanding that
> Google just exited its real estate business venture, which lost
> money). That bridging adds several more handoff steps... and thus the
> metadata resolution that assures the endpoint of the bridge adds
> several more, too. The connection and packet load on the consumer PC
> with broadband is trivial compared to the connection and packet load
> of the average blog page, and is lost in the noise. (I have 100 logins
> a minute to process... and have to calculate accurately.)

Having worked at AltaVista, I can tell you that the criterion of efficiency
weighed very heavily there. Anything that delays a page download reduces the
number of people coming to a site, and the number of visits. The work put in
at the early AltaVista to keep the front page light was key to its success.
Then the marketing people came over, decided to overrule the creators, and
turned the front page into a big billboard for DEC (then Compaq) creations.
They jumped on the Portal bandwagon and never recovered. Notice how Google's
front page is very light?

Facebook decided not to use many well-known vocabularies, apparently because
of the cost of adding the namespaces to the HTML. OK, you may think that's
not serious, but it is an argument that is often used. (I hope I never hear
you use such arguments in the future, now that I know that you don't think
efficiency is important.)


> In a metadata-driven world, it's just not relevant to count pings in a
> handshake. One accepts that the first token delivered to set up a
> crypto session is relatively expensive (and then one uses token
> caching for subsequent method calls, based on crypto sessions, as in
> ws-secureconversation at layer 7 - or SSL at layer 4). This is what
> SSL engineering proved effective (where the SSL sessionid is just a
> token). In the web world, that token is the cookie (of course).
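
(A minimal Python sketch of that token-caching pattern, since it is worth
being precise about what is cached. TLS session reuse stands in for the
general case; the host name is illustrative:)

    # The first handshake is expensive; later connections replay the
    # cached "token" (the SSL session) and skip most of the work.
    import socket
    import ssl

    ctx = ssl.create_default_context()
    host = "idp.example"  # illustrative

    with ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host) as s1:
        cached = s1.session        # the token

    with ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host, session=cached) as s2:
        # under TLS 1.3 the session ticket may only arrive after the first
        # read, so real code would exchange some data on s1 before caching
        print("resumed:", s2.session_reused)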

In a linked-data-driven world, where you are crawling across web sites,
jumping from one site to another just to fetch a resource, things look
different.

> The analysis is not even correct. In openid, to be *assured*, the IDP 
> has to ping the RP's XRDS file (i.e. impacting my server farm's TCP 
> queue) - to test from the metadata that the audience is STILL 
> authorized to receive the IDP's assurance.

Oh, so I have missed one connection? At which point in the sequence diagram
is it?
http://blogs.sun.com/bblfish/entry/the_openid_sequence_diagram

Is that because this was describing an old version of OpenID? I thought I
had read it quite carefully at the time. But I was probably just reading
some synopsis.

Is there a correct sequence diagram for OpenID we can see?

(I am not asking rhetorically, but would genuinely like to know the answers
to those questions)

> Then, it has to verify the XRDS endpoint's assurance (using https cert 
> chain, which means walking the cert chain and pinging the OCSP/CRL 
> endpoints of each element, in the general case).

Ah, you mean in step 9 of my diagram: it has to verify the TLS connection.
Yes, well, clearly if OpenID is to be secure it has to do 5 more
connections, or according to you 6 more. Given that these connections have
to be TLS based, things get expensive.
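
(To make the counting concrete, here is a sketch of that per-element cost
using Python's cryptography package: one OCSP POST per chain link, with the
responder URL taken from each cert's AIA extension. "chain" is assumed to
be a leaf-first list of parsed certificates; the rest is illustrative.)

    # One network round trip per chain element, as described above.
    import urllib.request
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.serialization import Encoding
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import (AuthorityInformationAccessOID,
                                       ExtensionOID)

    def ocsp_round_trips(chain):
        count = 0
        for cert, issuer in zip(chain, chain[1:]):
            aia = cert.extensions.get_extension_for_oid(
                ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
            urls = [a.access_location.value for a in aia
                    if a.access_method == AuthorityInformationAccessOID.OCSP]
            if not urls:
                continue
            req = ocsp.OCSPRequestBuilder().add_certificate(
                cert, issuer, hashes.SHA1()).build()
            urllib.request.urlopen(urllib.request.Request(
                urls[0], data=req.public_bytes(Encoding.DER),
                headers={"Content-Type": "application/ocsp-request"}))
            count += 1  # one network round trip per chain element
        return count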

> If DANE was used
> instead of OCSP, n UDP pings have to be made against the DNS server
> instead... to walk the zone chain(s), which requires n pings against
> the root servers... So the count suggested for openid is not even
> accurate. It's not a valid characteristic.

Yes, setting up all those TLS connections is not good. It is what makes
other things impossible later.
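
(And the DANE alternative Peter mentions, sketched with dnspython: the HTTP
round trips become DNS queries, and a validating resolver still pays
per-zone DNSSEC lookups behind this one call. The domain is illustrative.)

    import dns.resolver  # pip install dnspython

    # TLSA record for the RP's HTTPS endpoint
    for rr in dns.resolver.resolve("_443._tcp.rp.example", "TLSA"):
        # usage/selector/matching type say how rr.cert must match the
        # server's certificate
        print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())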


> We are past scoring points. 

I think large companies will want to be counting these very carefully. And
so will linked data servers. Efficiency there is very important.

> If one wants to compare, discuss the core differences between any and
> all websso schemes and their use of metadata for assuring endpoints.

Oops, that would bring us into talking about the semweb. I was told we
should not talk about that. It frightens people.

> The websso
> world solves the browser inadequacies by making the browser irrelevant
> (as the signaling is site to site, using the browser as a
> communication bearer only). WebID believes (rightly or wrongly) in the
> browser as king (not as a transfer agent between sites). This is what
> you need to be saying. Whether or not that argument is convincing is a
> different issue.

Yes, we could emphasise the peer-to-peer nature of WebID. But as this is a
talk aimed at browser vendors...

> 
> 
> -----Original Message-----
> From: public-xg-webid-request@w3.org 
> [mailto:public-xg-webid-request@w3.org]
> On Behalf Of Henry Story
> Sent: Thursday, April 21, 2011 5:41 AM
> To: Kingsley Idehen
> Cc: public-xg-webid@w3.org
> Subject: Re: Position Paper for W3C Workshop on Identity
> 
> 
> On 20 Apr 2011, at 23:08, Kingsley Idehen wrote:
> 
>> Henry,
>> 
>> Very nice!
>> 
>> Observations re. diagram:
>> 
>> 1. Step #2 is missing an arrow (inbound) from "resource server" to
>> Bob's browser re. the authentication challenge.
>> 2. Step #3 is missing an arrow (outbound) indicating what happens
>> after the "OK" button is clicked.
>> 
>> Re. OpenID, accentuating OpenID+WebID will help, i.e., implying that
>> OpenID implementers benefit immediately from WebID. It's less
>> politically thorny than implying OpenID is inadequate (even though we
>> know it is, etc.).
> 
> Yes, that is because it all happens in one TLS connection. 
> 
> Is this better?
> 
> 
> 

Social Web Architect
http://bblfish.net/

Received on Thursday, 21 April 2011 15:27:58 UTC