RE: WebID in Browsers conf feedback

26th May to 12th June. It's little (given the amount of effort that went into the prep), but what there is... is juicy. One just has to think bigger than a few CGIs.
 
 
Given the comments, I really don't understand the notion of "interrupting an SSL session". An SSL session is a handshake plus a series of connections. Each connection "interrupts the session" in the transport and time sense, and (for connection-oriented SSL) each one rekeys a transport channel with a mini-handshake that exploits session state (and per-connection state, to guard against CPA - chosen-plaintext attacks). In advanced crypto hardware for higher-end datacenters (i.e. clouds), the session state is separated from the protocol engines and is shared - much like a routing db's adjacency matrix is shared with a layer-3 router's 10 line-card ASICs in a stack of switches, each operating several 10 Gbps channels... autonomously. This allows each line card to do connection handshakes, given a cached copy of the session state. Thus, SSL "session-resume" connections are constantly being "interrupted" to ping the session-state cache (albeit usually over the backplane, operating at 50 GBps). Similarly, when such types of SSL are supporting data-centric SAN protocols like FCoE or iSCSI that have a multi-pathing nature, one wants different line cards representing a given target resource to be able to cooperate to create the virtual SSL channel (say, two load-balanced or failover-ready connections in parallel, with each set of TCP packets being handled over a different line card) over which the inter-agent commands travel. Remember, SSL allows for parallel connections (think of its hypermedia design basis, wanting to collect n anchored images in parallel as the doc is rendered).
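
For concreteness, here is a minimal client-side sketch of that session-versus-connection distinction, using Python's ssl module against a hypothetical host (example.org); it illustrates commodity TLS 1.2-style resumption, not the line-card hardware above. The saved session object is the cached state that lets a second connection do the abbreviated "resume" handshake instead of a full one.

import socket, ssl

ctx = ssl.create_default_context()

# Connection 1: full handshake; establishes the session state to be cached.
with ctx.wrap_socket(socket.create_connection(("example.org", 443)),
                     server_hostname="example.org") as conn1:
    cached_session = conn1.session        # the shareable/cacheable session state

# Connection 2: abbreviated handshake; rekeys a fresh transport channel by
# reusing the cached session state rather than negotiating from scratch.
with ctx.wrap_socket(socket.create_connection(("example.org", 443)),
                     server_hostname="example.org",
                     session=cached_session) as conn2:
    print("session resumed:", conn2.session_reused)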
 
For webid to have much impact in the SSL vendor world, it needs to be able to tell such a data-center and HARDWARE story, and not just talk about simple browser and web-server agents running CGIs. The iSCSI initiator in Windows already does IPsec and client certs between these layer-7 entities (over the web), with advanced keying and caching models of the shared security MIB; and SSL (with webid support) needs to do the same class of thing. The triples in the foaf card have to be handled much like routing dbs (cached and compressed into a form suiting hardware lookup speed, and delta-replicated out to the line cards, versus being "looked up on the fly"). So, in this sense, I agree with the author of the comment - whoever he or she is. But it's not the "flow model" that is wrong. The argument needs to shift to where the flow model MATTERS.
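
To make that pattern concrete, here is a toy sketch (all names hypothetical, nothing from any shipping product): foaf cards pre-ingested and compressed into a lookup-friendly table, delta-replicated out to per-line-card caches, so the per-connection check is a local lookup rather than a mid-handshake fetch.

# Central cache: WebID URI -> frozen set of public-key fingerprints the card asserts.
central_cache = {}

def ingest_foaf_card(webid_uri, key_fingerprints):
    # Compress a fetched foaf card down to just the material a handshake needs.
    central_cache[webid_uri] = frozenset(key_fingerprints)

def delta_replicate(line_card_cache):
    # Push only the entries the line card does not already hold (the "delta").
    delta = {uri: keys for uri, keys in central_cache.items()
             if line_card_cache.get(uri) != keys}
    line_card_cache.update(delta)
    return delta

def handshake_lookup(line_card_cache, webid_uri, presented_fingerprint):
    # Per-connection check against the local cache; no on-the-fly fetch.
    return presented_fingerprint in line_card_cache.get(webid_uri, frozenset())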
 
At the same time, we have to remember that classical web servers in the PKI tradition already "interrupt the session" (not that this is an accurate phrase) to go fetch CRLs and OCSP responses and cert policies labelled by URIs, for client certs bearing refs to such. It's strange to argue that a protocol concept for https is fine when servers are pulling CRLs (a fancy list of key-material ids), but not fine for pulling foaf cards (a fancy list of web kmids) - in EXACTLY the same circumstance. In my view, it's not just the same circumstance, but the same security-enforcement argument - as (in my view) webid is (just) a semantic alternative to CRLs/OCSP. Logically speaking, it's strange to argue that what's entirely fine for one validation method is then not fine for another. What is the difference between a CRL pulled based on caching logic and a foaf card, also pulled based on caching logic? (Ignore the different mechanics of caching.)
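
If it helps, the parallel can be drawn in a few lines (hypothetical field and helper names, and a deliberately naive cache): whichever kind of document the server pulls - a CRL or a foaf card - it is the same fetch-through-cache pattern, keyed by a URI carried in the presented client cert.

import time, urllib.request

_cache = {}   # uri -> (expiry_time, body)

def pull_with_cache(uri, ttl=3600):
    # The same caching logic serves either kind of pull.
    now = time.time()
    hit = _cache.get(uri)
    if hit and hit[0] > now:
        return hit[1]
    body = urllib.request.urlopen(uri, timeout=5).read()
    _cache[uri] = (now + ttl, body)
    return body

def validate_via_crl(cert):
    # cert: hypothetical dict of fields already parsed from the client cert.
    revoked = pull_with_cache(cert["crl_distribution_point"])   # a fancy list of key-material ids
    return cert["serial"].encode() not in revoked

def validate_via_webid(cert):
    foaf_card = pull_with_cache(cert["san_uri"])                 # a fancy list of web kmids
    return cert["key_fingerprint"].encode() in foaf_card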
 
What the argument seems to fail to recognize, given its base position, is that the commodity world has changed. (Windows) servers no longer necessarily "do 'cert chain validation' processing". The point I was trying to make a memo or two ago was that Windows allows you to REPLACE the "traditional" cert-processing module, dumping the X.509/PKI/PKIX semantics of chains altogether. PKIX-style logic is still there, if one selects that provider of course - and selection of that class of provider is still mandatory if you program webapps (versus web services, or federated web services). This is Microsoft's TRADITIONAL answer to pressure to bias a security protocol, ensuring the PC stays USER friendly (because one can opt out of the control regimes imposed on Microsoft as a vendor by "backdoor processes"). But, in these more advanced web models, one EASILY gets to insert one's OWN provider (with a non-X.509, non-PKI-centric conception of certs, cert chains, etc). In general, in this modern conception now delivered at commodity status worldwide since .NET 4, cert-handling providers are essentially reduced to the role of handling cert formats and cert-bag formats - as a mere convenience, imposing ZERO processing rules and their associated chaining or security-enforcement semantics. You simply add your own processing rules - exactly as I specialized WCF endpoints to add webid validation-provider processing rules to the Windows SSL/https protocol engine. For the first time in nearly 20 years of https, non-vendors get to "customize" the https engine - that is, to make a custom (non-webapp style of) https handshake.
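
As a rough analogue of that "bring your own provider" shape (my actual work was a WCF endpoint customization in .NET, which I won't reproduce here), Python's ssl module shows the same division of labour: tell the engine to impose no chain-processing semantics of its own, then apply your own rules to the raw cert it merely transports. The webid_validator below is a hypothetical stand-in for whatever rules a deployment wants; it is shown client-side for brevity, but the server-side shape is the same idea.

import socket, ssl

def webid_validator(der_cert: bytes) -> bool:
    # Hypothetical stand-in: your own processing rules (e.g. check the key
    # against a cached foaf card) instead of X.509/PKIX chain building.
    return len(der_cert) > 0

# Ask the protocol engine to impose NO chain-processing semantics of its own...
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with ctx.wrap_socket(socket.create_connection(("example.org", 443)),
                     server_hostname="example.org") as conn:
    # ...then bolt your own processing rules onto the raw cert material.
    peer_der = conn.getpeercert(binary_form=True)
    if not webid_validator(peer_der):
        raise ssl.SSLError("peer rejected by custom (non-PKIX) validation rules")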
 
Between SSL clients in javascript avoiding the browser vendor, and pluggable providers for key management in servers, the world has changed quite a bit from 2000 (a point in the crypto-politics story at which many folks are still stuck... I feel). It must be VERY frightening to those who want the 2000-era web crypto to stay just where it was (in the hands of a few trusted vendors, with well-indoctrinated staff, trusted to "do the right thing").
 
 
 
 
 
 
 
 
 
 
 
 
> From: henry.story@bblfish.net
> Date: Sun, 12 Jun 2011 15:12:14 +0200
> CC: d.w.chadwick@kent.ac.uk
> To: public-xg-webid@w3.org
> Subject: Re: WebID in Browsers conf feedback
> 
> Here is some feedback from the WebId in Browser conf. 
> 
> Henry
> 
> On 26 May 2011, at 14:00, David Chadwick wrote:
> 
> > Hi Henry
> > 
> > I got some good feedback from people at the workshop, which you should consider in a revision of the protocol.
> > 
> > 1. You should not interrupt an SSL/TLS session midway (to fetch anything, either a remote page and/or the remote server's cert).
> 
> > The solution to this would be either
> > a) get the browser to issue self signed certs for the user (the best solution), or
> > b) get the browser to send the user's server signed cert plus the server's cert which has been signed by a known root CA during the TLS handshake. In this way the receiving server can validate the signature chain without having to make a call out. However this would still mean modifications to the SSL software similar to that used by proxy certificates in their chain validation (since the signing servers' cert is flagged as an end user cert and not a CA cert, so it isnt allowed to issue certificates to end users. Consequently standard X.509 cert chain processing software will fail.)
> > 
> > 2. it is not a good idea to ask a server to go and fetch any remote page (in this case the user's web id page) since an attacking user can point the server to poisoned pages that can contain any arbitrary code to be executed by the fetching server.
> > 
> > I am not sure what the solution to this is Internet scale, since the whole process hinges on user's being able to point to arbitrary web id pages. For small scale use you can have white lists of known trusted servers and only allow user's to store their web id pages on these trusted servers.
> > 
> > 3. Some people questioned how usable the PGP type web of trust would be, and whether it would scale to Internet proportions. One comment was I would not want the semantic web crawling to be more than two links deep as I cannot trust anything further removed from me than that. I think that in order to establish links between people at Internet scale you need around 7 links in order to connect most people together.
> > 
> > Hope these comments are useful
> > 
> > regards
> > 
> > David
> > 
> > 
> > *****************************************************************
> > David W. Chadwick, BSc PhD
> > Professor of Information Systems Security
> > School of Computing, University of Kent, Canterbury, CT2 7NF
> > Skype Name: davidwchadwick
> > Tel: +44 1227 82 3221
> > Fax +44 1227 762 811
> > Mobile: +44 77 96 44 7184
> > Email: D.W.Chadwick@kent.ac.uk
> > Home Page: http://www.cs.kent.ac.uk/people/staff/dwc8/index.html
> > Research Web site: http://www.cs.kent.ac.uk/research/groups/iss/index.html
> > Entrust key validation string: MLJ9-DU5T-HV8J
> > PGP Key ID is 0xBC238DE5
> > 
> > *****************************************************************
> 
> Social Web Architect
> http://bblfish.net/
> 
> 
