
Re: WebID in Browsers conf feedback

From: Henry Story <henry.story@bblfish.net>
Date: Sun, 12 Jun 2011 15:12:14 +0200
Cc: Chadwick David <d.w.chadwick@kent.ac.uk>
Message-Id: <F825B001-B5E9-429D-A661-E3F66C9040B4@bblfish.net>
To: WebID XG <public-xg-webid@w3.org>
Here is some feedback from the WebID in Browsers conf.


On 26 May 2011, at 14:00, David Chadwick wrote:

> Hi Henry
> I got some good feedback from people at the workshop, which you should consider in a revision of the protocol.
> 1. You should not interrupt an SSL/TLS session midway (to fetch anything, whether a remote page or the remote server's cert).

> The solution to this would be either
> a) get the browser to issue self-signed certs for the user (the best solution), or
> b) get the browser to send the user's server-signed cert plus the server's cert, which has been signed by a known root CA, during the TLS handshake. In this way the receiving server can validate the signature chain without having to make a call out. However, this would still mean modifications to the SSL software similar to those used for proxy certificates in their chain validation (since the signing server's cert is flagged as an end-user cert and not a CA cert, it isn't allowed to issue certificates to end users, so standard X.509 cert chain processing software will fail).
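To see concretely why option (b) trips up standard chain processing, here is a minimal sketch (not real X.509 code; the `Cert` model and names are purely illustrative) of the RFC 5280 rule that every issuing certificate in a path must carry the basicConstraints CA flag:

```python
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    issuer: str
    is_ca: bool  # models the basicConstraints CA flag

def validate_chain(chain):
    """Return True iff each cert was issued by the next one in the
    chain AND every issuing cert carries the CA flag, as standard
    X.509 path processing requires."""
    for cert, issuer in zip(chain, chain[1:]):
        if cert.issuer != issuer.subject:
            return False
        if not issuer.is_ca:
            # An end-entity cert may not issue certificates:
            # this is exactly where option (b) fails.
            return False
    return True

# Hypothetical chain: user cert signed by the WebID host's own
# end-entity cert, which is in turn signed by a known root CA.
root = Cert("Example Root CA", "Example Root CA", is_ca=True)
site = Cert("webid.example.org", "Example Root CA", is_ca=False)
user = Cert("alice", "webid.example.org", is_ca=False)

validate_chain([user, site, root])  # False: the site cert lacks the CA flag
validate_chain([site, root])        # True: an ordinary server chain
```

Proxy-certificate-style validation would relax the CA-flag check for the last hop; vanilla TLS stacks do not.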
> 2. It is not a good idea to ask a server to go and fetch an arbitrary remote page (in this case the user's WebID page), since an attacking user can point the server at poisoned pages containing arbitrary code to be executed by the fetching server.
> I am not sure what the solution to this is at Internet scale, since the whole process hinges on users being able to point to arbitrary WebID pages. For small-scale use you can have white lists of known trusted servers and only allow users to store their WebID pages on these trusted servers.
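The small-scale mitigation above can be sketched as a simple pre-fetch check; the host names here are invented for illustration:

```python
from urllib.parse import urlparse

# Hypothetical whitelist of trusted WebID hosts (small-scale deployment)
TRUSTED_HOSTS = {"webid.example.org", "profiles.example.net"}

def may_dereference(webid_uri: str) -> bool:
    """Only fetch WebID profile documents over https from hosts on the
    whitelist, so an attacker cannot point the verifying server at an
    arbitrary (possibly poisoned) page."""
    parsed = urlparse(webid_uri)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

may_dereference("https://webid.example.org/alice#me")    # True
may_dereference("https://attacker.example.com/evil#me")  # False
```

This of course gives up the open-world property that makes WebID attractive in the first place, which is the tension the paragraph above describes.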
> 3. Some people questioned how usable the PGP-type web of trust would be, and whether it would scale to Internet proportions. One comment was: "I would not want the semantic web crawling to be more than two links deep, as I cannot trust anything further removed from me than that." I think that in order to establish links between people at Internet scale you need around 7 links to connect most people together.
> Hope these comments are useful
> regards
> David
> *****************************************************************
> David W. Chadwick, BSc PhD
> Professor of Information Systems Security
> School of Computing, University of Kent, Canterbury, CT2 7NF
> Skype Name: davidwchadwick
> Tel: +44 1227 82 3221
> Fax +44 1227 762 811
> Mobile: +44 77 96 44 7184
> Email: D.W.Chadwick@kent.ac.uk
> Home Page: http://www.cs.kent.ac.uk/people/staff/dwc8/index.html
> Research Web site: http://www.cs.kent.ac.uk/research/groups/iss/index.html
> Entrust key validation string: MLJ9-DU5T-HV8J
> PGP Key ID is 0xBC238DE5
> *****************************************************************

Social Web Architect
Received on Sunday, 12 June 2011 13:12:46 UTC
