
RE: [foaf-protocols] non issue comments on webid IX draft

From: peter williams <home_pw@msn.com>
Date: Mon, 21 Feb 2011 23:36:42 -0800
Message-ID: <SNT143-ds957A7E61468202B7BB04092D80@phx.gbl>
To: "'Henry Story'" <henry.story@bblfish.net>
CC: <foaf-protocols@lists.foaf-project.org>, "'WebID Incubator Group WG'" <public-xg-webid@w3.org>
Consider adding a test case (or 2) then.


Large number of false URIs test: if the maximum length of a Certificate
message in a TLS record layer is 32k (and we need to check that), we can
build a cert with a largish number, n-1, of distinct tiny URIs pointing to a
null document, and 1 URI pointing to a viable document. A correct
implementation WILL use the viable document. An incorrect implementation
will not.


Mixed number of true/false URIs test: if the maximum length of a Certificate
message in a TLS record layer is 32k (and we need to check that), we can
build a cert with a largish number, n-1, of distinct tiny URIs pointing to a
huge document with no cert declarations, and 1 URI pointing to a viable
document delivered with an HTTP no-cache header. A correct implementation
will process the huge document n-1 times. An incorrect implementation will
not.


Redirect test: if the maximum length of a Certificate message in a TLS
record layer is 32k (and we need to check that), we can build a cert with a
largish ordered set of n distinct tiny URIs, n-1 of which redirect to the
next tiny URI in the order, with the last URI in the order pointing to a
viable document. A correct implementation of the verification agent will
follow n-1 redirects and process the viable document. An incorrect
implementation will not.
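The iteration these three tests exercise can be sketched as follows. This is a minimal illustration, not normative: `fetch` stands in for an HTTP dereference, `key_matches` for the comparison of the document's declared public key against the one in the presented cert, and the redirect cap is an assumed local limit.

```python
# Hypothetical sketch of the SAN-URI loop the three tests exercise.
# fetch(uri) returns (status, body); for a 3xx status the body is the
# redirect target. key_matches(doc) stands in for the RDF/key check.

MAX_REDIRECTS = 10  # assumed local limit; the spec does not fix one


def verify(san_uris, fetch, key_matches):
    """Return the first claimed URI whose document proves the key,
    following redirects up to MAX_REDIRECTS, else None."""
    for uri in san_uris:
        target, hops, doc = uri, 0, None
        while hops <= MAX_REDIRECTS:
            status, body = fetch(target)
            if status in (301, 302, 303):  # redirect: follow the chain
                target, hops = body, hops + 1
                continue
            if status == 200:
                doc = body
            break
        if doc is not None and key_matches(doc):
            return uri  # viable document found; stop here
    return None  # no claimed URI could be verified
```

A correct implementation, in the sense of the tests above, keeps iterating past the null documents and through the redirect chain until it reaches the viable document.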






From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org]
On Behalf Of Henry Story
Sent: Monday, February 21, 2011 4:00 PM
To: Peter Williams
Cc: foaf-protocols@lists.foaf-project.org; WebID Incubator Group WG
Subject: Re: [foaf-protocols] non issue comments on webid IX draft



On 21 Feb 2011, at 23:22, Peter Williams wrote:

I may be 12 months too early on this topic (though it may influence how the
webid spec's requirements are formulated). If you think so, I'll forward to
the w3c list.
To drive the point home, comprehend all of section 3.1 (ignoring step #6)
and answer the following questions, says the school exam:
if the verifier receives a cert with 100 SAN URI entries, and the first
entry tried has failed to obtain a document whose contents matches the cert,
what should the verifier do?
a. exit and refuse access
b. try the next SAN URI, but no more than reasonable
c. having tried 49 URIs, exit at 50 if it fails since 50 is half of 100
d. iterate through all 100, till one works
e. none of the above.


I think any of the above. If you go into a shop and someone asks you for a
credit card and you give them one that is wrong, they don't have to test all
50 of them. They don't even have to let you into the shop to start off with.
After all, they can do server maintenance.
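For what it's worth, option (b), trying the next SAN URI but no more than a reasonable number, is easy to state as a local policy. A sketch, with an assumed cap that the spec does not mandate:

```python
MAX_SAN_ATTEMPTS = 5  # local policy; no number is mandated anywhere


def pick_verified_uri(san_uris, try_uri):
    """Option (b): try successive SAN URIs, but give up after a
    locally configured maximum number of attempts."""
    for uri in san_uris[:MAX_SAN_ATTEMPTS]:
        if try_uri(uri):  # try_uri stands in for dereference + key check
            return uri
    return None
```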

Should the spec imply the answer to the above?


Not sure why the spec should say anything on that subject.

if the verifier scans an XHTML document with well formed RDFa that matches
the client cert but notices that that document has javascript with malicious
content, what would be best to do:
a. cache the document, and proceed to access control
b. proceed to access control, but refuse to cache the document
c. reject the document and try the next URI in sequence
d. abandon the protocol run
e. none of the above.
Should the spec imply the answer to the above?


I don't see why the verifier should bother with running the javascript in
the page. The html is the declarative part of the document. The javascript
transformation is something that the recipient can choose whether or not to
execute on that information.

if the verifier consults a URI reputation source before attempting to
contact the resource and notes that the URI is in an IP block known to
affiliate with criminal syndicates, what should the verifier do?
a. reject the certificate and then ask for a new client certificate
b. reject the cert and blacklist the IP address of the sender locally
c. ignore the URI, but try the next one in the list
d. try a different DNS server for the authority in the URI, to see if there
is a better reputation opinion there
e. let Google worry about it by only relying on Google WebSSO (since Google
processes the webid issues).
Should the spec imply the answer to the above?


I don't think the spec should provide answers to those authorization
questions. The spec should deal with authentication, not authorization. We
can start work on authorization as a side project; that is what the ACL work
was about. I am waiting to get WebID working on Clerezza before entering that






From: henry.story@bblfish.net
Date: Mon, 21 Feb 2011 22:03:12 +0100
CC: foaf-protocols@lists.foaf-project.org
To: home_pw@msn.com; public-xg-webid@w3.org
Subject: Re: [foaf-protocols] non issue comments on webid IX draft

This really belongs on the WebID mailing list, as we are discussing the spec


On 21 Feb 2011, at 19:17, peter williams wrote:




Let me criticize the spec, as real security experts will (not that I can
claim to be one, being a mere "enthusiast").


Sayeth the spec, the resource server MUST ping all servers which are listed
in the SAN array of URIs. If I make a cert with 10,000 URIs and none of the
foaf:agents' RDF documents match the authenticating client cert's public
key, the resource server MUST ping up to 10,000 other servers - chosen by the


I don't know where you find that.


In section 3.1 it says "with at least one" quite clearly



The Verification Agent must attempt to verify the public key information
associated with at least one of the claimed WebID URIs. The Verification
Agent may attempt to verify more than one claimed WebID URI. This
verification process should occur either by dereferencing the WebID URI and
extracting RDF data from the resulting document, or by utilizing a cached
version of the RDF data contained in the document or other data source that
is up-to-date and trusted by the Verification Agent.
(Definitions: http://www.w3.org/2005/Incubator/webid/spec/)
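The cached-or-dereferenced alternative in that paragraph amounts to a small lookup policy. A sketch, with `dereference` and `is_fresh` as stand-ins for the real fetch and trust/freshness checks (all names here are illustrative, not from the spec):

```python
def get_claim_graph(uri, cache, dereference, is_fresh):
    """Use a cached copy of the URI's RDF data when it is up-to-date
    and trusted; otherwise dereference the WebID URI and cache the
    result."""
    entry = cache.get(uri)
    if entry is not None and is_fresh(entry):
        return entry["graph"]  # trusted, up-to-date cached data
    graph = dereference(uri)  # live fetch of the profile document
    cache[uri] = {"graph": graph, "fresh": True}
    return graph
```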





Can a server impose a local limit (process no more than 5, say)? Refuse to
process ANY if more than 5 are present, etc.?



Implieth the spec, the resource server MUST trawl the web each time a client
authn event is received (60 a second, on average?). This seems to imply that
resource servers MUST be able to choose to (i) ignore events selectively
(addressing DoS/DDoS using velocity counters) and (ii) ignore events based
on local criteria. Since one doesn't want those 59 attacking events on the
internal network, probably the firewall's SSL interceptor needs to be doing
the filtering and selecting (on behalf of the array of web servers).
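The velocity-counter idea mentioned above, selectively ignoring client-authn events when one client produces too many, can be sketched as a sliding-window counter. Everything here is a local-policy illustration, nothing the spec requires:

```python
import time


class VelocityCounter:
    """Sliding-window event counter: allow at most max_events per
    client within window_seconds; drop the rest (a crude DoS guard)."""

    def __init__(self, max_events, window_seconds):
        self.max_events = max_events
        self.window = window_seconds
        self.seen = {}  # client id -> recent event timestamps

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        recent = [t for t in self.seen.get(client, [])
                  if now - t < self.window]
        if len(recent) >= self.max_events:
            self.seen[client] = recent
            return False  # over the velocity limit: ignore this event
        recent.append(now)
        self.seen[client] = recent
        return True
```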


Where do you find that said in the spec precisely? If it is saying that then
it is wrong.



Assuming that the document is pulled, one has to allow the resource server
(or, more likely, the firewall) to scan the inbound content (for malicious
javascript, say), since the URI is user-sponsored.


The Relying Party (to use http://identitycommons.org/ vocabulary) does not
need to execute anything in the certificate. Clearly there is no reason to
do so. And if there were, I don't think that would be up to this spec to
talk of, since we don't require javascript in the certificate.



Given the way the spec lays out the verification steps in sequence, it was
unclear whether an ADDITIONAL SSL handshake is required after (i) performing
SSL mutual auth merely to learn the webid (by parsing the SAN field of the
inbound cert) and (ii) pulling the document referenced by the SAN URI. (This
is what I read the spec to say, note. Clear this up if only 1 handshake is
required.)


Yes, that's odd. I am not sure how that ended up in the spec.



The spec is very unclear on what happens when an SSL resume handshake is
used (and the server supplies the client cert to the servlet/CGI consumer
based on its SSL session state, rather than from the wire).


Can you develop this a bit? (Don't go to too much length; a few pointers to
the right spec, or descriptions of the problem and how it affects us, will
do.)






Social Web Architect





Received on Tuesday, 22 February 2011 07:37:39 UTC
