Re: http signature and WebID

From: <henry.story@bblfish.net>
Date: Wed, 14 Oct 2015 11:47:40 +0100
Cc: W3C Credentials Community Group <public-credentials@w3.org>, public-webid <public-webid@w3.org>
Message-Id: <BCF8B5DA-8DDA-415E-9D00-C2C401D4270E@bblfish.net>
To: Manu Sporny <msporny@digitalbazaar.com>

> On 14 Oct 2015, at 04:06, Manu Sporny <msporny@digitalbazaar.com> wrote:
> On 10/13/2015 04:10 PM, henry.story@bblfish.net wrote:
>> • In section 3.1 is written [[ The following sections also assume 
>> that the "rsa-key-1" keyId refers to a private key known  to the 
>> client and a public key known to the server. ]] it is a bit weird to 
>> have a string refer to two different things simultaneously. It seems 
>> like a way to pave the way for confusion. ( Just a worry )
> Another way of stating this is that "this is a key identifier that can
> be used by both the client and the server".
> What do you think about this text instead?
> """
> The following sections also assume that the "rsa-key-1" keyId identifies
> a public/private keypair that can be used by the client to create
> digital signatures and by the server to verify digital signatures.
> """

+1 That is much better.
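For reference, a Signature header using that keyId might be assembled roughly as below. This is only a sketch following the style of the draft's examples; the header list and the signature bytes are made up:

```python
import base64

def signature_header(key_id, algorithm, signed_headers, signature_bytes):
    # Build a Signature header value in the style of the draft's examples:
    # keyId, algorithm, the space-separated list of signed headers, and
    # the base64-encoded signature.
    return 'keyId="%s",algorithm="%s",headers="%s",signature="%s"' % (
        key_id,
        algorithm,
        " ".join(signed_headers),
        base64.b64encode(signature_bytes).decode("ascii"),
    )

# The signature bytes here are fake placeholders, for illustration only.
hdr = signature_header("rsa-key-1", "rsa-sha256",
                       ["(request-target)", "host", "date"],
                       b"fake-signature-bytes")
```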

>> • What about an upload mechanism to allow a client to upload its 
>> certificate to the server, so as to not require the communication to 
>> depend on another server being up. Proposed one option for this here:
>> https://lists.w3.org/Archives/Public/ietf-http-wg/2015JulSep/0388.html
> We've identified key registration as specifically out of scope for the
> HTTP Signatures spec. A number of organizations do it in different ways.
> We do mention a key registration protocol in the LD Signatures spec, but
> that registration protocol should probably be removed into its own spec:
> https://web-payments.org/specs/source/ld-signatures/#the-key-registration-process

It makes sense to have it separate.
To be honest, I get goose bumps when I see ".well-known" being used.
Come to think of it, something as simple as a Link relation to an LDP container
would do the trick for me.
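As a sketch of what that discovery could look like: the client reads a Link header from some server resource and POSTs its key to the advertised container. The relation name below is purely hypothetical, and the parser is deliberately naive:

```python
import re

def find_link(link_header, rel):
    # Naive Link header parse: one <target>; rel="value" pair per
    # comma-separated part. A real implementation should handle quoted
    # commas, extra parameters, and multiple rel values (RFC 8288).
    for part in link_header.split(","):
        m = re.match(r'\s*<([^>]*)>\s*;\s*rel="?([^";]*)"?', part)
        if m and m.group(2) == rel:
            return m.group(1)
    return None

# "key-container" is a made-up relation name, just for illustration.
container = find_link('</keys/>; rel="key-container", </acl>; rel="acl"',
                      "key-container")
```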

>> • keyId: It would be useful if the client could make sure when the 
>> KeyId could be a dereferenceable URL. This would allow a client to 
>> not have to upload the certificate when making a connection. 
>> Something like that is described in: 
>> https://lists.w3.org/Archives/Public/ietf-http-wg/2015JulSep/0388.html
> Again, out of scope for HTTP Signatures. We're trying to keep the spec
> lightweight. In general, though - if the keyId is a URL, the assumption
> is that it is dereferenceable and data should exist at the endpoint. For
> example, here's one of my keys that I should be able to use in an HTTP
> Signatures exchange:
> curl -sk https://dev.payswarm.com/i/manu/keys/4

Yes, I agree that one should not force dereferenceability. But that
is covered by specifying that the value of keyId be a URI or IRI.
There are dereferenceable and non-dereferenceable URIs, and each
URI scheme has a different lookup mechanism (if any).
But if the spec does not specify that the string is a URI, then
it becomes impossible to tell when a string is meant as a URI
and when it is not. That would then require some additional
mechanism to specify that the string is meant as a URI
(perhaps a keyIsURI attribute), which seems pretty awkward.

Perhaps one could allow non-URIs to be specified by having them
start with "_:", e.g. "_:keyIdentifier". But one needs some mechanism
like this to be defined in advance.
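A sketch of the dispatch that such a convention would enable. The "_:" prefix here follows the suggestion above and is not part of any spec:

```python
from urllib.parse import urlparse

def classify_key_id(key_id):
    # "_:" marks a local, non-URI identifier, per the convention
    # proposed above (not part of the HTTP Signatures draft).
    if key_id.startswith("_:"):
        return "local"
    parsed = urlparse(key_id)
    if parsed.scheme in ("http", "https"):
        return "dereferenceable"
    if parsed.scheme:
        return "uri"       # absolute URI in some other scheme (urn:, ...)
    return "relative"      # relative reference, resolved against the server
```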

>> It may just be best if the spec was clear that the keyId must be a 
>> URI, but could be a relative one, which may be access controlled,
>> and so only known to the server itself. Otherwise one is not quite
>> sure how to interpret this.
> The keyId is meant to be an opaque string. That opaque string can be
> interpreted by other specs and systems in more specific ways. Again,
> trying to limit the complexity in the HTTP Signatures spec while
> enabling other specs to build on top of it.

As mentioned above, that then either:

a. forces servers to guess what is meant, by analysing the string to determine
whether it looks like a URI, which does not seem like a good path forward;
b. requires an external attribute to be added, such as keyIsURI; or
c. requires out-of-band communication, which makes the protocol less general and ties
one to knowledge of a particular server setup (perhaps forcing one to go to
.well-known, which is also ugly and requires an extra HTTP request, possibly one that
the user has no access to).

Given that specifying the string as a URI does not force dereferenceability,
I think nothing is lost by adopting that notion. Servers that do not
wish to publish a public key at a URL would have done nothing wrong either if they
published a relative URL. But they could also publish the key only to the agent that
holds the private key, which would allow the client to verify in the future whether the
server has the same view as it does of what that key is meant to be, which I think
would be pretty important for debugging in any case.
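The relative-URL case above needs nothing beyond ordinary URI resolution. A sketch, with a made-up base URL:

```python
from urllib.parse import urljoin

def resolve_key_id(key_id, base_url):
    # A relative keyId resolves against the server's own base URL, so the
    # resulting key document can stay private or access-controlled.
    return urljoin(base_url, key_id)

resolved = resolve_key_id("/keys/rsa-key-1", "https://example.org/inbox")
```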

>> • Headers maintenance over transport:
>> 'If there are multiple instances of the same header field, all
>> header field values associated with the header field MUST be
>> concatenated, separated by a ASCII comma and an ASCII space `, `, and
>> used in the order in which they will appear in the transmitted HTTP
>> message.'
>> - what is the chance that proxies actually somehow reorder or add 
>> headers to a message sent?
> Proxies do stuff like this on a regular basis.

Ouch. But good to know.
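The concatenation rule quoted above can be sketched in one line, following the draft's wording:

```python
def header_value(values):
    # Multiple instances of the same header field are joined with an
    # ASCII comma and space, in the order they appear in the message.
    return ", ".join(v.strip() for v in values)

combined = header_value(["no-cache", "no-store"])
```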

>> Is there an RFC that actually states this should not be done?
> Not that I know of, no. Even if there was one, there is plenty of
> evidence that proxies rewrite headers.

Ok. I suppose HTTPS helps here.
But even with HTTPS one has to consider that the browser also has a cache
of information, and how its JS may change the order of the headers.

>> re JS libs: do they keep the order of headers or other such things. -
>> what are the headers that XMLHTTPRequests clients can actually
>> control? ( Is there perhaps a howto somewhere for these types of
>> issues ? ) - for POSTing, or PUTing of larger contents I suppose the
>> JS may not at all be in control of the number of chunks by which the
>> content is sent, so the Content-Length field may only be calculatable
>> very late in the game.
> In general, the way we've approached this problem is to tell people not
> to sign things that may be modified by proxies. You want the signature
> to fail if data is modified in transit. If all else fails, you can add
> your own header that merges the data you want to sign into an
> application-specific header. There are solutions out there to the
> "proxies ate my data" problem.

That makes sense.
I think an informational section in the spec that makes these points
would be useful.
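Following that advice, a client would build the signing string only from headers that proxies are unlikely to rewrite. A sketch of the draft's construction, including the (request-target) pseudo-header (the lowercased method, a space, then the path):

```python
def signing_string(method, path, headers, signed_names):
    # Each signed header contributes one "name: value" line to the
    # string that gets signed; header names are lowercased.
    lines = []
    for name in signed_names:
        if name == "(request-target)":
            lines.append("(request-target): %s %s" % (method.lower(), path))
        else:
            lines.append("%s: %s" % (name.lower(), headers[name].strip()))
    return "\n".join(lines)

s = signing_string("GET", "/i/manu/keys/4",
                   {"Host": "dev.payswarm.com",
                    "Date": "Wed, 14 Oct 2015 11:47:40 +0100"},
                   ["(request-target)", "Host", "Date"])
```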

What is your experience with JS and XMLHttpRequest here? I imagine the
client does not even get access to the Date of the request here...

>> • reference to HTTP/2 incorrect
>> If the header field name is `(request-target)` then generate the 
>> header field value by concatenating the lowercased :method, an ASCII 
>> space, and the :path pseudo-headers (as specified in HTTP/2, Section 
>> [5]). ... [5] 
>> http://tools.ietf.org/html/rfc3447#section-8.2.1
>> but RFC3447  "Public-Key Cryptography Standards (PKCS) #1: RSA 
>> Cryptography" does not have anything about :method
>> I think the reference was meant to be to 
>> https://httpwg.github.io/specs/rfc7540.html " Request 
>> Pseudo-Header Fields"
> Hmm, the link was wrong, but didn't point to what you state above. In
> any case, I've updated the link to point to the latest RFC:
> https://github.com/web-payments/web-payments.org/commit/75231c102c7b92426d6183422190adcdcf5d0454


>> • Is there a test suite? I need to build one, but would be happy if
>> I could verify against another one.
> Unfortunately, no. That's been on our to-do list for a long time.
>> It would be worth having a web page that lists howtos and other uses 
>> that go beyond the RFC. It could also list such other protocols. 
>> Those don't seem to be correctly specified, but knowing that Amazon 
>> has something similar makes a pretty good case for it.
> The one from Amazon looks close to the HTTP Signatures spec because
> that's where it came from (originally). Quite a bit has been changed
> since then, but Mark Cavage (one of the co-authors on the spec), based
> it off of a lot of hard lessons he learned at Amazon wrt. signing HTTP
> messages.

Excellent. It helps to understand the history of this.

> Thanks for the review of the spec, Henry, that was very helpful. :)

I'll write tests for my code next, then see how far I get with using this
in the browser using WebCrypto and ServiceWorkers. I want to see if I can
capture 401 Unauthorized responses there and then authenticate the client.

I'll get back with more info then.

> -- manu
> -- 
> Manu Sporny (skype: msporny, twitter: manusporny, G+: +Manu Sporny)
> Founder/CEO - Digital Bazaar, Inc.
> blog: Web Payments: The Architect, the Sage, and the Moral Voice
> https://manu.sporny.org/2015/payments-collaboration/
Received on Wednesday, 14 October 2015 10:48:14 UTC