
Re: Signed and indexed packaging proposal.

From: Martin Thomson <martin.thomson@gmail.com>
Date: Mon, 21 Nov 2016 10:25:55 +1100
Message-ID: <CABkgnnXYgUFOS1zkGKZySftnzVX3ccfaXm7qEum6h6PLU-9hhA@mail.gmail.com>
To: Dmitry Titov <dimich@google.com>
Cc: Martin Thomson <mt@mozilla.com>, "Michael[tm] Smith" <mike@w3.org>, Mike West <mkwst@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Brad Hill <hillbrad@gmail.com>, yan zhu <yan@mit.edu>, Alex Russell <slightlyoff@google.com>
On 19 November 2016 at 15:10, Dmitry Titov <dimich@google.com> wrote:
> On Fri, Nov 18, 2016 at 2:22 AM Martin Thomson <mt@mozilla.com> wrote:
>>
>> On Fri, Nov 18, 2016 at 1:54 PM, Dmitry Titov <dimich@google.com> wrote:
>> > Sorry for the broken links; GitHub decided to block the repository. I pinged
>> > their customer support and it looks to be working now:
>> > https://github.com/dimich-g/webpackage/blob/master/README.md
>>
>> It seems like this is going to run afoul of a range of issues:
>>
>> 1. certificate keys are generally only used for signing TLS handshake
>> transcripts; this new usage would create a new usage model for those
>> keys, which would need to avoid cross-domain signature reuse attacks
>
>
> Could you please share a pointer or explain a bit more what kind of attacks
> are those? Or perhaps you are making a more generic point about expanding
> the use of certificate keys and dangers associated with it?

Simple explanation: if you can control the signature input, then you
can create a signature that might be used in another context.  Imagine
that you were able to ask a server to sign a package that started with
something that looked like a TLS handshake.
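The standard defence here is domain separation of the signing input. The sketch below is loosely modelled on the TLS 1.3 CertificateVerify construction (64 bytes of 0x20, a context string, a zero byte); the package label is made up for illustration, not taken from the proposal.

```python
# Context labels keep signing inputs for the two uses disjoint.
# The TLS label matches the TLS 1.3 server CertificateVerify construction;
# the package label is a hypothetical example.
TLS_CONTEXT = b"\x20" * 64 + b"TLS 1.3, server CertificateVerify" + b"\x00"
PACKAGE_CONTEXT = b"\x20" * 64 + b"web package signature" + b"\x00"  # made-up label

def signature_input(context: bytes, payload: bytes) -> bytes:
    """Prefix the payload with a context label so bytes signed for one
    protocol can never parse as a valid signing input for the other."""
    return context + payload

# Even identical attacker-chosen payloads yield disjoint signing inputs.
attacker_bytes = b"bytes shaped like a TLS handshake transcript"
tls_input = signature_input(TLS_CONTEXT, attacker_bytes)
pkg_input = signature_input(PACKAGE_CONTEXT, attacker_bytes)
```

Without such a label, a signature obtained over attacker-chosen package bytes could double as a signature over something TLS-shaped.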

>> 2. certificate validation in this context does not benefit from the
>> multitude of features provided by TLS, including things like OCSP
>> stapling (or online revocation checking, but no one does that).
>
>
> Direct OCSP can still be used to verify that a provided certificate is still
> valid. Once received, the OCSP response may be cached by the browser. Also,
> this only needs to be done once for all resources included in the package. As
> far as performance is concerned, there are probably opportunities to improve.

What about the offline entity that receives this package?  How does it use OCSP?

I am assuming here that the value in this over just long cache
lifetimes is that you can move the package from the host that acquired
it to other hosts.
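One conceivable answer, sketched below under the assumption that the packager staples an OCSP response into the package: an offline recipient cannot contact the responder, but it can at least check that the stapled response is still inside its validity window. The function and field names are illustrative, not from the proposal.

```python
from datetime import datetime, timedelta, timezone

def stapled_ocsp_is_usable(this_update: datetime, next_update: datetime,
                           now: datetime) -> bool:
    """A stapled OCSP response is usable while `now` falls within its
    [thisUpdate, nextUpdate) validity window; outside it, the offline
    recipient has no revocation information at all."""
    return this_update <= now < next_update

now = datetime(2016, 11, 21, tzinfo=timezone.utc)
# Response produced yesterday, valid for a few more days: usable offline.
fresh = stapled_ocsp_is_usable(now - timedelta(days=1), now + timedelta(days=3), now)
# Response whose window closed days ago: the offline host is stuck.
stale = stapled_ocsp_is_usable(now - timedelta(days=10), now - timedelta(days=3), now)
```

This only bounds the staleness of revocation information; it does not solve the problem for a host that stays offline past nextUpdate.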

>> 3. similar to above, how are certificates acquired?
>
> Perhaps a bit naively, we'd think of the same (or similar) certificates that
> servers would normally use for TLS. Since a certificate is pretty much a key
> signed by a CA, there could be many ways for the proper owner to obtain one.
> Only the publisher (the original owner of a certificate for the given domain)
> could produce a valid signature on a package, and they presumably already
> secure their TLS.

As above.

>> 4. what if certificates expire?
>
> There are several FAQs at the end of the Explainer trying to address that. But
> since it's a browser that opens a package, it would rely on regular
> mechanisms like OCSP to validate the package. That may not be possible when
> the web is unreachable - but presumably the risks of cross-domain attacks are
> also lower when there is no connectivity. Once connectivity returns, the
> package can be re-validated. While it is a bit sketchy at the moment, it
> doesn't look like there are any obvious huge issues with this... Unless we're
> missing something, of course :)

As above.

>> 5. the content is obviously a snapshot in time, which implies that the
>> entity constructing the package would need to be responsible for
>> ensuring consistency; however, this might interact poorly with caching
>> of those resources if there are newer versions in the browser cache.
>> Given that these are likely not fresh in the HTTP caching sense, the
>> browser will need special overrides for freshness checks on certain
>> resources.
>
>
> Awesome point! There is [very early] thinking about implementing it as a
> 'cache shadow', which would imply those freshness checks. It might make
> sense to also expose this 'level of cache' to things like cache API.
>
>>
>>
>> 6. if these resources overwrite entries, then the cache could be
>> polluted by old content, which might be exploited by an attacker
>
>
> In general, old entries should not overwrite newer ones. However, even if
> they don't, there can be a consistency mismatch between those fresher
> resources from the real cache vs the older resources in the package. I
> suspect the package, if indeed implemented as a 'shadow cache layer', may
> need to rely on Cache-Control headers on resources included in the
> package...
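The 'no downgrade' rule being discussed can be sketched as follows, assuming each stored response records when it was fetched (the `fetched_at` field is made up for illustration): the fresher copy always wins, so an old package can never overwrite newer content already in the browser cache.

```python
def pick_response(cache_entry, package_entry):
    """Prefer whichever copy has the later fetch time; a missing entry
    on either side means the other wins by default."""
    if cache_entry is None:
        return package_entry
    if package_entry is None:
        return cache_entry
    return max(cache_entry, package_entry, key=lambda e: e["fetched_at"])

# The browser cache holds a fresher copy than the package snapshot.
cached = {"src": "browser cache", "fetched_at": 200}
packaged = {"src": "package", "fetched_at": 100}
winner = pick_response(cached, packaged)
```

The consistency mismatch remains, though: serving the newer cached copy of one resource next to older packaged copies of its siblings can still break a snapshot that was internally consistent.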

Hmm, that suggests another problem.  HTTP responses don't exist in a
vacuum.  Where are the requests that triggered these responses?
Content negotiation is probably the big unanswered question.

That suggests a further problem, which is that responses are commonly
(even overwhelmingly) customized to a requester.  This calls for a
reminder that content would have to be purely static, as opposed to
customized.  Otherwise, there might be privacy-sensitive information
contained in the package, or references to a user's private state,
etc...
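A small sketch of why the missing requests matter for content negotiation: with the `Vary` header, one URL maps to different responses depending on request headers, so a package would have to record the request that produced each response before a browser could decide whether that response is reusable. All names here are illustrative, not from the proposal.

```python
def response_matches(stored_request_headers, vary, incoming_headers):
    """A stored response is reusable only if every header named in Vary
    has the same value on the incoming request as on the stored one."""
    return all(stored_request_headers.get(h) == incoming_headers.get(h)
               for h in vary)

# The packaged response was negotiated for an English-language request.
stored = {"accept-language": "en"}
hit = response_matches(stored, ["accept-language"], {"accept-language": "en"})
miss = response_matches(stored, ["accept-language"], {"accept-language": "fr"})
```

Without the stored request, a browser has no way to perform this check at all.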

From the FAQ:
> What happens if cid: URLs collide?

Make a new one?  You don't need the complexity of UUIDs here.  As
long as the cid is unique within a package, it is OK.
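"Make a new one" in code form: since uniqueness only has to hold within a single package, a colliding identifier can simply be given a fresh suffix. The cid format shown is illustrative, not the proposal's.

```python
def unique_cid(base: str, used: set) -> str:
    """Return `base` if it is free within this package; otherwise derive
    a new cid by appending an increasing suffix until one is free."""
    cid, n = base, 1
    while cid in used:
        cid = f"{base}-{n}"
        n += 1
    used.add(cid)
    return cid

used = set()
first = unique_cid("cid:res1@pkg", used)
second = unique_cid("cid:res1@pkg", used)  # collides, gets a fresh suffix
```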
Received on Sunday, 20 November 2016 23:26:29 UTC
