
Re: [SRI] unsupported hashes and invalid metadata

From: Devdatta Akhawe <dev.akhawe@gmail.com>
Date: Sat, 27 Dec 2014 20:49:22 -0800
Message-ID: <CAPfop_0fJwHwyTKzWhUUO4itcW0JSfFau9PPLJVuzrNqFUs-7Q@mail.gmail.com>
To: Mike West <mkwst@google.com>
Cc: Francois Marier <francois@mozilla.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>
Imagine you are a web site owner and you deploy SRI. Two years from now, all
versions of SHA currently supported are broken. Browsers have switched
over to supporting SHAwesome or whatever. But there is always that
random user who doesn't update. What do you want the website to do?

1. Send SHA-2, SHA-3 values for all integrity attributes everywhere.

This is not that great: if SHA-2 and SHA-3 have been broken, they
don't provide the security we want them to provide, and we have added
a chunk of processing and network overhead.
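If a page does ship several hash values, a client would presumably keep only the algorithms it still trusts and use the strongest one. Here is a hypothetical sketch of that selection step; the whitespace-separated `alg-value` token syntax, the function name, and the priority list are all illustrative assumptions, not anything the current draft mandates:

```javascript
// Illustrative only: algorithm names and priority order are assumptions.
const SUPPORTED = ["sha512", "sha384", "sha256"]; // strongest first

function strongestSupported(integrity) {
  const tokens = integrity.trim().split(/\s+/);
  const parsed = tokens
    .map((t) => {
      const dash = t.indexOf("-");
      return dash === -1
        ? null
        : { alg: t.slice(0, dash), value: t.slice(dash + 1) };
    })
    // Drop malformed tokens and algorithms we no longer trust.
    .filter((t) => t && SUPPORTED.includes(t.alg));
  // Prefer the algorithm highest on our priority list; null if none match.
  parsed.sort((a, b) => SUPPORTED.indexOf(a.alg) - SUPPORTED.indexOf(b.alg));
  return parsed.length ? parsed[0] : null;
}
```

So `strongestSupported("sha256-abc sha512-def")` would pick the sha512 entry, while an attribute containing only unrecognized algorithms yields null, which is exactly the case where fail-open vs. fail-closed matters.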

2. Use UA detection to send the integrity attributes only to browsers
known to support the latest hash.

But then the fork of Firefox with extra features doesn't get the
attributes; the new BlinkBrowser doesn't see these attributes either
and now has to fake its UA. IMO, any place where we are designing a
feature expecting people to use UA detection long term should raise
red flags.
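To make the brittleness concrete, here is a hypothetical server-side sketch of option 2; the regexes stand in for a real support matrix and are purely illustrative:

```javascript
// Illustrative sketch: emit the integrity attribute only for
// User-Agent strings we believe support the latest hash.
// These patterns are assumptions, not a real support matrix.
const KNOWN_GOOD = [/Chrome\/4[5-9]/, /Firefox\/4[0-9]/];

function integrityAttr(userAgent, hash) {
  const supported = KNOWN_GOOD.some((re) => re.test(userAgent));
  // A Firefox fork or a brand-new engine matches nothing here and
  // silently loses integrity protection -- the failure mode above.
  return supported ? ` integrity="${hash}"` : "";
}
```

A UA like "BlinkBrowser/1.0" gets no attribute at all, so the only way for such a browser to receive protection is to fake a recognized UA string.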

This is actually non-trivial for implementors on the web application
side. While the top 4-5 browsers cover the vast majority of users,
there is a long tail that is not insignificant, and serving it is an
unnecessary pain. This is already painful for CSP deployment because
older versions of Firefox break on a host source with a star in it and
just fail closed, breaking the site :(

Don't get me wrong: failing closed is more obviously secure for the
current spec, but I am not sure it is a good long-term bet, unless we
want to explicitly support versioning.

I am not sure this is convincing enough, but I definitely want to
think about this a bit, and I would be curious how other specs,
particularly security specs, handle this. While Postel's law might
suggest failing open, there is obviously some tension with security.
But to actually achieve adoption (and thus security), maybe failing
open is necessary.
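The two behaviors under discussion can be sketched side by side. This is a hypothetical model, not spec text; the recognized-algorithm list and the token syntax are assumptions:

```javascript
// Illustrative sketch of the fail-open vs. fail-closed choice when no
// algorithm in the integrity metadata is recognized by the client.
const RECOGNIZED = ["sha256", "sha384", "sha512"]; // assumed support set

function decide(integrity, policy /* "fail-open" | "fail-closed" */) {
  const algs = integrity.trim().split(/\s+/).map((t) => t.split("-")[0]);
  const anySupported = algs.some((a) => RECOGNIZED.includes(a));
  if (anySupported) return "enforce"; // verify against a supported hash
  // All metadata unsupported: fail-open loads the resource as if the
  // attribute were absent; fail-closed blocks the load entirely.
  return policy === "fail-open" ? "load" : "block";
}
```

Under fail-open, `decide("shawesome-xyz", "fail-open")` loads the script unverified; under fail-closed the same attribute blocks it, which is the behavior pull request 120 proposes.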


On 26 December 2014 at 04:58, Mike West <mkwst@google.com> wrote:
> Without thinking about it too hard, I'd vote for merging the "fail closed"
> variant (https://github.com/w3c/webappsec/pull/120).
> We don't have a good way of distinguishing between an awesome new hash
> function that we don't yet support and a bad old hash function that we
> don't want to support, short of compiling a blacklist. I'd prefer not to do that.
> -mike
> --
> Mike West <mkwst@google.com>, @mikewest
> Google Germany GmbH, Dienerstrasse 12, 80331 München, Germany,
> Registergericht und -nummer: Hamburg, HRB 86891, Sitz der Gesellschaft:
> Hamburg, Geschäftsführer: Graham Law, Christine Elizabeth Flores
> (Sorry; I'm legally required to add this exciting detail to emails. Bleh.)
> On Wed, Dec 24, 2014 at 2:45 AM, Francois Marier <francois@mozilla.com>
> wrote:
>> I've opened an issue around invalid metadata and unsupported hashes:
>>   https://github.com/w3c/webappsec/issues/119
>> as well as opened two pull requests for resolving the ambiguity:
>>   https://github.com/w3c/webappsec/pull/86
>>   https://github.com/w3c/webappsec/pull/120
>> The gist of the issue is what should we do with an integrity attribute
>> like:
>>   <script src="..." integrity="ni:///sha-1024;...">
>> Should it be ignored and the script loaded as with non-SRI enabled
>> browsers (as if the integrity attribute wasn't there)?
>> Or should it be ignored and cause the script to be blocked?
>> I can personally see arguments both ways, so I'm curious what others
>> think.
>> Francois
Received on Sunday, 28 December 2014 04:50:08 UTC
