Re: Subresource Integrity strawman.

From: Joel Weinberger <jww@chromium.org>
Date: Wed, 8 Jan 2014 12:07:01 -0800
Message-ID: <CAHQV2Kne+SX9mDU0Mwts7a=O4H=6riom9_=uvwFFiFyY6_YTEg@mail.gmail.com>
To: Ilya Grigorik <igrigorik@google.com>
Cc: Mike West <mkwst@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Devdatta Akhawe <dev.akhawe@gmail.com>, Frederik Braun <fbraun@mozilla.com>, Brad Hill <bhill@paypal.com>, Anne van Kesteren <annevk@annevk.nl>, Mark Nottingham <mnot@mnot.net>, Tab Atkins <tabatkins@google.com>
On Wed, Jan 8, 2014 at 10:52 AM, Ilya Grigorik <igrigorik@google.com> wrote:

> Hey all. First off, I wouldn't qualify myself as a "security" person, so
> pardon my ignorance... A few high-level questions:
>
> Authors must trust that the resource their content delivery network
>> delivers is in fact the same resource they expect. If an attacker can trick
>> a user into downloading content from a different server (via DNS
> poisoning, or other such means), the author has no recourse. Likewise, an
>> attacker who can replace the file on the CDN server has the ability to
>> inject arbitrary content.
>
>
> Isn't this redundant with HTTPS + HSTS? MITM aside, the transport layer
> guarantees data integrity and (pinned) authentication. In light of that,
> does this spec provide anything extra? It seems like it would be much
> simpler to simply recommend using HTTPS + HSTS.
>
It is certainly the case that, from a security perspective, using HTTPS +
HSTS is better. However, this proposal is meant to address the reality that
a lot of content is not being served over HTTPS, and doesn't appear likely
to be in the near future. For example, many CDNs, for a variety of reasons,
serve content only over HTTP. Given this, we still want a way to provide
for the integrity of the content that reaches the user agent.

Additionally, even if your own site is served over HTTPS, a resource you rely
on may come from a server you don't control and can't move to HTTPS, but you
still want to guarantee its integrity. This proposal allows you to do that.
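For concreteness, the strawman's integrity value is an RFC 6920 "named
information" (ni) URI. A rough sketch (my own illustration, not part of the
proposal) of how an author might compute one in Python:

```python
import base64
import hashlib

def ni_sha256(data: bytes) -> str:
    """Build an RFC 6920 "ni" URI for the given bytes: sha-256,
    base64url-encoded, with the trailing '=' padding stripped."""
    digest = hashlib.sha256(data).digest()
    b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return "ni:///sha-256;" + b64

# Example: digest of a (hypothetical) script body.
print(ni_sha256(b"alert('hello');"))
```

The resulting string is what would go in the script tag's integrity
attribute; the UA would recompute the digest over the fetched bytes.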

This does bring up the legitimate fear of *discouraging* developers from
moving to HTTPS. "Why should I use HTTPS when I can just specify
integrities?" I think this is a real concern, and personally, I want to
make sure that we're providing other incentives for developers to move to
HTTPS. But at the same time, we really owe it to users to make the Web as
safe as possible right now, too.

>
> Provide authors with a mechanism of reducing the ambient authority of a
>> host (e.g. a content delivery network, or a social network that provides
>> widgets) from whom they wish to include JavaScript. Authors should be able
>> to grant authority to load a script, not any script, and compromise of the
>> third-party service should not automatically mean compromise of every site
>> which includes its scripts.
>
>
> Ok, to answer my own question: the extra bit of security is that you're
> effectively freezing a hash, and if your pinned host is compromised and the
> file is replaced, then the UA will bail on execution - right? At which point,
> if you
> absolutely must have control over the target script, why not just freeze it
> on a local server? Some sites do exactly that when deploying third
> party widgets / analytics / etc... some for security reasons, others for
> performance.
>
> Further, it seems like in practice the proposed example wouldn't actually
> fly:
> <script src="https://analytics-r-us.com/include.js"
>         integrity="ni:///sha-256;SDfwewFAE...wefjijfE"></script>
>
> The whole point of providing a generic "ga.js" or "include.js" is that it
> can be revved by a third party - e.g. updates and security fix deploys...
> If I add an integrity tag on these resources, I effectively guarantee that
> my site is broken next time analytics-r-us.com revs their JavaScript.
> Once again, it seems like if you must have this control, you're better off
> freezing a local copy on your server, auditing it, and being responsible
> for updating it manually.
>
I view this as a feature. Developers should be aware of the content that
they are loading, even if it's from a third-party. But if this problem does
arise, the temporary solution is in the "fallback" portion of the proposal,
which is still very much up in the air.
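The mismatch behavior being debated above - the UA refusing to execute a
resource whose fetched bytes don't hash to the pinned value - could be
sketched roughly as follows (a hedged approximation of what a UA might do,
not spec text; the function name and parsing are my own):

```python
import base64
import hashlib

def integrity_matches(body: bytes, integrity: str) -> bool:
    """Sketch of the check a UA might perform: parse the strawman's
    ni-style integrity value, hash the fetched body with sha-256, and
    compare base64url digests (padding stripped)."""
    prefix = "ni:///sha-256;"
    if not integrity.startswith(prefix):
        return False  # unknown algorithm; a real UA would need a policy here
    expected = integrity[len(prefix):]
    digest = hashlib.sha256(body).digest()
    actual = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return actual == expected
```

On a False result the script would simply not run; what happens next (the
"fallback" behavior) is the part still up in the air.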

>
> Long story short, it seems like this is an unnecessary layer if we assume
> HTTPS is in place? Or, am I missing something obvious here? If we assume
> non-HTTPS world, then yeah we're talking about adding an integrity layer,
> but it seems rather clunky/complicated. I'd rather just push people to
> adopt HTTPS?
>
> ig
>
Received on Wednesday, 8 January 2014 20:07:29 UTC