Re: Subresource Integrity strawman.

On Wed, Jan 8, 2014 at 2:02 PM, Joel Weinberger <jww@chromium.org> wrote:
>
>> We want to encourage mixed content? That seems exactly backwards to the
>> goal of this group. Most browsers block mixed content, and ones that
>> don't... should.
>>
> Sorry, I don't mean to imply that we want to encourage mixed content. My
> next comment about discouraging developers from moving to HTTPS (which does
> not include mixed content) was meant to point out that I think that's an
> issue we need to grapple with here.
>
> In addition, we live in a world with mixed content, so the question is:
> can we make users safer when they go to a mixed content site? That's
> part of the goal of this proposal. So there's definitely a balance to
> strike: encourage developers to go to full (non-mixed content) HTTPS, but
> improve the security of current users as well.
>

Perhaps I have a naive view of the world, but I'd rather see browsers
*raise* the barrier against mixed content and get sites to fix it on their
end... After all, integrity is only part of the larger security story. As a
visitor, I don't want you leaking my visit information through mixed
content side-channels.


>> This does bring up the legitimate fear of *discouraging* developers from
>> moving to HTTPS. "Why should I use HTTPS when I can just specify
>> integrities?" I think this is a real concern, and personally, I want to
>> make sure that we're providing other incentives for developers to move to
>> HTTPS. But at the same time, we really owe it to users to make the Web as
>> safe as possible right now, too.
>>
>> There is a lot of (recent) discussion on http-wg on how to deploy HTTP/2
>> + TLS and various approaches to lower barriers to TLS adoption. FWIW,
>> instead of trying to graft integrity onto HTTP, I'd rather see focus on
>> encouraging developers to adopt HTTP/2 + TLS + HSTS. Speaking of incentives,
>> combining performance with a better security story is a clear win.
>>
> I don't believe this is a "choose one or the other" situation. I think we
> would all be much happier in an all-TLS world, so we should work on making
> that happen. We should also provide lower-barrier tools, where possible,
> that increase the security of users.
>
> This isn't just about security, though. Part of the goal, at least for
> now, is to improve caching as well. So even if we live in an all HTTPS
> world, which provides integrity, there's still no way a priori to identify
> what resource you are attempting to load. The integrity attribute is still
> useful as a hint about the content you're requesting as well.
>

The UA can be smart enough to automatically fingerprint fetched resources
and dedupe / store a single copy in its cache. Where the client-side
integrity hash can help is in determining whether the request should be
made at all... so to me, this is more of a latency optimization. And on that
note, perhaps this is better solved at the transport layer? With SPDY /
HTTP/2 the server can push the headers + fingerprints of the associated
resources, and the client can use that data to determine whether it wants to
accept the resource or already has it in cache. The benefit here is
that this is completely automated between client and server -- no need to
modify your markup, insert manual hashes, and so on.
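To make the dedupe idea concrete, here's a minimal sketch of a
content-addressed cache (the class and method names are my own invention for
illustration, not any browser's actual cache design): entries are keyed by
their SHA-256 digest, so the same bytes fetched from two URLs are stored only
once, and a server-pushed fingerprint lets the client decide whether to issue
the request at all:

```python
import hashlib

class ContentAddressedCache:
    """Toy sketch of a UA cache keyed by content digest, so identical
    resources fetched from different URLs are stored only once."""

    def __init__(self):
        self._by_digest = {}  # sha-256 hex digest -> resource bytes
        self._by_url = {}     # url -> digest (per-URL index into the store)

    @staticmethod
    def digest(body: bytes) -> str:
        return hashlib.sha256(body).hexdigest()

    def store(self, url: str, body: bytes) -> str:
        d = self.digest(body)
        self._by_digest.setdefault(d, body)  # dedupe: one copy per digest
        self._by_url[url] = d
        return d

    def needs_fetch(self, advertised_digest: str) -> bool:
        # With a server-pushed fingerprint (e.g. delivered alongside the
        # pushed headers), the client can skip the request entirely when
        # the bytes are already in cache.
        return advertised_digest not in self._by_digest

cache = ContentAddressedCache()
body = b"console.log('hi')"
d = cache.store("https://cdn-a.example/jquery.js", body)
cache.store("https://cdn-b.example/jquery.js", body)  # same bytes, no extra copy
assert not cache.needs_fetch(d)  # fingerprint matched; skip the request
```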

>> Further, it seems like in practice the proposed example wouldn't actually
>> fly:
>>
>>>>   <script src="https://analytics-r-us.com/include.js"
>>>>           integrity="ni:///sha-256;SDfwewFAE...wefjijfE"></script>
>>>>
>>>> The whole point of providing a generic "ga.js" or "include.js" is that
>>>> it can be revved by a third party - e.g. updates and security fix
>>>> deploys... If I add an integrity tag on these resources, I effectively
>>>> guarantee that my site is broken next time analytics-r-us.com revs
>>>> their JavaScript. Once again, it seems like if you must have this control,
>>>> you're better off freezing a local copy on your server, auditing it, and
>>>> being responsible for updating it manually.
>>>>
>>> I view this as a feature. Developers should be aware of the content that
>>> they are loading, even if it's from a third-party. But if this problem does
>>> arise, the temporary solution is in the "fallback" portion of the proposal,
>>> which is still very much up in the air.
>>>
>>
>> Fallback to what? An old copy? What if it's purged from cache? ... My
>> point is, this proposal doesn't work for the majority of cases (3rd party
>> widgets), specifically because the resources served by those providers *do
>> change*, and often quite frequently. If that's an issue for you, then freeze
>> it on a local server.
>>
> That's a fair point, and it makes it doubtful that you'd use integrity for
> those types of widgets. But there are plenty of instances of loading
> content from CDNs, images from a third-party site, or other content that
> the developer simply wants to ensure is exactly what they expect. I'm not
> sure about the "majority of cases" claim, though. This is probably
> something we can measure, and we probably should.
>

So, if I'm following the logic correctly, the use case is something like the
following:

As a developer I want to make sure that the loaded resource is the *exact*
version that I specified. To achieve this, I could either freeze the
resource on my own server, which guarantees that the resource is fully
under my control and cannot be changed or updated without my permission...
Or, I may want to use a third-party service to host this resource (e.g. a
CDN), but I don't (entirely) trust the third party and want to make sure
they don't swap the content on me, so to guard against that I'm going to
specify an integrity hash in the markup.

Does that sound about right?
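
If so, the client-side check itself is simple to sketch. Here's a rough
illustration (for readability it uses a plain `sha256-<base64>` digest string
of my own choosing rather than the `ni:///` URI form from the strawman, and
handles only sha-256):

```python
import base64
import hashlib

def matches_integrity(body: bytes, integrity: str) -> bool:
    """Check fetched bytes against a declared digest of the form
    'sha256-<base64 digest>'. Returns True only if the content is
    exactly what the page author pinned."""
    algo, _, expected = integrity.partition("-")
    if algo != "sha256":
        raise ValueError("sketch handles sha-256 only")
    actual = base64.b64encode(hashlib.sha256(body).digest()).decode()
    return actual == expected

body = b"alert('hello');"
tag = "sha256-" + base64.b64encode(hashlib.sha256(body).digest()).decode()
assert matches_integrity(body, tag)        # untampered resource is accepted
assert not matches_integrity(b"evil", tag)  # swapped content is rejected
```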

ig

Received on Wednesday, 8 January 2014 22:44:51 UTC