Re: [integrity]: latency tradeoffs

I tend to agree with Adam. Latency really matters, and progressive
authentication is a good way to reduce it. Merkle trees seem like the
right construct, unless another browser vendor disagrees and wants
some other algorithm.
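To sketch what I mean (this is just an illustration, assuming SHA-256
and a binary tree over fixed-size chunks; the spec would have to pin
down the actual parameters): the integrity attribute would carry only
the root hash, and each chunk can be verified as it arrives using a
short sibling-hash proof, so nothing needs to wait for the whole
download.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(chunks):
    """Return the tree as a list of levels, leaves first, root last."""
    # Distinct leaf/node prefixes prevent leaf-vs-interior ambiguity.
    level = [h(b"\x00" + c) for c in chunks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd levels
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof_for(levels, index):
    """Sibling hashes from leaf to root for the chunk at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return proof

def verify_chunk(root, chunk, index, proof):
    """Check one chunk against the root hash, without the other chunks."""
    node = h(b"\x00" + chunk)
    for sibling in proof:
        if index % 2 == 0:
            node = h(b"\x01" + node + sibling)
        else:
            node = h(b"\x01" + sibling + node)
        index //= 2
    return node == root
```

The point is that verification cost per chunk is logarithmic in the
number of chunks, and a valid chunk can be handed to the parser
immediately.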

My only concern: I am not sure whether we should make this a
requirement in the first version of the spec or defer it to the
second.

> With progressive authentication, any accepted chunk is authentic: it's
> the data that the hash in the source document intended. So it's fine
> to use it in any way - parsing, and executing Javascript for example.
>
> That does mean that an attacker can truncate the resource, but that's

This might even be a good thing. I have seen attackers append a
snippet of malicious JavaScript at the bottom of a JS file. In such an
attack, progressive verification would reject the stream at the
tampered chunk, and the legitimate prefix of the application would
continue working.
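Concretely (a toy hash-list variant rather than the full Merkle
construction, with tiny chunks purely for illustration): a progressive
verifier accepts chunks until one fails, so appended attacker bytes
are simply dropped while everything before them survives.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_manifest(chunks):
    """Per-chunk hash list; the hash of the list itself is what the
    (trusted) source document would pin via the integrity attribute."""
    hashes = [sha256(c) for c in chunks]
    return hashes, sha256(b"".join(hashes))

def accept_stream(chunks, hashes):
    """Accept chunks in order until one is tampered with or has no
    entry in the manifest, as a progressive verifier would."""
    accepted = []
    for i, c in enumerate(chunks):
        if i >= len(hashes) or sha256(c) != hashes[i]:
            break  # appended or modified data: stop, keep the valid prefix
        accepted.append(c)
    return accepted
```

So if an attacker appends a chunk to the end of the script, the
verifier stops right there and the original chunks are all still
usable.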


Thanks
Dev




On 15 January 2014 10:39, Joel Weinberger <jww@chromium.org> wrote:
>
>
>
> On Wed, Jan 15, 2014 at 7:21 AM, Adam Langley <agl@google.com> wrote:
>>
>> On Wed, Jan 15, 2014 at 4:16 AM, Mike West <mkwst@google.com> wrote:
>> > 1. Performance isn't the goal. Integrity is the goal.
>>
>> Not quite, right? If integrity were the only goal then HTTPS provides
>> that. I'm assuming that the desires motivating this are things like
>> minimising backbone traffic (i.e. ISP caching), improving response
>> times (i.e. using CDNs without fully trusting them).
>
>  At the very least, performance would be a wonderful side effect, which is
> what I view the entire caching portion about.
>
> I also think that the latency point is that if we do this wrong, the
> integrity check could provide a high latency cost for large files, so we
> should want to reduce the latency even if we don't care about improving
> performance over the status quo.
>>
>>
>> > 2. I think the performance benefits of integrity would be focused on
>> > cache.
>> > That is, the second load of a resource, regardless of its URL, could
>> > avoid
>> > hitting the network entirely if we already have a matching resource
>> > locally.
>> > For this case, we have the whole resource already, by definition.
>>
>> HTTPS resources can be cached equally well. If you're talking about
>> using the cache as a content-addressable storage then I think that's
>> too dangerous to allow. (As explored a little in "Origin confusion
>> attacks". Also, talk to abarth about this.)
>>
>> Even if we do assume CAS behaviour, I don't believe that the metrics
>> support this optimism. You should check with willchan and rvargas
>> about observed disk cache behaviours.
>>
>> > I think this is problematic in most (all?) cases, given the nature of
>> > the
>> > threat we're attempting to address. Trusting the resource to
>> > authenticate
>> > itself doesn't provide much benefit if we're not sure we can trust the
>> > resource in the first place.
>>
>> We're not trusting the resource to authenticate itself, but rather we
>> are spreading the authentication data throughout the download itself
>> in order to minimise processing latency. It's still the case that
>> everything is authenticated by the single hash in the (trusted) HTML
>> document.
>>
>> It's unclear whether the administrative overhead of doing this
>> outweighs the latency advantages at this point.
>>
>>
>> Cheers
>>
>> AGL
>>
>

Received on Wednesday, 15 January 2014 18:48:51 UTC