Re: HSTS Priming, continued.

From: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
Date: Wed, 11 Nov 2015 19:46:04 -0500
To: Martin Thomson <martin.thomson@gmail.com>, Eric Mill <eric@konklone.com>
Cc: Brian Smith <brian@briansmith.org>, Crispin Cowan <crispin@microsoft.com>, Brad Hill <hillbrad@gmail.com>, Mike West <mkwst@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Richard Barnes <rbarnes@mozilla.com>, Jeff Hodges <jeff.hodges@paypal.com>, Anne van Kesteren <annevk@annevk.nl>, Adam Langley <agl@google.com>
Message-ID: <87y4e4niyb.fsf@alice.fifthhorseman.net>

I agree with Martin that for content that would otherwise be blocked, we
don't care about the latency.

On Wed 2015-11-11 19:34:00 -0500, Martin Thomson wrote:
> If I have this right, your main concern is with the potential delay in
> loading passive mixed content.  If we have to wait for a timeout
> before falling back, that's pretty unpleasant.  However, I think that
> at some point in the future, we may want to take that hit.  Given how
> much mixed content there is at the moment, that might not be *right
> now*.

Alternatively, browsers adopting this strategy where UA policy allows a
fail-through to cleartext (because the content is "passive") could just
use a shorter timeout than normal before triggering the fail-through,
right?

One concern here is that this might touch too many layers to be
feasible, since timeouts might apply at any or all of TCP session
establishment, the TLS handshake, and the HTTP response.

But the browser could presumably just set a timer and say "if this
request isn't back and done in K milliseconds, we're going to abort it
and kick off an http request".
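A minimal sketch of that single browser-level deadline, using Python's
asyncio (the fetch coroutines, the exception handling, and the deadline
value are hypothetical stand-ins for a real network stack, not anything
specified in this thread):

```python
import asyncio

async def fetch_with_deadline(fetch_https, fetch_http, deadline_ms):
    """Try the HTTPS fetch under one overall deadline covering TCP
    setup, the TLS handshake, and the HTTP response together; if it
    hasn't completed in time, abort it and fall back to cleartext.

    fetch_https / fetch_http are hypothetical coroutines standing in
    for the UA's network stack; deadline_ms is the single timer the
    paragraph above describes.
    """
    try:
        return await asyncio.wait_for(fetch_https(),
                                      timeout=deadline_ms / 1000)
    except (asyncio.TimeoutError, OSError):
        # HTTPS attempt timed out (or failed outright): kick off the
        # http request instead, where UA policy permits it for
        # passive content.
        return await fetch_http()
```

The point of the single timer is that the browser never has to know
which layer is stalled: `wait_for` cancels the whole HTTPS attempt
wherever it happens to be stuck.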

    --dkg
Received on Thursday, 12 November 2015 00:46:57 UTC