- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Wed, 11 Nov 2015 12:09:43 -0800
- To: Brian Smith <brian@briansmith.org>
- Cc: Crispin Cowan <crispin@microsoft.com>, Brad Hill <hillbrad@gmail.com>, Mike West <mkwst@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Richard Barnes <rbarnes@mozilla.com>, Jeff Hodges <jeff.hodges@paypal.com>, Anne van Kesteren <annevk@annevk.nl>, Adam Langley <agl@google.com>
A concern that has come up in the past is that this would generate extra - and perhaps unwanted - load on servers. I don't give that concern much credence, but it has been the reason we couldn't probe in the past.

On 11 November 2015 at 11:38, Brian Smith <brian@briansmith.org> wrote:
> Crispin Cowan <crispin@microsoft.com> wrote:
>>
>> Dumb/newbie question: wouldn’t HTTPS upgrades be easy if only client
>> browsers tried HTTPS first for every resource? Then fail back to HTTP if
>> policy allows, or block if policy disallows mixed content.
>
>
> I agree that this sounds better to me. In particular, before doing a
> mixed-content subresource load, first try the subresource load over
> https://. If the response has the HSTS header then you are golden.
> Otherwise, if the response is a 2xx without HSTS (but with the expected
> content-type--no sniffing), then it's probably better to just use the HTTPS
> response anyway; it might be the wrong response, but it's probably not going
> to be much worse than the lack of a response that mixed content blocking
> causes. Otherwise, if it is <img>, <video>, <audio>, continue on with the
> mixed content load if you feel like it.
>
> K.I.S.S.
>
> Cheers,
> Brian
> --
> https://briansmith.org/
>
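For concreteness, the upgrade-first decision procedure Brian sketches above could be written roughly as follows. This is an illustrative sketch only - the function, the `Decision` enum, and the parameter names are invented for this example, not taken from any spec or browser implementation:

```python
from enum import Enum

class Decision(Enum):
    USE_HTTPS = "use-https"  # use the probed https:// response
    USE_HTTP = "use-http"    # continue with the original mixed-content load
    BLOCK = "block"          # mixed content blocking applies

# Elements Brian suggests may optionally fall back to the http:// load.
FALLBACK_OK_TAGS = {"img", "video", "audio"}

def decide(https_status, has_hsts, content_type_matches, tag):
    """Decide what to do with a mixed-content subresource after an
    https:// probe.

    https_status        -- status code of the https:// probe, or None if it failed
    content_type_matches -- Content-Type is the expected one (no sniffing)
    tag                 -- element requesting the subresource, e.g. "img"
    """
    if https_status is not None and has_hsts:
        # HSTS header present: "you are golden".
        return Decision.USE_HTTPS
    if https_status is not None and 200 <= https_status < 300 and content_type_matches:
        # 2xx with the expected content-type: use the HTTPS response anyway.
        return Decision.USE_HTTPS
    if tag in FALLBACK_OK_TAGS:
        # <img>, <video>, <audio>: continue with the mixed content load.
        return Decision.USE_HTTP
    return Decision.BLOCK
```

Under this sketch, a `<script>` load whose probe fails would be blocked, while an `<img>` in the same situation would fall back to the http:// load.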
Received on Wednesday, 11 November 2015 20:10:11 UTC