- From: Alessandro Ghedini <alessandro@ghedini.me>
- Date: Tue, 12 Mar 2019 11:44:05 +0000
- To: Erik Nygren <erik+ietf@nygren.org>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>, Rich Salz <rsalz@akamai.com>, Brian Sniffen <bsniffen@akamai.com>, "Bishop, Mike" <mbishop@akamai.com>, Erik Nygren - Work <nygren@akamai.com>
Hello,

On Mon, Mar 11, 2019 at 10:52:55PM -0400, Erik Nygren wrote:
> This draft on "Best practices for TLS Downgrade" is intended as a starting
> point for discussion on a topic that many people would like to ignore but
> which introduces risk into the ecosystem. We'd like to bring some
> co-authors onboard (especially from other CDNs and browsers/OSes) and
> incorporate lessons learned elsewhere as well. While "don't downgrade!" is
> almost always the "correct" solution, it isn't always viable. Getting
> alignment on best practices may at least help provide better visibility
> into the associated risks, such as by exposing to clients when an insecure
> downgrade to cleartext is happening and by stripping request data most
> likely leak private information.
>
> Feedback and suggestions for additions are most welcome, and we're also
> interested in discussing more in Prague.

Some of the recommendations seem very hard, if not impossible, to implement:

1. Don't downgrade X / Only downgrade Y: this just ends up breaking customers' websites, and if CDNs were allowed to break customers' websites and wanted to reduce the use of plain HTTP to origins, they could just forbid HTTP altogether. It also makes the resulting problems hard to debug: a customer looking at, say, a browser's developer tools would see a bunch of resources failing to load, or an app not working properly, because the CDN decided to downgrade one thing but not another. Also, how does the "don't downgrade" part work in practice? Does the CDN return a specific status code? Which one?

2. Strip sensitive headers: lots of websites (particularly APIs) implement custom authentication protocols, so CDNs can't just guess what is sensitive and what is not. Not to mention that stripping potentially breaks customers' websites, mobile apps, or whatever else depends on those headers, and problems caused by that would be pretty hard to debug as well. The draft says so itself: "Mechanisms that rely on lists of what is allowed or what is banned rely on an implausibly detailed and up-to-date models of Web use", so it seems strange that this is a recommendation (a sketch of why this amounts to guessing is in the postscript below).

3. "Protocol-To-Origin: cleartext" header: this one makes sense, I think. I imagine browsers are unlikely to add a special error for it, so it would end up being treated like a plaintext HTTP request, which might encourage people to try to fix the downgrade. Are there other clients that could use it (a sketch of how one might is also in the postscript)? But to be able to do this, the header would need to be standardized. Is a BCP enough for that? Until there's an actual standard header, it doesn't seem like it would be possible to implement.

Cheers
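P.S. To make a couple of the points above concrete, here are two rough sketches in Go. Neither comes from the draft: the function names and the header list are my own assumptions, so treat these as illustrations rather than proposed implementations.

The first sketch illustrates point 2, i.e. why header stripping amounts to guessing: a deny list can only ever cover the credential headers its author happened to think of, and any custom authentication header a customer invents slips straight through.

    package main

    import (
        "fmt"
        "net/http"
    )

    // stripSensitive is a hypothetical CDN-side filter that drops headers
    // commonly carrying credentials before a request is forwarded over
    // cleartext. The deny list is a guess, which is exactly the problem.
    func stripSensitive(h http.Header) {
        for _, name := range []string{"Authorization", "Cookie", "Proxy-Authorization"} {
            h.Del(name)
        }
    }

    func main() {
        h := http.Header{}
        h.Set("Authorization", "Bearer secret")
        h.Set("X-Api-Key", "also-secret") // a customer's custom auth header
        stripSensitive(h)
        fmt.Println(h) // Authorization is gone; X-Api-Key survives and leaks
    }

The second sketch illustrates point 3: how a non-browser client might consume the proposed Protocol-To-Origin header, assuming the "cleartext" value discussed above (the downgraded helper is made up).

    package main

    import (
        "fmt"
        "net/http"
    )

    // downgraded reports whether a response signals that the CDN spoke
    // cleartext HTTP to the origin. A missing header tells us nothing;
    // only an explicit "cleartext" value signals an insecure hop.
    func downgraded(resp *http.Response) bool {
        return resp.Header.Get("Protocol-To-Origin") == "cleartext"
    }

    func main() {
        resp, err := http.Get("https://example.com/")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        if downgraded(resp) {
            fmt.Println("warning: CDN downgraded to cleartext toward the origin")
        } else {
            fmt.Println("no downgrade signalled")
        }
    }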
Received on Tuesday, 12 March 2019 11:44:33 UTC