- From: Willy Tarreau <w@1wt.eu>
- Date: Tue, 28 Feb 2017 14:32:03 +0100
- To: Patrick McManus <mcmanus@ducksong.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>, Alex Rousskov <rousskov@measurement-factory.com>
Hi Patrick!

On Tue, Feb 28, 2017 at 04:55:35AM -0800, Patrick McManus wrote:
> The hard part is whether it addresses the fundamental problem or not - and
> sometimes you can't assess that right away (although that's a reason we try
> and gauge implementor interest before officially taking on new work). In
> this case, if the problem is one of clarity and an unconstrained vocabulary
> then maybe we're onto something. If the problem is more that a User Agent
> thinks it should be emphasizing two party instead of three party
> communication (as I suggest upthread) then this working group is unlikely
> to be the forum where that fundamental stalemate is broken until something
> about the market conditions shift. I think you could make arguments for the
> market having shifted simultaneously in contradicting directions already
> and that might have ramifications for HTTP interop.

That's a very interesting way to describe the problem, in my opinion. In fact, if we summarize:

1. User agents focus on more control of end-to-end confidentiality and integrity. There is no way we will go backwards on this, and very few people will argue against the benefits.

2. Some places need to enforce some control over how their internet access is used. Their motivations are outside the scope of this WG: some will do it for what we would call undesired state surveillance, others will do it to limit malware propagation or protect against information leaks, others for legal or moral reasons (protecting children against unsuitable content), and others as a way to limit the waste of the precious shared resource that their bandwidth is.

3. Likely everyone agrees that user-friendliness is very important: reporting connection failures, the reasons for access being blocked, or even the safety and privacy risks of continuing to browse a given site (due to the proxy's inspection, or to the connection being unsafe between the proxy and the origin).

4. By combining these three contradicting goals, we have created a new standard solution: MITM decryption. It has become so common that when I speak to people selling products to enterprises, it seems to be the de-facto standard way to deploy a proxy for them! This totally ruins the first goal above.

5. And so now point #1 is out of the picture, point #2 is under control again, and point #3 is best covered by the MITM solution.

So let's look at the picture the other way around now. OK, some places are only willing to offer free internet access in exchange for a little bit of control. If we give the user the ability to control what is inspected and when, and to know whether he is safe or not, we can have both #1 and #3 back without the need for the ugly #4, which only satisfies the less defensible #2 but which has become the standard.

That's why I think a technical solution like trusted proxies can be good. Let the user know whether he can have a TLS tunnel to the origin or has to retry in clear using "GET https://" (sketched below). Let's have the same warnings as when I'm asked whether or not I want to share my location with a site. If we end up with admins thinking "all this should pass without any analysis, otherwise my users will come talk to me with a baseball bat", then we can re-establish a reasonable balance where everyone finds their own benefit.
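To make the two modes concrete, here is a minimal sketch (Python, with hypothetical proxy and origin names) of what a user agent would do in each case: an opaque CONNECT tunnel the proxy cannot inspect, versus an absolute-form request sent in clear that the proxy can analyse and filter:

    import socket
    import ssl

    PROXY = ("proxy.example.net", 3128)    # hypothetical proxy address
    ORIGIN = "origin.example.com"          # hypothetical origin host

    def via_connect_tunnel():
        # Mode 1: opaque tunnel -- the proxy sees only the CONNECT line,
        # then ciphertext it cannot analyse.
        s = socket.create_connection(PROXY)
        s.sendall(f"CONNECT {ORIGIN}:443 HTTP/1.1\r\n"
                  f"Host: {ORIGIN}:443\r\n\r\n".encode())
        status_line = s.recv(4096).split(b"\r\n", 1)[0]
        if b" 200 " not in status_line:
            s.close()
            return None                    # tunnel refused; the UA may offer mode 2
        tls = ssl.create_default_context().wrap_socket(s, server_hostname=ORIGIN)
        tls.sendall(f"GET / HTTP/1.1\r\nHost: {ORIGIN}\r\n"
                    f"Connection: close\r\n\r\n".encode())
        return tls.recv(4096)

    def via_plaintext_get():
        # Mode 2: absolute-form "GET https://" sent in clear -- the proxy sees
        # the full request and can filter it, and the user agent can warn the
        # user before choosing this mode.
        s = socket.create_connection(PROXY)
        s.sendall(f"GET https://{ORIGIN}/ HTTP/1.1\r\nHost: {ORIGIN}\r\n"
                  f"Connection: close\r\n\r\n".encode())
        return s.recv(4096)

In such a design the user agent would only fall back to the plaintext form after the explicit consent prompt described above, the same way it asks before sharing a location.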
I think it's sad that in 2017, browsing from an enterprise on a site using RSA2048 and AES128 is less safe than when I used to connect to my bank in 1998 using RSA512 and DES40! At least back then I didn't have to trust my admin.

Just my 2 cents,
Willy
Received on Tuesday, 28 February 2017 13:32:58 UTC