- From: Willy Tarreau <w@1wt.eu>
- Date: Thu, 30 Apr 2015 00:25:33 +0200
- To: "henry.story@bblfish.net" <henry.story@bblfish.net>
- Cc: Michael Sweet <msweet@apple.com>, Eric Covener <covener@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Thu, Apr 30, 2015 at 12:18:52AM +0200, henry.story@bblfish.net wrote:
> 
> > On 29 Apr 2015, at 19:08, Willy Tarreau <w@1wt.eu> wrote:
> > 
> >> 
> >> Again, defining something that is undefined does not break a standard.
> >> The latest spec says that GET can have a Content-Length header. Presumably
> >> the people allowing this had their reasons. Perhaps this is it.
> > 
> > The first reason is to simplify implementations: when you have no reason
> > to special-case GET when you already support POST, PUT and any other
> > method carrying a body, you ensure that it works just like the other ones.
> > Conversely, when you don't expect to need a body, you simply don't
> > implement this at all, and whatever follows the headers is processed in
> > whatever way is easiest for your implementation. I used to see a NAS
> > segfault when receiving a body after a GET request, or even pipelined GET
> > requests; I've seen products block the body (and wrote one which used to),
> > and others will silently drop it. In fact you're very lucky when you get
> > an error, because you can react quickly. All the other cases are painful
> > because users say "it doesn't respond, even after one minute" and nobody
> > knows where to look.
> 
> It would not matter if servers blocked or dropped the body, since the client
> would still get the full response; the proposal is that if the server
> understood the body it would just send partial content. This is no different
> from a client asking for Partial Content and receiving the full content in
> one go instead.

I'm not speaking about servers blocking the body, but about intermediaries,
leading to servers not receiving the body they're waiting for before
responding.

> Also, the fact that you had to fix software that failed when receiving
> bodies in GET means that you were not the only one to do that, so most
> software can actually withstand these now, or would have gone out of
> business over this.
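Willy's "simplify implementations" point — that HTTP/1.1 message framing is identical for every method, so supporting a body on GET is free once POST/PUT are handled — can be sketched with a small illustrative helper (the `build_request` function below is hypothetical, not taken from any implementation discussed in the thread):

```python
def build_request(method: str, path: str, host: str, body: bytes = b"") -> bytes:
    """Build a minimal HTTP/1.1 request.

    The body framing (Content-Length followed by the payload) is the
    same regardless of method, which is why an implementation that
    already frames POST/PUT bodies can reuse the exact same code path
    for a GET carrying a body.
    """
    lines = [
        f"{method} {path} HTTP/1.1",
        f"Host: {host}",
    ]
    if body:
        lines.append(f"Content-Length: {len(body)}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii") + body


# A GET with a body differs from a POST only in the request line:
get_req = build_request("GET", "/search", "example.com", b"q=test")
post_req = build_request("POST", "/search", "example.com", b"q=test")
print(get_req.decode("ascii"))
```

The flip side, which the rest of the thread is about, is that nothing forces an intermediary that never expected a GET body to run that shared code path: it may forward only the headers and drop, block, or mis-parse whatever follows.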
The problem is that it's easy to fix software in two places:
  - client side: users complain, they're directed to an update, and that's done
  - server side: developers want their application to run there and request a
    server update to be compatible with the new feature.

And in the middle (ie: from bogus browser plugins and MITM anti-viruses to
transparent proxies, CDNs, server-side SSL offloaders, caches and load
balancers), it's a completely different story: these components are shared
between many users/customers, and they work very well for everyone except
your extension, so there's no compelling reason for their owners to request
a fix and risk an upgrade.

Willy
Received on Wednesday, 29 April 2015 22:26:02 UTC