- From: Willy Tarreau <w@1wt.eu>
- Date: Fri, 4 Sep 2020 07:40:51 +0200
- To: Eric J Bowman <mellowmutt@zoho.com>
- Cc: Ietf Http Wg <ietf-http-wg@w3.org>
Hi,

On Thu, Sep 03, 2020 at 10:13:13PM -0700, Eric J Bowman wrote:
> Hi, I'm greenfield-coding a webserver, and wondering if I can just do away
> with back-compat with HTTP/1.0. My concern is it's still alive and kicking
> on intermediaries. Is there any empirical data on this? Opinions also
> appreciated.

It really depends on what your target is. If you want to support browsers and nothing else, it's probably fine to simply fail on it. If you expect applications built on top of your web server to be compatible with various perf-testing tools, monitoring scripts, availability tests, dirty in-house search engines, or just to pass some compliance tests, it's probably better to still support it.

In addition, I'd suggest that once you've implemented HTTP/1.1, you'll notice that implementing HTTP/1.0 requires no effort, as it's only a subset of HTTP/1.1, so you'll just have to add a few "if"s around some code blocks. In short: Connection defaults to close instead of keep-alive, neither chunked transfer-encoding nor 1xx responses are expected to be supported, and Host is optional. Thus not supporting 1.0 would make your 1.1 implementation look fairly suspicious regarding its ability to adapt to a client's or server's capabilities.

The real difficulty with 1.0 is that most of the time such requests come from very low-quality clients (mostly scripts) that do not even look at the Content-Length header. If you want to keep the effort very low, a reasonable approach is to always disable keep-alive in 1.0, so that you don't care how dirty the client implementation can be. But quite frankly the effort is very low: looking through the whole haproxy code base, I'm only spotting 9 places where we take care of this version difference, both sides included. That's really cheap.

Just my two cents,
Willy
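[Editor's note: a minimal, hypothetical sketch of the version-conditional "if"s described above (not haproxy code). It assumes a server that parses the request line into a `(major, minor)` version tuple and lower-cases header names; the function and field names are illustrative only.]

```python
def connection_policy(version, headers):
    """Decide persistence and feature support for one request.

    version: (major, minor) tuple, e.g. (1, 0) or (1, 1)
    headers: dict mapping lower-cased header names to values
    """
    is_http10 = version <= (1, 0)
    conn = headers.get("connection", "").lower()

    if is_http10:
        # HTTP/1.0: connections close by default. Following the
        # low-effort suggestion above, never keep 1.0 connections
        # alive, so broken clients that ignore Content-Length
        # still see the end of the response when we close.
        keep_alive = False
        allow_chunked = False   # chunked transfer-encoding is 1.1-only
        allow_1xx = False       # 1xx interim responses not expected by 1.0 clients
        require_host = False    # Host header is optional in 1.0
    else:
        # HTTP/1.1: persistent by default unless the client asked to close.
        # (Naive token check; a real parser would split the header on commas.)
        keep_alive = "close" not in conn
        allow_chunked = True
        allow_1xx = True
        require_host = True     # a 1.1 request without Host is invalid

    return {
        "keep_alive": keep_alive,
        "allow_chunked": allow_chunked,
        "allow_1xx": allow_1xx,
        "require_host": require_host,
    }
```

For example, `connection_policy((1, 0), {})` yields a close-by-default, no-chunked policy, while `connection_policy((1, 1), {})` keeps the connection alive unless the request carried `Connection: close`.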
Received on Friday, 4 September 2020 05:41:10 UTC