- From: David Kendal <dpkendal@dpk.org.uk>
- Date: Wed, 24 Oct 2012 04:31:24 +0100
- To: James M Snell <jasnell@gmail.com>
- Cc: Mark Nottingham <mnot@mnot.net>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
On 23 Oct 2012, at 17:50, James M Snell <jasnell@gmail.com> wrote:

> Alternatively, I firmly believe that a http2 URI scheme as a reasonable
> fallback needs to be considered as an option here... particularly if we do
> define a dedicated http/2 default port.

Bear in mind that if this happens, HTTP would become the only (well-known, at least) many-to-many protocol to be specified where different versions exist on different ports and have different URI schemes. By many-to-many I mean that any client should be able to talk to any server speaking any version of the protocol, for public access by strangers. (With, e.g., IMAP this is not a problem, because it's not so unreasonable to ask people to switch mail clients to be able to retrieve mail from their *own* server.) This would significantly slow deployment.

It also breaks the meaning of URIs: consider the number of http:// links already published. If someone clicks one of those in a new HTTP/2.0 era, we have two choices. We can just use HTTP/1.x, which will force servers to support that protocol forever -- possibly leaving another port open -- and will probably also neutralise the speed benefits of HTTP/2.0 for some of those links: I'll bet that the cost of Upgrade/101 negotiation turns out to be more than the savings of HTTP/2.0, though I concede that any resources linked from that page will likely get the benefits of 2.0. Or we can break the meaning of the URI: try an HTTP/2.0 request first and, if that fails, degrade to HTTP/1.1. In the latter case, why bother maintaining the http URI scheme for years if browsers will just treat it more or less like http2?

My preferred solution would be to ask clients to send an HTTP/2.0 request to every server, with an old-fashioned http URI, on port 80. If the server doesn't understand, it will either not respond -- just close the connection (some servers do this; I forget which) -- or send a response (quite possibly a 400 or 505, but it could be anything, even a success) with an HTTP/1.x version indicated in the response-line. If that happens, the client should try again with an appropriate HTTP/1.x request. It will be slightly slower to get things from HTTP/1.x servers than before, but no negotiation will be needed for HTTP/2.0.

Obviously this isn't a very "pure" solution, in that it involves muddying the waters of HTTP versions. But it does get the job done.
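To make the client side concrete, here is a minimal sketch in Python of the probe-and-retry behaviour described above. It assumes, purely for illustration, that an HTTP/2.0 request can be written as a text request-line like HTTP/1.x (the real 2.0 wire format is not settled), and the helper names probe and fetch are invented for this example:

    import socket

    def probe(host, path, version):
        # Send one request at the given version on port 80 and return
        # the first bytes of the response-line, or None if the server
        # closes the connection without responding. The sketch closes
        # the connection after the response-line in every case; a real
        # client would keep reading on a 2.x response.
        s = socket.create_connection((host, 80), timeout=5)
        try:
            req = "GET {} {}\r\nHost: {}\r\n\r\n".format(path, version, host)
            s.sendall(req.encode("ascii"))
            first = s.recv(64)  # enough to see the response-line version
            return first if first else None
        finally:
            s.close()

    def fetch(host, path="/"):
        # Try HTTP/2.0 first, with a plain http URI, on port 80.
        line = probe(host, path, "HTTP/2.0")
        if line is None or not line.startswith(b"HTTP/2."):
            # No response at all, or a non-2.x version in the
            # response-line (whatever the status code): retry with 1.1.
            line = probe(host, path, "HTTP/1.1")
        return line

Against a 1.x-only server this costs one extra round trip; against a 2.0 server it costs nothing, which is the point.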
Now, to get this right, a few rules are needed:

- an HTTP/2.0 client MUST retry with an HTTP/1.x request if the response from the server has a non-HTTP/2.x version in the response-line, regardless of response code, including success. (This is to ensure that any new headers introduced in 2.0 which could change the resulting response are properly respected by the server.) It should close the connection to the server as soon as it receives the response-line. (This is to stop large files which might be included in the response from being downloaded, wasting bandwidth, before the retry.)

- an HTTP/2.0 client MUST retry with an HTTP/1.1 request if the server closes the connection without responding.

- an HTTP/x.y client (x > 2) MUST apply the same policy.

- an HTTP/1.x server which is aware of the existence of HTTP/2.0 but does not yet support it SHOULD reply with a 505 status code.

- an HTTP/2.x server MUST reply with an HTTP/2.x version number in the response-line.

- an HTTP/2.x server SHOULD also support HTTP/1.x, but where that is not desired or needed (for instance, in a proprietary system where the capabilities of all clients that will access the server are known) it SHOULD respond to HTTP/1.x requests with a 505 status code. It SHOULD in all cases be able to give an appropriate 101 response to an HTTP/1.x client which sends an Upgrade header indicating 2.0 support (the server side is sketched just after this list).
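Likewise, under the same text-framing assumption, a minimal sketch of the server-side choice among these responses; respond(), the SUPPORTS_HTTP1 flag, and the exact Upgrade token are illustrative only:

    SUPPORTS_HTTP1 = True  # False for, e.g., a closed system whose clients all speak 2.0

    def respond(request_line, headers):
        # request_line is e.g. "GET / HTTP/2.0"; headers maps
        # lower-cased header names to values. Returns only the
        # response-line the server would send.
        version = request_line.rsplit(" ", 1)[-1]
        if version.startswith("HTTP/2."):
            # a 2.x server MUST answer with a 2.x version number
            return "HTTP/2.0 200 OK"
        if "http/2.0" in headers.get("upgrade", "").lower():
            # a 1.x client offering to upgrade: give the 101
            return "HTTP/1.1 101 Switching Protocols"
        if SUPPORTS_HTTP1:
            return "HTTP/1.1 200 OK"
        # HTTP/1.x service not offered here
        return "HTTP/1.1 505 HTTP Version Not Supported"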
> - James

— dpk.

Received on Wednesday, 24 October 2012 03:31:59 UTC