From: Willy Tarreau <w@1wt.eu>
Date: Tue, 26 Feb 2013 05:02:45 +0100
To: Mark Nottingham <mnot@mnot.net>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
On Tue, Feb 26, 2013 at 02:57:48PM +1100, Mark Nottingham wrote:
>
> On 26/02/2013, at 2:54 PM, Willy Tarreau <w@1wt.eu> wrote:
>
> > Hi Mark,
> >
> > On Tue, Feb 26, 2013 at 11:56:09AM +1100, Mark Nottingham wrote:
> >>
> >> On 22/02/2013, at 6:02 PM, Willy Tarreau <w@1wt.eu> wrote:
> >>
> >>> I'm still having a problem with the principle behind 2b : when you
> >>> pass through transparent intercepting proxies, by definition you're
> >>> not aware of it. So even if 2a worked for the first connection, it
> >>> does not preclude that 2b will work for the second one. Nor the DNS
> >>> will BTW.
> >>
> >> Sorry, I wasn't clear; that would be for cases where you had a high degree of
> >> confidence that not only was HTTP/2.0 able to be spoken, but where you have
> >> an even higher degree of confidence that HTTP/1.x is NOT; e.g., a separate
> >> port (that you might have discovered through DNS, for example).
> >
> > Then if that's to be used on a different port, we probably don't need
> > to check how servers respond to this magic on port 80.
>
> My motivation is to fail on misconfiguration (e.g., telling Apache to listen
> on the wrong port, forwarding to a back-end that doesn't speak 2.0), and to
> clearly identify the protocol being spoken.
>
> To me, it's a bonus if sending the magic helps fail the upgrade early, but I
> don't find the multiple code paths terribly convincing; we're talking about
> emitting a handful of bytes on the client side, and checking a handful on the
> server side.

OK, then I agree with the principle of finding something that *most often*
fails cleanly even if that is not *always* the case.

Cheers,
Willy
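[Editor's note: a minimal Python sketch, not from the thread, of the server-side check Mark describes (emitting a handful of bytes client-side, checking them server-side). The preface constant used here is the one HTTP/2 later standardized in RFC 7540, Section 3.5; the exact "magic" was still under discussion at the time of this message.]

```python
# The client-side connection preface ("magic") that opens every HTTP/2
# connection, as later standardized in RFC 7540, Section 3.5.
PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"


def speaks_http2(initial_bytes: bytes) -> bool:
    """Check the first bytes received on a connection against the preface.

    A misconfigured peer (e.g. an HTTP/1.x server listening on the
    "HTTP/2" port) will send something else, so the server can fail the
    connection early and cleanly instead of misinterpreting it.
    """
    return initial_bytes.startswith(PREFACE)


# An HTTP/2 client leads with the preface; an HTTP/1.x request does not.
speaks_http2(PREFACE + b"\x00\x00\x00\x04\x00")   # True
speaks_http2(b"GET / HTTP/1.1\r\nHost: x\r\n")    # False
```

The deliberately invalid-looking `PRI * HTTP/2.0` request line is what makes the check cheap in both directions: an HTTP/1.x server that receives it will reject it, and an HTTP/2 server only needs to compare 24 bytes.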
Received on Tuesday, 26 February 2013 04:03:16 UTC