- From: Martin J. Dürst <duerst@it.aoyama.ac.jp>
- Date: Thu, 14 Nov 2013 19:55:38 +0900
- To: Roberto Peon <grmocg@gmail.com>
- CC: Mark Nottingham <mnot@mnot.net>, Rob Trace <Rob.Trace@microsoft.com>, HTTP Working Group <ietf-http-wg@w3.org>, James M Snell <jasnell@gmail.com>, Michael Sweet <msweet@apple.com>, Tim Bray <tbray@textuality.com>, Tao Effect <contact@taoeffect.com>, Mike Belshe <mike@belshe.com>
On 2013/11/14 17:24, Roberto Peon wrote:

> Ignoring improving aggregate security for a second (it would be beyond
> foolish to do so longer than this paragraph), everyone seems to be
> forgetting how poorly non http/1.1 actually works out there on the wild
> wild internet.
>
> 10-20% failure rates are simply unacceptable for both site operators and
> client writers.

This is a very important point. But if HTTP 2.0 is going to be the big thing we think it is, shouldn't most of these problems be addressed in a rather short timeframe (maybe not weeks, but also not years)?

If somebody wanted to argue for using HTTP 2.0 in the open (i.e. over port 80) as much as possible, couldn't they come up with a scheme that tries port 80, and falls back to some extremely lenient variant of TLS if port 80 doesn't work? (Please note that this is only a strawman, not actually something I'd advocate.)

Regards,   Martin.
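For concreteness, a minimal Python sketch of that strawman, with the caveat that it simplifies "port 80 doesn't work" to a failed TCP connect; in practice, the breakage Roberto describes often appears only after the connection is established, which is exactly the 10-20% problem. The helper name connect_with_fallback is made up for illustration and does not come from any HTTP/2 draft:

    import socket
    import ssl

    def connect_with_fallback(host, timeout=5.0):
        # Strawman sketch only (hypothetical helper, not from any draft):
        # try cleartext on port 80 first, and if that fails, fall back
        # to a deliberately lenient TLS connection on port 443.
        try:
            # "Tries port 80": a plain TCP connection for cleartext traffic.
            sock = socket.create_connection((host, 80), timeout=timeout)
            return sock, "cleartext"
        except OSError:
            # "Extremely lenient variant of TLS": skip hostname and
            # certificate checks entirely -- insecure by design, per the
            # strawman's framing of leniency.
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            raw = socket.create_connection((host, 443), timeout=timeout)
            return ctx.wrap_socket(raw, server_hostname=host), "lenient-tls"

A client would call connect_with_fallback("example.com") and then speak HTTP over whichever socket comes back, so the fallback stays invisible to the layers above it.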
Received on Thursday, 14 November 2013 10:56:42 UTC