Re: HTTP 2.0 mandatory security vs. Amateur Radio

On Nov 15, 2013, at 10:06 PM, Nicolas Mailhot <nicolas.mailhot@laposte.net> wrote:

> 
> On Fri, 15 November 2013 at 18:31, Roberto Peon wrote:
> 
>> That leaves us with either using a new port (infeasible, failure rate
>> still in the high 10%s) or doing something else so as to be able to deploy.
>> What is the something else you would suggest?
> 
> I'm not at all convinced that using a new port is infeasible. Yes, it would
> be devilishly hard for a new corner-case protocol. But the web as a whole is
> something else entirely, and a new way to access it won't be dismissed so
> easily. Of course adoption would take years, since you don't replace billions
> of pieces of web equipment that easily, but it would happen anyway.

I'd like to give the firewall view of this. Suppose we get IANA to assign port 100 for HTTP/2. Right now, firewall policies fall into two broad categories:

 1. The lenient, where outbound connections are always permitted, and
 2. The strict, where we don't let anything go either way if we don't understand it. 

Under the lenient policy, HTTP/2 on port 100 would work. Under the strict policy, it would not. I'm sure browsers will do something happy-eyeballs-like: quickly determine which of port 80 and port 100 works, and use the appropriate one, along with the HTTP version that goes with it. So far, so good.
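
To make the browser side concrete, here is a rough sketch (in Python) of the kind of probe I have in mind. The hostname, the 80/100 port pair and the timeout are placeholders for illustration only; no real browser is obliged to do it exactly this way:

import socket
from concurrent.futures import ThreadPoolExecutor

HOST = "www.example.com"                   # placeholder hostname
PORTS = {80: "HTTP/1.x", 100: "HTTP/2"}    # port 100 is the hypothetical assignment
TIMEOUT = 0.3                              # keep the race short, as a browser would

def probe(port):
    # Try a plain TCP connect; reachable or not is all we need to know here.
    try:
        with socket.create_connection((HOST, port), timeout=TIMEOUT):
            return port, True
    except OSError:
        return port, False

with ThreadPoolExecutor(max_workers=len(PORTS)) as pool:
    reachable = dict(pool.map(probe, PORTS))

# Prefer the new port when it answers; fall back to port 80 otherwise.
chosen = 100 if reachable.get(100) else 80
print("using port %d (%s)" % (chosen, PORTS[chosen]))

The point is only that the client can learn cheaply which ports are reachable and pick the protocol that goes with the winner.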

The problem with this is that firewalls have some policy as to what is and isn't allowed in HTTP (this is what is marketed as a "next generation firewall"), so as soon as web traffic starts going through port 100, administrators will
 1. block port 100, and 
 2. demand that vendors do their inspection on port 100 as well (a rough sketch of what that inspection means follows below).
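
To give an idea of what "inspection on port 100" means at its very simplest, here is a toy classifier. The HTTP/2 client connection preface string is the one from the current drafts; the method list and the policy decisions are made up for illustration:

H2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"
H1_METHODS = (b"GET ", b"HEAD ", b"POST ", b"PUT ", b"DELETE ", b"OPTIONS ")

def classify(first_bytes):
    # Look at the first bytes the client sends on port 100 and decide
    # which inspection engine (if any) should handle the connection.
    if first_bytes.startswith(H2_PREFACE):
        return "http/2"     # hand off to a (new) HTTP/2 inspection engine
    if first_bytes.startswith(H1_METHODS):
        return "http/1"     # inspect with the existing HTTP/1 engine
    return "unknown"        # strict policy: block what we don't understand

print(classify(b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"))             # -> http/2
print(classify(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"))  # -> http/1

A real product does far more than this, of course, but even this first step is new code the vendor has to ship before an administrator will open the port.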

The best-case scenario is that firewall vendors quickly create HTTP/2 parsing code, push it into the next versions of their software (known as firmware for some vendors), and then port 100 can be opened as part of "the web" with all the inspection that is done on port 80 today. That drives firewall customers to upgrade, which will make firewall vendors happy to add the support.

The not-quite-best case is where firewall vendors push out an incomplete HTTP/2 parser based on some wild assumptions about what servers and clients will do. That can lead to some strange failure modes. BTW: the same can be said of browser code and server code and caching proxy code, and you'll have a hard time debugging which part of the system is at fault. Luckily, going to port 100 allows the administrator to easily block the port, and push everyone back to HTTP/1.

By having HTTP/2 on port 80 along with port 100, you run a bigger risk of strange results. Connections may succeed, but firewalls that fail to recognize the upgrade dance will then do strange things to the connection once the traffic stops looking like HTTP/1. This is somewhat mitigated by the fact that websockets happened several years ago, so fairly recent firewalls do recognize (and are able to block) the upgrade dance, and I think the failure rates will not be quite as large as they were a few years ago when websockets first appeared.
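
For readers who haven't had to stare at it, this is roughly the upgrade dance a firewall has to recognize today for websockets; the detection sketch and the example request below use the standard websocket headers, and an HTTP/2 upgrade would simply carry a different token:

def wants_upgrade(request_bytes):
    # Parse the request headers and return the Upgrade token, if any.
    head = request_bytes.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    headers = {}
    for line in head.split("\r\n")[1:]:        # skip the request line
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    if "upgrade" not in headers.get("connection", "").lower():
        return None
    return headers.get("upgrade")              # e.g. "websocket", or an HTTP/2 token

req = (b"GET /chat HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Connection: Upgrade\r\n"
       b"Upgrade: websocket\r\n"
       b"Sec-WebSocket-Version: 13\r\n\r\n")
print(wants_upgrade(req))   # -> "websocket"; the firewall decides to allow or block here

A box that doesn't do even this much will happily pass the 101 response and then choke on the framed traffic that follows on the same connection.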

On balance, I think deploying HTTP/2 on a new port, and not doing surprising things on port 80 (other than maybe Mark's proposed header saying that HTTP/2 is available on port 100), would be a good thing. Adding opportunistic encryption there (but not on port 80) would also be a good thing, but it's orthogonal to the primary question of how to deploy HTTP/2 in http:// URLs.
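
I don't want to put words in Mark's mouth about the exact header, so treat the following as a purely hypothetical illustration of the client side: the header name "alt-svc" and the value syntax here are stand-ins for whatever the proposal ends up saying, and only the general shape matters.

def alternate_port(response_headers):
    # Look for a (hypothetical) advertisement of HTTP/2 on another port.
    value = response_headers.get("alt-svc")    # hypothetical header name
    if value and value.startswith("h2=:"):     # hypothetical value syntax
        return int(value.split(":", 1)[1])
    return None

headers = {"alt-svc": "h2=:100"}               # made-up example value
port = alternate_port(headers)
print(port or 80)   # -> 100: try HTTP/2 there, keep HTTP/1 on port 80 as the fallback

Either way, the port-80 connection stays plain HTTP/1 and nothing on it should surprise an existing firewall.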

Yoav

Received on Sunday, 17 November 2013 11:05:11 UTC