Re: Please admit defeat

On 26/05/2014 7:50 p.m., David Krauss wrote:
> 
> On 2014-05-26, at 3:15 PM, Yoav Nir wrote:
> 
>> In the old days we had different protocols for different use cases.
>> We had FTP and SSH and various protocols for RPC. Placing all our
>> networking needs over HTTP was driven by the ubiquitous
>> availability of HTTP stacks, and the need to circumvent firewalls.
>> I don’t believe a single protocol can be optimal in all scenarios.
>> So I believe we should work on the one where the pain is most
>> obvious - the web - and avoid trying to solve everybody else’s
>> problem.
>> 
>> If HTTP/1.1 is not optimal for downloading 4 GB OS updates, let
>> them create their own successor to HTTP/1.1, or use FTP, or use
>> HTTP/2 even though it’s not optimal, just like they do today. You
>> can’t optimize for everything at once.
> 
> Then those protocols will just be firewalled too. The user
> fundamentally needs an encrypted connection that doesn’t reveal what
> kind of data is being transmitted. Although, I suppose, any protocol
> or server that supports upgrade of an HTTP/1.1 connection will do.

The system over which that traffic travels needs a protocol that it can
trust. Trust in the user or trust in what data the protocol carries -
either one will do; both together would be better.

HTTP has held onto that trust in a small way by being open and routinely
inspected. It is now on the fast track to losing that trust completely as
well.


> 
> Also, back in those days we had client applications for each
> protocol. Today we do everything through the browser, not only
> because it solves end-user portability issues, but because
> specialized client software is too vulnerable. If browsers end up
> with internal fragmentation due to multiple upgrade protocols, then
> security becomes that much harder.
> 


A decade or two ago firewalls were being rolled out everywhere ASAP to
filter and take control of the packets and protocols flowing over the
unprotected UDP/TCP ports 0 through 65535.

Today a different type of MITM is being rolled out ASAP to filter and
take back control of the protocols being run over ports 80 and 443 on top
of HTTP and TLS.

Consider carefully how we got to the point where firewalls have such a
"bad" reputation that port-based protocols are now designed as if the
firewall were an enemy. What does that say about how HTTP will be seen,
and how it will operate, tomorrow?

We can already see the signs: in the X-Powered-By header, created
explicitly to bypass *web server* control mechanisms; in the popularity
of MITM interception proxies; in the popularity of SSL decryption
proxies; ...

Working against security systems in a technical arms race only harms us all.

If the protocol has real benefits and shows that it can be trusted, the
firewalls etc. can be opened where relevant. Design a protocol that can
be trusted, not one that implicitly signals malicious intent by explicitly
avoiding people's choice of security.



> Just to point out, the Web makes due without fancy headers. A few
> kilobytes at the start of a stream will do, although compression is
> certainly nice.
> 

Nothing has changed there. The overlay application protocols which do not
use headers are better off with HTTP/2, even though they need little of
the complexity of HEADERS and none of CONTINUATION.
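
To make that concrete, here is a rough Python sketch, not anything from
the thread: it uses the 9-octet frame header and type/flag values that the
spec eventually settled on (the current draft differs in detail), a
made-up frame() helper, and placeholder bytes instead of real HPACK
output. The point is that an overlay protocol sends its whole header block
in a single HEADERS frame with END_HEADERS set, so CONTINUATION never
appears on the wire, and everything after that is opaque DATA.

  import struct

  # Frame type and flag constants (values per the published HTTP/2 layout;
  # the draft under discussion at the time used a smaller frame header).
  TYPE_DATA, TYPE_HEADERS, TYPE_CONTINUATION = 0x0, 0x1, 0x9
  FLAG_END_STREAM, FLAG_END_HEADERS = 0x1, 0x4

  def frame(ftype, flags, stream_id, payload):
      # 24-bit length, 8-bit type, 8-bit flags, reserved bit + 31-bit stream id
      header = struct.pack("!I", len(payload))[1:]
      header += struct.pack("!BBI", ftype, flags, stream_id & 0x7FFFFFFF)
      return header + payload

  # Hypothetical overlay exchange on stream 1: one tiny header block
  # (placeholder bytes, not real HPACK output), then opaque application data.
  wire = frame(TYPE_HEADERS, FLAG_END_HEADERS, 1, b"\x82\x87")
  wire += frame(TYPE_DATA, FLAG_END_STREAM, 1, b"opaque overlay payload")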

> Segmentation likewise might as well be a “flush” directive, as for
> improving web apps. Although the HTTP/2 semantics already seem ideal
> for this application, it’s not clear whether it is abusive for an
> application to use it where a flush is not specifically desired, nor
> whether flushing ever occurs.

Amos
