Re: Please admit defeat (was: Our Schedule)

On 2014-05-26, at 3:15 PM, Yoav Nir <ynir.ietf@gmail.com> wrote:

> In the old days we had different protocols for different use cases. We had FTP and SSH and various protocols for RPC. Placing all our networking needs over HTTP was driven by the ubiquitous availability of HTTP stacks, and the need to circumvent firewalls. I don’t believe a single protocol can be optimal in all scenarios. So I believe we should work on the one where the pain is most obvious - the web - and avoid trying to solve everybody else’s problem. 
> 
> If HTTP/1.1 is not optimal for downloading 4 GB OS updates, let them create their own successor to HTTP/1.1, or use FTP, or use HTTP/2 even though it’s not optimal, just like they do today. You can’t optimize for everything at once.

Then those protocols will just be firewalled too. What the user fundamentally needs is an encrypted connection that doesn’t reveal what kind of data is being transmitted. Although, I suppose, any protocol or server that supports upgrading an HTTP/1.1 connection will do.
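
To make that concrete, here is a minimal sketch (Python, against a hypothetical cleartext server at example.com — the host is a placeholder) of the h2c upgrade dance from the current draft. A willing server answers 101 Switching Protocols and the connection carries HTTP/2 from then on; anything else just keeps working as HTTP/1.1:

    import base64, socket

    HOST = "example.com"  # placeholder; assumes a server that understands h2c

    # An empty SETTINGS payload, base64url-encoded as the upgrade rules require.
    settings = base64.urlsafe_b64encode(b"").rstrip(b"=")

    request = (b"GET / HTTP/1.1\r\n"
               b"Host: " + HOST.encode() + b"\r\n"
               b"Connection: Upgrade, HTTP2-Settings\r\n"
               b"Upgrade: h2c\r\n"
               b"HTTP2-Settings: " + settings + b"\r\n"
               b"\r\n")

    s = socket.create_connection((HOST, 80))
    s.sendall(request)
    reply = s.recv(4096)
    # "HTTP/1.1 101 Switching Protocols" means the upgrade was accepted.
    print(reply.split(b"\r\n", 1)[0].decode())
    s.close()

The point being that the upgrade path costs one round trip and nothing else, so a firewall sees only an ordinary HTTP/1.1 exchange.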

Also, back in those days we had a separate client application for each protocol. Today we do everything through the browser, not only because it solves end-user portability issues, but because specialized client software is too vulnerable. If browsers end up with internal fragmentation due to multiple upgrade protocols, then security becomes that much harder.

Just to point out, the Web makes do without fancy headers. A few kilobytes at the start of a stream will do, although compression is certainly nice.
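
As a rough illustration of the compression point (plain deflate here stands in for HPACK, which is what HTTP/2 actually specifies; the header block is made up):

    import zlib

    # A plausible request header block; the sizes are illustrative only.
    headers = (b"GET /index.html HTTP/1.1\r\n"
               b"Host: www.example.com\r\n"
               b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko Firefox\r\n"
               b"Accept: text/html,application/xhtml+xml;q=0.9,*/*;q=0.8\r\n"
               b"Accept-Language: en-US,en;q=0.5\r\n"
               b"Accept-Encoding: gzip, deflate\r\n"
               b"Cookie: session=0123456789abcdef0123456789abcdef\r\n"
               b"\r\n")

    print(len(headers), "bytes raw")
    print(len(zlib.compress(headers)), "bytes deflate-compressed")

Even uncompressed, the block is a few hundred bytes — which is the point: the startup cost is small either way, and compression only sweetens it.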

Segmentation, likewise, might as well be a “flush” directive for the purpose of improving web apps. Although the HTTP/2 framing semantics already seem ideal for this application, it’s not clear whether it is abusive for an application to emit a segment boundary where a flush is not specifically desired, nor whether a flush actually occurs at those boundaries.
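
If it helps, here is a toy sketch of what I mean, using HTTP/1.1 chunked encoding as the closest deployed analogue (the address and the pieces are arbitrary); in HTTP/2, each deliberately flushed segment would presumably map to a DATA frame boundary:

    import socket, time

    # Toy server: each explicitly flushed chunk is the HTTP/1.1 analogue of
    # a "flush" directive -- end the current segment and hand the bytes to
    # the peer now, rather than waiting for a buffer to fill.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the request

    conn.sendall(b"HTTP/1.1 200 OK\r\n"
                 b"Content-Type: text/html\r\n"
                 b"Transfer-Encoding: chunked\r\n\r\n")
    for piece in (b"<p>first</p>", b"<p>second</p>"):
        conn.sendall(("%x\r\n" % len(piece)).encode() + piece + b"\r\n")
        time.sleep(1)  # the application decides when the next piece is ready
    conn.sendall(b"0\r\n\r\n")  # final chunk terminates the body
    conn.close()
    srv.close()

Point a browser at http://127.0.0.1:8080/ and most implementations will render the first piece before the second arrives — which is exactly the behavior whose guarantees seem underspecified today.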

Received on Monday, 26 May 2014 07:51:19 UTC