Re: Large Frame Proposal

On Mon, Jul 7, 2014 at 2:21 PM, Willy Tarreau <w@1wt.eu> wrote:

> Hi Roberto,
>
> On Mon, Jul 07, 2014 at 01:21:26PM -0700, Roberto Peon wrote:
> > There is also the latency tradeoff, which for many use cases is the
> > big deal, and keeps being ignored.
>
> It's not ignored; what is said is that in *some* circumstances it's not
> the biggest deal, and in such circumstances letting both sides agree on
> how to use resources optimally is better than preventing them at the
> protocol level. The example of the small NAS streaming to the TV is a
> reasonably valid one. Such boxes have very weak CPUs, are optimized for
> sendfile(), and will seriously suffer from copying data in small chunks.
> There's no latency impact here, and the user sees a smooth video.
>
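
Since the sendfile() point keeps coming up, here's a rough illustration of
the cost being described (a sketch in Go, not from the thread: io.Copy can
hand a whole file to the kernel in one go, while small frames force a
read/copy/write cycle per chunk):

    // Sketch: the copy cost of small frames on a sendfile()-style path.
    // On Linux, io.Copy from an *os.File to a *net.TCPConn lets the
    // kernel move the bytes directly (sendfile); chopping the stream
    // into small frames forces a read syscall, a userspace copy, and a
    // write syscall per chunk instead.
    package sketch

    import (
        "io"
        "net"
        "os"
    )

    func stream(conn *net.TCPConn, f *os.File, frameSize int) error {
        if frameSize >= 64*1024 {
            // Large frames: one io.Copy, the kernel does the work.
            _, err := io.Copy(conn, f)
            return err
        }
        // Small frames: copy through userspace, chunk by chunk.
        buf := make([]byte, frameSize)
        for {
            n, err := f.Read(buf)
            if n > 0 {
                if _, werr := conn.Write(buf[:n]); werr != nil {
                    return werr
                }
            }
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
        }
    }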

>
> I strongly believe that for the sake of latency, keeping small frames by
> default is the right thing to do, and this proposal does so. It even
> allows them to be reduced further if needed. Hey, if you connect to my
> home server, you'll even see that I negotiate the MSS down to 512 to
> prevent outgoing HTTP traffic from affecting my SSH sessions too much,
> so please don't underestimate my concerns about latency :-)
>
>
I know-- I think I said somewhere that, with the proposal, it would take
two parties before the shots-in-the-foot started happening, which makes it
at least palatable, if not good-tasting :)

Here is what I think about the various parts we've been discussing:

I could be convinced that stating the size of headers is useful. I don't
know whether stating the uncompressed or the compressed size is better.
Regardless, I don't think it should be a requirement. At best, it is a
nice hint that allows slightly nicer buffer management under non-malicious
circumstances; it doesn't really change the DoS surface. Setting a limit
on the size of headers is something I'd guess we'd never use, since it
gives attackers signals that let them optimize their attack more than it
gives us the ability to defend against them. It might have utility for
embedded devices, though even in that case I'd expect they'd do better to
throw away keys/values that were uninteresting. Meh.
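
To make the "hint, not a defense" distinction concrete, here's a sketch
(hypothetical names, nothing from the draft) of about the most a receiver
could safely do with a declared header size:

    // Sketch: a peer-declared header size is only a buffer-management
    // hint. A malicious peer can declare anything, so the enforceable
    // cap must come from local policy, not from the declaration.
    package sketch

    import "errors"

    const localHardCap = 256 * 1024 // our own limit, independent of the peer

    var errHeadersTooLarge = errors.New("header block exceeds local limit")

    func headerBuffer(declaredSize uint32) ([]byte, error) {
        if declaredSize > localHardCap {
            // Reject early -- about the only real win the hint buys us.
            return nil, errHeadersTooLarge
        }
        // Preallocate using the hint: slightly nicer buffer management
        // when the peer is honest, no change to the DoS surface when
        // it isn't.
        return make([]byte, 0, declaredSize), nil
    }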

I'd be perfectly happy to support having a setting that allows for up to
64k worth of data in a frame-- the framing mechanism would remain the
same, and that'd take care of my worries about the base framing mechanism.
I see definite utility in this. It would still be possible for a poor
choice on the part of a gateway/proxy to mess up
prioritization/multiplexing, but it'd be much less likely to happen, I'd
hope, since a well-implemented client plus a well-implemented server would
protect against at least some classes of poor proxy/gateway implementation.
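
For the framing piece, a sketch of what I mean (assuming a 24-bit length
field and a settings-negotiated cap; the exact wire layout is whatever the
proposal ends up with):

    // Sketch: enforcing a negotiated per-frame cap at the framing layer.
    // The default stays small for latency; a receiver must explicitly
    // advertise willingness to accept larger frames (e.g., up to 64k),
    // and it never has to buffer more than it agreed to.
    package sketch

    import (
        "encoding/binary"
        "errors"
        "io"
    )

    const defaultMaxFrame = 16 * 1024

    type FrameReader struct {
        r        io.Reader
        maxFrame uint32 // raised only after our SETTINGS said so
    }

    func NewFrameReader(r io.Reader) *FrameReader {
        return &FrameReader{r: r, maxFrame: defaultMaxFrame}
    }

    func (fr *FrameReader) ReadFrame() ([]byte, error) {
        var hdr [4]byte // 24-bit length + 8-bit type, for illustration
        if _, err := io.ReadFull(fr.r, hdr[:]); err != nil {
            return nil, err
        }
        length := binary.BigEndian.Uint32([]byte{0, hdr[0], hdr[1], hdr[2]})
        if length > fr.maxFrame {
            // Bigger than we advertised: protocol error, no buffering.
            return nil, errors.New("frame exceeds advertised maximum")
        }
        payload := make([]byte, length)
        _, err := io.ReadFull(fr.r, payload)
        return payload, err
    }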



> > Additionally, there is the DoS consideration, as mentioned before.
> > Since proxies are clients, and often even more constrained than servers,
> > requirements to buffer are potentially onerous, especially given that one
> > is not required to do this for HTTP today.
>
> That's not true, Roberto: proxies are required to honor the Connection
> header and to remove what is listed there, so in practice they cannot
> start to forward even the smallest header, because they don't know
> whether it will be tagged in a Connection header later. And
> reverse-proxies need to perform a minimum of security checks and cannot
> pass unverified requests to servers (e.g., with differing Content-Length
> headers). Also, there are still a bunch of servers which send you a
> reset if you send multi-packet requests (they're annoying to use with
> telnet), so that practice in HTTP/1 is even further limited.
>

Those servers (which cannot accept requests of size > 1.4k :) ) sound
rather broken.
Proxies are required to examine the Connection header, but, practically,
gateways are not required to honor it.
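
(For reference, the behavior Willy is pointing at: in HTTP/1 a compliant
proxy can't forward any header until it has seen them all, because
Connection can retroactively name earlier headers as hop-by-hop. Roughly:)

    // Sketch: why HTTP/1 header forwarding can't be fully incremental.
    // The Connection header lists field names that must be stripped
    // before forwarding, and it may arrive after the fields it names.
    package sketch

    import (
        "net/textproto"
        "strings"
    )

    func stripHopByHop(h textproto.MIMEHeader) {
        for _, v := range h.Values("Connection") {
            for _, name := range strings.Split(v, ",") {
                h.Del(strings.TrimSpace(name))
            }
        }
        h.Del("Connection") // Connection itself is always hop-by-hop
    }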

The kind of reverse proxy matters here-- some will certainly do all kinds
of checks, DoS analysis, etc. Those are likely to buffer the whole request.

Others, which are worried more about latency (potentially because they're
already behind a gateway which did the DoS analysis, etc.), will not--
they'll simply direct the data to the correct processing node.
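
A cut-through forwarder of that sort is tiny -- a sketch (plain TCP for
simplicity; the real thing routes on the first bytes it parses):

    // Sketch: stream-through proxying. Once the routing decision is
    // made, bytes flow to the backend as they arrive; nothing here
    // requires buffering a whole request.
    package sketch

    import (
        "io"
        "net"
    )

    func forward(client net.Conn, backendAddr string) error {
        backend, err := net.Dial("tcp", backendAddr)
        if err != nil {
            return err
        }
        defer backend.Close()
        go io.Copy(client, backend)       // response path
        _, err = io.Copy(backend, client) // request path, streamed
        return err
    }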

In the response path, gateways can have significantly more leeway than the
proposed change (requiring buffering) would allow. Today I can know, by
virtue of the fact that I'm running both the gateway and the server, that
the server isn't going to do anything that can't be checked on a key-value
basis (as opposed to a full-headers basis). Requiring buffering at each
gateway on that path induces more latency than needs to be there, and
worsens the tradeoff of adding a loadbalancer (which is typically a trade
of latency for a utilization improvement, plus the 2nd-order effects of
that).
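
What "checked on a key-value basis" looks like in practice -- a sketch
(hypothetical types) of a gateway validating and forwarding header fields
one at a time instead of holding the full set:

    // Sketch: per-field checking on the response path. Each field is
    // validated and forwarded as it is decoded, so the gateway never
    // holds the complete header block in memory.
    package sketch

    import "fmt"

    type Field struct{ Name, Value string }

    func relayFields(in <-chan Field, out chan<- Field, ok func(Field) bool) error {
        for f := range in {
            if !ok(f) {
                return fmt.Errorf("rejecting field %q", f.Name)
            }
            out <- f // forward immediately; no full-headers buffering
        }
        return nil
    }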


-=R


>
> Cheers,
> Willy
>
>

Received on Monday, 7 July 2014 21:49:16 UTC