- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Wed, 11 Jul 2018 10:03:59 +1000
- To: Benjamin Schwartz <bemasc@google.com>
- Cc: Mirja Kühlewind <mirja.kuehlewind@tik.ee.ethz.ch>, HTTP Working Group <ietf-http-wg@w3.org>
On Wed, Jul 11, 2018 at 1:13 AM Ben Schwartz <bemasc@google.com> wrote:

> HELIUM is intended to run over a congestion-controlled "substrate" between client and proxy. This means there are two congestion control contexts: one between client and proxy, and one between the client and destination (through the proxy). As you know, when the client-proxy link is congested, this can lead to classic "TCP over TCP" performance problems. My understanding is that the client-proxy congestion control converts loss into delay, and this highly variable delay interferes with the inner context.

It's possible that flow control is as much of a problem as congestion control. The end-to-end connection follows a longer path, and while it might be subject to two pinch points, only one of those is real in the sense that only one of them holds back the overall throughput.

The problem arises when the proxy needs to buffer in reaction to congestion on the second of the two links; it then needs to exert back pressure using flow control, or drop. Dropping is easy in a sense (or marking, where you might decide an ECN analogue is needed), but the fact that you might use flow control implies that you have buffering, which is a great way to destroy latency.

We sort of pretend that we don't have these problems with CONNECT, and let implementations deal with the consequences. That's usually performance degradation of a kind. The question here is whether you care to grapple with the problems, and to what lengths you are prepared to go in doing that.
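The drop-versus-back-pressure trade-off at the proxy can be sketched with a toy bounded-buffer model. Everything here is illustrative (the class name, policies, and flags are assumptions for the sketch, not anything defined by HELIUM or the thread):

```python
from collections import deque

class ProxyBuffer:
    """Toy model of a proxy's relay buffer (hypothetical, for illustration).

    When the outgoing leg is congested, arriving data accumulates here, and
    the proxy must either drop (policy="drop") or signal back pressure to
    the sender via flow control (policy="flow_control").
    """

    def __init__(self, capacity, policy="drop"):
        self.capacity = capacity
        self.policy = policy
        self.queue = deque()
        self.dropped = 0
        self.sender_blocked = False  # flow-control signal toward the sender

    def enqueue(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        if self.policy == "drop":
            self.dropped += 1        # cheap, but the inner context sees loss
            return False
        self.sender_blocked = True   # back pressure: the sender must pause,
        return False                 # and queued data now adds latency

    def dequeue(self):
        packet = self.queue.popleft() if self.queue else None
        if len(self.queue) < self.capacity:
            self.sender_blocked = False  # room again: release the sender
        return packet
```

The sketch makes the point concrete: with `policy="drop"` the proxy stays cheap but converts congestion into loss visible to the inner context, while `policy="flow_control"` avoids loss only by holding data in the queue, which is exactly the buffering-induced latency the message warns about.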
Received on Wednesday, 11 July 2018 00:04:35 UTC