- From: Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com>
- Date: Thu, 29 May 2014 00:52:18 +0900
- To: David Krauss <potswa@gmail.com>
- Cc: Greg Wilkins <gregw@intalio.com>, Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAPyZ6=Jhazv==p94+RjhoD64GumoJqBC1_4+4mcL1v8nxuQR7w@mail.gmail.com>
On Thu, May 29, 2014 at 12:24 AM, David Krauss <potswa@gmail.com> wrote:

> On 2014-05-28, at 8:21 PM, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> wrote:
>
>> ENHANCE_YOUR_CALM is suggested in another thread.
>
> The client sends an innocuous, small request to the origin via a proxy.
> The origin sends too many headers back. So, the proxy tells the client to
> ENHANCE_YOUR_CALM?

I thought that it goes from the proxy to the origin. I agree that it is not
appropriate for the client side.

> I see you're working on a reverse proxy. Can you please comment on my
> other question:
>
>>> What if the header stream takes too long, without being particularly
>>> large? Then the problem is not too many headers at all. Indeed, for a
>>> reverse proxy with a slow client, this is sure to happen as surely as
>>> streaming kicks in at all.
>>
>> This is not limited to HEADERS. Just send 1 byte of the first header of
>> any frame and pause. That is a good candidate for a timeout, isn't it?
>
> I'm talking about the connection between the reverse proxy and the
> server. There should be only one connection there, representing all the
> clients assigned to the server. The reverse proxy is trusted, and it can
> always send complete data frames. However, it is still at the mercy of
> the client as far as complete header blocks go, under a streaming policy.
>
> If nothing can be done about this, HTTP/2 reverse proxies would seem to
> be bound to a multi-connection topology with respect to the upstream
> servers. Perhaps not a step backwards, but it's still a compromise.

Our current implementation buffers all incoming HEADERS in the proxy and
does not forward them in a streaming manner. If the incoming headers are too
large to buffer, we currently just emit RST_STREAM to both ends. I think we
could do this more elegantly, but that is the way I do it at the moment.

For a slow client or server, we have a timeout for each connection, but no
timeout for each stream at the moment.

Does this answer your question?

Best regards,
Tatsuhiro Tsujikawa
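As an illustration of the buffer-then-reset policy described in the reply above, here is a minimal C sketch: header-block fragments for a stream are accumulated up to a limit, and once the limit is exceeded the proxy sends RST_STREAM towards both the client and the server. The limit, the helper names (`header_block`, `send_rst_stream`, `on_header_fragment`), and the particular error codes are illustrative assumptions, not the actual nghttpx implementation.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative limit; a real proxy would make this configurable. */
#define MAX_HEADER_BLOCK_SIZE (64 * 1024)

/* HTTP/2 error codes; the choice of codes here is an assumption. */
#define HTTP2_ENHANCE_YOUR_CALM 0xbu
#define HTTP2_REFUSED_STREAM    0x7u

/* Per-stream buffer for an incoming header block. */
typedef struct {
    int32_t stream_id;
    uint8_t *buf;
    size_t len;
} header_block;

/* Hypothetical transport hook: queue a RST_STREAM frame on one connection.
 * Stubbed out so the sketch stands alone. */
static void send_rst_stream(const char *conn_name, int32_t stream_id,
                            uint32_t error_code) {
    fprintf(stderr, "%s: RST_STREAM stream=%d error=0x%x\n",
            conn_name, stream_id, error_code);
}

/* Called for each HEADERS/CONTINUATION fragment arriving on a stream.
 * The whole block is buffered and forwarded only after END_HEADERS is seen;
 * if it grows past the limit, the stream is reset towards both peers.
 * Returns 0 to keep going, -1 once the stream is no longer usable. */
static int on_header_fragment(header_block *hb,
                              const uint8_t *data, size_t datalen) {
    if (hb->len + datalen > MAX_HEADER_BLOCK_SIZE) {
        /* Too large to buffer: reset the stream on both ends. */
        send_rst_stream("client", hb->stream_id, HTTP2_ENHANCE_YOUR_CALM);
        send_rst_stream("server", hb->stream_id, HTTP2_REFUSED_STREAM);
        free(hb->buf);
        hb->buf = NULL;
        hb->len = 0;
        return -1;
    }
    uint8_t *p = realloc(hb->buf, hb->len + datalen);
    if (p == NULL) {
        return -1; /* out of memory; real code would also reset the stream */
    }
    hb->buf = p;
    memcpy(hb->buf + hb->len, data, datalen);
    hb->len += datalen;
    return 0;
}

int main(void) {
    header_block hb = { .stream_id = 1, .buf = NULL, .len = 0 };
    uint8_t chunk[16 * 1024] = { 0 };

    /* Feed five 16 KiB fragments; the fifth pushes past 64 KiB and
     * triggers the RST_STREAM-to-both-ends behaviour. */
    for (int i = 0; i < 5; i++) {
        if (on_header_fragment(&hb, chunk, sizeof(chunk)) != 0) {
            break;
        }
    }
    free(hb.buf);
    return 0;
}
```

The design trade-off this sketch mirrors is the one discussed in the thread: buffering the complete header block shields the upstream connection from a slow or abusive peer trickling HEADERS bytes, at the cost of giving up header streaming and needing a hard size cap plus per-connection timeouts.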
Received on Wednesday, 28 May 2014 15:53:06 UTC