- From: Roberto Peon <grmocg@gmail.com>
- Date: Tue, 1 Jul 2014 21:00:16 -0700
- To: Jesse Wilson <jesse@swank.ca>
- Cc: Willy Tarreau <w@1wt.eu>, Zhong Yu <zhong.j.yu@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAP+FsNcv41CiwU+jBR184-XQWHAsj8acYzZ2SNQ7XP+k6MqiNQ@mail.gmail.com>
On Tue, Jul 1, 2014 at 8:59 PM, Roberto Peon <grmocg@gmail.com> wrote:

> I'd expect that OkHttp sends WINDOW_UPDATE for the connection-level flow
> control, but doesn't send WINDOW_UPDATE for the receiving stream until the
> application has consumed it.
> If it works this way, then there is no deadlock.
>
> If the implementation needs to be stingy with memory, then it should set a
> small or zero default window size, and send a WINDOW_UPDATE at the end of
> the request, allowing the server to respond then. This would force
> half-duplex behavior on response entity-bodies for that stream.

.. and in such a case, again, there is no deadlock.

-=R

> -=R
>
> On Tue, Jul 1, 2014 at 8:47 PM, Jesse Wilson <jesse@swank.ca> wrote:
>
>> Although OkHttp's network layer is constantly reading from the socket, it
>> won't acknowledge response body data until it's been consumed by the
>> application layer. And the application layer won't consume the response
>> until after it's done transmitting the request.
>>
>> So we're vulnerable to deadlock because our application layer is not
>> concurrent and our network layer refuses to buffer an unbounded amount of
>> data.
>>
>> On Jul 1, 2014 9:07 PM, "Roberto Peon" <grmocg@gmail.com> wrote:
>>
>>> In HTTP/2, however, it works differently, since the browser must always
>>> read, and both sides must respect flow control.
>>>
>>> You need to try pretty hard to get it into a pathological case that
>>> deadlocks things (e.g. an overly-large/infinite/non-existent flow-control
>>> window which the application is unable or unwilling to actually adhere
>>> to, coupled with more data sent than the application is willing to read).
>>>
>>> For my part, I would not change how the server works.
>>> I'd have the server drop the connection to any HTTP/2 endpoint that
>>> was not reading what the server was sending it.
>>> Similarly, any client should drop the connection to any endpoint that
>>> was not reading what it was sending it.
>>>
>>> -=R
>>>
>>> On Tue, Jul 1, 2014 at 8:02 PM, Zhong Yu <zhong.j.yu@gmail.com> wrote:
>>>
>>>> On Tue, Jul 1, 2014 at 2:42 PM, Willy Tarreau <w@1wt.eu> wrote:
>>>> > On Tue, Jul 01, 2014 at 02:21:07PM -0500, Zhong Yu wrote:
>>>> >> On Tue, Jul 1, 2014 at 11:45 AM, Roberto Peon <grmocg@gmail.com> wrote:
>>>> >> > Getting a response before the request has finished definitely
>>>> >> > happens sometimes, even in HTTP/1.1.
>>>> >>
>>>> >> A server should not do that, or it will cause deadlocks with most
>>>> >> major browsers.
>>>> >>
>>>> >> 100-continue is supposed to be helpful in this case, but it's not
>>>> >> really adopted in practice.
>>>> >
>>>> > I disagree, and there are a number of situations where it's quite
>>>> > desirable to act like this. For example, imagine that I'm uploading a
>>>> > large image to a site and my session has expired. I want the site to
>>>> > send the error as soon as possible so that my browser stops emitting
>>>> > for nothing. I don't want it to wait minutes just to learn that I need
>>>> > to re-login first and then try again.
>>>> >
>>>> > Browsers already handle this quite well in 1.1, and the real issue in
>>>> > fact
>>>>
>>>> All the browsers I tested (Firefox/Chrome/Safari/IE) appear to be
>>>> half-duplex - they will not read the response until the request body
>>>> is completely sent. A server can send an immediate response before
>>>> reading the request body, but the browser won't read the response
>>>> immediately.
>>>>
>>>> Since sending the response before draining the request body carries
>>>> the risk of deadlock, it's probably better to drain the request body
>>>> before sending the response. That is, the server is forced to do
>>>> half-duplex, because most clients do half-duplex.
>>>>
>>>> Zhong Yu
>>>> bayou.io
>>>>
>>>> > tends to be on the server side, where it's not always easy to drain
>>>> > all the request from the client after the response was sent, which
>>>> > sometimes results in a TCP RST that risks clearing the response before
>>>> > the client has a chance to see it. But done correctly, it's a very
>>>> > useful feature.
>>>> >
>>>> > Willy
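[Editorial note] The WINDOW_UPDATE strategy Roberto describes at the top of the thread (acknowledge connection-level flow control as data arrives, but replenish the stream-level window only when the application actually consumes the bytes) can be sketched roughly as follows. This is an illustrative model, not OkHttp or any real HTTP/2 stack; the class and method names are made up, and 65535 is simply HTTP/2's default initial window size.

```python
class FlowControl:
    """Toy receiver-side flow-control accounting for one stream.

    Connection-level WINDOW_UPDATE is sent immediately on receipt;
    stream-level WINDOW_UPDATE is deferred until the application reads.
    """

    def __init__(self, conn_window=65535, stream_window=65535):
        self.conn_window = conn_window      # peer's remaining connection credit
        self.stream_window = stream_window  # peer's remaining stream credit
        self.buffered = 0                   # bytes received, not yet consumed

    def receive(self, n):
        """Peer sent n bytes of DATA; both windows shrink."""
        assert n <= self.conn_window and n <= self.stream_window
        self.conn_window -= n
        self.stream_window -= n
        self.buffered += n
        # Connection-level WINDOW_UPDATE goes out right away, so other
        # streams sharing the connection are never starved by this one.
        self.conn_window += n

    def consume(self, n):
        """Application read n bytes; only now ack the stream."""
        assert n <= self.buffered
        self.buffered -= n
        self.stream_window += n  # deferred stream-level WINDOW_UPDATE

fc = FlowControl(stream_window=100)
fc.receive(100)           # sender fills the stream window
print(fc.stream_window)   # 0 -> sender must pause this one stream
print(fc.conn_window)     # 65535 -> the connection itself is unaffected
fc.consume(100)
print(fc.stream_window)   # 100 -> sender may resume
```

The stalled sender is throttled per stream, not per connection, which is why Roberto says there is no deadlock: memory use stays bounded by the stream window, while the connection keeps flowing.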
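[Editorial note] The half-duplex deadlock that Jesse and Zhong Yu describe can be simulated with bounded buffers standing in for TCP send/receive windows. This is a toy model, not real socket code; the byte counts and buffer capacities are arbitrary. The client refuses to read the response until its request is fully sent, and the server writes its response before draining the request body.

```python
def would_deadlock(request_left, response_left, c2s_buf, s2c_buf):
    """Step a half-duplex client against a server that responds early.

    request_left:  request bytes the client still has to send
    response_left: response bytes the server wants to send before
                   reading the request body
    c2s_buf, s2c_buf: capacities of the client->server and
                   server->client buffers (roughly, TCP windows)
    """
    c2s = s2c = 0  # bytes currently sitting in each direction's buffer
    while request_left or response_left or c2s or s2c:
        progress = False
        # Client writes its request if there is room.
        if request_left and c2s < c2s_buf:
            c2s += 1
            request_left -= 1
            progress = True
        # Server writes its response if there is room.
        if response_left and s2c < s2c_buf:
            s2c += 1
            response_left -= 1
            progress = True
        # Half-duplex client: reads the response only once the
        # whole request has been sent.
        if not request_left and s2c:
            s2c -= 1
            progress = True
        # Early-responding server: drains the request body only once
        # its response has been fully written.
        if not response_left and c2s:
            c2s -= 1
            progress = True
        if not progress:
            return True  # both sides blocked on a full buffer: deadlock
    return False

# Small messages fit in the buffers, so everything drains eventually.
print(would_deadlock(10, 10, c2s_buf=100, s2c_buf=100))    # False
# Both payloads exceed the buffers: each side stalls waiting for the
# other to read, and neither ever does.
print(would_deadlock(200, 200, c2s_buf=100, s2c_buf=100))  # True
```

This matches Zhong Yu's conclusion: as long as clients are half-duplex, a server that emits its response before draining the request risks this stall whenever the request body outruns the buffering, so draining first (or, in HTTP/2, bounding the stall with flow-control windows) is the safer behavior.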
Received on Wednesday, 2 July 2014 04:00:43 UTC