- From: Jeremy Orlow <jorlow@chromium.org>
- Date: Mon, 27 Jul 2009 14:02:28 -0700
On Mon, Jul 27, 2009 at 1:44 PM, Drew Wilson <atwilson at google.com> wrote:

> On Mon, Jul 27, 2009 at 1:36 PM, Alexey Proskuryakov <ap at webkit.org> wrote:
>
>> On 27.07.2009, at 13:20, Jeremy Orlow wrote:
>>
>>> I agree that this will help if the application sends data in burst mode,
>>> but what if it just constantly sends more than the network can transmit?
>>> It will never learn that it's misbehaving, and will just take more and
>>> more memory.
>>>
>>> An example where adapting to network bandwidth is needed is of course
>>> file uploading, but even if we dismiss it as a special case that can be
>>> served with custom code, there's also e.g. captured video or audio that
>>> can be downgraded in quality for slow connections.
>>>
>>> Maybe the right behavior is to buffer in user-space (like Maciej
>>> explained) up until a limit (left up to the UA) and then anything beyond
>>> that results in an exception. This seems like it'd handle bursty
>>> communication and would keep the failure model simple.
>>
>> This sounds like the best approach to me.
>>
>> On 27.07.2009, at 13:27, Drew Wilson wrote:
>>
>>> I would suggest that the solution to this situation is an appropriate
>>> application-level protocol (i.e. acks) to allow the application to have
>>> no more than (say) 1MB of data outstanding.
>>>
>>> I'm just afraid that we're burdening the API to handle degenerate cases
>>> that the vast majority of users won't encounter. Specifying in the API
>>> that any arbitrary send() invocation could throw some kind of "retry
>>> exception" or return some kind of error code is really, really cumbersome.
>>
>> Having a send() that doesn't return anything and doesn't raise exceptions
>> would be a clear signal, to me and I'm sure to many others as well, that
>> send() just blocks until it's possible to send data. There is no reason to
>> silently drop data sent over a TCP connection - after all, we could as
>> well base the protocol on UDP if we did, and lose nothing.
>
> There's another option besides blocking, raising an exception, and dropping
> data: unlimited buffering in user space. So I'm saying we should not put
> any limits on the amount of user-space buffering we're willing to do, any
> more than we put limits on other types of user-space memory allocation a
> page can perform.

I agree with Alexey that applications need feedback when they're consistently exceeding what the network connection can handle. I think an application getting an exception rather than filling up its buffer until it OOMs is a much better experience for the user and the web developer.

If you have application-level ACKs (which you probably should, especially in high-throughput uses), you really shouldn't even hit the buffer limits that a UA might have in place. I don't think that having a limit on the buffer size is a problem; if anything, it'll promote better application-level flow control.

J
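To make the bounded-buffer proposal concrete, here is a minimal TypeScript sketch of the behavior being discussed: buffer silently up to a limit, then throw. The `BoundedSender` wrapper, its error message, and the 1 MB cap are all invented for illustration; only `WebSocket.send()` and `bufferedAmount` are the actual API surface, and in the proposal the limit would live inside the UA rather than in script.

```typescript
// Illustrative only: a user-space approximation of "buffer up to a
// UA-chosen limit, then raise an exception". All names are hypothetical.

const MAX_BUFFERED_BYTES = 1024 * 1024; // an arbitrary 1 MB cap for this sketch

class BoundedSender {
  constructor(private socket: WebSocket,
              private limit: number = MAX_BUFFERED_BYTES) {}

  // Absorbs bursts below the cap silently; throws instead of queueing
  // without bound when the connection can't keep up.
  send(data: string): void {
    // bufferedAmount counts bytes queued but not yet transmitted;
    // data.length (UTF-16 code units) only approximates the encoded size.
    if (this.socket.bufferedAmount + data.length > this.limit) {
      throw new Error("send buffer full; back off and retry");
    }
    this.socket.send(data);
  }
}
```

A caller would wrap `send()` in try/catch and treat the exception as a back-pressure signal, retrying after the buffer drains; the point of the design is that well-behaved applications never see the exception at all.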
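Drew's application-level ACK suggestion can be sketched the same way. This assumes a toy protocol, invented for illustration, in which the peer acknowledges data by echoing back the number of bytes it has consumed; the `AckWindowSender` name, the framing, and the window size are all hypothetical.

```typescript
// Hypothetical sketch of application-level flow control: keep at most
// `windowBytes` of unacknowledged data in flight, so UA-side buffers stay
// small regardless of whatever limit the UA itself imposes.

class AckWindowSender {
  private outstanding = 0;      // bytes sent but not yet acknowledged
  private queue: string[] = []; // messages waiting for window space

  constructor(private socket: WebSocket,
              private windowBytes = 1024 * 1024) { // "no more than (say) 1MB outstanding"
    // Assumed peer behavior: each incoming message is the byte count
    // the peer has consumed since its last ack.
    socket.addEventListener("message", (e) => {
      this.outstanding -= Number(e.data);
      this.flush();
    });
  }

  send(data: string): void {
    this.queue.push(data);
    this.flush();
  }

  private flush(): void {
    // Drain the queue only while the ack window has room.
    while (this.queue.length > 0 &&
           this.outstanding + this.queue[0].length <= this.windowBytes) {
      const msg = this.queue.shift()!;
      this.outstanding += msg.length;
      this.socket.send(msg);
    }
  }
}
```

Under this scheme send() never needs to throw for flow-control reasons: excess data waits in the application's own queue, which the application can measure and shed as it sees fit.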
Received on Monday, 27 July 2009 14:02:28 UTC