- From: Drew Wilson <atwilson@google.com>
- Date: Mon, 27 Jul 2009 14:25:10 -0700
On Mon, Jul 27, 2009 at 2:02 PM, Jeremy Orlow <jorlow at chromium.org> wrote:

> On Mon, Jul 27, 2009 at 1:44 PM, Drew Wilson <atwilson at google.com> wrote:
>
>> On Mon, Jul 27, 2009 at 1:36 PM, Alexey Proskuryakov <ap at webkit.org> wrote:
>>
>>> On 27.07.2009, at 13:20, Jeremy Orlow wrote:
>>>
>>>> I agree that this will help if the application sends data in burst
>>>> mode, but what if it just constantly sends more than the network can
>>>> transmit? It will never learn that it's misbehaving, and will just
>>>> take more and more memory.
>>>>
>>>> An example where adapting to network bandwidth is needed is of course
>>>> file uploading, but even if we dismiss it as a special case that can
>>>> be served with custom code, there's also e.g. captured video or audio
>>>> that can be downgraded in quality for slow connections.
>>>>
>>>> Maybe the right behavior is to buffer in user space (as Maciej
>>>> explained) up to a limit (left up to the UA), and then anything
>>>> beyond that results in an exception. This seems like it would handle
>>>> bursty communication and would keep the failure model simple.
>>>
>>> This sounds like the best approach to me.
>>>
>>> On 27.07.2009, at 13:27, Drew Wilson wrote:
>>>
>>>> I would suggest that the solution to this situation is an appropriate
>>>> application-level protocol (i.e., ACKs) that allows the application
>>>> to have no more than (say) 1MB of data outstanding.
>>>>
>>>> I'm just afraid that we're burdening the API to handle degenerate
>>>> cases that the vast majority of users won't encounter. Specifying in
>>>> the API that any arbitrary send() invocation could throw some kind of
>>>> "retry exception" or return some kind of error code is really, really
>>>> cumbersome.
>>>
>>> Having a send() that doesn't return anything and doesn't raise
>>> exceptions would be a clear signal, to me and I'm sure to many others
>>> as well, that send() just blocks until it's possible to send the data.
>>> There is no reason to silently drop data sent over a TCP connection;
>>> if we did, we could just as well base the protocol on UDP and lose
>>> nothing.
>>
>> There's another option besides blocking, raising an exception, and
>> dropping data: unlimited buffering in user space. So I'm saying we
>> should not put any limits on the amount of user-space buffering we're
>> willing to do, any more than we put limits on the amount of other
>> types of user-space memory allocation a page can perform.
>
> I agree with Alexey that applications need feedback when they're
> consistently exceeding what the network connection can handle. I think
> an application getting an exception rather than filling up its buffer
> until it OOMs is a much better experience for the user and the web
> developer.

I'm assuming that no actual limits would be specified in the specification, so it would be entirely up to a given user agent to decide how much buffering it is willing to provide. Doesn't that imply that a well-behaved web application would be forced to check for exceptions from every send() invocation, since there's no way to know a priori whether the limits imposed by an application via its app-level protocol are sufficient to stay under a given user agent's internal limits? Even worse, to be broadly deployable, the app-level protocol would have to enforce the lowest-common-denominator buffering limit, which would inhibit throughput on platforms that support larger buffers.
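To make that burden concrete, here is a minimal sketch (the wrapper name and the 100ms delay are hypothetical, not from any spec) of what every call site would need if send() could throw whenever the user agent's internal buffer is full:

    // Sketch only: assumes a hypothetical UA that throws from send() when
    // its internal buffer is full. Every caller has to wrap send() like this.
    function safeSend(socket: WebSocket, data: string): void {
      try {
        socket.send(data);
      } catch (e) {
        // The UA's limit is unknowable a priori, so the only recourse is
        // to wait and retry; the 100ms delay is an arbitrary guess.
        setTimeout(() => safeSend(socket, data), 100);
      }
    }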
In practice, I suspect most implementations would adopt a "just blast out as much data as possible until the system throws an exception, then set a timer to retry the send in 100ms" approach. But perhaps that's your intention? If so, then I'd suggest changing the API to have a "canWrite" notification like other async socket APIs provide (or something similar), to avoid the clunky catch-and-retry idiom. Personally, I think that's overkill for the vast majority of use cases, which would be more than happy with a simple send(), and I'm not sure why we're obsessing over limiting memory usage in this case when we allow pages to use arbitrary amounts of memory elsewhere.

> If you have application-level ACKs (which you probably should have,
> especially in high-throughput uses), you really shouldn't even hit the
> buffer limits that a UA might have in place. I don't really think that
> having a limit on the buffer size is a problem and that, if anything,
> it'll promote better application-level flow control.
>
> J
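For concreteness, here is a minimal sketch of the kind of application-level ACK scheme suggested above (the class name, the "ack:<bytes>" message format, and the 1MB cap are illustrative assumptions, not part of any proposed spec):

    // Sketch only: sender-side flow control that keeps at most ~1MB of
    // data outstanding, assuming the peer replies "ack:<bytes>" as it
    // consumes each chunk. String length is used as a rough byte proxy.
    const MAX_OUTSTANDING = 1024 * 1024;

    class AckedSender {
      private outstanding = 0;      // units sent but not yet acknowledged
      private queue: string[] = []; // chunks waiting for buffer room

      constructor(private socket: WebSocket) {
        this.socket.onmessage = (ev) => {
          const m = /^ack:(\d+)$/.exec(String(ev.data));
          if (m) {
            this.outstanding -= Number(m[1]);
            this.flush(); // room freed up; try to send queued chunks
          }
        };
      }

      send(data: string): void {
        this.queue.push(data);
        this.flush();
      }

      private flush(): void {
        while (this.queue.length > 0 &&
               this.outstanding + this.queue[0].length <= MAX_OUTSTANDING) {
          const chunk = this.queue.shift()!;
          this.outstanding += chunk.length;
          this.socket.send(chunk);
        }
      }
    }

With a scheme like this, a UA buffer limit above the application's cap is never hit, which is the point being made above: the app-level protocol, not the user agent, bounds memory use.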