[whatwg] WebSocket: Events instead of polling bufferedAmount?


I would much prefer that WebSocket fired an event as data is sent,
instead of having applications poll bufferedAmount.

An application may want to know how much data is still unsent so that
it can lazily generate more data only when that data can be sent
immediately. This reduces the "memory commitment" of the client.
With connection methods like Flash's Socket, this information is
completely unavailable, so you must wait for the peer to tell you what
it has received. It's nice that WebSocket exposes the information at all.

But being forced to poll bufferedAmount is suboptimal:
setTimeout(..., 0) may take 16ms or more to fire, so without hacks,
applications are limited to making decisions (whether to generate more
data or not) at roughly 16ms intervals. This is a problem when clients
are on local networks, where the round-trip time can be around 1ms. I
know it is possible to poll faster, but I don't think that is a good
solution.
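To make the pattern concrete, here is a minimal sketch of the polling
loop described above. The `nextChunk` callback and the 64 KiB
low-water mark are illustrative assumptions, not part of any API:

```javascript
// Sketch of the bufferedAmount polling pattern. `ws` is an open
// WebSocket; `nextChunk()` is a hypothetical application callback that
// lazily produces the next piece of data, or null when done.
var LOW_WATER_MARK = 64 * 1024; // refill once below 64 KiB (arbitrary)

function pumpByPolling(ws, nextChunk) {
  function poll() {
    // Only generate more data once the previously queued bytes have
    // mostly left the buffer -- the "lazy generation" described above.
    while (ws.bufferedAmount < LOW_WATER_MARK) {
      var chunk = nextChunk();
      if (chunk === null) return; // nothing left; stop polling
      ws.send(chunk);
    }
    // setTimeout(..., 0) is clamped by the browser, so this decision
    // point recurs at ~16ms granularity at best.
    setTimeout(poll, 0);
  }
  poll();
}
```

The ~16ms clamp means that between timer ticks the sender is blind:
it either over-commits memory up front or lets the buffer run dry.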

The use cases for client->server high-volume streaming are not hard to
imagine: a client may want to upload a large amount of text stored
locally without loading more into memory than necessary. Or, a client
may be running a distributed computing application that is limited by
upstream bandwidth.
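For contrast, here is a purely hypothetical sketch of the event-based
alternative being proposed. The `bufferedamountlow` event name and the
`bufferedAmountLowThreshold` property do not exist in the WebSocket
API; they are invented here only to illustrate the shape of the idea:

```javascript
// Hypothetical event-driven sender: the browser fires an event when
// the send buffer drains below a threshold, instead of the app polling.
function pumpByEvents(ws, nextChunk) {
  ws.bufferedAmountLowThreshold = 64 * 1024; // illustrative value

  ws.onbufferedamountlow = function () {
    var chunk;
    // Refill the buffer up to the threshold each time it drains;
    // no timer is involved, so the reaction latency is not clamped.
    while (ws.bufferedAmount < ws.bufferedAmountLowThreshold &&
           (chunk = nextChunk()) !== null) {
      ws.send(chunk);
    }
  };

  ws.onbufferedamountlow(); // prime the pump with the first chunks
}
```

With this shape, a client on a 1ms-RTT local network can generate the
next chunk as soon as the buffer drains, instead of waiting for the
next ~16ms timer tick.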


Received on Thursday, 25 March 2010 14:51:35 UTC