
Re: [whatwg] Flow Control on Websockets

From: Michael Meier <mm@sigsegv.ch>
Date: Fri, 18 Oct 2013 09:21:59 +0200
Message-ID: <5260E197.2060802@sigsegv.ch>
To: Takeshi Yoshino <tyoshino@google.com>
Cc: whatwg@lists.whatwg.org

Nicholas Wilson:
> So, you always need an application-level windowing setup for
> interactive flows. Just sending until the socket blocks will cause a
> backlog to build up.

I'm aware that backpressure only works by building up a backlog. Maybe 
I'm asking too much when I try to piggyback my application's flow 
control on TCP's flow control. But then again, TCP has been tuned for 
decades by experts, whereas my own flow control will hardly perform as well.
I was under the impression that TCP buffers on the sender and receiver 
were on the order of dozens to hundreds of kilobytes, not megabytes. If 
they are in the megabyte range, that clearly makes them unusable for a 
number of realtime-y* tasks.

Nicholas Wilson:
> Implementing some flow control messages is not a bad thing at all. TCP
> is there to prevent traffic disaster, not to guarantee success.

What kind of flow control do you implement in your applications? Do you 
have some kind of building blocks (not to say: library :)) with which 
you construct your flow control? I find that flow control is pretty much 
all or nothing: either you take TCP with its guarantees and costs, or 
you're completely left to your own devices.

I'm not yet convinced of the usefulness and practicality of doing 
your own flow control on top of TCP. All your flow control messages go 
through the same unpredictable queues and unpredictable delays as the 
rest of your TCP-transported data. How do you handle this (possibly 
very jumpy) variability in bandwidth and especially latency? In other 
words, doesn't implementing flow control require rather timely and 
direct control over what gets sent? That is something you can only 
achieve _in front_ of queues, not by going through them.
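For concreteness, here is a minimal sketch of the kind of 
application-level windowing Nicholas describes, assuming a hypothetical 
credit-based protocol (the class name and message shapes are my 
invention, not anything he proposed): the receiver grants credits, the 
sender consumes one per message and stalls at zero.

```javascript
// Hypothetical credit-based window for flow control over a WebSocket.
// The receiver periodically sends a CREDIT control message; the sender
// consumes one credit per data message and must stall at zero.
class CreditWindow {
  constructor(initialCredits) {
    this.credits = initialCredits;
  }
  // Called on the sender when a CREDIT message arrives from the receiver.
  grant(n) {
    this.credits += n;
  }
  // Returns true if a message may be sent now; consumes one credit.
  trySend() {
    if (this.credits <= 0) return false;
    this.credits -= 1;
    return true;
  }
}
```

The sender would check trySend() before each ws.send(), and the 
receiver would send a grant only after its onmessage handler has 
actually consumed the data. Note that the grant messages themselves 
travel through the very queues I'm worried about above.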


Nicholas Wilson:
> Your second question is whether it's possible to stop the browser
> reading from the socket. Yes, just don't return from your onmessage
> handler until you've actually finished handling the message.

Takeshi Yoshino:
> If such blocking work is done in a worker, this method should work.
> (FYI, Chrome (Blink) doesn't have such flow control between network
> thread and worker thread yet, so this shouldn't work on Chrome for
> now).

> If an app is designed to be responsive and such work is done by some
> other asynchronous HTML5 API, this method doesn't work.

I agree with this. Using the method proposed by Nicholas Wilson, my app 
can only be responsive to one WS, not to another and not to anything 
else. It seems clear why this is somewhere between suboptimal and 
unacceptable.
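This is just the single-threaded event loop at work: anything that 
blocks onmessage blocks every other socket, timer and UI event with it. 
A tiny illustration, where busyWaitMs is a stand-in for whatever 
synchronous "handling" the message needs:

```javascript
// Stand-in for synchronous message handling: spins the event loop
// for ms milliseconds, during which nothing else can run.
function busyWaitMs(ms) {
  const until = Date.now() + ms;
  while (Date.now() < until) { /* spin */ }
}

// ws.onmessage = (ev) => {
//   busyWaitMs(1000); // throttles THIS socket, but freezes everything else
// };
```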
I can also confirm the nonexistence of flow control in Chrome between 
the network and worker threads. When I pause my JS script in the 
debugger, a thread keeps reading data from the underlying socket, no 
backlog builds up, and all the received data is buffered in memory.

As for Chrome, it would surely be nice if the network thread stopped 
reading data when none is being consumed in the onmessage handler.

As for the WS API, I still don't understand why there is such an 
asymmetry between the send and receive sides.
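To spell out the asymmetry: the send side already exposes backpressure 
through the standard bufferedAmount attribute, so a sender can throttle 
itself, while the receive side has no equivalent. A common polling 
pattern on the send side (the threshold and interval here are 
illustrative choices, not spec values):

```javascript
// Send-side backpressure using the standard WebSocket.bufferedAmount
// attribute: stop handing data to the socket while too many bytes are
// queued locally, and resume once the buffer has drained.
const HIGH_WATER = 64 * 1024; // illustrative: bytes allowed to queue

function sendWithBackpressure(ws, nextChunk) {
  // nextChunk() returns the next chunk to send, or null when done.
  function pump() {
    while (ws.bufferedAmount < HIGH_WATER) {
      const chunk = nextChunk();
      if (chunk === null) return; // finished
      ws.send(chunk);
    }
    setTimeout(pump, 50); // re-check once the buffer drains
  }
  pump();
}
```

Nothing comparable exists for receiving: onmessage is pushed at the 
application, and there is no attribute or method to tell the browser 
"stop reading from the socket for now".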


Cheers,
Michael

* I'm using the word "realtime" quite liberally here, especially for 
someone working on embedded realtime systems ;)
Received on Friday, 18 October 2013 07:22:29 UTC
