
Re: [webrtc-pc] (How) does SCTP handle CPU-bound congestion on JavaScript thread? (#2086)

From: Lennart Grahl via GitHub <sysbot+gh@w3.org>
Date: Mon, 28 Jan 2019 21:04:53 +0000
To: public-webrtc-logs@w3.org
Message-ID: <issue_comment.created-458300681-1548709492-sysbot+gh@w3.org>
> Could you then comment on mine?

I'll go back to the original statement:

> [...] when someone activates ndata and try to receive 10 4GB files with current API (common torrent case), it will not work either.

When calling `.send`, you're also restricted by the remote peer's announced maximum message size. No one announces 4 GiB at the moment. That was the direction my thoughts went.
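That restriction is queryable in today's API via `RTCSctpTransport.maxMessageSize`. A minimal sketch of guarding a send against it — `safeSend` is a hypothetical helper of mine, and the channel/transport objects are plain ducks so the logic runs outside a browser:

```javascript
// Guard a data channel send against the remote peer's announced
// maximum message size. Per the WebRTC spec, send() itself throws a
// TypeError in this case; this helper just makes the check explicit.
function safeSend(channel, transport, data) {
  const max = transport.maxMessageSize; // what the remote peer announced
  if (max > 0 && data.byteLength > max) {
    throw new TypeError(
      `message of ${data.byteLength} bytes exceeds remote ` +
      `max-message-size of ${max} bytes`);
  }
  channel.send(data);
}
```

In a browser you would pass `pc.sctp` as the transport; a value of `65536` is a common announcement today, nowhere near 4 GiB.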

But let's ignore `.send` because you wanted to talk about the receiving side. Assume A uses Firefox (which supports receiving messages of up to 2 GiB) and B is a server with a streaming API. Let's also assume they have activated ndata. That server could then effectively start sending 2 GiB messages on every channel with the same priority, which would overburden A and drive it into an OOM situation.

Okay, granted. :ok_hand: Someone may do that at some point. But it's a considerably more unlikely scenario than receiving a message larger than the receive buffer - with and without ndata. That already happens when you send something > 1 MiB to Firefox. In Firefox < 64 you already surpass the receive buffer with just slightly more than 128 KiB. I don't believe the likelihoods of these two occurrences are in the same league. And if one adjusted Firefox's receive buffer to 2 GiB, it would run into the same issue again once ndata is deployed. That would be a spec conflict anyway since ndata is mandatory.


So far we've established three possibilities:

1. Don't clear the receive buffer before having handed out a message. Requires `max-message-size` to be locked to the receive buffer's size; otherwise, instant deadlock if the receive buffer is smaller than the message received. Risks spontaneous deadlocks if ndata (which is mandatory) is activated.
2. Drop decrypted DTLS packets intended for the SCTP stack when the event loop is congested. Risks retransmissions, leading to higher bandwidth usage and potential media quality degradation.
3. Do nothing and declare the current API unsuitable. Risks the receiving peer going OOM (or lets the browser kill the channel if it reaches an absurd amount of memory usage).

Pick your poison. These are my comments for each possibility:

1. No (said enough about why already).
2. Can't estimate the risk here, but I don't expect the event queue to be congested that often, so the impact may be acceptable. However, that is only a guess. It would also require a definition of when the event queue is considered *congested*.
3. From my own experience, I haven't heard complaints about this issue so far. I'm also using data channels heavily for various projects and haven't found this to be an issue in practice. Perhaps that's because usrsctp usually becomes CPU-bound much earlier than the event queue and the JS thread. User applications can also throttle themselves. Sure, it's not great, but maybe this problem is slightly overrated? I'd vote for this.
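On the "applications can throttle themselves" point: the sender side already has the hooks for this via `bufferedAmount` and the `bufferedamountlow` event. A sketch under assumptions of my own (`sendThrottled` and the watermark values are illustrative, not from any spec):

```javascript
// Sender-side backpressure sketch: stop calling send() once the
// channel's internal buffer grows past a high-water mark, and resume
// when the browser fires `bufferedamountlow` after draining below
// the configured threshold.
const HIGH_WATER = 1 << 20; // 1 MiB, an arbitrary example value

function sendThrottled(channel, chunks) {
  channel.bufferedAmountLowThreshold = HIGH_WATER / 2;
  let i = 0;
  const pump = () => {
    // Send while there is room in the buffer.
    while (i < chunks.length && channel.bufferedAmount < HIGH_WATER) {
      channel.send(chunks[i++]);
    }
    if (i < chunks.length) {
      // Buffer is full; wait for it to drain below the threshold.
      channel.addEventListener('bufferedamountlow', pump, { once: true });
    }
  };
  pump();
}
```

This only protects the sender's queue, of course - it does nothing for the receiving side discussed above, where the application has no comparable knob.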

GitHub Notification of comment by lgrahl
Please view or discuss this issue at https://github.com/w3c/webrtc-pc/issues/2086#issuecomment-458300681 using your GitHub account
Received on Monday, 28 January 2019 21:04:55 UTC
