
Re: [webrtc-pc] (How) does SCTP handle CPU-bound congestion on JavaScript thread? (#2086)

From: Lennart Grahl via GitHub <sysbot+gh@w3.org>
Date: Sat, 26 Jan 2019 12:33:13 +0000
To: public-webrtc-logs@w3.org
Message-ID: <issue_comment.created-457827612-1548505991-sysbot+gh@w3.org>
The issue is that N potential streams share a single receive buffer. Let the receive buffer/window *W* be 1 MiB. Let there be streams *A* and *B*, where *B* is, for some reason, not being read from. The sender sends 1 MiB to stream *B* and then to stream *A*. Terminology: it's a *stream* at the SCTP level and a *channel* at the JS level.

With the current API, this would happen:

1. Chunks of size S for stream B are read from W into a separate reassembly buffer allocated by the browser for channel B. This frees that amount of data from W.
2. Once all chunks of the message have been received, the message is reassembled and bubbled via `onmessage` on channel B.
3. No one listens on B, but it doesn't matter.
4. The same happens for stream/channel A.
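For concreteness, the current-API accounting can be sketched as a toy model (the names, the 16 KiB chunk size, and the `recv_chunk` helper are illustrative assumptions, not real browser internals):

```python
MIB = 1024 * 1024
W_CAPACITY = 1 * MIB      # shared receive window W of the association
CHUNK = 16 * 1024         # assumed chunk size S

used = 0                  # bytes currently occupying W

def recv_chunk(size):
    """Current API: the chunk lands in W, is copied into the browser's
    per-channel reassembly buffer, and W is freed immediately."""
    global used
    assert used + size <= W_CAPACITY
    used += size          # chunk arrives in W ...
    used -= size          # ... and is copied out right away

for _ in range(MIB // CHUNK):   # 1 MiB to stream B (which no one reads)
    recv_chunk(CHUNK)
for _ in range(MIB // CHUNK):   # then 1 MiB to stream A
    recv_chunk(CHUNK)

print(used)   # 0: W never fills up, whether or not channel B is read
```

Because every chunk is copied out of W immediately, a stalled reader on one channel never consumes window capacity that other streams need.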

With the proposed change, this would happen:

1. Chunks of size S for stream B are received and
  a) either copied from W into a separate reassembly buffer allocated by the browser for channel B, or
  b) just left in W until the message is complete.
  In both cases, that amount of data is not freed from W.
2. Once all chunks of the message have been received, the message is reassembled and bubbled via `onmessage` on channel B.
3. No one listens on B, thus W is now clogged as it only has a capacity of 1 MiB.
4. The message on stream/channel A cannot be received.
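The clogging can be sketched with the same kind of toy accounting (again, all names and the 16 KiB chunk size are assumptions for illustration, not any real implementation):

```python
MIB = 1024 * 1024
W_CAPACITY = 1 * MIB      # shared receive window W
CHUNK = 16 * 1024         # assumed chunk size S

used = 0  # bytes held in W; with the proposed change, nothing is freed
          # until the application actually reads the message

def recv_chunk(size):
    """Accept a chunk into W; return False if W has no room left."""
    global used
    if used + size > W_CAPACITY:
        return False
    used += size          # the chunk stays accounted against W
    return True

# 1 MiB message to stream B fills W completely; no one reads channel B.
for _ in range(MIB // CHUNK):
    assert recv_chunk(CHUNK)

# A chunk for stream A can no longer be accepted: W is clogged by B.
print(recv_chunk(CHUNK))   # False
```

One unread channel now blocks the whole association, because the window accounting is shared across all streams.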

---

Same example, but now B is being read from and the sender sends 2 MiB messages instead of 1 MiB.

With the current API, nothing changes compared to the above example.

With the proposed change, this would happen:

1. Chunks of size S for stream B are received and
  a) either copied from W into a separate reassembly buffer allocated by the browser for channel B, or
  b) just left in W until the message is complete.
  In both cases, that amount of data is not freed from W.
2. This continues until W has reached its capacity (1 MiB). But the message is of size 2 MiB, so W is clogged and the association stalls.
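The stall in this variant can be sketched the same way (names and the 16 KiB chunk size are again illustrative assumptions):

```python
MIB = 1024 * 1024
W_CAPACITY = 1 * MIB      # shared receive window W
CHUNK = 16 * 1024         # assumed chunk size S
MESSAGE = 2 * MIB         # a single 2 MiB message on stream B

used = 0                  # bytes held in W until the full message arrives
received = 0              # bytes of the message received so far

while received < MESSAGE:
    if used + CHUNK > W_CAPACITY:
        break             # W is full but the message is incomplete: stall
    used += CHUNK         # chunk stays in W until the whole message arrives
    received += CHUNK

print(received == MESSAGE)   # False: only 1 MiB fit before W clogged
```

Even with an attentive reader, no single message larger than W can ever complete, because the message is only released from W once it is whole.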

---

@henbos I'm trying to make the point that *one receive buffer per association* is a problem for 1). I haven't really talked about the sender side 2), but IIRC ndata does indeed solve the prioritization issue (by the use of stream schedulers), so I don't think we need to file an issue for that. It may have gotten a little confusing with all the discussion about whether a different API, ndata & co. would make a difference.

-- 
GitHub Notification of comment by lgrahl
Please view or discuss this issue at https://github.com/w3c/webrtc-pc/issues/2086#issuecomment-457827612 using your GitHub account
Received on Saturday, 26 January 2019 12:33:14 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 9 October 2019 15:15:01 UTC