- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Mon, 1 Dec 2014 06:43:34 -1000
- To: "public-webrtc@w3.org" <public-webrtc@w3.org>
https://github.com/w3c/webrtc-pc/issues/39

From the issue: In #29, @stefhak noted that we don't really specify how to handle inbound media that hasn't yet been assigned to a MediaStreamTrack through signaling. This can happen with renegotiation plus bundling at the offerer. We don't specify whether to keep the data, or what to do with it.

I tend to think that this falls into the space of resource and quality trade-offs that aren't really safe to specify for the general case. A machine with lots of RAM can store several jitter buffers' worth of audio and several I-frames, but there are limits on how much it can store. There are also limits on the processing capacity available to handle all that media properly.

In the same way that we don't specify jitter buffer size and depth, I'm going to recommend closing this issue with at most some editorial additions.
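To make the trade-off concrete, here is a minimal sketch (purely hypothetical, not part of any spec or browser implementation) of what an implementation-defined policy could look like: buffer unassigned inbound media, but cap the buffer by total bytes and by age, dropping the oldest packets first. All names here are illustrative.

```typescript
// Hypothetical sketch: a bounded buffer for inbound media packets that
// aren't yet assigned to a MediaStreamTrack. Caps are implementation
// choices, exactly the kind of thing the spec would leave unspecified.
interface MediaPacket {
  timestampMs: number;  // arrival time in milliseconds
  payload: Uint8Array;  // encoded media bytes
}

class UnassignedMediaBuffer {
  private packets: MediaPacket[] = [];
  private bytes = 0;

  constructor(
    private maxBytes: number, // e.g. a few jitter buffers' worth
    private maxAgeMs: number  // drop media older than this
  ) {}

  push(pkt: MediaPacket): void {
    this.packets.push(pkt);
    this.bytes += pkt.payload.length;
    this.evict(pkt.timestampMs);
  }

  // Drop oldest packets while over the byte cap or past the age limit.
  private evict(nowMs: number): void {
    while (
      this.packets.length > 0 &&
      (this.bytes > this.maxBytes ||
        nowMs - this.packets[0].timestampMs > this.maxAgeMs)
    ) {
      this.bytes -= this.packets[0].payload.length;
      this.packets.shift();
    }
  }

  // Hand buffered media to a newly assigned track and reset.
  drain(): MediaPacket[] {
    const out = this.packets;
    this.packets = [];
    this.bytes = 0;
    return out;
  }
}
```

A machine with more memory could simply configure larger caps; a constrained one could set `maxBytes` to zero and discard unassigned media entirely, which is why mandating one behavior for all implementations is hard.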
Received on Monday, 1 December 2014 16:44:00 UTC