
Re: setting bandwidth

From: cowwoc <cowwoc@bbs.darktech.org>
Date: Sat, 27 Jul 2013 12:40:51 -0400
Message-ID: <51F3F813.5010904@bbs.darktech.org>
To: Martin Thomson <martin.thomson@gmail.com>
CC: Kiran Kumar <g.kiranreddy4u@gmail.com>, Silvia Pfeiffer <silviapfeiffer1@gmail.com>, public-webrtc <public-webrtc@w3.org>
On 27/07/2013 11:58 AM, Martin Thomson wrote:
> On 25 July 2013 21:24, Kiran Kumar <g.kiranreddy4u@gmail.com> wrote:
>> The timer implementation I suggested, is to avoid the error condition that
>> can arise as a result of Gili's proposal callback API.
> Unfortunately, having a specific timer-based constraint will cause
> problems.  Different browsers are likely to implement very different
> rate control mechanisms, as we have already seen with the video
> comparison tests (in RTCWEB).  The time periods over which bandwidth
> is controlled and how rate control is implemented are intimately tied.
>   What this sort of mechanism is likely to do is force applications to
> learn about the intricate detail of each browser (probably using user
> agent sniffing techniques) in order to control the bandwidth consumed.
> In some respects, Cullen's proposal is elegant, but I think that we
> really need a mechanism that effectively just sets the b= line in SDP.
> Note that this doesn't necessarily have to be negotiated as long as the
> number that is set is lower than the negotiated value.  However, for
> consistency purposes, I'd probably just make this a constraint on
> createOffer and maybe createAnswer, which would force the use of
> setLocalDescription etc...
> Also, to anyone considering a minimum value for a bandwidth
> constraint.  There is no way that this would ever be a good idea.  The
> browser will have to respect the congestion signals it receives and
> reduce bandwidth accordingly.  Allowing an application to force a
> minimum rate would allow bad actors to generate congestion on the
> Internet.  Besides, good codecs encode still images and silence at
> near-zero rates during normal operation; we wouldn't want to prevent
> that.  Minimum target quality levels are something else, of course.
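
A mechanism that "just sets the b= line" can be approximated today by munging the SDP before handing it to setLocalDescription. This is only a sketch of the idea, not a proposed API; it assumes the browser treats b=AS (kilobits per second, per RFC 4566) as an upper bound, which is not guaranteed across implementations:

```javascript
// Sketch only: cap send bandwidth by writing a b=AS line into the SDP
// before it is passed to setLocalDescription. "AS" is in kilobits/second.
function capBandwidth(sdp, kbps) {
  return sdp.split('\r\n').reduce((lines, line) => {
    if (/^b=AS:/.test(line)) return lines;            // drop any existing b=AS
    lines.push(line);
    if (/^c=/.test(line)) lines.push('b=AS:' + kbps); // b= follows c= (RFC 4566)
    return lines;
  }, []).join('\r\n');
}

// Example with a minimal, illustrative SDP fragment:
const sdp = ['v=0', 'm=video 9 RTP/SAVPF 100',
             'c=IN IP4 0.0.0.0', 'a=sendrecv'].join('\r\n');
console.log(capBandwidth(sdp, 3000));
```

A constraint on createOffer would make this unnecessary, but the munging approach shows why the b= mechanism is attractive: it is a single line per media section and needs no browser-specific knowledge.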


This is precisely why I believe users should be able to specify "fences" 
and callbacks. There are a lot of cases where the application knows 
better than the browser what should happen when a fence is crossed.
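
To make the idea concrete, here is one hypothetical shape such a fence could take. None of these names exist in WebRTC; the browser would drive the reports from its own statistics, and the application alone decides how to react:

```javascript
// Hypothetical "fence" sketch (no real WebRTC API is used here): the
// application registers a bandwidth floor and a callback, and the
// callback fires once each time the floor is crossed downward.
class BandwidthFence {
  constructor(minKbps, onCrossed) {
    this.minKbps = minKbps;
    this.onCrossed = onCrossed;
    this.crossed = false;
  }
  // In a real browser this would be driven by incoming stats;
  // here we feed samples by hand.
  report(kbps) {
    if (this.minKbps > 0 && kbps < this.minKbps && !this.crossed) {
      this.crossed = true;
      this.onCrossed(kbps);
    } else if (kbps >= this.minKbps) {
      this.crossed = false; // re-arm once bandwidth recovers
    }
  }
}

// The application, not the browser, chooses the reaction:
const fence = new BandwidthFence(3000, kbps =>
  console.log('congestion: only ' + kbps + ' kbps; terminating the call'));
[3500, 3200, 2100].forEach(s => fence.report(s)); // fires once, at 2100
```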

Here is my use-case for minimum bandwidth:

  * I'm running an application that requires 1080p video. A lower video
    resolution is unacceptable. It is my job to guarantee the existence
    of a 3 Mbit/s synchronous pipe. If, for whatever reason, something
    goes wrong, I expect to be able to handle the failure condition
    (i.e. the application should decide, not the browser).
  * Each fence condition (congestion, mute, etc.) would get handled
    differently by the application. In the case of congestion, I might
    kill the video feed altogether or display an error message and
    terminate the call. In the case of mute, the application could avoid
    a fence condition by enabling video mute and changing the minimum
    bandwidth to zero at the same time. When un-muting, it would restore
    the minimum bandwidth fence.
  * Practically speaking: the current behavior (assuming a minimum
    bandwidth of zero) results in very poor video quality during the
    first minute after the video session begins. Applications that use
    1080p need to wait a minute for the session to "warm up" before
    they can conduct any meaningful dialog. Even if this time were
    reduced to 10 seconds it would be unacceptable. I expect an
    immediate sharp video image from the moment the session begins.
Received on Saturday, 27 July 2013 16:41:43 UTC
