Re: setting bandwidth

On Sun, Jul 28, 2013 at 1:43 PM, cowwoc <cowwoc@bbs.darktech.org> wrote:

>     Look, when I try to upload a very large file over TCP I see the
> upload speed peg at close to 100% of capacity seemingly
> immediately. Can't we take the same mechanism as TCP and layer it on
> top of what WebRTC uses today? I'd like to avoid drilling down into
> the specific implementation at this time. All, I'm asking is whether
> this is technically doable and if not, why
>
> Let's not lose track of the goal of this discussion, which is to
> enable users to specify the initial/minimum bandwidth usage so chat
> sessions don't start out with blurry/choppy video at 1080p. If you
> have an alternative for achieving this, I'm all ears.

As Martin says, this isn't really the place for a detailed discussion
of congestion management. There's an entire IETF WG for thinking about
congestion control for media traffic (http://tools.ietf.org/wg/rmcat/)
so this should be giving you the sense that (1) it's not trivial and
(2) the IETF is working on it. So, for both reasons, it's a lot more
complicated than "just" having the application specify the initial
bandwidth.

Grossly oversimplifying, your average rate control mechanism (like the
one in TCP) is designed to:

1. Estimate how much bandwidth is available and adjust the sending
   rate accordingly.
2. Share the network well with other uses (both other applications
   running on the user's computer and applications running on other
   users' computers) rather than just consuming all the available
   bandwidth.
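To make the shape of that mechanism concrete, here's a toy
additive-increase/multiplicative-decrease (AIMD) sketch in Python. The
class, the numbers, and the callback names are all made up for
illustration; real implementations are far more involved:

```python
# Toy AIMD controller, the same basic shape TCP's congestion control
# uses: probe upward gently, back off sharply on signs of congestion.
# All names and constants here are hypothetical.

class AimdController:
    def __init__(self, initial_kbps=100, increase_kbps=50,
                 decrease_factor=0.5):
        self.rate_kbps = initial_kbps
        self.increase_kbps = increase_kbps
        self.decrease_factor = decrease_factor

    def on_ack(self):
        """No loss observed: probe for more bandwidth, additively."""
        self.rate_kbps += self.increase_kbps

    def on_loss(self):
        """Loss observed: back off multiplicatively so the link is
        shared with other flows instead of being consumed outright."""
        self.rate_kbps = max(1, self.rate_kbps * self.decrease_factor)

ctrl = AimdController()
for _ in range(5):
    ctrl.on_ack()       # five clean intervals: rate climbs to 350
ctrl.on_loss()          # one loss event: rate halves to 175.0
print(ctrl.rate_kbps)   # 175.0
```

The asymmetry (slow up, fast down) is what makes many flows converge
to a roughly fair share instead of fighting each other.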

A key part of fulfilling both of these objectives is to start sending
relatively slowly and then ramp up the sending rate as you gain
confidence that there is bandwidth headroom, which is of course very
approximately what TCP does. If you do the opposite and start sending
fast expecting to throttle back when you're wrong, then you can have
some very bad effects on the network (as Harald suggests, look up
"congestion collapse"). You are also not that likely to have a great
experience yourself, since if you try to transmit faster than the
network can carry, you are going to get packet loss, which will likely
make the user experience *worse* than if you had simply started off
conservatively in the first place. So, for both reasons, it's really
not a good idea to start out sending aggressively, especially when the
application doesn't have any idea what the network conditions are, as
is doubly the case with peer-to-peer calls.
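The "start slow, ramp up" idea can be sketched in a few lines. This is
a grossly simplified slow-start-style ramp (the function and the
threshold are invented for illustration, not any real algorithm):

```python
# Toy slow-start ramp: double the rate each round until we reach a
# threshold where we stop being confident there's headroom, then grow
# linearly and cautiously. Hypothetical numbers throughout.

def ramp_rates(start_kbps, ssthresh_kbps, rounds):
    rates = []
    rate = start_kbps
    for _ in range(rounds):
        rates.append(rate)
        if rate < ssthresh_kbps:
            rate *= 2           # exponential growth while well below capacity
        else:
            rate += start_kbps  # cautious linear growth near the limit
    return rates

print(ramp_rates(100, 800, 6))  # [100, 200, 400, 800, 900, 1000]
```

Note that even the "slow" start reaches a substantial rate within a
handful of round trips, which is why TCP transfers feel like they peg
the link "seemingly immediately".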

As you observe, TCP does converge relatively quickly on the
appropriate bandwidth, and it's true that you can--and voice and video
implementations to some extent do--use similar algorithms to detect
that they could safely increase their sending rate. However, because
of the particular properties of voice and video, sending rate
adjustments tend to be rather more difficult and jarring than simply
sending packets faster the way a TCP file transfer does, so it takes
a while for the system to react. That is why you see a longer period
of low-quality video before the upgrade to high quality.
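One reason the adjustment is coarser than TCP's: a video sender can't
just pace bytes out faster, it has to jump between discrete encoding
tiers, and it typically wants sustained evidence of headroom before
upgrading (a premature upgrade followed by loss is the jarring case).
A toy sketch, with purely illustrative tier values:

```python
# Why media rate adaptation reacts more slowly than TCP: the sender
# picks from discrete encoding tiers and waits for the bandwidth
# estimate to hold steady before stepping up. Numbers are invented.

TIERS_KBPS = [300, 800, 1500, 3000]  # e.g. roughly 360p .. 1080p

def pick_tier(estimated_kbps, good_intervals, required_good=10):
    """Choose the highest tier that fits the estimate, but only take
    the top affordable tier after `required_good` clean intervals."""
    affordable = [t for t in TIERS_KBPS if t <= estimated_kbps]
    if not affordable:
        return TIERS_KBPS[0]  # floor: send *something*
    if good_intervals < required_good:
        # Not enough evidence yet: stay one tier conservative.
        return affordable[max(0, len(affordable) - 2)]
    return affordable[-1]

print(pick_tier(3500, good_intervals=3))   # 1500 (still cautious)
print(pick_tier(3500, good_intervals=12))  # 3000 (confidence gained)
```

That waiting period is the stretch of low-quality video the user sees
before the upgrade.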

I hasten to restate at this point that I am grossly oversimplifying
the situation and I've only bothered to go into this level of detail
to give you a flavor of the complexity of the problem and why
just allowing the application to specify its initial sending rate is
profoundly inadequate. If you're interested in making a useful
contribution to this problem I suggest doing a fair bit of
background reading on congestion control and then joining
the discussion in RMCAT.

This isn't to say, btw, that it's not useful to have the platform
(i.e., the browser) give the application feedback about the congestion
environment it's operating in. The application could (for instance)
use that feedback to start up or shut down streams as appropriate.
It's just that those changes by the application still need to fit
within the congestion envelope.
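Sketching that last idea: the platform reports an estimate of the
available envelope, and the application decides which streams to run
inside it. The callback name, stream names, and costs below are all
hypothetical; the point is only that the app chooses *what* to send
while the platform still bounds *how much*:

```python
# Hypothetical app-side policy: given the browser's bandwidth
# estimate, enable streams in priority order while they fit.

STREAMS = [("audio", 50), ("video-low", 500), ("video-hd", 2500)]  # kbps

def on_bandwidth_estimate(available_kbps):
    """Enable streams in priority order within the congestion envelope."""
    enabled, budget = [], available_kbps
    for name, cost in STREAMS:
        if cost <= budget:
            enabled.append(name)
            budget -= cost
    return enabled

print(on_bandwidth_estimate(600))   # ['audio', 'video-low']
print(on_bandwidth_estimate(4000))  # ['audio', 'video-low', 'video-hd']
```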

-Ekr

>  Thanks,
> Gili
>
>
> On 28/07/2013 3:58 PM, Eric Rescorla wrote:
>
> Websockets runs over TCP and so inherits TCP congestion control
>
>  Ekr
>
> On Jul 28, 2013, at 21:52, cowwoc <cowwoc@bbs.darktech.org> wrote:
>
>
>     How do WebSockets deal with this problem? Do they even try to?
>
> Gili
>
> On 28/07/2013 1:10 PM, Martin Thomson wrote:
>
> This really isn't the place for lessons in congestion management on the
> internet. Maybe you can start out by searching for "congestion collapse".
> Get back to us when you can explain why TCP works like it does.
> On Jul 28, 2013 5:06 PM, "cowwoc" <cowwoc@bbs.darktech.org> wrote:
>
>>  On 28/07/2013 3:30 AM, Martin Thomson wrote:
>>
>> On 27 July 2013 09:40, cowwoc <cowwoc@bbs.darktech.org> wrote:
>>
>>  I expect an immediate sharp video experience.
>>
>>  I suspect that no matter what we do, you will be disappointed.  The
>> thing is, what you describe is likely to generate congestion and there
>> is no way that a browser platform should permit an application to do
>> that.
>>
>>
>>     I don't understand the congestion argument, so please help me
>> understand.
>>
>>     What will happen if we start at 3MBit, versus slowly increasing
>> bandwidth usage up to 3Mbit in the following cases?
>>
>>    1. The pipe is a synchronous 2MBit line
>>    2. The pipe is a synchronous 4MBit line
>>
>>     For case #1, if the initial fence is minBandwidth = 3MBit, I expect
>> the callback to get invoked right away and it either aborting the
>> application or reducing the video resolution and minimum bandwidth. In the
>> case of a gradual ramp-up, I expect the same end-result (callback getting
>> invoked) but it will take longer to occur and will take place at the 2MBit
>> mark.
>>     For case #2, I expect both scenarios (immediate vs ramp-up) to be
>> identical.
>>
>>     Did I miss anything?
>>
>> Thanks,
>> Gili
>>
>
>
>

Received on Monday, 29 July 2013 02:47:12 UTC