RE: setting bandwidth

I was just catching up on this thread, and I found it hard to follow because I think a few different things are being mixed together.
1) negotiation of bandwidth
This is the result of some API/signaling interaction, not the real-time performance of the network. If negotiation fails to secure some minimum required bandwidth, an application may want to treat the negotiation as having failed (e.g. as a failure to add a media stream).
2) current bandwidth utilization
This is something that seems valuable to be able to query through the stats API.
3) packet loss/delay
Packet loss or delay beyond some thresholds is what is going to result in poor video quality. This is something for which a callback to the app seems appropriate.
With video, the bandwidth is high when there is a scene change or lots of motion. The bandwidth may drop considerably when the image is fairly static. Having a callback when the bandwidth falls below some minimum does not seem very useful. What is more useful is a callback when specific packet loss or delay thresholds are crossed.
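To make 2) and 3) concrete, here is a rough sketch of what an application-side poll could look like: it reads the incoming RTP counters from getStats() to estimate current utilization and calls back into the app when loss over the last interval crosses a threshold. This assumes a promise-based getStats() and the inbound-rtp statistics fields; pollStats, onPoorQuality, the 1 s interval and the 5% threshold are all illustrative placeholders, not anything specified.

// Sketch only: poll stats once a second, derive incoming video bitrate (point 2)
// and the loss rate over the last interval, and fire an app callback when loss
// crosses a threshold (point 3).
type QualityCallback = (lossRate: number) => void;

function pollStats(pc: RTCPeerConnection, onPoorQuality: QualityCallback): void {
  const intervalSec = 1;
  let prevBytes = 0;
  let prevLost = 0;
  let prevReceived = 0;

  setInterval(async () => {
    const report = await pc.getStats();
    report.forEach((stat) => {
      if (stat.type !== "inbound-rtp" || stat.kind !== "video") {
        return;
      }
      // Point 2: current bandwidth utilization, derived from byte counters.
      // (The very first sample will be off, since the counters start at zero.)
      const bitsPerSec = ((stat.bytesReceived - prevBytes) * 8) / intervalSec;
      prevBytes = stat.bytesReceived;

      // Point 3: packet loss over the last interval, not cumulative.
      const lost = stat.packetsLost - prevLost;
      const received = stat.packetsReceived - prevReceived;
      prevLost = stat.packetsLost;
      prevReceived = stat.packetsReceived;

      const lossRate = lost / Math.max(lost + received, 1);
      console.log(`~${Math.round(bitsPerSec / 1000)} kbit/s, loss ${lossRate.toFixed(3)}`);

      if (lossRate > 0.05) {
        onPoorQuality(lossRate); // the app decides what "poor quality" means to it
      }
    });
  }, intervalSec * 1000);
}

The point being that the thresholding lives in the application; the browser only has to expose the counters.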

Cheers,
Charles

From: Harald Alvestrand [mailto:harald@alvestrand.no]
Sent: Monday, July 29, 2013 6:16 AM
To: public-webrtc@w3.org
Subject: Re: setting bandwidth

On 07/29/2013 12:58 AM, cowwoc wrote:
On 28/07/2013 5:06 PM, Harald Alvestrand wrote:
On 07/28/2013 10:43 PM, cowwoc wrote:

    Look, when I try to upload a very large file over TCP, I see the upload speed peg at close to 100% of capacity seemingly immediately. Can't we take the same mechanism TCP uses and layer it on top of what WebRTC uses today? I'd like to avoid drilling down into the specific implementation at this time. All I'm asking is whether this is technically doable and, if not, why.

    Let's not lose track of the goal of this discussion, which is to enable users to specify the initial/minimum bandwidth usage so chat sessions don't start out with blurry/choppy video at 1080p. If you have an alternative for achieving this, I'm all ears.

That may be YOUR goal in this discussion. It is certainly not a formulation I'll sign up for as a WG goal.

In the IETF, the first goal is the continued survival of the Internet; all other desires are subordinate to that.

We know from bitter experience (google "congestion collapse") that badly designed congestion control, when deployed simultaneously by a large fraction of the nodes on the Internet, brings problems that can render the Net unusable, and which - mark this - do NOT show up in tests done with only a few users.
The IETF congestion people are conservative, and have good reason to be.

I think we can agree that there is a clear desire to start up chat sessions with video that is pleasing to the user.

I don't think we have any kind of consensus that a reasonable way to achieve that is for the browser to obey whatever initial-bandwidth suggestion the user gives it, no matter what it knows (or knows it doesn't know) about the network conditions between the sender and the recipient.

Harald

Harald,

     The only reason I brought up the example of HTTP upload or WebSockets is to ask whether we can learn anything from how other technologies handle this problem. I'm only interested in the "what" (instant-on, clear/smooth video chat sessions), not the "how". To that end, do you have any ideas?

My best idea at the moment is to start the video without showing it, waiting for a few tenths of a second to let the congestion manager ramp up, and then show it.

That depends on having a bandwidth manager that can ramp up in a few RTTs rather than multiple seconds, of course, but the relevant WG is the IETF RMCAT WG rather than this one.
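For what it's worth, a minimal sketch of that idea, assuming the remote MediaStream is already available and attached via srcObject; the 500 ms warm-up is an arbitrary placeholder, since how long the ramp-up actually takes is exactly the RMCAT question:

// Sketch only: attach the remote stream so media flows and the congestion
// controller can ramp up, but keep the element invisible until a warm-up
// delay has passed. attachWithWarmup and the 500 ms figure are placeholders.
function attachWithWarmup(remoteStream: MediaStream, video: HTMLVideoElement): void {
  video.srcObject = remoteStream;       // packets flow and are decoded from here on...
  video.style.visibility = "hidden";    // ...but nothing is shown yet
  void video.play();

  setTimeout(() => {
    video.style.visibility = "visible"; // reveal once the rate has (hopefully) ramped up
  }, 500);
}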

This group needs to focus on how it exposes controls for things that can live within the constraints set by the environment it has to live in.

Received on Monday, 29 July 2013 09:03:06 UTC