
Re: What is missing for building "real" services?

From: Alexandre GOUAILLARD <agouaillard@gmail.com>
Date: Wed, 8 Jan 2014 09:50:08 +0800
Message-ID: <CAHgZEq56V1nSVzUFTd+gxT00e8CwizfjH6uFBRz4jLvJbf8-2A@mail.gmail.com>
To: Eric Rescorla <ekr@rtfm.com>
Cc: "piranna@gmail.com" <piranna@gmail.com>, public-webrtc <public-webrtc@w3.org>
Here are a few propositions on things that are really biting us, and how to
(perhaps) make them easier:

- bandwidth control
1. It seems that the number one SDP munging cause is the now infamous b=AS:
line used to put a cap on bandwidth. Since that capability exists in the
underlying code, it would be great to have an API that lets us put caps,
either on each stream, and/or on the full call.
2. I also see that there is an "auto-mute" feature being implemented that
depends on an arbitrary threshold. It might be interesting (but overkill?)
to give users the ability to set that limit (currently 50k, I guess) somehow.
3. Additionally, and perhaps not unrelated, we would like to be able to
decide what happens when bandwidth goes down. Right now it feels like video
has priority over audio. We would like to be able to explicitly set the
audio priority higher than the video in the underlying system, as opposed
to implementing a stats listener that triggers a re-negotiation (with the
corresponding O/A delay) when bandwidth goes below a certain threshold.
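To make the first point concrete, here is a minimal sketch of the kind of b=AS munging people resort to today in the absence of an API. The helper name and the placement heuristic (inserting after each "c=" line) are our own illustration, not a standard:

```javascript
// Hypothetical helper illustrating b=AS munging: insert a "b=AS:<kbps>"
// bandwidth line after each connection ("c=") line of the SDP to cap the
// bandwidth, since no official API currently exposes such a cap.
function capBandwidth(sdp, kbps) {
  return sdp
    .split('\r\n')
    .map(function (line) {
      // Attach the cap right after the connection line of each section.
      return /^c=IN /.test(line) ? line + '\r\nb=AS:' + kbps : line;
    })
    .join('\r\n');
}
```

One would apply this to the SDP string between createOffer/createAnswer and setLocalDescription, which is exactly the fragile step an explicit API would remove.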

- call controls like mute / hold
Right now, you can mute a local stream, but it does not seem to be possible
to let the remote peers know that the stream is muted. We ended up
implementing a specific out-of-band message for that, but we believe that
the stream/track could carry this information. This is more important for
video than for audio, as a muted video stream is displayed as a black
square, while muted audio has no audible consequence. We believe that this
mute / hold scenario will be frequent enough that we should have a
standardized way of doing it, or interop will be very difficult.
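For illustration, the ad hoc out-of-band message we describe might look like the following. The message shape ("type", "kind", "muted" fields) is purely our own convention, which is precisely the interop problem; it would travel over the signaling channel or a data channel:

```javascript
// Sketch of a nonstandard mute notification: each application invents its
// own message shape like this, so two independent apps cannot interoperate.
function makeMuteMessage(kind, muted) {
  return JSON.stringify({ type: 'mute-state', kind: kind, muted: muted });
}

function handleMuteMessage(json) {
  var msg = JSON.parse(json);
  if (msg.type !== 'mute-state') return null;
  // The receiver could, e.g., show an avatar instead of a black square
  // when the remote video track is reported as muted.
  return { kind: msg.kind, muted: msg.muted };
}
```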

- screen/application sharing
We are aware of the security implications, but there is a very, very strong
demand for screen sharing. Beyond screen sharing, the capacity to share the
displayed content of a single window on the desktop would be even better.
Most of the time, users only want to display one document, and that would
also reduce the security risk by not showing system trays. Collaboration
(the ability to let the remote peer edit the document) would be even better
still, but we believe it to be outside the scope of WebRTC.

- NAT / Firewall penetration feedback - ICE process feedback
Connectivity is a super super pain to debug, and the number one cause of
concern.
1. The 30s timeout on Chrome-generated candidates is biting a lot of
people. The timeout itself is fine, but there should be an error message
that surfaces (see 5).
2. TURN server authentication failure does not generate an error, and it
should (see 5).
3. The ICE state can stay stuck in "checking" forever, even after all the
candidates have been exhausted.
4. Not all ICE states stated in the spec are implemented (completed? failed?)
5. It would be fantastic to be able to access the list of candidates, with
their corresponding status (not checked, in use, failed, etc.) and the
cause of failure.
6. In case of success, it would be great to know which candidate is being
used (Google does that with the googActive thingy), but also the type of
that candidate. Right now, on the client side, at best you have to go to
chrome://webrtc-internals, get the active candidate, and look it up in the
list of candidates. When you use a TURN server as a STUN server too, that
lookup is not even a one-to-one mapping.
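As a sketch of what we end up doing by hand: the candidate type is already present in the "a=candidate" lines, it just is not surfaced with any status. A hypothetical parser (field names are our own labels, the line grammar follows ICE):

```javascript
// Hypothetical parser for "a=candidate" lines, pulling out the fields we
// would want the API to surface alongside a status: the "typ" token tells
// you whether a candidate is host, srflx, prflx or relay.
function parseCandidate(line) {
  var m = /^a=candidate:(\S+) (\d+) (\S+) (\d+) (\S+) (\d+) typ (\S+)/
    .exec(line);
  if (!m) return null;
  return {
    foundation: m[1],
    component: parseInt(m[2], 10),
    protocol: m[3],
    priority: parseInt(m[4], 10),
    address: m[5],
    port: parseInt(m[6], 10),
    type: m[7] // "host", "srflx", "prflx" or "relay"
  };
}
```

What is missing is not the parsing but the status (checked, in use, failed) and the failure cause attached to each such candidate.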

Right now, the only way to understand what's going on is to have a
"weaponized" version of Chrome, or a native app, that gives you access to
the ICE stack, but we cannot expect clients to deploy this, nor to
automate it. Surfacing those details in an API would allow one to:
- adapt the connection strategy on the fly, in an iterative fashion, on
the client side;
- automatically report problems and allow remote debugging of failed calls.
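The active-candidate lookup we currently do by eye in chrome://webrtc-internals could be a small function if stats records were exposed as data. The record shapes and the "googActiveConnection" field below follow Chrome's nonstandard stats and are assumptions, not a specified API:

```javascript
// Sketch of the manual chrome://webrtc-internals lookup: given a flat list
// of stats records (shapes assumed, mimicking Chrome's nonstandard stats),
// find the active candidate pair and resolve its local candidate's
// address and type.
function findActiveCandidate(records) {
  var byId = {};
  records.forEach(function (r) { byId[r.id] = r; });
  for (var i = 0; i < records.length; i++) {
    var r = records[i];
    if (r.type === 'candidate-pair' && r.googActiveConnection === 'true') {
      var local = byId[r.localCandidateId];
      if (!local) return null;
      return { address: local.address, candidateType: local.candidateType };
    }
  }
  return null; // no pair succeeded - exactly the case we cannot diagnose
}
```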



On Tue, Jan 7, 2014 at 2:15 AM, Eric Rescorla <ekr@rtfm.com> wrote:

> On Mon, Jan 6, 2014 at 10:10 AM, piranna@gmail.com <piranna@gmail.com>
> wrote:
> >> That's not really going to work unless you basically are on a public
> >> IP address with no firewall. The issue here isn't the properties of
> >> PeerConnection but the basic way in which NAT traversal algorithms
> >> work.
> >>
> > I know that the "IP and port" thing would work due to NAT, but nothing
> > prevents to just only need to exchange one endpoint connection data
> > instead of both...
>
> I don't know what you are trying to say here.
>
> A large fraction of NATs use address/port dependent filtering which
> means that there needs to be an outgoing packet from each endpoint
> through their NAT to the other side's server reflexive IP in order to
> open the pinhole. And that means that each side needs to provide
> their address information over the signaling channel.
>
> I strongly recommend that you go read the ICE specification and
> understand the algorithms it describes. That should make clear
> why the communications patterns in WebRTC are the way they
> are.
>
> -Ekr
>
>
Received on Wednesday, 8 January 2014 01:50:36 UTC
