Re: What is missing for building "real" services?

@ piranna,

I will have to respectfully disagree here. The security implications
of screen sharing are very different from those of other WebRTC
features, and should therefore be treated with additional care. I would
agree with you that a flag might be unnecessary, but a prompt, IMHO, is
necessary.

On Thu, Jan 9, 2014 at 9:53 AM, piranna@gmail.com <piranna@gmail.com> wrote:
> I'm not comparing the two in the sense that I accept either of them, but
> rather in the sense that both (plugins and flags) are equally bad ideas.
> Screen and application sharing should be included and enabled in browsers
> by default, and not hidden behind a flag or any other mechanism.
>
> Sent from my Samsung Galaxy Note II
>
> On 09/01/2014 02:42, "Alex Gouaillard" <alex.gouaillard@temasys.com.sg>
> wrote:
>
>> @ piranna,
>>
>> While I agree with you for social users and most of the population out
>> there, the difference between clicking a flag and installing a plugin
>> is the process required for IT teams to accept the product and deploy
>> it in an enterprise environment. Everything needs to be validated
>> beforehand, including (especially?) plugins. They have a very long
>> list of products to screen and maintain, and are very reluctant to add
>> yet another one. Moreover, Google's Chrome starts with higher
>> credibility than any small or medium-sized company's plugin.
>>
>> On Thu, Jan 9, 2014 at 8:54 AM, Silvia Pfeiffer
>> <silviapfeiffer1@gmail.com> wrote:
>> > On Thu, Jan 9, 2014 at 10:10 AM, Randell Jesup <randell-ietf@jesup.org>
>> > wrote:
>> >> On 1/7/2014 8:50 PM, Alexandre GOUAILLARD wrote:
>> >>
>> >> Here are a few propositions on things that are really biting us, and
>> >> how to (perhaps) make them easier:
>> >>
>> >> - bandwidth control
>> >> 1. It seems that the number one cause of SDP munging is the now
>> >> infamous b=AS line used to put a cap on bandwidth (a sketch of that
>> >> munging is below). Since that capability exists in the underlying
>> >> code, it would be great to have an API that lets us set caps, either
>> >> on each stream and/or on the full call.
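>> >>
>> >> For reference, a minimal sketch of the munging we do today, assuming
>> >> Chrome's current SDP layout and the callback-style API; the regex,
>> >> the 500 kbps figure and the callback names are placeholders:
>> >>
>> >>   function capVideoBandwidth(sdp, kbps) {
>> >>     // insert a b=AS line after the c= line of the m=video section
>> >>     return sdp.replace(/(m=video[\s\S]*?c=IN[^\r\n]*\r\n)/,
>> >>                        '$1b=AS:' + kbps + '\r\n');
>> >>   }
>> >>
>> >>   pc.createOffer(function (offer) {
>> >>     offer.sdp = capVideoBandwidth(offer.sdp, 500);  // ~500 kbps cap
>> >>     pc.setLocalDescription(offer, onSetOk, onError);
>> >>   }, onError);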
>> >>
>> >>
>> >> yes.
>> >>
>> >>
>> >> 2. I also see that there is an "auto-mute" feature being implemented
>> >> that depends on an arbitrary threshold. It might be interesting (but
>> >> overkill?) to somehow give the user the ability to set that limit
>> >> (currently 50k, I guess).
>> >>
>> >>
>> >> Pointer to this auto-mute implementation?
>> >>
>> >>
>> >> 3. Additionally, and perhaps not unrelated, we would also like to be
>> >> able to decide what happens when bandwidth goes down. Right now it
>> >> feels like the video has priority over the audio. We would like to be
>> >> able to explicitly set the audio priority higher than the video in
>> >> the underlying system, as opposed to implementing a stats listener
>> >> (sketched below) which triggers re-negotiation (with the
>> >> corresponding O/A delay) when bandwidth goes below a certain
>> >> threshold.
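>> >>
>> >> The workaround we would rather not have to write looks roughly like
>> >> this; googAvailableSendBandwidth and the callback-style getStats()
>> >> are Chrome-specific and not standardized, and the threshold and the
>> >> renegotiation helper are placeholders of our own:
>> >>
>> >>   setInterval(function () {
>> >>     pc.getStats(function (response) {
>> >>       response.result().forEach(function (report) {
>> >>         if (report.type === 'VideoBwe') {
>> >>           var bps =
>> >>             parseInt(report.stat('googAvailableSendBandwidth'), 10);
>> >>           if (bps < 100000) {            // arbitrary ~100 kbps limit
>> >>             renegotiateWithLowerVideoCap();  // app re-offer, O/A delay
>> >>           }
>> >>         }
>> >>       });
>> >>     });
>> >>   }, 2000);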
>> >>
>> >>
>> >> Right now they have the same "priority", but really audio is typically
>> >> fixed, so the video reacts to changes in the apparent level of
>> >> delay/buffering.  What you may be seeing is better (or less-obvious)
>> >> error
>> >> control and recovery in the video; the eye is often less sensitive to
>> >> things
>> >> like dropped frames than the ear.
>> >>
>> >> I'd love to see a trace/packet-capture/screen-scrape-recording where
>> >> you see
>> >> that apparent behavior.
>> >>
>> >>
>> >>
>> >> - call controls like mute / hold
>> >> Right now, you can mute a local stream, but it does not seem to be
>> >> possible to let the remote peers know that the stream is muted. We
>> >> ended up implementing a specific out-of-band message for that (a
>> >> sketch is below), but we believe that the stream/track could carry
>> >> this information. This is more important for video than for audio, as
>> >> a muted video stream is displayed as a black square, while muted
>> >> audio has no audible consequence. We believe that this mute / hold
>> >> scenario will be frequent enough that we should have a standardized
>> >> way of doing it, or interop will be very difficult.
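>> >>
>> >> For the record, our out-of-band workaround is roughly the following;
>> >> the signalingChannel object, the message format and the DOM elements
>> >> are application-specific placeholders:
>> >>
>> >>   // sender side: mute locally, then tell the remote application
>> >>   function setVideoMuted(muted) {
>> >>     localStream.getVideoTracks().forEach(function (t) {
>> >>       t.enabled = !muted;
>> >>     });
>> >>     signalingChannel.send(JSON.stringify(
>> >>       { type: 'mute', kind: 'video', muted: muted }));
>> >>   }
>> >>
>> >>   // receiver side: hide the black square, show a placeholder
>> >>   signalingChannel.onmessage = function (e) {
>> >>     var msg = JSON.parse(e.data);
>> >>     if (msg.type === 'mute' && msg.kind === 'video') {
>> >>       remoteVideo.style.display = msg.muted ? 'none' : '';
>> >>       holdImage.style.display = msg.muted ? '' : 'none';
>> >>     }
>> >>   };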
>> >>
>> >>
>> >> There is no underlying standard in the IETF for communicating this;
>> >> it's typically done at the application level.  And while we don't
>> >> have good ways in MediaStream to do this yet, I strongly prefer to
>> >> send a fixed image when video-muted/holding.  Black is a bad
>> >> choice....
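>> >>
>> >> Purely as a hypothetical sketch: if the platform exposed a way to
>> >> capture a canvas as a video track and to swap the track being sent
>> >> (neither exists today; captureStream(), replaceTrack(), holdImage and
>> >> videoSender below are all assumptions), an app could send a slate:
>> >>
>> >>   var canvas = document.createElement('canvas');
>> >>   canvas.width = 640; canvas.height = 480;
>> >>   var ctx = canvas.getContext('2d');
>> >>   // draw the "on hold" slate image into the canvas
>> >>   ctx.drawImage(holdImage, 0, 0, canvas.width, canvas.height);
>> >>   var slate = canvas.captureStream(1).getVideoTracks()[0];
>> >>   videoSender.replaceTrack(slate);  // swap what the peer receives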
>> >
>> > It would be nice if browsers sent an image, such as "video on hold" -
>> > just like they provide default 404 page renderings. This is a quality
>> > of implementation issue then. Maybe worth registering a bug on
>> > browsers. But also might be worth a note in the spec.
>> >
>> >
>> >> - screen/application sharing
>> >> We are aware of the security implications, but there is a very, very
>> >> strong demand for screen sharing. Beyond screen sharing, the ability
>> >> to share the displayed content of a given window of the desktop would
>> >> be even better. Most of the time, users only want to display one
>> >> document, and that would also reduce the security risk by not showing
>> >> system trays. Collaboration (the ability to let the remote peer edit
>> >> the document) would be even better, but we believe it to be outside
>> >> the scope of WebRTC.
>> >>
>> >>
>> >> yes, and dramatically more risky.  Screen sharing, and how to
>> >> preserve privacy and security while doing it, is a huge problem.
>> >> Right now the temporary kludge is to have the user whitelist services
>> >> that can request it (typically via extensions).
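>> >>
>> >> For completeness, the extension-based kludge in Chrome looks roughly
>> >> like this today; the API names are Chrome extension / Chrome-prefixed
>> >> APIs and may well change, and logError is a placeholder:
>> >>
>> >>   // inside a whitelisted extension (needs the "desktopCapture"
>> >>   // permission in its manifest)
>> >>   chrome.desktopCapture.chooseDesktopMedia(
>> >>     ['screen', 'window'],
>> >>     function (streamId) {
>> >>       navigator.webkitGetUserMedia({
>> >>         audio: false,
>> >>         video: { mandatory: { chromeMediaSource: 'desktop',
>> >>                               chromeMediaSourceId: streamId } }
>> >>       }, function (stream) {
>> >>         pc.addStream(stream);  // then renegotiate as usual
>> >>       }, logError);
>> >>     });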
>> >
>> > Yeah, I'm really unhappy about the screen sharing state of affairs,
>> > too. I would much prefer it became a standard browser feature.
>> >
>> > Cheers,
>> > Silvia.
>> >
>> >>    Randell
>> >>
>> >>
>> >>
>> >> - NAT / firewall penetration feedback - ICE process feedback
>> >> Connectivity is a super, super pain to debug, and the number one
>> >> cause of concern.
>> >> 1. The 30s timeout on Chrome-generated candidates is biting a lot of
>> >> people. The timeout is fine, but an error message should surface
>> >> (see 5).
>> >> 2. TURN server authentication failure does not generate an error, and
>> >> it should (see 5).
>> >> 3. The ICE state can stay stuck in "checking" forever, even after all
>> >> the candidates have been exhausted.
>> >> 4. Not all ICE states defined in the spec are implemented (completed?
>> >> failed?).
>> >> 5. It would be fantastic to be able to access the list of candidates,
>> >> with their corresponding status (not checked, in use, failed, ….) and
>> >> the cause of failure.
>> >> 6. In case of success, it would be great to know which candidate is
>> >> being used (Google does that with the googActive thingy), but also
>> >> what the type of that candidate is. Right now, on the client side, at
>> >> best you have to go to chrome://webrtc-internals, get the active
>> >> candidate, and look it up in the list of candidates. When you use a
>> >> TURN server as a STUN server too, that lookup is not one-to-one.
>> >>
>> >> Right now, the only way to understand what's going on is to have a
>> >> "weaponized" version of Chrome, or a native app, that gives you
>> >> access to the ICE stack, but we cannot expect clients to deploy this,
>> >> nor to automate it. Surfacing those details in an API would allow one
>> >> to:
>> >> - adapt the connection strategy on the fly, in an iterative fashion,
>> >> on the client side;
>> >> - automatically report problems and allow remote debugging of failed
>> >> calls.
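>> >>
>> >> The closest approximation we have found from JavaScript is a sketch
>> >> like the one below; the goog* report and stat names come from
>> >> Chrome's legacy stats and are not standardized, and log is a
>> >> placeholder:
>> >>
>> >>   pc.onicecandidate = function (e) {
>> >>     if (e.candidate) {
>> >>       log('gathered', e.candidate.candidate);  // typ host/srflx/relay
>> >>     } else {
>> >>       log('candidate gathering complete');
>> >>     }
>> >>   };
>> >>
>> >>   pc.oniceconnectionstatechange = function () {
>> >>     log('ice state:', pc.iceConnectionState);  // checking, connected, ...
>> >>   };
>> >>
>> >>   // Chrome-specific: poll the legacy stats API for the active pair.
>> >>   setInterval(function () {
>> >>     pc.getStats(function (res) {
>> >>       res.result().forEach(function (report) {
>> >>         if (report.type === 'googCandidatePair' &&
>> >>             report.stat('googActiveConnection') === 'true') {
>> >>           log('active pair:',
>> >>               report.stat('googLocalCandidateType'), '->',
>> >>               report.stat('googRemoteCandidateType'));
>> >>         }
>> >>       });
>> >>     });
>> >>   }, 5000);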
>> >>
>> >>
>> >>
>> >> On Tue, Jan 7, 2014 at 2:15 AM, Eric Rescorla <ekr@rtfm.com> wrote:
>> >>>
>> >>> On Mon, Jan 6, 2014 at 10:10 AM, piranna@gmail.com <piranna@gmail.com>
>> >>> wrote:
>> >>> >> That's not really going to work unless you basically are on a
>> >>> >> public
>> >>> >> IP address with no firewall. The issue here isn't the properties of
>> >>> >> PeerConnection but the basic way in which NAT traversal algorithms
>> >>> >> work.
>> >>> >>
>> >>> > I know that the "IP and port" thing wouldn't work because of NAT,
>> >>> > but nothing prevents requiring the exchange of only one endpoint's
>> >>> > connection data instead of both...
>> >>>
>> >>> I don't know what you are trying to say here.
>> >>>
>> >>> A large fraction of NATs use address/port-dependent filtering, which
>> >>> means that there needs to be an outgoing packet from each endpoint,
>> >>> through its NAT, to the other side's server-reflexive IP in order to
>> >>> open the pinhole. And that means that each side needs to provide its
>> >>> address information over the signaling channel.
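>> >>>
>> >>> Concretely, both directions of that exchange look something like the
>> >>> sketch below, where signalingChannel stands in for whatever
>> >>> out-of-band mechanism the application provides:
>> >>>
>> >>>   // each endpoint trickles its own candidates to the other side...
>> >>>   pc.onicecandidate = function (e) {
>> >>>     if (e.candidate) {
>> >>>       signalingChannel.send(JSON.stringify({
>> >>>         candidate: e.candidate.candidate,
>> >>>         sdpMid: e.candidate.sdpMid,
>> >>>         sdpMLineIndex: e.candidate.sdpMLineIndex
>> >>>       }));
>> >>>     }
>> >>>   };
>> >>>
>> >>>   // ...and feeds the remote candidates into its own ICE agent, so
>> >>>   // that checks (outgoing packets) flow in both directions.
>> >>>   signalingChannel.onmessage = function (e) {
>> >>>     var msg = JSON.parse(e.data);
>> >>>     if (msg.candidate) {
>> >>>       pc.addIceCandidate(new RTCIceCandidate(msg));
>> >>>     }
>> >>>   };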
>> >>>
>> >>> I strongly recommend that you go read the ICE specification and
>> >>> understand the algorithms it describes. That should make clear
>> >>> why the communications patterns in WebRTC are the way they
>> >>> are.
>> >>>
>> >>> -Ekr
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Randell Jesup -- rjesup a t mozilla d o t com
>> >
>>
>

Received on Thursday, 9 January 2014 03:37:37 UTC