Re: What is missing for building "real" services?

> 2. I also see that there is an "auto-mute" feature being implemented that
> depends on an arbitrary threshold. It might be interesting (but overkill?) to
> give the user the capacity to set that limit (currently 50k, I guess) somehow.
>
>
> Pointer to this auto-mute implementation?

http://code.google.com/p/webrtc/issues/detail?id=2436

In practice, that would be the equivalent of a lower limit.
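For concreteness, a user-settable limit along those lines could look something like the sketch below. The `createAutoMuter` name and the mute-below-threshold logic are my assumptions for illustration, not the actual webrtc.org implementation behind that issue:

```javascript
// Hypothetical sketch: an auto-muter whose threshold is chosen by the
// application/user instead of being hard-coded in the engine.
function createAutoMuter(threshold) {
  let muted = false;
  return {
    // Feed successive audio energy readings; returns true while auto-muted.
    update(level) {
      muted = level < threshold;
      return muted;
    },
    get muted() { return muted; }
  };
}

// Example with a 50k threshold, matching the value guessed above.
const muter = createAutoMuter(50000);
console.log(muter.update(1200));   // quiet input  -> muted
console.log(muter.update(80000));  // loud input   -> unmuted
```

A real implementation would presumably also want hysteresis (a hold time before unmuting) to avoid flapping around the threshold.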

> Right now they have the same "priority", but really audio is typically
> fixed, so the video reacts to changes in the apparent level of
> delay/buffering.  What you may be seeing is better (or less-obvious) error
> control and recovery in the video; the eye is often less sensitive to things
> like dropped frames than the ear.
>
> I'd love to see a trace/packet-capture/screen-scrape-recording where you see
> that apparent behavior.

I will try, but it's not 100% reproducible, even though it happens often.

> - call controls like mute / hold
> Right now, you can mute a local stream, but it does not seem to be possible
> to let the remote peers know about the stream being muted. We ended up
> implementing a specific off band message for that, but we believe that the
> stream/track could carry this information. This is more important for video
> than audio, as a muted video stream is displayed as a black square, while
> muted audio has no audible consequence. We believe that this mute / hold
> scenario will be frequent enough that we should have a standardized way of
> doing it, or interop will be very difficult.
>
>
> There is no underlying standard in IETF for communicating this; it's
> typically at the application level.  And while we don't have good ways in
> MediaStream to do this yet, I strongly prefer to send a fixed image when
> video-muted/holding.  Black is a bad choice….

I don't think there is a good choice of content, really. I think we
should let the application decide what to do, but we should still emit
an event to let the application know when it happens. What about an
onMute event? It seems to be in some spec, but it is never fired on the
remote stream/track.
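An application-level scheme like the off-band message described in the quote above could be sketched as follows. The `mute-state` message shape and field names are hypothetical, not taken from any spec:

```javascript
// Hypothetical wire format for an off-band mute notification, sent over an
// RTCDataChannel or the signaling channel alongside the media.
function makeMuteMessage(trackId, kind, muted) {
  return JSON.stringify({ type: 'mute-state', trackId, kind, muted });
}

// On receipt, hand the state to the application and let it decide what to
// render for a muted remote video (frozen frame, avatar, custom image, ...).
function handleControlMessage(json, callbacks) {
  const msg = JSON.parse(json);
  if (msg.type === 'mute-state') {
    callbacks.onRemoteMute(msg.trackId, msg.kind, msg.muted);
  }
}
```

Usage would be something like `channel.send(makeMuteMessage(track.id, 'video', true))` on the muting side, with `handleControlMessage` wired to the channel's `onmessage` on the remote side.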

> - screen/application sharing
> We are aware of the security implications, but there is a very very strong
> demand for screen sharing. Beyond screen sharing, the capacity to share the
> displayed content of a given window of the desktop would be even better.
> Most of the time, users only want to display one document, and that would
> also reduce the security risk by not showing system trays. Collaboration
> (the ability to let the remote peer edit the document) would be even better,
> but we believe it to be outside of the scope of webRTC.
>
>
> yes, and dramatically more risky.  Screen-sharing and how to preserve
> privacy and security is a huge problem.  Right now the temporary kludge is
> to have the user whitelist services that can request it (via extensions
> typically)

The presentation use case is very, very important: it's a showstopper for
many customers, and it is an entire market in itself, especially for
education and conferencing (as in broadcasting speakers at a
conference). It is going to be there in any case. I don't want to have
to complement my WebRTC implementation with a plugin just for that,
like some vendors do (weemo, ..), especially when there is an
implementation of the corresponding feature in Chrome, and when the
discussion is now only about how, when, and to whom to make it
available. The pledge is no plug-in, no download. We believe the
solution currently implemented in Chrome for full-screen sharing,
which requires 1. a flag to be enabled, 2. connecting over https, and
3. clicking on a prompt to grant permission, adds some burden but
would be acceptable.
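For reference, the Chrome mechanism described here was driven by a getUserMedia constraint along these lines. The exact syntax varied between Chrome versions, so treat this as a sketch of that era's behavior, not a stable API:

```javascript
// Constraints Chrome expected for full-screen capture at the time
// (behind the flag, https-only, with a permission prompt as listed above).
const screenConstraints = {
  audio: false,
  video: { mandatory: { chromeMediaSource: 'screen' } }
};
```

The resulting object would then be passed to `navigator.webkitGetUserMedia(screenConstraints, onStream, onError)`, and the captured stream attached to a video element or a PeerConnection like any camera stream.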

Received on Wednesday, 8 January 2014 23:41:54 UTC