Re: What is missing for building "real" services?

Eric,

I'm not trying to solve all use-cases. I'm saying that we have:

 1. Cases that require read-only screen sharing of content from the same
    origin, which can be exposed safely without a plugin.
 2. Cases that require more security-sensitive operations (e.g.
    co-browsing) which can be exposed behind a plugin for now.

I believe that #1 is sufficient to implement an online learning 
application, and there is certainly a lot of demand for that right now.

Gili


On 09/01/2014 10:58 AM, Eric Rescorla wrote:
> What same origin policy? SOP doesn't say anything about screen sharing.
>
> If you mean the policy you suggested the other day that you can't screen
> share any content not from your own origin, no, that's not acceptable.
> There are plenty of contexts in which one wishes to screen share other
> origins, and even if you are sharing your own site, many sites mash up
> other sites and that content needs to be shared.
>
> -Ekr
>
>
>
> On Wed, Jan 8, 2014 at 9:57 PM, cowwoc <cowwoc@bbs.darktech.org> wrote:
>> And that's fine, so long as you respect the Same Origin policy without
>> exceptions, right?
>>
>> Gili
>>
>>
>> On 09/01/2014 12:45 AM, Eric Rescorla wrote:
>>> People also want to be able to share the browser.
>>>
>>> -Ekr
>>>
>>>
>>> On Wed, Jan 8, 2014 at 9:39 PM, cowwoc <cowwoc@bbs.darktech.org> wrote:
>>>> Okay, so here is my second attempt at this:
>>>>
>>>> We should be able to share any part of the display that the application
>>>> does not control. Meaning, the webapp might allow users to share the
>>>> contents of Excel so long as it has no control over what Excel displays.
>>>> Similarly, it should be allowed to share any browser tab so long as that
>>>> tab stays within its own host/origin.
>>>>
>>>> Assuming that co-browsing is a non-goal for now, is the above (read-only
>>>> screen sharing) safe from a security point of view?
>>>>
>>>> Gili
>>>>
>>>>
>>>> On 09/01/2014 12:19 AM, Alex Gouaillard wrote:
>>>>> importance of the interest:
>>>>> We, and the pool of 50 million clients we already serve, want this
>>>>> scenario (full screen sharing), even though we would prefer the
>>>>> version where only the display of a given window (potentially masked
>>>>> on the originating computer's own desktop) is shared. I cannot speak
>>>>> for others, but I remember seeing quite a few hints of interest on the
>>>>> mailing list, and some experiments with Chrome screen sharing seem
>>>>> pretty popular out there. Some of the video conferencing products we
>>>>> used to sell (*cough*vidyo*cough*) and others already offer this
>>>>> functionality, and it was the main selling point. Many
>>>>> not-yet-customers have expressed strong interest in the use case
>>>>> described below, either for education purposes or in hospital
>>>>> environments (regulations are different for tablets, and iPads with a
>>>>> specific casing are allowed in surgery). What was only prototyping in
>>>>> research units when I was at Harvard Medical School (2008-ish) is now
>>>>> a reality in "standard" hospital and radiology units as well.
>>>>>
>>>>> use case / scenario:
>>>>> The most usual case is sharing a presentation, table, or text document
>>>>> as a stream in a multi-stream (document display + self video + self
>>>>> audio + potentially other stuff) call. Screen sharing lets you share
>>>>> the document, but then you don't see yourself. Sharing a separate
>>>>> window's content (as in a window from the desktop compositor) would
>>>>> allow a better presentation experience for the sender, who would be
>>>>> able to see the document he is sending (as a local stream), and
>>>>> himself, basically mirroring what the remote peer could see.
>>>>>
>>>>> On Thu, Jan 9, 2014 at 12:02 PM, Eric Rescorla <ekr@rtfm.com> wrote:
>>>>>> On Wed, Jan 8, 2014 at 7:52 PM, cowwoc <cowwoc@bbs.darktech.org> wrote:
>>>>>>> Remind me again, what was wrong with this approach?
>>>>>> It enables essentially none of the screen sharing scenarios that
>>>>>> people want.
>>>>>>
>>>>>> -Ekr
>>>>>>
>>>>>>> 1. Enable screensharing without a flag/plugin.
>>>>>>> 2. Prompt the user for permission.
>>>>>>> 3. Allow screensharing for a single browser tab (can't capture the
>>>>>>>    general screen or foreign processes).
>>>>>>> 4. Prevent pages that use screensharing from issuing requests to
>>>>>>>    foreign hosts (i.e. the Same Origin policy minus any exceptions).
>>>>>>>
>>>>>>> Let's start with something that is fairly restrictive (but doesn't
>>>>>>> require a flag/plugin, which kills traction), enable *some* use-cases,
>>>>>>> and build up from there.
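>>>>>>>
>>>>>>> To make the shape of that concrete, I am imagining something along
>>>>>>> these lines. This is only a sketch: the "sameOriginTab" constraint is
>>>>>>> made up here to illustrate the restricted model, it is not an
>>>>>>> existing API.
>>>>>>>
>>>>>>>     // Sketch only -- "sameOriginTab" is a hypothetical constraint.
>>>>>>>     navigator.getUserMedia(
>>>>>>>       { video: { sameOriginTab: true } },
>>>>>>>       function (stream) {
>>>>>>>         // The user has been prompted; only this tab's pixels are
>>>>>>>         // captured, and the page is then limited to same-origin
>>>>>>>         // requests (no exceptions).
>>>>>>>         var pc = new RTCPeerConnection(null);
>>>>>>>         pc.addStream(stream);
>>>>>>>         // ... createOffer / signaling as usual ...
>>>>>>>       },
>>>>>>>       function (err) { console.log('tab capture refused:', err); }
>>>>>>>     );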
>>>>>>>
>>>>>>> Gili
>>>>>>>
>>>>>>>
>>>>>>> On 08/01/2014 9:03 PM, Eric Rescorla wrote:
>>>>>>>
>>>>>>> On Wed, Jan 8, 2014 at 5:53 PM, piranna@gmail.com <piranna@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> I'm not comparing the two in the sense of accepting either one; I
>>>>>>> mean that both (plugins and flags) are equally bad ideas. Screen and
>>>>>>> application sharing should be included and enabled in browsers by
>>>>>>> default, and not hidden behind a flag or any other mechanism.
>>>>>>>
>>>>>>> For the reasons described in:
>>>>>>> http://tools.ietf.org/html/draft-ietf-rtcweb-security-05#section-4.1.1
>>>>>>>
>>>>>>> The browser vendors don't think this is that great an idea.
>>>>>>>
>>>>>>> If you think that screen sharing should be available by default, you
>>>>>>> should perhaps suggest some security mechanisms which would
>>>>>>> make the threats described here less severe.
>>>>>>>
>>>>>>> -Ekr
>>>>>>>
>>>>>>> Sent from my Samsung Galaxy Note II
>>>>>>>
>>>>>>> El 09/01/2014 02:42, "Alex Gouaillard"
>>>>>>> <alex.gouaillard@temasys.com.sg>
>>>>>>> escribió:
>>>>>>>
>>>>>>> @piranna,
>>>>>>>
>>>>>>> while I agree with you for social users and most of the population out
>>>>>>> there, the difference between clicking a flag and installing a plugin
>>>>>>> is the process required by IT teams to accept the product and deploy
>>>>>>> it in an enterprise environment. Everything needs to be validated
>>>>>>> beforehand, including (especially?) plugins. They have a very long
>>>>>>> list of products to screen and maintain, and are very reluctant to add
>>>>>>> yet another one. Moreover, Google's Chrome starts with higher
>>>>>>> credibility than any small or medium-sized company's plugin.
>>>>>>>
>>>>>>> On Thu, Jan 9, 2014 at 8:54 AM, Silvia Pfeiffer
>>>>>>> <silviapfeiffer1@gmail.com> wrote:
>>>>>>>
>>>>>>> On Thu, Jan 9, 2014 at 10:10 AM, Randell Jesup
>>>>>>> <randell-ietf@jesup.org>
>>>>>>> wrote:
>>>>>>>
>>>>>>> On 1/7/2014 8:50 PM, Alexandre GOUAILLARD wrote:
>>>>>>>
>>>>>>> here are a few propositions on things that are really biting us, and
>>>>>>> how to (perhaps) make them easier:
>>>>>>>
>>>>>>> - bandwidth control
>>>>>>> 1. It seems that the number one cause of SDP munging is the now
>>>>>>> infamous b=AS: line to put a cap on bandwidth. Since that capability
>>>>>>> exists in the underlying code, it would be great to have an API that
>>>>>>> can help us put caps either on each stream and/or on the whole call.
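>>>>>>>
>>>>>>> For illustration, the kind of munging we end up writing today is
>>>>>>> roughly the sketch below; the 500 kbps figure is just an example, and
>>>>>>> the regex is not robust SDP parsing.
>>>>>>>
>>>>>>>     // Insert a b=AS (kbps) line after the c= line of the video
>>>>>>>     // m-section, per RFC 4566 field order. Rough sketch only.
>>>>>>>     function capVideoBandwidth(sdp, kbps) {
>>>>>>>       return sdp.replace(/(m=video[\s\S]*?c=IN[^\r\n]*\r\n)/,
>>>>>>>                          '$1b=AS:' + kbps + '\r\n');
>>>>>>>     }
>>>>>>>
>>>>>>>     pc.createOffer(function (offer) {
>>>>>>>       offer.sdp = capVideoBandwidth(offer.sdp, 500);  // example cap
>>>>>>>       pc.setLocalDescription(offer);
>>>>>>>       // ... send offer.sdp over signaling ...
>>>>>>>     }, function (err) { console.log(err); });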
>>>>>>>
>>>>>>>
>>>>>>> yes.
>>>>>>>
>>>>>>>
>>>>>>> 2. I also see that there is an "auto-mute" feature being implemented
>>>>>>> that depends on an arbitrary threshold. It might be interesting (but
>>>>>>> overkill?) to give users the ability to set that limit (currently 50k
>>>>>>> I guess) somehow.
>>>>>>>
>>>>>>>
>>>>>>> Pointer to this auto-mute implementation?
>>>>>>>
>>>>>>>
>>>>>>> 3. Additionally, and perhaps not unrelated, we would like to be able
>>>>>>> to decide what happens when bandwidth goes down. Right now it feels
>>>>>>> like the video has priority over the audio. We would like to be able
>>>>>>> to explicitly set the audio priority higher than the video in the
>>>>>>> underlying system, as opposed to implementing a stats listener that
>>>>>>> triggers re-negotiation (with the corresponding O/A delay) when
>>>>>>> bandwidth goes below a certain threshold.
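>>>>>>>
>>>>>>> In other words, today the workaround is something like the sketch
>>>>>>> below, which we would rather not have to write. It assumes the stats
>>>>>>> API as currently specified (availableOutgoingBitrate on the active
>>>>>>> candidate pair); actual stat names still differ between browsers, and
>>>>>>> dropVideoAndRenegotiate() is an app-defined helper.
>>>>>>>
>>>>>>>     // Poll outgoing stats and renegotiate when bandwidth drops.
>>>>>>>     setInterval(function () {
>>>>>>>       pc.getStats().then(function (report) {
>>>>>>>         report.forEach(function (s) {
>>>>>>>           if (s.type === 'candidate-pair' && s.nominated &&
>>>>>>>               s.availableOutgoingBitrate !== undefined &&
>>>>>>>               s.availableOutgoingBitrate < 300000) {  // ~300 kbps
>>>>>>>             dropVideoAndRenegotiate();  // app-defined: audio-only O/A
>>>>>>>           }
>>>>>>>         });
>>>>>>>       });
>>>>>>>     }, 2000);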
>>>>>>>
>>>>>>>
>>>>>>> Right now they have the same "priority", but really audio is typically
>>>>>>> fixed, so the video reacts to changes in the apparent level of
>>>>>>> delay/buffering.  What you may be seeing is better (or less-obvious)
>>>>>>> error
>>>>>>> control and recovery in the video; the eye is often less sensitive to
>>>>>>> things
>>>>>>> like dropped frames than the ear.
>>>>>>>
>>>>>>> I'd love to see a trace/packet-capture/screen-scrape-recording where
>>>>>>> you see
>>>>>>> that apparent behavior.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> - call controls like mute / hold
>>>>>>> Right now, you can mute a local stream, but it does not seem to be
>>>>>>> possible to let the remote peers know that the stream has been muted.
>>>>>>> We ended up implementing a specific out-of-band message for that, but
>>>>>>> we believe the stream/track could carry this information. This is more
>>>>>>> important for video than for audio, as a muted video stream is
>>>>>>> displayed as a black square, while muted audio has no audible
>>>>>>> consequence. We believe that this mute / hold scenario will be
>>>>>>> frequent enough that we should have a standardized way of doing it, or
>>>>>>> interop will be very difficult.
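>>>>>>>
>>>>>>> The out-of-band workaround we use today looks roughly like the sketch
>>>>>>> below (over a data channel here; the message format is ours, nothing
>>>>>>> standard, and remoteVideo is whatever element renders the remote
>>>>>>> stream).
>>>>>>>
>>>>>>>     // Local side: disable the track, then tell the peer about it.
>>>>>>>     var control = pc.createDataChannel('control');
>>>>>>>     function setVideoMuted(stream, muted) {
>>>>>>>       stream.getVideoTracks().forEach(function (t) {
>>>>>>>         t.enabled = !muted;
>>>>>>>       });
>>>>>>>       control.send(JSON.stringify({ type: 'mute', kind: 'video',
>>>>>>>                                     muted: muted }));
>>>>>>>     }
>>>>>>>
>>>>>>>     // Remote side: hide the (now black) video instead of showing it.
>>>>>>>     pc.ondatachannel = function (e) {
>>>>>>>       e.channel.onmessage = function (ev) {
>>>>>>>         var msg = JSON.parse(ev.data);
>>>>>>>         if (msg.type === 'mute' && msg.kind === 'video') {
>>>>>>>           remoteVideo.style.visibility =
>>>>>>>             msg.muted ? 'hidden' : 'visible';
>>>>>>>         }
>>>>>>>       };
>>>>>>>     };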
>>>>>>>
>>>>>>>
>>>>>>> There is no underlying IETF standard for communicating this; it's
>>>>>>> typically done at the application level.  And while we don't have good
>>>>>>> ways in MediaStream to do this yet, I strongly prefer to send a fixed
>>>>>>> image when video-muted/holding.  Black is a bad choice....
>>>>>>>
>>>>>>> It would be nice if browsers sent an image, such as "video on hold" -
>>>>>>> just like they provide default 404 page renderings. This is a
>>>>>>> quality-of-implementation issue, then. Maybe worth filing a bug on the
>>>>>>> browsers. But it might also be worth a note in the spec.
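>>>>>>>
>>>>>>> In the meantime an application can approximate this itself. A rough
>>>>>>> sketch, assuming canvas.captureStream() and
>>>>>>> RTCRtpSender.replaceTrack() are available (neither can be counted on
>>>>>>> everywhere):
>>>>>>>
>>>>>>>     // Draw a "Video on hold" card and send it instead of the camera.
>>>>>>>     var canvas = document.createElement('canvas');
>>>>>>>     canvas.width = 640; canvas.height = 480;
>>>>>>>     var ctx = canvas.getContext('2d');
>>>>>>>     ctx.fillStyle = '#335'; ctx.fillRect(0, 0, 640, 480);
>>>>>>>     ctx.fillStyle = '#fff'; ctx.font = '32px sans-serif';
>>>>>>>     ctx.fillText('Video on hold', 220, 240);
>>>>>>>
>>>>>>>     var holdTrack = canvas.captureStream(1).getVideoTracks()[0]; // 1 fps
>>>>>>>     var sender = pc.getSenders().find(function (s) {
>>>>>>>       return s.track && s.track.kind === 'video';
>>>>>>>     });
>>>>>>>     sender.replaceTrack(holdTrack);  // swap the camera back to resume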
>>>>>>>
>>>>>>>
>>>>>>> - screen/application sharing
>>>>>>> We are aware of the security implications, but there is very, very
>>>>>>> strong demand for screen sharing. Beyond screen sharing, the ability
>>>>>>> to share the displayed content of a single desktop window would be
>>>>>>> even better. Most of the time, users only want to display one
>>>>>>> document, and that would also reduce the security risk by not showing
>>>>>>> system trays. Collaboration (the ability to let the remote peer edit
>>>>>>> the document) would be better still, but we believe it to be outside
>>>>>>> the scope of WebRTC.
>>>>>>>
>>>>>>>
>>>>>>> yes, and dramatically more risky.  Screen-sharing and how to preserve
>>>>>>> privacy and security is a huge problem.  Right now the temporary
>>>>>>> kludge is to have the user whitelist services that can request it
>>>>>>> (typically via extensions).
>>>>>>>
>>>>>>> Yeah, I'm really unhappy about the screen sharing state of affairs,
>>>>>>> too. I would much prefer it became a standard browser feature.
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Silvia.
>>>>>>>
>>>>>>>       Randell
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> - NAT / firewall penetration feedback - ICE process feedback
>>>>>>> Connectivity is a super, super pain to debug, and the number one cause
>>>>>>> of concern.
>>>>>>> 1. The 30s timeout on Chrome-generated candidates is biting a lot of
>>>>>>> people. The timeout is fine, but there should be an error message that
>>>>>>> surfaces (see 5).
>>>>>>> 2. TURN server authentication failure does not generate an error, and
>>>>>>> it should (see 5).
>>>>>>> 3. The ICE state can stay stuck in "checking" forever, even after all
>>>>>>> the candidates have been exhausted.
>>>>>>> 4. Not all ICE states in the spec are implemented (completed? failed?).
>>>>>>> 5. It would be fantastic to be able to access the list of candidates,
>>>>>>> with their corresponding status (not checked, in use, failed, ...) and
>>>>>>> the cause of any failure.
>>>>>>> 6. In case of success, it would be great to know which candidate is
>>>>>>> being used (Google does that with the googActive thingy) but also what
>>>>>>> the type of that candidate is. Right now, on the client side, at best
>>>>>>> you have to go to chrome://webrtc-internals, get the active candidate,
>>>>>>> and look it up in the list of candidates. When you use a TURN server
>>>>>>> as a STUN server too, the lookup is not a one-to-one mapping.
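>>>>>>>
>>>>>>> From the page, what we would like to be able to do is roughly the
>>>>>>> sketch below. It is written against the stats API as currently
>>>>>>> specified (candidate-pair / local-candidate report types); today the
>>>>>>> field names are implementation-specific, hence webrtc-internals.
>>>>>>>
>>>>>>>     // Report which candidate pair the connection is actually using.
>>>>>>>     pc.getStats().then(function (report) {
>>>>>>>       report.forEach(function (s) {
>>>>>>>         if (s.type === 'candidate-pair' && s.nominated &&
>>>>>>>             s.state === 'succeeded') {
>>>>>>>           var local = report.get(s.localCandidateId);
>>>>>>>           var remote = report.get(s.remoteCandidateId);
>>>>>>>           console.log('active pair:',
>>>>>>>                       local.candidateType,
>>>>>>>                       local.address + ':' + local.port, '->',
>>>>>>>                       remote.candidateType,
>>>>>>>                       remote.address + ':' + remote.port);
>>>>>>>         }
>>>>>>>       });
>>>>>>>     });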
>>>>>>>
>>>>>>> right now, the only way to understand what's going on is to have a
>>>>>>> "weaponized" version of Chrome, or a native app, that gives you access
>>>>>>> to the ICE stack, but we cannot expect clients to deploy this, nor to
>>>>>>> automate it. Surfacing those in an API would allow one to:
>>>>>>> - adapt the connection strategy on the fly, in an iterative fashion,
>>>>>>> on the client side;
>>>>>>> - automatically report problems and allow remote debugging of failed
>>>>>>> calls.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Jan 7, 2014 at 2:15 AM, Eric Rescorla <ekr@rtfm.com> wrote:
>>>>>>>
>>>>>>> On Mon, Jan 6, 2014 at 10:10 AM, piranna@gmail.com <piranna@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> That's not really going to work unless you basically are on a
>>>>>>> public
>>>>>>> IP address with no firewall. The issue here isn't the properties of
>>>>>>> PeerConnection but the basic way in which NAT traversal algorithms
>>>>>>> work.
>>>>>>>
>>>>>>> I know that the "IP and port" thing runs into trouble with NAT, but
>>>>>>> nothing prevents needing to exchange only one endpoint's connection
>>>>>>> data instead of both...
>>>>>>>
>>>>>>> I don't know what you are trying to say here.
>>>>>>>
>>>>>>> A large fraction of NATs use address/port dependent filtering which
>>>>>>> means that there needs to be an outgoing packet from each endpoint
>>>>>>> through their NAT to the other side's server reflexive IP in order to
>>>>>>> open the pinhole. And that means that each side needs to provide
>>>>>>> their address information over the signaling channel.
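>>>>>>>
>>>>>>> In API terms, this is what the onicecandidate / addIceCandidate
>>>>>>> exchange is for; a minimal sketch (signalingChannel is whatever
>>>>>>> transport your application uses):
>>>>>>>
>>>>>>>     // Each side sends its own candidates out and feeds the peer's
>>>>>>>     // candidates in; that is what opens the pinhole on both NATs.
>>>>>>>     pc.onicecandidate = function (e) {
>>>>>>>       if (e.candidate) {
>>>>>>>         signalingChannel.send(JSON.stringify(e.candidate));
>>>>>>>       }
>>>>>>>     };
>>>>>>>     signalingChannel.onmessage = function (e) {
>>>>>>>       pc.addIceCandidate(new RTCIceCandidate(JSON.parse(e.data)));
>>>>>>>     };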
>>>>>>>
>>>>>>> I strongly recommend that you go read the ICE specification and
>>>>>>> understand the algorithms it describes. That should make clear
>>>>>>> why the communications patterns in WebRTC are the way they
>>>>>>> are.
>>>>>>>
>>>>>>> -Ekr
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Randell Jesup -- rjesup a t mozilla d o t com
>>>>>>>
>>>>>>>

Received on Thursday, 9 January 2014 20:14:47 UTC