Re: Screen sharing and application control

What I think he is requesting is that this not be done automatically, but
instead require some local code that proxies the remote input. This is
easily feasible: send messages over WebRTC and generate the mouse events
from them. The dangerous part is being able to generate these events
natively, outside the browser (on the full desktop). I believe the key is
this: we can assume that everything that happens in a browser tab is
sandboxed and potentially secure by default (like tab sharing, as shown
before, and also webcams or geolocation), but everything that goes beyond
this (like desktop sharing or full-screen) should be considered
dangerous, since it can lie to the user to steal their data, either
directly (desktop sharing) or indirectly (phishing via full-screen).
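A minimal sketch of the "local code that proxies remote input" idea: the
message format and function names below are illustrative assumptions, not
any real protocol. The point is that the local page validates each
data-channel payload and only synthesizes events inside its own sandboxed
tab, never at the OS level.

```typescript
// Hypothetical message format for remote input forwarded over a
// WebRTC data channel (names are illustrative, not a standard).
interface RemoteMouseMessage {
  kind: "mousemove" | "mousedown" | "mouseup";
  x: number; // coordinates relative to the shared tab's viewport
  y: number;
  button?: number;
}

// Parse a data-channel payload and decide whether the local proxy
// should synthesize an event from it. Malformed or unexpected
// messages from the remote peer are simply dropped.
function parseRemoteInput(payload: string): RemoteMouseMessage | null {
  let msg: unknown;
  try {
    msg = JSON.parse(payload);
  } catch {
    return null; // not valid JSON
  }
  const m = msg as Partial<RemoteMouseMessage>;
  if (
    (m.kind === "mousemove" ||
      m.kind === "mousedown" ||
      m.kind === "mouseup") &&
    typeof m.x === "number" &&
    typeof m.y === "number"
  ) {
    return m as RemoteMouseMessage;
  }
  return null;
}

// In the browser, the local code would wire this to a data channel,
// dispatching events only within the page:
//
//   channel.onmessage = (e) => {
//     const msg = parseRemoteInput(e.data);
//     if (msg) {
//       target.dispatchEvent(new MouseEvent(msg.kind, {
//         clientX: msg.x,
//         clientY: msg.y,
//         button: msg.button ?? 0,
//       }));
//     }
//   };
```

Because the events are created with the in-page `MouseEvent` constructor,
they stay untrusted and tab-scoped, which is exactly the boundary being
argued for above.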

Sent from my Samsung Galaxy Note II
On 27/11/2013 08:10, "Steve Kann" <stevek@stevek.com> wrote:

>
> [1] Speaking of control, I hope that we never, ever, ever provide a
> site with the ability to control keyboard or mouse input.
>
>
>
> But isn’t this an incredibly valuable and powerful thing to do?   Use cases
> abound, from “help me with my computer program”, to “let’s edit this
> document together”, to “I need technical support for my computer”.
>
> Clearly, this can be problematic.   But these are legitimate use cases,
> which browsers will (and already do) provide for, through plugins and other
> means.   Surely one can devise a UX which is at least as secure, if only
> because the scope of privilege (allow this site to inject keyboard/mouse
> events) will be smaller than what one typically grants to plugins (allow
> this plugin to do anything at all).
>
> -SteveK
>
>
>
>

Received on Wednesday, 27 November 2013 07:27:31 UTC