Re: Second Screen Presentation Community Group Proposed

Wayne Carr wrote:
> I wanted to make clear that what's discussed below (or in the Web & TV
> IG) has a much larger scope than what this proposed CG is for, so it
> would be a different effort rather than this CG.  This CG has a narrow
> scope to solve a particular problem.  Narrow, to keep it simpler for
> participants to get their needed internal approvals (e.g. for
> licensing).  I've also cc'd the folks proposing this.  (Searching, I
> don't see any discussion of Miracast, WiDi, etc. in the Web & TV IG --
> that's what this CG is all about: enabling use of those without direct
> low-level support for any of them.)
>
> This CG is aimed at a device that has some way to connect to secondary
> displays (really secondary because there is no UI and no code execution
> - just display).  Common ways are Miracast, AirPlay, WiDi, a video
> port, or HDMI.  So something where it is easy for a native app to
> display on the second display - and a thin abstract Web API on top of
> that to let web pages use it.
>
> Here's an example of how it could work.
>
> - The web page asks the browser to display a URL on a secondary display.
>
> - If the browser knows how to display on another display (HDMI, video
> port, Miracast, WiDi, AirPlay - not in the spec, it's an implementation
> detail), it asks the user to approve using the display and to choose
> which one.
>
> - The requesting page gets back either access to the second browsing
> context (like window.open) or just a way to exchange messages with the
> other page.  (The CG could possibly define a few commonly used
> messages - like some for the HTMLMediaElement APIs such as play and
> pause - just for displaying video or slides or other things where that
> type of message makes sense.)
>
> - Not in the spec, but the browser on the initiating device could
> create the second browsing context and use OS facilities to display it,
> with Miracast (or others) converting it to video and sending it to a
> second display screen.  All the implementation is on the initiating
> device; the second screen just supports Miracast in this example - it
> is just receiving and playing video.
>
> - Also not in the spec, but the browser could support other ways of
> attaching to secondary displays just for displaying video, like with a
> Chromecast-like system, where the browser could tell the other device
> to load and display the URL.  It could then translate the
> CG-standardized media control messages into whatever that device uses
> to do things like pause, etc.  All the spec would know is that there is
> an alternative that permits only some defined subset of messages for
> control (which browsers could likely convert for use in Chromecast-like
> systems).  So a page could indicate it would restrict itself to
> CG-defined messages and not send arbitrary ones, allowing the browser
> to translate them for particular underlying systems like Chromecast.
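
To make the flow sketched above concrete, here is a minimal model of what
the page-side API might look like, using plain classes in place of real
browser machinery.  All names here (`requestSession`, `PresentationSession`,
`SecondScreenPage`) are illustrative assumptions, not from any published
spec, and the user-approval step the browser would perform is elided.

```typescript
type MessageHandler = (data: string) => void;

// Stand-in for the browsing context created on the secondary display.
class SecondScreenPage {
  onmessage: MessageHandler = () => {};
  constructor(public url: string, private send: MessageHandler) {}
  postMessage(data: string): void {
    // Deliver to whoever is listening on the initiating page's session.
    this.send(data);
  }
}

// Stand-in for the object the initiating page would get back
// (comparable to a window.open return value, or just a message port).
class PresentationSession {
  onmessage: MessageHandler = () => {};
  readonly page: SecondScreenPage;
  constructor(url: string) {
    this.page = new SecondScreenPage(url, (d) => this.onmessage(d));
  }
  postMessage(data: string): void {
    // Deliver to the second page's message handler.
    this.page.onmessage(data);
  }
}

// A real browser would first ask the user to approve and pick a
// display; that step is elided here.
function requestSession(url: string): PresentationSession {
  return new PresentationSession(url);
}

// Usage: a slide-deck controller sends a command to the second screen.
const session = requestSession("https://example.com/slides.html");
session.page.onmessage = (d) => console.log("second screen got:", d);
session.postMessage(JSON.stringify({ cmd: "nextSlide" }));
```

The key property is that the initiating page only ever sees a URL request
and a message channel; whether the second page renders locally or remotely
stays hidden behind the session object.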
>
> Summary: a simple API to sit on top of some existing ways to display on
> secondary displays.  The page asks to display a URL on a second
> display.  The browser checks with the user.  At a minimum, the
> initiating page can exchange arbitrary messages with the second page.
> There may be some defined messages related to media control, and the
> initiating page may sometimes have access to the second page's DOM.
> The initiating page doesn't know whether the second page is being
> processed locally with the video sent out, or remotely with the browser
> knowing how to talk to something remote.  That's in the implementation.
>
> With how the content winds up on the other screen left out of the spec,
> that would allow experimentation and standardization elsewhere for
> various ways to do that.  So the idea is to enable the easy things like
> Miracast or HDMI support (without the details of those), while leaving
> anything more complicated to implementations, to standardization in
> some other group, or to this one under some future charter.

AFAIU it could also be solved in a simple and (relatively) 
straightforward way using two browsers, with DIAL-based communication 
to initiate the "discussion", then continuing with REST, or initiating 
a WebSocket if needed.
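
For reference, the two fixed pieces of that DIAL approach can be sketched
as simple string builders: the SSDP M-SEARCH used for discovery, and the
launch path (an HTTP POST to the discovered Application-URL plus an app
name).  The search target URN and header layout follow DIAL 1.x; the
address and app name in the example are illustrative.

```typescript
// SSDP M-SEARCH that a controller multicasts to 239.255.255.250:1900 to
// find DIAL servers on the local network (DIAL 1.x discovery).
function dialSearchRequest(): string {
  return [
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 3",
    "ST: urn:dial-multiscreen-org:service:dial:1",
    "",
    "",
  ].join("\r\n");
}

// Launching is a plain HTTP POST to <Application-URL>/<appName>; after
// that, the two pages could continue over REST or upgrade to a
// WebSocket, as suggested above.
function launchRequestPath(applicationUrl: string, appName: string): string {
  return `${applicationUrl.replace(/\/$/, "")}/${appName}`;
}

// Example (address and app name are illustrative):
// launchRequestPath("http://192.168.1.20:8008/apps/", "SlideShow")
//   -> "http://192.168.1.20:8008/apps/SlideShow"
```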

Simple, but clearly there is value in simplifying this from within the 
browser as the group proposes.

Unless I missed something in the group presentation, we'll need an 
actual browser on both screens anyway (if they plan to "stream" the 
content of the "remote popup", we're heading to a dead-end IMHO).


Regards,
JC

Received on Thursday, 7 November 2013 08:39:40 UTC