Re: Presentation API changes proposal

I agree with Dominik's comment that an event-based way to detect device
availability, rather than relying on polling, is important. Target
devices are often transient (e.g. a TV being turned on), and
responsiveness matters here, since most sites will likely not show any
UI related to presenting on a secondary display unless a display is
known to be available.
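
For illustration, something along the lines of the spec's current
displayAvailable / onDisplayAvailableChange pair lets a page react the
moment a display appears. I'm guessing at the exact property names and
casing here, and updatePresentButton is just a placeholder helper:

  navigator.presentation.ondisplayavailablechange = function () {
    // Show the "present" button only while a display is available.
    updatePresentButton(navigator.presentation.displayAvailable);
  };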

One other comment is that it would help to clarify the scope of "session in
progress".  There are two relevant variations of this:
- Providing access to a session started by the local UA after a page
navigation/refresh.
- Providing access to a session after a UA restart, or started by a
different UA (with some check for origin, of course).

The UA-restart case can be surprisingly important: being stuck in a
state where something is showing on the second screen with no clear way
to control or stop it is a real problem.
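
With the proposed requestShow() behaviour (resolving with the existing
session for a URL instead of prompting), the navigation/refresh case
could look roughly like this for a page that knows it started a
presentation earlier - presentationUrl and bindPlaybackControls are
just placeholders:

  // On load, reattach to the presentation we started earlier for the
  // same URL, so the user can still control or stop it.
  var presentationUrl = 'https://example.com/player.html';
  navigator.presentation.requestShow(presentationUrl)
    .then(function (session) {
      bindPlaybackControls(session);
    });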

The "different UA" variation is relevant for similar reasons, e.g. my
primary device ran out of battery, so I'm switching to an alternate
device. I mention it in passing as I think the complexity of solving
this case in a general way may be impractically high.

Mark.


On Wed, Jan 8, 2014 at 7:17 AM, Rottsches, Dominik <
dominik.rottsches@intel.com> wrote:

> Hi Anton, Miguel, Peter,
>
> welcome to the CG - good to have you guys joining! And thanks for your
> detailed feedback and change proposals.
>
> On Tue, 2014-01-07 at 16:10 +0000, Anton Vayvod wrote:
>
>
> > Even before Google joined the Second Screen Presentation Community
> > Group, we had been closely following the development of the
> > Presentation API[1]. We would like to propose some changes to the
> > specification in order to allow not just mirroring technologies to be
> > built on top of it, but also media flinging technologies (like
> > Chromecast [2]). In that case, one user agent triggers and controls
> > the content on the second screen, while a second user agent displays
> > the content and responds to the commands it receives.
>
> Yes - maybe let's clarify the terms a bit. I think the main question is
> whether the API must be implemented with a single user agent, with two
> user agents, or is kept open so that it can be implemented either way.
>
> With Google's Chromecast background, I can see your interest in editing
> the spec in a way that does not make a single UA solution obligatory.
>
> Even assuming a single UA implementation, I would perhaps not call the
> functionality "mirroring", since the single UA can prepare different
> rendering output for the first window and the presentation window.
>
> > This would have some implications for the API itself: it would become
> > possible for media to continue playing, even when the user agent that
> > triggered it is killed, for example because the associated tab has
> > been closed. Because of that, we would also need to be able to connect
> > to already in-progress sessions.
>
> That is a useful feature, I think. It's also in line with what Dean
> Jackson from Apple was suggesting during the TPAC session: We should
> keep in mind that the destination devices may have considerable
> computing power - so it seems quite straightforward to give them a
> chance to run standalone.
>
> > With that in mind, the first change we would like to propose to
> > the API is as follows:
> >
> >
> > Promise requestShow(optional DOMString url = "about:blank", optional
> > boolean infinitePlay = false); [3]
> >
> >
> > Calling requestShow with the URL of a session already in progress
> > would resolve with the WindowProxy (or MessagePort) of that session
> > instead of prompting the user.
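> >
> > For example (sketch only):
> >
> >   navigator.presentation.requestShow('https://example.com/player.html', true)
> >     .then(function (win) {
> >       // The session keeps playing even if this tab is later closed;
> >       // calling requestShow again with the same URL resolves with it.
> >     });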
>
> This sounds like a good idea to me.
> In addition to the questions that Anssi raised, I would suggest:
>
> Perhaps we can tweak the naming. Something like "stayAfterUnload",
> "persistent",
> "persistAfterUnload" or similar.
>
> To avoid the boolean, we could pass an options object, which
> would later allow other constraints on display type, resolution or
> similar, as some people suggested during TPAC.
>
> Promise requestShow(optional DOMString url = "about:blank",
>                     optional PresentationOptions options)
>
> with an options object like:
>
> options = { persistent: true };
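>
> Spelled as WebIDL, the options argument could be a dictionary along
> these lines (just a sketch, the member name is of course up for
> discussion):
>
> dictionary PresentationOptions {
>   boolean persistent = false;
> };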
>
> > On top of this change, we’d also like to explore two other things that
> > would make the API easier to implement and use.
> >
> >
> > Promise searchSecondScreens(optional DOMString url);
> >
> >
> > This would replace displayAvailable and onDisplayAvailableChange. The
> > promise would resolve with true if there is at least one display
> > available for this URL. The implementation of the method can certainly
> > cache devices and keep a similar displayAvailable + event handler
> > mechanism internally.
> >
> >
> IMO it's a clever idea to query by URL for display availability. It
> combines the ability to store the user's preference/previous
> allow/reject decisions with querying for existing sessions, as in
> your proposal above.
>
> However, could you explain a little more what this URL here represents:
> Is this the same URL as in requestShow, i.e. a "remote screen app" page
> location href? Or is this URL more used in the sense of an application
> or organization identifier and does not actually point to a document?
>
> Would there be some cross-origin restrictions on what the URL can be? Or
> could we strip the url parameter and use the primary page's
> document.location.href as the query parameter/reference?
>
> Would the primary page have to call this function periodically to see
> displays going away? That's perhaps not the most elegant way to find
> out about a Chromecast or a Miracast display going offline, or changing
> subnet for example.
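>
> In practice (sketch, with url as above) pages would probably end up
> doing something like
>
> setInterval(function () {
>   navigator.presentation.searchSecondScreens(url).then(function (available) {
>     updatePresentButton(available);  // placeholder UI helper
>   });
> }, 5000);
>
> which is exactly the kind of polling that displayAvailable +
> onDisplayAvailableChange avoids today.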
>
> > Finally we’d like to change the spec so that the Promise object can
> > return a small wrapper over MessagePort instead of a WindowProxy.
> > Implementing a full WindowProxy would add unnecessary complexity and
> > make browser implementations needlessly harder.
> >
> In my opinion that is a good direction to decouple the UAs here and
> allow single or dual UA implementations.
>
> It has a couple of implications though, which we need to solve:
>
> What we get with WindowProxy:
> - If we return a WindowProxy we can use Web Messaging in a
> straightforward way: we can just call secondScreen.postMessage(...),
> and inside the page on the secondary screen we can add an event
> listener for the message event / assign an onmessage handler (see the
> sketch after this list).
> - We have an onunload event, at least for pages that are opened from
> the same origin.
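>
> As a sketch, with a WindowProxy result and with url and secondOrigin as
> placeholders, this is simply:
>
> // Primary page:
> navigator.presentation.requestShow(url).then(function (secondScreen) {
>   secondScreen.postMessage('play', secondOrigin);
> });
>
> // Page shown on the secondary screen:
> window.onmessage = function (e) { /* e.data === 'play' */ };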
>
> Unfortunately, MessagePorts do not have onclose events anymore.
>
> Now, say we change the result of the Promise returned from the call to
> requestShow() to the following object:
> PresentationWindow {
>    EventHandler onclose;
>    MessagePort port;
> }
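>
> Usage on the primary page would then be roughly (sketch):
>
> navigator.presentation.requestShow(url).then(function (pw) {
>   pw.port.postMessage('play');
>   pw.onclose = function () { /* presentation ended, update the UI */ };
> });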
>
> We would then have such an onclose event, and we would have
> communication from the primary page to the secondary one. But where
> does the other end of the MessagePort go - where does it surface on
> the presentation window's end?
>
> One possibility would be to add another event, "onconnected" or
> similar, to the navigator.presentation object and deliver a MessagePort
> there. This event would fire only on pages that are opened as "receiver
> applications", in Chromecast terms. And this page's onunload would then
> correspond to the PresentationWindow's onclose, for example.
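>
> On the receiver page that could look roughly like this (again just a
> sketch, assuming the event carries its MessagePort as e.port):
>
> navigator.presentation.onconnected = function (e) {
>   var port = e.port;
>   port.onmessage = function (msg) { /* handle commands, e.g. 'play' */ };
>   port.postMessage('ready');
> };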
>
> Looking forward to hearing thoughts and suggestions, especially on how
> to solve those issues in the previous paragraphs,
>
> Dominik
> >
> >
> > [1] http://webscreens.github.io/presentation-api/
> > [2] http://www.google.com/intl/en-GB/chrome/devices/chromecast/
> > [3] http://dom.spec.whatwg.org/#promises
>
>
