
Re: Presentation API changes proposal

From: Wesley Johnston <wjohnston@mozilla.com>
Date: Fri, 10 Jan 2014 11:00:17 -0800 (PST)
To: Anton Vayvod <avayvod@google.com>
Cc: public-webscreens@w3c.org, Miguel Garcia <miguelg@google.com>, Peter Beverloo <beverloo@google.com>
Message-ID: <708820329.2497033.1389380417792.JavaMail.zimbra@mozilla.com>
I'm curious, what does this second UA proposal solve that isn't already covered by the Network Service Discovery API [1]? It looks like it would provide enough information to detect a second UA device on the network, along with the ability to communicate with the device (or you could set up a peer connection).

I guess one disadvantage is that every device could expose a different API. This (AFAICT) essentially pushes that responsibility onto the UA? I worry a bit about trying to get different UAs to communicate with each other through MessagePort, but I'm not incredibly versed in it.

- Wes

[1] http://www.w3.org/TR/discovery-api/

----- Original Message -----
From: "Anton Vayvod" <avayvod@google.com>
To: public-webscreens@w3c.org
Cc: "Miguel Garcia" <miguelg@google.com>, "Peter Beverloo" <beverloo@google.com>
Sent: Tuesday, January 7, 2014 8:10:33 AM
Subject: Presentation API changes proposal

Dear all,

Even before Google joined the Second Screen Presentation Community Group,
we had been closely following the development of the Presentation API[1].
We would like to propose some changes to the specification in order to
allow not just mirroring technologies to be built on top of it, but also
allow media flinging technologies (like Chromecast, [2]). In that case, one
user agent triggers and controls the content on the second screen, while a
second user agent displays the content and responds to the commands it
receives.

This would have some implications for the API itself: it would become
possible for media to continue playing even when the user agent that
triggered it is killed, for example because the associated tab has been
closed. Because of that, we would also need to be able to connect to
already in-progress sessions.


With that in mind, the first change we would like to propose to the API is
as follows:

Promise requestShow(optional DOMString url = "about:blank", optional
boolean infinitePlay = false); [3]

Calling requestShow with a url of a session in progress would return the
WindowProxy (or MessagePort) of the session in progress instead of
prompting the user.
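
To illustrate the reconnection semantics described above, here is a minimal plain-JavaScript model. The `sessions` registry and `promptUserForScreen` are hypothetical stand-ins for UA internals, not part of the proposed API:

```javascript
// Hypothetical model of the proposed requestShow() semantics.
// `sessions` and `promptUserForScreen` are illustrative stand-ins
// for user-agent internals, not part of the proposal itself.
const sessions = new Map(); // url -> in-progress session

function promptUserForScreen(url) {
  // A real UA would show a screen picker here; we fabricate a session.
  return Promise.resolve({ url, messagePort: null });
}

function requestShow(url = "about:blank", infinitePlay = false) {
  // If a session for this url is already in progress, resolve with it
  // directly instead of prompting the user again.
  if (sessions.has(url)) {
    return Promise.resolve(sessions.get(url));
  }
  return promptUserForScreen(url).then(session => {
    sessions.set(url, session);
    return session;
  });
}
```

With `infinitePlay = true`, the UA would keep the session (and its registry entry) alive even after the controlling page goes away, so a later `requestShow(url)` call reconnects to it.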


On top of this change, we’d also like to explore two other things that
would make the API easier to implement and use.

Promise searchSecondScreens(optional DOMString url);

This would replace displayAvailable and onDisplayAvailableChange. The
promise would resolve to true if there is at least one display available for
this url. The implementation of the method can certainly cache devices and
keep a similar displayAvailable + event handler mechanism internally.
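
A rough model of that shape, assuming a hypothetical internal cache (`knownDisplays` and `registerDisplay` stand in for the UA's discovery layer and are not part of the proposal):

```javascript
// Hypothetical model of the proposed searchSecondScreens() semantics.
// `knownDisplays` and `registerDisplay` are illustrative stand-ins for
// the UA's internal discovery/caching layer.
const knownDisplays = new Set(); // url prefixes the discovered displays accept

function registerDisplay(urlPrefix) {
  knownDisplays.add(urlPrefix);
}

function searchSecondScreens(url) {
  // Resolves with true if at least one display can present this url.
  const available = [...knownDisplays].some(
    prefix => url === undefined || url.startsWith(prefix)
  );
  return Promise.resolve(available);
}
```

The point of the promise-based shape is that pages ask once and get an answer, while the UA remains free to keep an availability flag plus change events internally.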


Finally, we’d like to change the spec so that the Promise object can return
a small wrapper over MessagePort instead of a WindowProxy. Implementing a
full WindowProxy would add unnecessary complexity, making browser
implementations harder than they need to be.
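
A sketch of what such a thin wrapper might look like (the `PresentationSession` name is illustrative, not from the spec): it exposes only message passing, avoiding the cross-origin and lifetime complexities of a full WindowProxy.

```javascript
// Illustrative thin wrapper over a MessagePort, as an alternative to
// returning a full WindowProxy. The class name is hypothetical.
class PresentationSession {
  constructor(port) {
    this._port = port;
    this.onmessage = null; // page assigns a handler for incoming data
    port.onmessage = event => {
      if (this.onmessage) this.onmessage(event.data);
    };
  }
  postMessage(data) {
    this._port.postMessage(data);
  }
  close() {
    this._port.close();
  }
}
```

A controlling page would then talk to the presented content via `session.postMessage(...)` and `session.onmessage`, which is also a shape that two different UAs can interoperate on more easily than a WindowProxy.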


Please do let us know if these changes sound too dramatic or unfeasible. We
are completely open to ideas and would be happy to be involved in further
discussions!

Best Regards

Miguel, Anton and Peter

[1] http://webscreens.github.io/presentation-api/
[2] http://www.google.com/intl/en-GB/chrome/devices/chromecast/
[3] http://dom.spec.whatwg.org/#promises
Received on Friday, 10 January 2014 19:00:45 UTC
