Re: Draft of Second Screen Presentation Working Group Charter available (was: Heads-Up: Plan for Working Group on Second Screen Presentation)

Hi MarkFo, All,

On 20 May 2014, at 23:18, mark a. foltz <mfoltz@google.com> wrote:

> Hi all, the way I think of this is divided into three cases:
> 
> (1) The content to be shown is an HTML document.  In this case the proposal that Anssi put forward describes how this case would be handled.  The controlling application would provide the URL to a page that it knows how to control, which could generate the media itself or take a URL to the media to play back.  The presenting and presented pages would agree beforehand on the control protocol.

I argue this should be the starting point for the API. For this approach we have concrete input submitted to the group, and this is what the CG has been working on to date. This is also something that I believe multiple implementers are technically able to support, so we should be able to pass the interop testing phase when we get there.

> (2) The content to be shown is an application with a well defined control mechanism known to the requesting page, but is not necessarily an HTML document.  In this scenario the API would work something like
> 
> requestSession('dial://netflix.com/Netflix', 'application/dial');
> 
> (I am making up a scheme for specifying a DIAL application, we could overload the http:// scheme for this or use another type of URN.)
> 
> Netflix could publish the control protocol for their application or a JS library to encapsulate it, if they wanted to, or keep it proprietary to their site(s).

Sounds a bit like a repurposed navigator.registerProtocolHandler() and/or registerContentHandler(), no?
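
To make the comparison concrete, here is how registerProtocolHandler() is used today. The “web+dial” scheme and the handler URL below are made up purely for illustration:

// Illustrative only: a site registering itself as the handler for a
// custom scheme. The "web+dial" scheme and handler URL are made up.
navigator.registerProtocolHandler(
 "web+dial",                             // custom schemes must carry the "web+" prefix
 "http://example.org/launch?target=%s",  // %s is replaced with the URL being handled
 "Example DIAL launcher"                 // human-readable title shown to the user
);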

> (3) The content is a generic media type (such as would be shown in <audio> or <video>) that could be rendered in multiple ways.  I agree with Louay that we don't have a good standardized control mechanism for this case.  Here are a few options that come to mind.

I don’t see why wrapping the media in a lightweight HTML shell would be a bad thing. Web developers are used to wrapping content they want browsers to render in HTML boilerplate.

> (3a) Specify (in this WG or elsewhere) a set of high level control messages that must be understood by all screens that accept generic media.

Instead of specifying such control messages, I’d like us to reuse existing platform features and use HTML that embeds <video> and friends. This allows us to reuse the control methods and associated event handlers defined by HTMLMediaElement. To complete the picture, it would be a straightforward task to use web messaging to let the initiating User Agent control the playback. I could see someone coming up with a small JavaScript library to make that even easier.
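
To sketch what I mean, a script like the following could live inside the HTML shell on the presented side. This assumes the event and session shape of the CG draft (navigator.presentation.onpresent, session.postMessage/onmessage); the JSON command format is made up for illustration:

<script>
 // Sketch only: wire incoming web messages to the HTMLMediaElement API.
 var video = document.querySelector("video");
 navigator.presentation.onpresent = function (e) {
  e.session.onmessage = function (msg) {
   var cmd = JSON.parse(msg.data);
   if (cmd.type === "play") video.play();
   else if (cmd.type === "pause") video.pause();
   else if (cmd.type === "seek") video.currentTime = cmd.time;
  };
  // Reuse the existing HTMLMediaElement events to report state back.
  video.onended = function () {
   e.session.postMessage(JSON.stringify({ type: "ended" }));
  };
 };
</script>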

> (3b) Evolve the API to integrate more closely with the <video> or <audio> element to enable them to be presented remotely.   Control would be implemented through the <video> or <audio> element (along the lines of Anssi's proposal).

Personally, I see this as the most web-friendly way forward: provide a minimalistic HTML shell that wraps the <video> or <audio> element, and provide a hint in the media element that the content can be presented remotely.

Below is an imaginary example, following up from my previous example, that introduces a “canbepresentedremotely” attribute providing a hint to the initiating User Agent that it can try to use devices that understand such a resource for playback:

<!DOCTYPE html>
<html>
<head>
 <title>Foo</title>
</head>
<body>
 <video src="http://example.org/foo.mp4" canbepresentedremotely></video>
</body>
</html>

In this example it is up to the implementation to figure out how to ensure the device the user chooses can indeed play the media in question. If the initiating User Agent knows how to talk to such a device, it can ask whether it supports the content using any means available to it. For the best user experience, this happens behind the scenes before the user makes a choice. This process is not exposed to the web developer through the web-facing API.

Furthermore, what I like about this web-friendly approach is that it allows implementers to simulate a second screen with another browser window or tab. This is good for development and debugging, and also acts as a poor man’s second screen.
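
As a rough sketch, such a simulation needs nothing but existing APIs. Here shell.html stands in for the hypothetical HTML shell above, and the command format is the same made-up one:

// Simulate the second screen with another browser window.
var secondScreen = window.open("http://example.org/shell.html");
// The shell is assumed to signal readiness with
// window.opener.postMessage("ready", "*") once it has loaded.
window.onmessage = function (e) {
 if (e.data === "ready")
  secondScreen.postMessage(JSON.stringify({ type: "play" }), "http://example.org");
};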

> (3c) Expose the underlying mechanism for remote playback (Airplay, Cast, uPnP) and assume a compatibility library can be built that abstracts over the differences among them.

I think we all agree this is out of scope for now?

> I believe that one of these approaches will pan out, and I would feel comfortable leaving generic media playback in scope of the charter.

I’d be interested in further pursuing option (3b), if that means we’ll bootstrap using HTML and provide hints via extensions to HTMLMediaElement and friends, as described above.
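
On the controlling page this could then look roughly as follows, again assuming the requestSession() shape of the CG draft and the made-up command format from above:

// Sketch only: present the HTML shell and drive playback over the channel.
var session = navigator.presentation.requestSession("http://example.org/shell.html");
session.onstatechange = function () {
 if (session.state === "connected")
  session.postMessage(JSON.stringify({ type: "play" }));
};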

> Also, for Cast, we have shown a good uptake of our generic media player application that essentially allows sites to send a video URL to Chromecast and control playback without having to write a custom application for the device.  So there is some demand for this functionality. [1]

This is good input, thanks!

Could you give us a hello world example of how this is used on a web page? The link provided talks about Android apps/Chrome Apps/iOS apps.

Thanks,

-Anssi

> [1] https://developers.google.com/cast/docs/receiver_apps#default "Default Media Receiver"
