Re: Draft of Second Screen Presentation Working Group Charter available (was: Heads-Up: Plan for Working Group on Second Screen Presentation)

On 5/21/14, 7:28 AM, "Kostiainen, Anssi" <anssi.kostiainen@intel.com>
wrote:

>Hi MarkFo, All,
>
>On 20 May 2014, at 23:18, mark a. foltz <mfoltz@google.com> wrote:
>
>> Hi all, the way I think of this is divided into three cases:
>> 
>> (1) The content to be shown is an HTML document.  In this case the
>>proposal that Anssi put forward describes how this case would be
>>handled.  The controlling application would provide the URL to a page
>>that it knows how to control, which could generate the media itself or
>>take a URL to the media to play back.  The presenting and presented
>>pages would agree beforehand on the control protocol.
>
>I argue this should be the starting point for the API. For this approach
>we have concrete input submitted to the group, and this is what the CG
>has been working on to date. This is also something that I believe
>multiple implementers are technically able to support, and we are able
>to pass the interop testing phase when we get there.

Emphatically agree.
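
To make the "agree beforehand on the control protocol" part concrete,
here is a rough sketch of a tiny JSON control protocol both pages could
share. All names and message shapes below are invented for illustration;
nothing like this is specified yet, and the transport would be whatever
message channel the eventual session API exposes.

```javascript
// Hypothetical shared control protocol for case (1). The controlling and
// presented pages agree out of band on these message shapes; this models
// only the protocol, not the (still unspecified) transport.
function applyControlMessage(state, msg) {
  switch (msg.type) {
    case "load": // controller hands the presented page a media URL
      return { src: msg.src, playing: false, position: 0 };
    case "play":
      return Object.assign({}, state, { playing: true });
    case "pause":
      return Object.assign({}, state, { playing: false });
    case "seek":
      return Object.assign({}, state, { position: msg.position });
    default:
      return state; // ignore unknown messages for forward compatibility
  }
}

// Example: the presented page folds incoming messages into player state.
var state = { src: null, playing: false, position: 0 };
state = applyControlMessage(state, { type: "load",
                                     src: "http://example.org/foo.mp4" });
state = applyControlMessage(state, { type: "play" });
```

Both sides only need to agree on the message shapes; the session API call
itself and the channel it exposes remain open design questions.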

>
>> (2) The content to be shown is an application with a well defined
>>control mechanism known to the requesting page, but is not necessarily
>>an HTML document.  In this scenario the API would work something like
>> 
>> requestSession('dial://netflix.com/Netflix', 'application/dial');
>> 
>> (I am making up a scheme for specifying a DIAL application; we could
>>overload the http:// scheme for this or use another type of URN.)
>> 
>> Netflix could publish the control protocol for their application or a
>>JS library to encapsulate it, if they wanted to, or keep it proprietary
>>to their site(s).
>
>Sounds a bit like a repurposed navigator.registerProtocolHandler()
>and/or registerContentHandler(), no?

We are talking about (1) _or_ (2), but isn't (1) really a special case of
(2)?

Suppose I've registered a Chromecast receiver that simply navigates to an
HTML page. Suppose I've also implemented a DIAL server on another
secondary device that can launch an app named "browser" that will navigate
to a specified URL. In (1), the page on the primary device wants to find
secondary devices to display a URL to an HTML page. I want the primary UA
to discover both the Chromecast and the DIAL server, since both can render
that page. In other words, the primary page author's desire is to discover
all secondary devices (Chromecast and DIAL) with an app that renders HTML
pages. We've implemented this use case.

(2) is a generalization of (1): the page author wants to discover
secondary devices with apps that can render other types of web content. As
MarkF suggests [1], case (2), requesting other applications that can
render web content other than an HTML page, could be expressed by
overloading http.

Another observation from the above is that the primary page author wants
to find ALL secondary devices that can display content of a particular
type, whether by rendering the content locally and sending a video/audio
stream to the secondary device, by finding a Chromecast receiver, or by
finding an app on a DIAL server.
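
As a sketch of that filtering step (the device descriptors and their
fields are entirely made up, since no discovery API has been specified),
the UA-side logic might look something like:

```javascript
// Sketch of UA-side filtering: given devices found via different
// discovery mechanisms (Cast, DIAL, ...), keep every one that can render
// a given content type. Device shapes here are invented for illustration.
function devicesForContent(devices, contentType) {
  return devices.filter(function (d) {
    return d.renders.indexOf(contentType) !== -1;
  });
}

var discovered = [
  { name: "Living Room Chromecast", via: "cast",
    renders: ["text/html", "video/mp4"] },
  { name: "TV with DIAL browser app", via: "dial",
    renders: ["text/html"] },
  { name: "Speaker", via: "upnp",
    renders: ["audio/mpeg"] }
];

var canShowPage = devicesForContent(discovered, "text/html");
```

The point is only that the filter runs over all discovery mechanisms at
once, so the page author sees one unified list of candidate devices.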


>
>> (3) The content is a generic media type (such as would be shown in
>><audio> or <video>) that could be rendered in multiple ways.  I agree
>>with Louay that we don't have a good standardized control mechanism for
>>this case.  Here are a few options that come to mind.
>
>I don't see why wrapping the media into a light-weight HTML shell would
>be a bad thing. Web developers are used to wrapping content they want
>browsers to render in an HTML boilerplate.
>
>> (3a) Specify (in this WG or elsewhere) a set of high level control
>>messages that must be understood by all screens that accept generic
>>media.
>
>Instead of specifying such control messages, I'd like us to reuse
>existing platform features and use HTML that embeds <video> and friends.
>This allows us to reuse the control methods and associated event handlers
>defined by HTMLMediaElement. To complete the picture, it would be a
>straightforward task to use web messaging to let the initiating User
>Agent control the playback. I could see someone coming up with a small
>JavaScript library to make that even easier.
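
A rough illustration of that web-messaging idea (the message shapes are
invented, and the media element is stubbed so the mapping can run outside
a browser) could look like:

```javascript
// Sketch of the presented page mapping incoming web messages onto the
// standard HTMLMediaElement API. In a real page, `video` would be
// document.querySelector("video") and onMessage would be wired to the
// message channel; here a stub stands in for the element.
function makeController(video) {
  return function onMessage(msg) {
    switch (msg.command) {
      case "play":  video.play(); break;
      case "pause": video.pause(); break;
      case "seek":  video.currentTime = msg.time; break;
    }
  };
}

// Stub standing in for the HTMLMediaElement.
var video = {
  paused: true,
  currentTime: 0,
  play: function () { this.paused = false; },
  pause: function () { this.paused = true; }
};

var onMessage = makeController(video);
onMessage({ command: "play" });
onMessage({ command: "seek", time: 12 });
```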
>
>> (3b) Evolve the API to integrate more closely with the <video> or
>><audio> element to enable them to be presented remotely.   Control would
>>be implemented through the <video> or <audio> element (along the lines
>>of Anssi's proposal).
>
>Personally, I see this as the most web-friendly way forward: provide a
>minimalistic HTML shell that wraps the <video> or <audio> et al. and
>provide a hint in the media element that the content can be presented
>remotely.
>
>Below is an imaginary example, following up on my previous one, that
>introduces a "canbepresentedremotely" attribute providing a hint to the
>initiating User Agent that it can try to use devices that understand
>such a resource for playback:
>
><!DOCTYPE html>
><html>
><head>
> <title>Foo</title>
></head>
><body>
> <video src="http://example.org/foo.mp4" canbepresentedremotely></video>
></body>
></html>
>
>In this example it is up to the implementation to figure out how to
>ensure the device the user chooses can indeed play the media in question.
>If the initiating User Agent knows how to talk to such a device, it can
>ask whether it supports the content using any means available to it. For
>the best user experience, this happens behind the scenes before the user
>is able to make a choice. This process is not exposed to the web
>developer through the web-facing API.
>
>Furthermore, what I like about this web-friendly approach is that it
>allows implementers to simulate a second screen with another browser
>window or tab. That is good for development and debugging, but it also
>acts as a poor man's second screen.
>
>> (3c) Expose the underlying mechanism for remote playback (AirPlay,
>>Cast, UPnP) and assume a compatibility library can be built that
>>abstracts over the differences among them.
>
>I think we all agree this is out of scope for now?
>
>> I believe that one of these approaches will pan out, and I would feel
>>comfortable leaving generic media playback in scope of the charter.
>
>I'd be interested in further pursuing option (3b), if that means
>we'll bootstrap using HTML and provide hints via extensions to
>HTMLMediaElement and friends, as described above.
>
>> Also, for Cast, we have shown a good uptake of our generic media player
>>application that essentially allows sites to send a video URL to
>>Chromecast and control playback without having to write a custom
>>application for the device.  So there is some demand for this
>>functionality. [1]
>
>This is good input, thanks!
>
>Could you give us a hello world example of how this is used on a web
>page? The link provided talks about Android apps/Chrome Apps/iOS apps.
>
>Thanks,
>
>-Anssi
>
>> [1] https://developers.google.com/cast/docs/receiver_apps#default
>>"Default Media Receiver"
>

Received on Wednesday, 21 May 2014 15:48:41 UTC