Re: Draft of Second Screen Presentation Working Group Charter available (was: Heads-Up: Plan for Working Group on Second Screen Presentation)

*sorry, quoting issues again; I thought I had converted it to text. Please prefer text-only posts on the list*

Hi MarkW,

> The use-case I am concerned with is "Media Flinging" as described here: https://www.w3.org/community/webscreens/wiki/API_Discussion
> 
> There are millions of devices in the field which support this use-case today using service-specific apps on the second screen and protocols such as DIAL. If a desktop browser chooses to support those discovery protocols then it makes sense to expose support for this use case to websites in the same way as the case where the remote device supports a general-purpose web browser (the use-case and user experience is the same) i.e. using the Presentation API.

Let’s try to break this down into something more concrete: DIAL is a discovery protocol; in itself it is not enough to implement flinging, casting, etc.
Can you give examples of such devices, and of the protocols (after the DIAL discovery stage) and methods they would support for bringing web content to a screen? In what way would a possible web API for this have to look different from what we have right now?

A user agent can of course support DIAL (or any other protocol useful for the use case) under the abstractions of the Presentation API, for finding a screen that can receive a video stream or render web content. The point is that this would be hidden from the web developer, who would only need to care about familiar ways of generating content.
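To make that concrete, here is a rough sketch of what the page-side code could look like. The session shape (requestSession, state, onstatechange) follows the current editor's draft; the function name "fling", the optional injection parameter, and the URLs are my own inventions for illustration only:

```javascript
// Sketch only: page-side use of the draft Presentation API. Whether the
// user agent found a screen via DIAL, mDNS or anything else is invisible
// to the page; it only sees a session for the content URL it requested.
function fling(contentUrl, presentation) {
  // Allow injecting the presentation object (eases testing); otherwise
  // fall back to the user agent's own navigator.presentation.
  presentation = presentation ||
    (typeof navigator !== 'undefined' ? navigator.presentation : null);
  if (!presentation) {
    return null; // this user agent has no second-screen support
  }
  // The UA shows its own picker of discovered screens, whatever the
  // underlying discovery protocol was.
  var session = presentation.requestSession(contentUrl);
  session.onstatechange = function () {
    console.log('presentation session is now ' + session.state);
  };
  return session;
}
```

The page never learns which discovery or transport protocol the user agent used; it only receives the session and talks to the content rendered on the screen.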

> As with the remote web browser case, the control protocol - once communication is established - is site-specific.

That seems to me to be exactly what NSD (the Network Service Discovery API) is about.

> Regarding the case of content-type-specific second screens (for example, a picture frame which can render only image/jpeg), I agree there are some problems with respect to content-type negotiation and the control protocol. These problems might be out-of-scope for our group. But I would expect that if those problems are solved elsewhere (by standard protocols for such content-type-specific renderers) then browsers ought to be able to expose support for those devices through the Presentation API.

I don’t understand this paragraph: solving the content-type-based discovery issues is out of scope for us, so we should wait (indefinitely?) until someone else solves them, and then integrate the result? That does not seem like a practical way to write a spec that implementors would want to adopt. If we can avoid the trouble while still realising our use cases, that is, in my opinion, the better approach to follow.

Dominik

Received on Friday, 23 May 2014 12:24:13 UTC