Re: Draft of Second Screen Presentation Working Group Charter available (was: Heads-Up: Plan for Working Group on Second Screen Presentation)

On Fri, May 23, 2014 at 5:22 AM, Rottsches, Dominik <
dominik.rottsches@intel.com> wrote:

> *sorry, quoting issues again, I thought I had converted it to text, please
> prefer text only posts on the list*
>
> Hi MarkW,
>
> > The use-case I am concerned with is "Media Flinging" as described here:
> https://www.w3.org/community/webscreens/wiki/API_Discussion
> >
> > There are millions of devices in the field which support this use-case
> today using service-specific apps on the second screen and protocols such
> as DIAL. If a desktop browser chooses to support those discovery protocols
> then it makes sense to expose support for this use case to websites in the
> same way as the case where the remote device supports a general-purpose web
> browser (the use-case and user experience is the same) i.e. using the
> Presentation API.
>
> Let’s try to break this down to something more concrete: DIAL is a
> discovery protocol - in itself not enough to implement flinging/casting etc.
> Can you give examples of such devices and what protocols (after the DIAL
> discovery stage) and methods they would support for bringing web content to
> a screen? In what way would a possible web API for this have to look
> different from what we have right now?
>

Once the UA has discovered a remote application via DIAL (or other means), I
would expect it to enable direct communication between the web page and
the remote application. This could appear on the Presentation API just as
we have it today with postMessage. Just as for communication with a remote
web page, the format of the messages would be application-specific.
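
To make that concrete, here is a rough sketch of the page-visible side,
assuming a hypothetical session object with the postMessage/onmessage shape
we have been discussing; the JSON fields are purely illustrative and would
be entirely up to the site and its remote app:

    // Hypothetical session object; none of these names are settled, this is
    // only an illustration of the shape under discussion.
    declare const session: {
      postMessage(data: string): void;
      onmessage: ((event: { data: string }) => void) | null;
    };

    // An application-specific "play" command; the remote app defines what it accepts.
    session.postMessage(JSON.stringify({
      command: "play",
      contentId: "abc123",
      positionSeconds: 0,
    }));

    // Application-specific status reports coming back from the remote app.
    session.onmessage = (event) => {
      const status = JSON.parse(event.data);
      console.log("remote app state:", status.state);
    };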

There would be a need for any given protocol, like DIAL, to specify how
messages to/from the Presentation API are actually transported to/from the
remote application. That would be a matter for the owners of those
specifications, but an option that would work with some existing devices
would be WebSockets (we could think about the Presentation API actually
returning a WebSocket object, but that's just one option).

The only difference I see in the Presentation API is the one I have already
raised: the UA needs to know the URL, or URL pattern, that the page wants
to fling/cast before it can indicate whether there are screens available.
This is so that the UA can filter out remote screens which are not capable
of rendering that URL when determining availability. Specifically, if the
only screen visible is one with only a YouTube app, then a site asking to
render http://www.netflix.com URLs would not be informed of any available
screens.
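
To sketch just the filtering step (the names below are made up for
illustration, not from any spec), the UA could record the URL patterns each
discovered screen advertises - for example via its DIAL application
descriptors - and report availability only when the requested URL matches:

    // Illustrative UA-internal filtering; RemoteScreen and its fields are assumptions.
    interface RemoteScreen {
      name: string;
      // URL patterns the screen's installed apps claim to be able to render.
      supportedUrlPatterns: RegExp[];
    }

    function screensAvailableFor(screens: RemoteScreen[], url: string): RemoteScreen[] {
      return screens.filter((screen) =>
        screen.supportedUrlPatterns.some((pattern) => pattern.test(url)),
      );
    }

A screen exposing only a YouTube app would advertise only youtube.com
patterns, so a request to present a www.netflix.com URL would come back
with an empty list and the page would be told there are no screens
available.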



>
> A user agent can of course support DIAL (or really any other useful
> protocol for the use case) under the abstractions of Presentation API for
> finding a screen that supports receiving a video stream or supports
> rendering web content. The point is: This would be hidden from the web
> developer, who would only need to care about familiar ways of generating
> content.
>
> > As with the remote web browser case, the control protocol - once
> communication is established - is site-specific.
>
> Seems to me that this is exactly what NSD is about.
>

We may be getting confused as to the term "control protocol". I am
referring to the site / application-specific messaging between local and
remote web pages, or equally between a local page and a remote app: the
format of the messages sent over the postMessage method of the Presentation
API.

There is a layer below: the transport protocol for these messages. There
could be a variety of such transport protocols, and I am not sure they are
in scope for this group. For DIAL devices, for example, we could say we
will use WebSockets. Google Cast has its own protocol (though that may
also be WebSockets IIRC). AirPlay has its own, and so on.
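
To illustrate the separation, here is a rough sketch of what a UA might do
internally for a DIAL device if WebSockets were chosen as the transport;
the PageSession interface and the endpoint URL are assumptions made up for
this example, not anything specified today:

    // Assumed UA-internal view of the page's presentation session.
    interface PageSession {
      onPageMessage(handler: (data: string) => void): void; // page called postMessage
      deliverToPage(data: string): void;                     // arrives as a message event
    }

    // Relay the application-specific messages over a WebSocket endpoint
    // advertised by the DIAL-launched app; the page never sees this layer.
    function bridgeOverWebSocket(session: PageSession, endpoint: string): void {
      const socket = new WebSocket(endpoint);

      // Page -> remote app.
      session.onPageMessage((data) => {
        if (socket.readyState === WebSocket.OPEN) {
          socket.send(data);
        }
      });

      // Remote app -> page.
      socket.onmessage = (event) => {
        session.deliverToPage(String(event.data));
      };
    }

A Cast or AirPlay device would need a different bridge at this layer, but
the page-facing postMessage surface would stay the same.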

NSD provides raw access to the discovery protocols alone. As I mentioned,
it seems unlikely to gain wide traction because of the security and privacy
issues. We need something much higher level like the Presentation API.


>
> > Regarding the case of content-type-specific second screens (for example,
> a picture frame which can render only image/jpeg), I agree there are some
> problems with respect to content-type negotiation and the control protocol.
> These problems might be out-of-scope for our group. But I would expect that
> if those problems are solved elsewhere (by standard protocols for such
> content-type-specific renderers) then browsers ought to be able to expose
> support for those devices through the Presentation API.
>
> I don’t understand this paragraph: Solving the content-type based
> discovery issues is out of scope for us, and we should wait (indefinitely?)
> until someone else solves that for us, then integrate it? This does not
> seem like a very practical approach for us to write a spec that
> implementors would like to adopt. If we can avoid the trouble while still
> realising our use cases, in my opinion, that’s a better approach to follow.
>

What I meant is that we can provide support for this use-case in our API,
and UA and device implementors will then be able to integrate whatever
generic media player functionality they develop with that. We probably have
to make the assumption that content-type issues will be solved with more
expressive content types.

I'm not saying people should wait to solve those problems - those should
certainly be worked on asap - just not in this group.

MarkF has an example of existing generic player functionality which is
generating developer interest, so it would seem we already have a concrete
example to work from.

The API impact for the Presentation API may be as simple as providing a
content type with, or instead of, the URL that kicks off the discovery
process. The results are then filtered by content-type capability as well
as by the capability to render specific URL patterns.
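
As a sketch of that variant (again, the names are hypothetical), the
earlier illustrative filtering could take a content type alongside, or
instead of, the URL:

    // Extends the illustrative filtering above with a content-type capability check.
    interface RemoteScreen {
      supportedUrlPatterns: RegExp[];
      supportedContentTypes: string[]; // e.g. ["image/jpeg"] for a picture frame
    }

    function screensAvailableFor(
      screens: RemoteScreen[],
      url?: string,
      contentType?: string,
    ): RemoteScreen[] {
      return screens.filter((screen) => {
        const urlOk =
          !url || screen.supportedUrlPatterns.some((pattern) => pattern.test(url));
        const typeOk =
          !contentType || screen.supportedContentTypes.includes(contentType);
        return urlOk && typeOk;
      });
    }

For instance, screensAvailableFor(discovered, undefined, "image/jpeg")
would cover the picture-frame case discussed above.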

...Mark



>
> Dominik
>

Received on Friday, 23 May 2014 15:54:53 UTC