Re: Draft of Second Screen Presentation Working Group Charter available (was: Heads-Up: Plan for Working Group on Second Screen Presentation)

Hi Anssi,

Firstly, regarding the Community Group process, I raised the use-case under
discussion here on February 7 [1] and on February 10 you seemed to agree
with the use-case [2]. Since then we have been discussing details such as
whether the UA or the site shows the "flinging" icon, and so on. So it is
something of a surprise to hear that you think the whole use-case (flinging
to a second screen using a service-specific app on that screen) should be
out of scope: we've been discussing it for months.

Secondly, regarding interoperability, there are two distinct questions to
consider. UAs may support many different protocols for discovering and
sending to second screens: HDMI, Miracast, WiDi, AirPlay, Cast, DIAL, and
so on. For each of these there is an interoperability question, but it is a
question for those individual protocols; we do not expect every UA to
support the entire list above. The interoperability question that is
relevant to our API is whether a site using the Presentation API works
equally well on one UA as on another *for the case that a second screen
supported by the UA is available*. Specifically, if a site can fling to an
AirPlay receiver from UA1 but cannot fling to that same receiver from UA2
because UA2 does not support AirPlay, that is not an interoperability
problem of the Presentation API. So I do not think consideration of
additional types of second screen affects interoperability: indeed, our
challenge is to design an API that abstracts away the differences between
second screens and leaves it to the UA implementation to present a simple,
consistent interface to web developers.

Furthermore, I don't think it's a good idea to leave out a class of screens
now to be "bolted on" later. The user experience for the different kinds of
second screen should be identical: users and developers do not care whether
they are using Miracast, AirPlay, Cast, etc.; they just want to know which
screens they can "fling" to. Based on the list discussion, the requirement
for this use-case is very simple: up-front provision of the URL by the
site, so that UAs can appropriately filter displays.
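
To make that concrete, here is a rough sketch of what I imagine the
controlling page doing. The names (requestSession, onavailablechange)
loosely follow the shape of the CG draft and should be read as illustrative
only, as are the placeholder URL and UI helpers:

    // Illustrative sketch only; exact names are still under discussion.
    const playerUrl = "https://example.com/player.html"; // provided up front

    // Because the UA knows the URL before the user picks anything, it can
    // filter: "available" is true only if at least one screen it knows how
    // to reach (Miracast, AirPlay, Cast, a matching DIAL app, ...) can show
    // this URL. The page never needs to know which protocol is involved.
    navigator.presentation.onavailablechange = (event) => {
      showFlingButton(event.available); // placeholder UI helper
    };

    flingButton.onclick = () => {
      const session = navigator.presentation.requestSession(playerUrl);
      session.onmessage = (msg) => updateControllerUi(msg.data); // placeholder
    };

The point is that the page above is identical regardless of which of the
protocols in my list the UA happens to support.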

Just to be clear, what I imagine in the Netflix case is that the URL is one
that would load our HTML5 player if sent to a remote HTML5 UA. But a
controller UA that supports DIAL and its integration with the Presentation
API would also offer the user the choice of any devices it discovers that
have the Netflix app. The protocol that the controlling site uses over
postMessage might even be the same in both cases, although that is our
problem. [For "Netflix" here also substitute any of the 100-or-so (and
growing) DIAL apps [3].]
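
As a hypothetical sketch of what I mean by "the same in both cases" (again,
the API names and the JSON commands are made up for illustration):

    // Whether the UA rendered our HTML5 player itself and sent it to the
    // screen, handed the URL to a remote HTML5 UA, or launched the native
    // Netflix app via DIAL, the controlling page's code is identical;
    // mapping these messages onto the DIAL-launched app is our problem.
    const session = navigator.presentation.requestSession(playerUrl);

    session.onstatechange = () => {
      if (session.state === "connected") {
        session.postMessage(JSON.stringify({ command: "play", titleId: "tv-123" }));
      }
    };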

Finally, I think we should all be clear that the draft charter [4] is just
that: a draft. It has no formal status yet. We are now discussing what
should be in or out of scope. If the current draft suggests something be
out of scope, but we agree on this list to bring it in, we'll just change
that draft.

...Mark

[1] http://lists.w3.org/Archives/Public/public-webscreens/2014Feb/0024.html
[2] http://lists.w3.org/Archives/Public/public-webscreens/2014Feb/0025.html
[3] http://www.dial-multiscreen.org/dial-registry/namespace-database
[4] http://webscreens.github.io/charter/


On Wed, May 28, 2014 at 2:07 AM, Kostiainen, Anssi
<anssi.kostiainen@intel.com> wrote:

> Hi MarkS, All,
>
> On 28 May 2014, at 11:14, Mark Scott <markdavidscott@google.com> wrote:
>
> > We agree on this principle - and I hope that this specific discussion on
> support for existing devices that can only play a limited set of content
> types doesn't detract from this.
>
> I’m happy to hear we’re aligned on the fundamentals. I have a proposal for
> how to resolve this below.
>
> > You'll see this approach in what we've done to date with Chromecast -
> we'll render and send any web content to the device (using the one-UA
> model), but if there's a "custom player" available we'll use it when
> possible because it results in a better user experience.  I use YouTube and
> Netflix as examples, but this isn't about favoring large players.  Anyone
> can build one of these custom players for Chromecast at essentially no cost
> - but Roku similarly has thousands of channels, and Android-based devices
> like Fire TV support DIAL but allow any app to be installed (I think).
>  However, these relatively open devices still use "custom player" models,
> at least in part because pure HTML doesn't even define simple functions
> like how to adjust (master) volume.
> >
> > Like you, I'm optimistic we'll find an appropriate technical solution; I
> feel it's close to what's already defined, and that we need some level of
> available screen filtering regardless even to handle speaker vs. picture
> frame vs. display.  I jumped into the thread mostly to express a view that
> it's important that we define APIs for the web that work well with as many
> screens as possible.
>
> Do you think it would be reasonable for you to experiment with the
> alternative URL schemes via a separate proprietary API (spec this as an
> extension to the Presentation API) for the time being, and keep the core
> Presentation API well specified and limited to text/html?
>
> My proposed resolution to this general issue is (please let me know if
> this does not work for you):
>
> Let’s first specify and ship an API that does text/html extremely well in
> the proposed *Working Group*, and then, after we’re collectively confident
> we can do it, think about what comes next.
>
> Your input and expertise will play a key role in defining “what comes
> next”. The right venue for that type of work is the *Community Group*, as
> defined in the WG charter (see “Dependencies” [1]).
>
> I don’t want to block anyone from experimenting with cool new stuff on the
> Web platform, I just feel that we should not bring experiments to the
> standards track (that is, to a Working Group) until there's wide support.
>
> Looking back, when the Community Group was chartered, we provided concrete
> input to it in a form of a draft spec and an experimental implementation.
> Since then, wider interest emerged and it was proposed [2] that this work
> be moved to a Working Group. That is what is now explicitly mentioned in the
> WG charter in the Deliverables. I feel it would be reasonable that
> proposals that go beyond that scope should follow a similar path: incubate
> in the CG, and move to the WG when the consensus emerges and we are
> confident we can get to multiple interoperable implementations.
>
> Thanks,
>
> -Anssi
>
> [1] http://webscreens.github.io/charter/#coordination
> [2]
> http://lists.w3.org/Archives/Public/public-webscreens/2014Apr/0011.html

Received on Wednesday, 28 May 2014 15:02:36 UTC