Re: Introduction, Use-case and a comment

On Mon, Feb 10, 2014 at 2:45 AM, Kostiainen, Anssi <
anssi.kostiainen@intel.com> wrote:

> Hi Mark, All,
>
> On 07 Feb 2014, at 18:18, Mark Watson <watsonm@netflix.com> wrote:
>
> > Hi everyone,
> >
> > I recently joined this group, representing Netflix. There are many
> devices already deployed which support "second screen" presentation of
> Netflix content. Presently the "controller" for these devices is the
> Netflix application running on a phone or tablet. We would love for our
> website to be able to act as a controller as well and the Presentation API
> seems like a great way to enable that.
>
> Great to have your expertise in the group, welcome!
>
> > Our use-case is essentially the same as the "flinging" one outlined on
> the wiki: a user visits www.netflix.com, selects some content and starts
> playback. The user has a TV that supports Netflix and their UA can discover
> this TV (for example using DIAL). The user is shown a familiar icon for
> "flinging" content to another screen. The user clicks that icon and is
> shown a list of devices, including their TV. The user selects the TV and
> the content begins playing on the TV. The user can control playback on the
> TV using the website.
>
> Yes, this sounds like the same use case described in the wiki [1].
>
> > I have one comment / question about the API: it seems to me that a site
> should have no visibility of the existence or name of a device without user
> permission.
>
> Correct on the names. I think we have not yet settled on whether we should
> provide a boolean for "one or more displays are available" without user
> permission (see Dominik's summary at [2]).
>

I know that the privacy people are concerned about every additional bit of
information that could be used for fingerprinting. So even if it's one bit,
it's potentially a concern. I also believe that some people consider this a
lost cause, though.

I'm not sure exactly what a site would use this one bit for, though? If it
doesn't yet have user permission, there's nothing it can do.


> > It also seems to me that the permission (in the above use-case) is given
> when the user selects a device from the drop-down list. It would be a bad
> user experience to need a separate permission dialog.
>
> In the use case described in the wiki [1], user consent must be acquired
> before web content gets any information from the devices (including, the
> existence of any such devices).
>
> That said, it is an implementation detail how the UA represents the user
> interface for picking the device to the user. It could be a drop-down list,
> but also something more integrated with the system and its user interaction
> design for better user experience. For example, on a touch-driven device,
> the user could perhaps drag the web content to be "flinged" on top of an
> icon representing the screen to be used.
>
> The user experience could be further improved if the UA is able to
> remember the user's permission grant. However, there are known issues to be
> addressed in this approach, outlined by Mark in his recent mail to the list
> [3].
>

It's not clear to me why the act of selecting the display - however it is
done - is not itself the "user permission". Why is there a need for a
separate permission step that might be remembered?


>
> > Some consequences of the above:
> > - the "flinging" icon needs to be shown by the UA, not the site.
> Otherwise the site is given the knowledge that there are devices available,
> before the user has given permission
>
> The UA could indeed give a hint to the user (and not the web content) that
> there are secondary screens available, so the user knows the devices are
> ready even before she navigates to a site using such a feature.
>

That wasn't quite what I was getting at. I would think that the indication
to the user should only happen when you are at a site that supports the
second-screen feature. When you visit such a site, the site indicates to the
UA that it supports second screen, and this causes the UA to discover
devices and display whatever UA affordance allows selection of devices.


>
> On the Web today, sites must be designed to work with an assumption that a
> particular feature may not be available, and must build their user
> experience around that. For example, a maps application using geolocation
> may provide a UI for enabling the feature, even if the feature may not be
> available (e.g. device missing GPS hardware, the user not granting access).
> Similarly, if the site depends on e.g. getUserMedia, a reasonable fallback
> mechanism must be in place.
>
> > - the list of devices needs to be shown by the UA, not the site.
>
> Correct. Mark outlined the concern with providing a list of device names
> to the site at [3].
>
> > - the events sent to the site are less "device discovered" events and
> more "device selected" events.
>
> True. The API proposal in the wiki is not yet updated to match the updated
> use case. I agree we should rename the event to better reflect reality.
>
> > The site must indicate to the UA that it supports second-screen
> presentation,
>
> By invoking the getScreen() method (consider methods names as
> placeholders) the site informs the UA it supports -- IOW would like to use
> -- the feature.
>

It's actually more "would like to use". In the case of Netflix, we might
call this when we get into playback mode and not during browsing mode when
the user is choosing content.
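To make the timing concrete, here is a rough sketch of what that might look like from the site's side. All of the names (getScreen(), the "deviceselected" event) are placeholders from the wiki proposal, and a tiny mock stands in for the UA, since no real API exists yet:

```javascript
// Hypothetical sketch only -- getScreen() and "deviceselected" are
// placeholders, and mockUA stands in for the browser.
const mockUA = {
  handlers: {},
  getScreen() { this.discovering = true; },      // site opts in; UA may start discovery
  addEventListener(type, fn) { this.handlers[type] = fn; },
  userSelects(screen) {                          // user picks from the UA's own list;
    this.handlers["deviceselected"]({ screen }); // selection doubles as permission
  },
};

let playingOn = null;
function enterPlaybackMode() {
  mockUA.getScreen();                            // called at playback time, not browse time
  mockUA.addEventListener("deviceselected", (e) => {
    playingOn = e.screen;                        // the first thing the site learns
  });
}

enterPlaybackMode();
mockUA.userSelects("Living Room TV");
console.log(playingOn); // → "Living Room TV"
```

The point of the sketch is the timing: the site signals "would like to use" only once playback begins, and the next thing it hears is the selection itself.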


>
> Do you think there should be another, more explicit hint? For example, to
> allow the UA to trigger the device discovery process in the background
> ahead of the getScreen() invocation?
>

That might be interesting, yes. If there is a "ready to use second screen"
indication - one that causes the UA to offer the UI affordance for device
selection (e.g. a lit-up Chromecast/AirPlay icon) - then it might be good to
have device discovery happen earlier than this, so that when the site is
ready the icon/menu can be ready right away too.
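As a sketch of that idea (purely hypothetical names; nothing like this is specified), an early hint could be separated from the later readiness signal so discovery has a head start:

```javascript
// Hypothetical two-stage hint: prepareScreens() and getScreen() are
// invented names; ua is a stand-in for the browser.
const ua = {
  discoveryStarted: false,
  iconLit: false,
  prepareScreens() { this.discoveryStarted = true; }, // early hint: warm up discovery
  getScreen() {                                       // site is now ready to present
    if (this.discoveryStarted) this.iconLit = true;   // icon/menu can light up at once
  },
};

ua.prepareScreens(); // e.g. on page load, while the user is still browsing
// ... the user picks a title and playback begins ...
ua.getScreen();
console.log(ua.iconLit); // → true
```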


>
> > but after that the next thing it will know is when the user has actually
> selected a device. This could be a long time after the UA has discovered
> the device and lit up the "flinging" icon.
>
> Given the use case [1], the discovery process may still be in progress
> when the UA is already showing the "pick a screen" user interface to the
> user. That is to say, the list of devices shown to the user may be
> dynamically updated while the user interface is already visible (somewhat
> similarly to e.g. how Wi-Fi access point discovery is represented to the
> user in many systems). Finally, when the user picks a device, a "selected"
> event would fire.
>
> Do you have a specific concern with this approach?
>

No, it sounds good. My point was just that the event to the site is
"device selected" not "device discovered".


>
> > From a user experience point of view, the different use-cases should all
> be presented the same way. I would imagine that Chrome would use the same
> "Cast" icon they already use, Safari would use the "AirPlay" icon etc. Of
> course I don't speak for those guys and they may have their own opinions,
> but from a user perspective I don't care whether I am "flinging" to a
> Chromecast, YouTube app, Netflix App, AirPlay Receiver or having the
> content rendered locally and sent via Miracast - I just want to use the
> other screen.
>
> Looking at the use case [1], this is how the flow is in a nutshell:
>
> * The user is presented with a <button> (say "Cast", "Airplay", or
> whatever makes sense to the user given the context). The click of the
> button invokes getScreen().
>
> * The UA shows a user interface to the user for picking the device. It may
> take a while for the devices to pop up in the user interface, as the
> discovery process may be still in progress. The user picks a single device.
>
> (* If the user has chosen a "favorite device" to be used for the given
> site before from the UI, should the UA be able to skip the above step?
> Think "ask forgiveness" approach [4] employed by the Fullscreen API. If
> there are changes in the list of available devices, the UA would prompt the
> user as usual to address Mark's concern.)
>
> * A "selected" event is fired.
>
> * The site can now start to show content on the selected screen.
>

Hmm, I had imagined a slightly different flow:
* a site has content that it can render on a second screen. It calls
getScreen().
* the UA begins the discovery process
* when the first device is discovered, the UA shows the Cast etc. icon
* later, the user decides they wish to send content to a second screen.
They click the UA Cast etc. button and the UA shows the drop-down list.
* The user selects a device, implicitly giving their permission for the
site to send content to that device
* the site receives the "device selected" event and begins sending content
to the device
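The ordering above can be mocked end to end to make the sequence explicit. Again, every name here is hypothetical; the mock plays both the UA's and the user's roles:

```javascript
// Illustrative only: getScreen(), deviceFound() etc. are invented names
// standing in for UA-internal steps in the imagined flow.
const log = [];
const flow = {
  handler: null,
  getScreen() { log.push("discovery started"); },      // site calls getScreen()
  deviceFound() { log.push("Cast icon shown"); },      // UA, not the site, lights the icon
  userClicksIcon() { log.push("device list shown"); }, // UA-rendered drop-down
  userSelects(screen) {                                // selection == permission
    log.push("deviceselected fired");
    this.handler({ screen });
  },
};

flow.handler = (e) => log.push(`sending content to ${e.screen}`);
flow.getScreen();
flow.deviceFound();
flow.userClicksIcon();
flow.userSelects("TV");
console.log(log.join(" -> "));
// → discovery started -> Cast icon shown -> device list shown ->
//   deviceselected fired -> sending content to TV
```

Note that the site appears only at the two ends of the log: it opts in at the start and learns nothing more until the selection event.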

It seems that if we want a consistent user experience, and especially if the
site should not be informed about the existence of devices before selection,
then the UA, not the site, should render the Cast etc. button. The button
should only be rendered to the user if both the site supports second screen
*and* there is a second screen available (at least this is the existing UX
with Cast / AirPlay etc.).


>
> > I hope this makes some sense and could be factored into the Presentation
> API work. I look forward to helping out however we can.
>
> Great feedback! Feel free to document any of the open issues in the wiki
> [1] and add further details to the use cases if needed. We can continue to
> use the wiki for collaboration to support the mailing list discussion if
> deemed useful by the participants.
>

Ok, I will make comments in the wiki too.

...Mark



>
> Thanks,
>
> -Anssi
>
> [1]
> https://www.w3.org/community/webscreens/wiki/API_Discussion#New:_Media_Flinging_to_Multiple_Screens
> [2]
> http://lists.w3.org/Archives/Public/public-webscreens/2014Feb/0021.html
> [3]
> http://lists.w3.org/Archives/Public/public-webscreens/2014Feb/0017.html
> [4]
> http://blog.pearce.org.nz/2013/12/why-does-html-fullscreen-api-ask-for.html
>
>

Received on Wednesday, 12 February 2014 17:31:14 UTC