Re: Extend fullscreen API to support second screen

 On 2014/8/25 18:49, Francois Daoust wrote:
> Hi Duan Yao,
>
> On 2014-08-22 10:01, Duan Yao wrote:
>> Hi,
>>
>> I'm new to the list. I'm curious about the possibility of extending
>> fullscreen API to the second screen. 
>
> Welcome to the list!
>>
>> In a paper from Intel on second screen for web
>> (http://www.w3.org/2013/10/tv-workshop/papers/webtv4_submission_19.pdf),
>> the Fullscreen API is mentioned:
>>> The Fullscreen API is available but does not have a notion of 
>> controlling which screen the
>>> fullscreen content should be shown on. 
>>
>> However, the current spec says nothing about the fullscreen API. Was
>> the "extending fullscreen API to the second screen" approach considered but
>> dropped? Where can I find the discussions?
>
> I let Intel folks comment on the meaning of that sentence in the paper but I guess "extending" can be interpreted in different ways. The Fullscreen API is certainly one of the bases of the Presentation API, so all the discussions here are about extending it somehow, but the Presentation API is also looking at enabling use cases that cannot be covered by the Fullscreen API. It is probably more correct to see the Presentation API as an extension of "window.open".
>
> In particular, one of the main use cases under consideration is the possibility of opening a Web page that is different from the requesting page on the second screen. A direct extension of the Fullscreen API could not do this, because the API needed looks very different:
> - it must take the URL of the Web content to render on the second screen.
> - it must expose a communication channel between the requesting page and the page opened on the second screen, so that the requesting page can control the page opened on the second screen. 
>
>> The current spec looks promising to me, but I think it has some limitations
>> if one wants to mirror a page (or a portion of it) to the
>> second screen. Partial mirroring is quite useful for presentations, e.g.
>> on the first screen (laptop or pad) show the slides and the memos, and
>> on the second screen (projector) show only the fullscreened slides, and
>> keep the two in sync. With the current spec, one may load the slides in both
>> the local and presentation browsing contexts, and capture-send-replay user
>> inputs in one or both directions. However, this is error-prone and not
>> always feasible. E.g. what if the slides are playing an animation driven
>> by a random number generator? Unless the animation code itself is
>> presentation-API-aware, the two screens can't keep in sync with each other.
>
> On top of re-playing all user actions, "getUserMedia" and WebRTC could also be used to capture and stream the contents of the first screen into the second, provided the user agent that controls the second screen actually supports them.
>
> As said, content mirroring is not the main use case that the Presentation API is trying to address, and the first version of the Presentation API will actually leave content mirroring out of scope.
>
> While the approach taken by the Presentation API is more complex for the content mirroring case, it enables more use cases without preventing content mirroring. 
I think (partial) content mirroring can cover the use cases of the current Presentation API: just make an iframe fullscreen on the second screen; the requesting page can then communicate with the page in the iframe just as with the current Presentation API.
If mirroring is not needed at all, just hide the "::mirror" pseudo-element (described below).
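To make this concrete, here is a minimal sketch in JavaScript, assuming the requestFullscreen(screenId) extension proposed later in this message (hypothetical, not a real API):

```javascript
// Hypothetical sketch: cover the Presentation API's "open a page on the
// second screen" use case with a fullscreened iframe. requestFullscreen(1)
// is the extension proposed in this message, not a real API.
function presentOnSecondScreen(url) {
  const iframe = document.createElement('iframe');
  iframe.src = url;                  // page to show on the second screen
  document.body.appendChild(iframe);
  iframe.requestFullscreen(1);       // hypothetical: screen 1 = second screen
  // The requesting page then talks to it as with any same-page iframe:
  iframe.onload = () =>
    iframe.contentWindow.postMessage({ cmd: 'nextSlide' }, '*');
  return iframe;
}
```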

WebRTC and screen capturing are pixel-oriented, not vector-based, so if the resolutions of the captured area and the second screen don't match, the result is blurry.
>
> Content mirroring is very hard to achieve in the case when the second screen is a remote screen controlled by a user agent that is different from the one running the requesting page (discussions in this mailing-list refer to this case as the 2UA case). The other use cases are easier to enable in that case. 
This may be achieved by:
(1) the local UA renders the element to be mirrored on a hidden surface that has the same size as the remote screen;
(2) the local UA transmits the rendered images to the remote UA (via WebRTC or similar), and the latter displays these images on the remote screen;
(3) the local UA also displays these images in the original area of that element;
(4) the remote UA sends user inputs back to the local UA;
(5) the local UA processes user inputs from both the remote and local screens.

Of course, if the local device is not powerful enough or the network is not fast enough, this approach is not feasible, and the current Presentation API is more suitable.
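A sketch of steps (1)-(5) using today's WebRTC primitives; captureElement() below is hypothetical (no current API captures an arbitrary element's rendered surface; getDisplayMedia() is the closest real primitive), and signaling is left to the application:

```javascript
// Sketch of the 2-UA mirroring pipeline above. captureElement() is a
// hypothetical API returning a MediaStream of an element's rendered
// surface at a given resolution; signaling.send() is app-defined.
async function mirrorElement(elem, remoteWidth, remoteHeight, signaling) {
  const pc = new RTCPeerConnection();
  // (1)+(2): render at the remote screen's size and stream it over
  const stream = captureElement(elem, remoteWidth, remoteHeight); // hypothetical
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  // (4)+(5): the remote UA sends user input back over a data channel
  const input = pc.createDataChannel('remote-input');
  input.onmessage = e =>
    elem.dispatchEvent(new CustomEvent('remoteinput',
                                       { detail: JSON.parse(e.data) }));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify(offer));
  return pc;
}
```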

>
> All that does not mean that extending the Fullscreen API to support (partial) content mirroring is not a good idea. The Presentation API is still at early stages!
>
>>
>> I think "extending fullscreen API to the second screen" can make this
>> task trivial. The API is like this:
>>
>> partial interface Element {
>>    void requestFullscreen(optional short screenId);
>> }; 
>
> Note that, for privacy reasons, APIs are unlikely to expose the exact user screens configuration, so the requesting page will likely not know whether there is a second screen with ID 1, 2, 3, etc.
Then how about adding a method like requestSecondScreen()? The UA can raise a dialog and let the user choose a screen if there is more than one.
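A possible shape, sketched under the assumption that the UA handles screen selection itself (requestSecondScreen() is hypothetical):

```javascript
// Hypothetical: requestSecondScreen() lets the UA show its own screen
// picker, so no screen configuration is exposed to the page.
async function fullscreenOnUserChosenScreen(elem) {
  try {
    await elem.requestSecondScreen(); // hypothetical; UA raises a picker
  } catch (err) {
    // Assumed to reject if the user cancels or no second screen exists.
    elem.requestFullscreen();         // fall back to the first screen
  }
}
```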
>>
>> The first screen has screenId 0, and other screens have greater IDs. In
>> most cases, elem.requestFullscreen(1) would cast the element to the
>> second screen, fullscreened. Once an element is fullscreened on the
>> second screen, its live image on the second screen is captured and
>> displayed in its original area on the first screen (scaled to
>> fit); other portions of the page are displayed and function as normal.
>> The UA can also redirect user inputs on the original area of the first
>> screen to the second screen. Thus, partial mirroring is accomplished, and
>> the client code is very simple.
>
> Would "partial mirroring" be that easy to accomplish? Wouldn't your suggestion create a blackbox area within the original page (similar to the kind of blackboxes that video plug-ins create)?
> What if the requesting page applies some CSS filter or moves some element on top of that area while the element is being fullscreened?
>
> Essentially, in the Fullscreen API, the user agent renders the fullscreened element in its own stacking layer on top of the rest, while you seem to be creating a second set of rendering layers for the same browsing context, with something odd rendered in the first set of layers.

In my mind the model of "fullscreen to the second screen" is very similar to that of normal fullscreen. The fullscreened element (elem) is removed from the normal flow and placed in a top layer associated with the second screen. The UA also creates a pseudo-element, elem::mirror, which takes the original place of elem and works like a <video> element playing the live video of elem. So no other elements can be on top of elem, although they may be on top of elem::mirror. CSS is applied to elem according to the current fullscreen spec. Authors can also style elem::mirror. elem::mirror should be a sibling of elem, not a child, so that elem's style doesn't affect it.
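Under this model, disabling local mirroring would be a one-line style rule. A sketch (::mirror is the pseudo-element proposed here, not a real CSS feature):

```javascript
// Hypothetical sketch: inject a rule that hides the proposed ::mirror
// pseudo-element, so the element appears only on the second screen.
function hideLocalMirror(selector) {
  const style = document.createElement('style');
  style.textContent = `${selector}::mirror { display: none; }`;
  document.head.appendChild(style);
  return style;
}
```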

"Fullscreen to the second screen" should be easy to use, though it may not be very easy for UAs to implement. For the single-UA case, I think the UA can draw the fullscreened element twice: once on the surface of the top layer, and once on the surface of elem::mirror, just with different zooming. The 2-UA case has been described above.

Regards,
Duan Yao

Received on Monday, 25 August 2014 15:08:46 UTC