Re: Extend fullscreen API to support second screen

On 2014-08-25 17:08, 段垚 wrote:
>
> On 2014/8/25 18:49, Francois Daoust wrote:
[...]
>> While the approach taken by the Presentation API is more complex for the content mirroring case, it enables more use cases without preventing content mirroring.
> I think (partial) content mirroring can cover the use cases of the current Presentation API: just make an iframe fullscreen on the second screen, then the requesting page can communicate with the page in the iframe just like with the current Presentation API.
> If mirroring is not needed at all, just hide the "::mirror" pseudo-element.

But then the main difference I see between this approach and the 
approach currently undertaken is that your proposal requires the 
requesting user agent to create a browsing context for the iframe in 
all cases. Am I missing something?
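
For concreteness, here is roughly how I read the proposal. The screenId 
argument and the "::mirror" pseudo-element are hypothetical, taken from 
your messages, and not part of any current specification:

  // The requesting UA has to create and render this browsing context
  // locally in all cases, even when the content is only meant for the
  // second screen.
  var frame = document.createElement('iframe');
  frame.src = 'presentation.html';
  document.body.appendChild(frame);

  // Hypothetical: put the iframe fullscreen on screen #1 (would need to
  // run in response to a user gesture).
  frame.requestFullscreen(1);

  // Communication then boils down to regular cross-document messaging.
  frame.onload = function () {
    frame.contentWindow.postMessage('slide:3', '*');
  };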

With the Presentation API, in the 2UA case, the requesting user agent 
does not need to render the presenting page at all. In the 1UA case, it 
also does not need to render the presenting page twice (once for the 
second screen and once for display within the requesting page).
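
For comparison, here is a minimal sketch of the controlling side with 
the Presentation API, using roughly the shape of the Community Group 
draft (names may still evolve). In the 2UA case, the controlling UA 
never has to load or render presentation.html itself:

  // Ask the UA to show the URL on a second screen. In the 2UA case the
  // remote UA loads and renders the page on its own.
  var session = navigator.presentation.requestSession(
      'http://example.org/presentation.html');

  session.onstatechange = function () {
    if (session.state === 'connected') {
      session.postMessage('slide:1');
    }
  };

  session.onmessage = function (event) {
    console.log('From the presenting page: ' + event.data);
  };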

>
> WebRTC and screen capturing are pixel-oriented, not vector-based, so if the resolutions of the captured area and the second screen don't match, the result is blurry.

True. Conversely, note that the local device may not be able to prepare 
the content to be displayed on the second screen at a high enough 
resolution (e.g. memory or CPU limitations on the local device, not 
enough bandwidth to pass the data onto the screen), so the presenting 
page may look blurred on the second screen in any case.

>>
>> Content mirroring is very hard to achieve in the case when the second screen is a remote screen controlled by a user agent that is different from the one running the requesting page (discussions on this mailing list refer to this case as the 2UA case). The other use cases are easier to enable in that case.
> This may be achieved by:
> (1) the local UA renders the element to be mirrored on a hidden surface that has the same size as the remote screen;
> (2) the local UA transmits the rendered images to the remote UA (by WebRTC or something), and the latter displays these images on the remote screen;
> (3) the local UA also displays these images in the original area of that element;
> (4) the remote UA sends user input back to the local UA;
> (5) the local UA processes user input from both the remote and local screens.

That matches my definition of "very hard" but browser vendors may have 
different views! ;) It requires the local UA to always go through 
gymnastics that the Presentation API does not impose in the 2UA case. 
Even in the 1UA case,
step (3) may not be that simple to achieve when screens do not have the 
same resolution, as you noted.
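
To make the amount of gymnastics concrete, here is a rough sketch of 
what steps (1) and (2) could look like. The captureElement() primitive 
is hypothetical, as there is no standard way today to capture the 
rendering of a single element at an arbitrary resolution; element, 
remoteWidth and remoteHeight are placeholders, and signalingChannel and 
configuration stand in for whatever signaling mechanism and ICE 
configuration the pages use:

  function logError(error) { console.log(error); }

  // (1) Hypothetical primitive: render the element on a hidden surface
  //     sized for the remote screen, exposed as a media stream.
  var stream = captureElement(element, remoteWidth, remoteHeight);

  // (2) Ship the rendered frames to the remote UA over WebRTC.
  var pc = new RTCPeerConnection(configuration);
  pc.addStream(stream);
  pc.createOffer(function (offer) {
    pc.setLocalDescription(offer, function () {
      signalingChannel.send(JSON.stringify(offer));
    }, logError);
  }, logError);

  // Steps (3) to (5) still need to be layered on top of this: local
  // display of the same frames and input events sent back by the
  // remote UA.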

The Presentation API leaves it up to the controlling and presenting pages 
to do (3), (4) and (5), and does not require the controlling user agent 
to do (1) and (2) in the 2UA case.
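
Steps (4) and (5), for instance, boil down to the two pages agreeing on 
a message format over the session's channel. A rough sketch, again 
using the draft API shape (the "present" event is how I understand the 
presenting page obtains its session, and the JSON format and 
handleRemoteClick() handler are made up for the example):

  // Presenting page: forward user input to the controlling page.
  navigator.presentation.onpresent = function (event) {
    var session = event.session;
    document.addEventListener('click', function (e) {
      session.postMessage(JSON.stringify(
          { type: 'click', x: e.clientX, y: e.clientY }));
    });
  };

  // Controlling page: process input coming from the second screen
  // (session obtained from requestSession() as usual).
  session.onmessage = function (event) {
    var input = JSON.parse(event.data);
    if (input.type === 'click') {
      handleRemoteClick(input.x, input.y);
    }
  };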

A future version of the API could indeed expose API primitives to make 
the last few steps (content mirroring in particular) easy to do. It 
would be great to experiment with prototypes here.


> Of course, if the local device is not powerful enough or the network is not fast enough, this approach is not feasible, and the current Presentation API is more suitable.

In many cases, the local device is going to be a constrained device such 
as a mobile phone or a tablet while the remote device will be a larger 
display. Isn't it preferable, when possible, to let the remote device 
handle the parsing and rendering of the page to present, since that 
does not affect the local device's battery, CPU and memory usage?


[...]
>>> I think "extending fullscreen API to the second screen" can make this
>>> task trivial. The API is like this:
>>>
>>> partial interface Element {
>>>     void requestFullscreen(optional short screenId);
>>> };
>>
>> Note that, for privacy reasons, APIs are unlikely to expose the exact configuration of the user's screens, so the requesting page will likely not know whether there is a second screen with ID 1, 2, 3, etc.
> Then how about adding a method like requestSecondScreen()? The UA can raise a dialog and let the user choose a screen if there is more than one.

Sure, and that is the direction the Presentation API is taking.
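
Something along these lines, in other words, except that the user's 
choice is folded into the call that starts the presentation, so no 
screen identifier or display topology ever reaches the page. The 
requestSecondScreen() shape below is hypothetical:

  // Hypothetical method: the UA raises a picker and resolves with an
  // opaque handle for the chosen screen.
  navigator.requestSecondScreen().then(function (screen) {
    document.documentElement.requestFullscreen(screen);  // hypothetical overload
  });

  // With the Presentation API, the picker is shown as part of starting
  // the presentation and the page only ever passes a URL:
  var session = navigator.presentation.requestSession(
      'http://example.org/presentation.html');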

Thanks,
Francois.

Received on Tuesday, 26 August 2014 13:28:32 UTC