- From: Francois Daoust <fd@w3.org>
- Date: Mon, 25 Aug 2014 12:49:58 +0200
- To: 段垚 <duanyao@ustc.edu>, "public-webscreens@w3.org" <public-webscreens@w3.org>
Hi Duan Yao,

On 2014-08-22 10:01, 段垚 wrote:
> Hi,
>
> I'm new to the list. I'm curious about the possibility of extending
> fullscreen API to the second screen.

Welcome to the list!

> In a paper from Intel on second screen for web
> (http://www.w3.org/2013/10/tv-workshop/papers/webtv4_submission_19.pdf),
> fullscreen API is mentioned:
>
>> The Fullscreen API is available but does not have a notion of
>> controlling which screen the fullscreen content should be shown on.
>
> However the current spec doesn't have words on fullscreen API. Was
> "extending fullscreen API to the second screen" approach considered but
> dropped? Where can I find the discussions?

I'll let Intel folks comment on the meaning of that sentence in the
paper, but I guess "extending" can be interpreted in different ways. The
Fullscreen API is certainly one of the bases of the Presentation API, so
all the discussions here are about extending it somehow, but the
Presentation API is also looking at enabling use cases that cannot be
covered by the Fullscreen API.

It is probably more accurate to see the Presentation API as an extension
of "window.open". In particular, one of the main use cases under
consideration is the possibility to open, on the second screen, a Web
page that is different from the requesting page. A direct extension of
the Fullscreen API could not do this, because the API needed looks very
different:

- it must take the URL of the Web content to render on the second
  screen;
- it must expose a communication channel between the requesting page and
  the page opened on the second screen, so that the requesting page can
  control the page opened on the second screen.

> Current spec looks promising to me, but I think it has some limitations
> if one wants to mirror a page (or a portion of it) to the second
> screen. Partial mirroring is quite useful for presentation, e.g.
> on the first screen (laptop or pad) show the slides and the memos, and
> on the second screen (projector) only show the fullscreened slides, and
> keep the two in sync. With the current spec, one may load the slides in
> both the local and presentation browsing contexts, and
> capture-send-replay user inputs in one or both directions. However,
> this is error-prone and not always feasible. E.g. what if the slides
> are playing an animation driven by a random number generator? Unless
> the animation code itself is presentation-API-aware, the two screens
> can't keep in sync with each other.

On top of re-playing all user actions, "getUserMedia" and WebRTC could
also be used to capture the contents of the first screen and stream them
to the second, provided the user agent that controls the second screen
actually supports them.

That said, content mirroring is not the main use case that the
Presentation API is trying to address, and the first version of the
Presentation API will actually leave content mirroring out of scope.
While the approach taken by the Presentation API is more complex for the
content mirroring case, it enables more use cases without preventing
content mirroring. Content mirroring is very hard to achieve when the
second screen is a remote screen controlled by a user agent that is
different from the one running the requesting page (discussions on this
mailing list refer to this as the 2UA case). The other use cases are
easier to enable in that case.

All that does not mean that extending the Fullscreen API to support
(partial) content mirroring is a bad idea. The Presentation API is still
at an early stage!

> I think "extending fullscreen API to the second screen" can make this
> task trivial.
> The API is like this:
>
> partial interface Element {
>   void requestFullscreen(optional short screenId);
> };

Note that, for privacy reasons, APIs are unlikely to expose the exact
configuration of the user's screens, so the requesting page will likely
not know whether there is a second screen with ID 1, 2, 3, etc.

> The first screen has screenId 0, and other screens have greater IDs. In
> most cases, elem.requestFullscreen(1) would cast the element to the
> second screen and fullscreen it there. Once an element is fullscreened
> on the second screen, its live image on the second screen is captured
> and displayed in its original area on the first screen (scaled to fit);
> other portions of the page are displayed and function as normal. The UA
> can also redirect user inputs on the original area of the first screen
> to the second screen. Thus, partial mirroring is accomplished, and the
> client code is very simple.

Would "partial mirroring" be that easy to accomplish? Wouldn't your
suggestion create a blackbox area within the original page (similar to
the kind of blackboxes that video plug-ins create)? What if the
requesting page applies some CSS filter or moves some element on top of
that area while the element is being fullscreened?

Essentially, in the Fullscreen API, the user agent renders the
fullscreened element in its own stacking layer on top of the rest,
whereas you seem to be creating a second set of rendering layers for the
same browsing context, with something weird to be rendered in the first
set of layers.

Regards,
Francois.

> Can a page get fullscreened on multiple screens simultaneously? The
> problem is that there is only one document.fullscreenElement. So I
> think the answer is no, unless a parent element is fullscreened on the
> first screen, and then one of its children is fullscreened on the
> second screen.
>
> Another issue is: how is the "original area" of the element styled on
> the first screen after it is fullscreened on the second screen?
> Maybe a "::mirror" pseudo-element can be used to represent the
> "original area".
>
> Regards,
> Duan Yao
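[Editor's note: to make the contrast in this exchange concrete, here is a
rough sketch of the two-page flow Francois describes: the requesting page
opens a distinct URL on the second screen and then drives it over a
message channel, instead of mirroring pixels or replaying user input. The
`navigator.presentation.requestSession(url)` name follows the shape of
early Presentation API drafts and is an assumption, not settled API; a
small in-process stub stands in for the browser so the flow can be read
and exercised end to end.]

```javascript
// Sketch of the two-page Presentation flow discussed above.
// `navigator.presentation.requestSession(url)` is a hypothetical API
// shape, not a final specification. The stub below fakes the browser
// side so the control flow is self-contained.
const navigator = {
  presentation: {
    // Stub: "opens" the given URL on a pretend second screen and
    // returns a session object exposing a message channel.
    requestSession(url) {
      const session = {
        url,
        state: "connected",
        onmessage: null,
        sent: [],
        postMessage(msg) {
          this.sent.push(msg);
          // Pretend the remote page acknowledges each control message.
          if (this.onmessage) this.onmessage({ data: "ack:" + msg });
        },
      };
      return session;
    },
  },
};

// Requesting page: open the slide deck (a different page, not a mirror
// of this one) on the second screen...
const session = navigator.presentation.requestSession("slides.html#1");

// ...then keep the remote deck in sync over the message channel. Even a
// random-number-driven animation stays consistent, because the remote
// page runs it itself and only receives high-level commands.
session.onmessage = (event) => console.log("remote said:", event.data);
session.postMessage("goto-slide:2");
```

The key design point, as opposed to a fullscreen-style extension, is that
the second screen renders its own browsing context; the requesting page
never needs to capture or re-render remote pixels.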
Received on Monday, 25 August 2014 10:50:28 UTC