- From: 段垚 <duanyao@ustc.edu>
- Date: Tue, 26 Aug 2014 23:49:04 +0800
- To: Francois Daoust <fd@w3.org>, "public-webscreens@w3.org" <public-webscreens@w3.org>
On 2014-08-26 21:28, Francois Daoust wrote:
> On 2014-08-25 17:08, 段垚 wrote:
>>
>> On 2014-08-25 18:49, Francois Daoust wrote:
> [...]
>>> While the approach taken by the Presentation API is more complex for
>>> the content mirroring case, it enables more use cases without
>>> preventing content mirroring.
>> I think (partial) content mirroring can cover the use cases of the
>> current Presentation API: just make an iframe fullscreen on the
>> second screen; then the requesting page can communicate with the
>> page in the iframe just like with the current Presentation API.
>> If mirroring is not needed at all, just hide the "::mirror"
>> pseudo-element.
>
> But then the main difference I see between this approach and the
> approach currently undertaken is the need for the requesting user
> agent to create a browsing context for the iframe in all cases in
> your proposal. Am I missing something?

Yes, if you want to present a completely different page on the second
screen, an iframe is needed. However, if authors can do partial
mirroring, in many cases they won't bother to have separate controlling
and presenting pages. After all, manipulating the DOM in the same page
is much more straightforward than handling async web messages.

>
> With the Presentation API, in the 2UA case, the requesting user agent
> does not need to render the presenting page at all. In the 1UA case,
> it also does not need to render the presenting page twice (once for
> the second screen and once for display on the requesting page).
>

If mirroring is not needed at all, just hide the "::mirror"
pseudo-element with CSS; then the UA should be smart enough not to
draw the presenting element twice.

>>
>> WebRTC and screen capturing are pixel-oriented, not vector-based, so
>> if the resolutions of the captured area and the second screen don't
>> match, the result is blurred.
>
> True.
> Conversely, note that the local device may not be able to prepare the
> content to be displayed on the second screen at a high enough
> resolution (e.g. memory or CPU limitations on the local device, not
> enough bandwidth to pass the data onto the screen), so the presenting
> page may look blurred on the second screen in any case.
>
>>>
>>> Content mirroring is very hard to achieve in the case when the
>>> second screen is a remote screen controlled by a user agent that is
>>> different from the one running the requesting page (discussions on
>>> this mailing list refer to this case as the 2UA case). The other
>>> use cases are easier to enable in that case.
>> This may be achieved by:
>> (1) the local UA renders the element to be mirrored on a hidden
>> surface which has the same size as the remote screen;
>> (2) the local UA transmits the rendered images to the remote UA (via
>> WebRTC or something), and the latter displays these images on the
>> remote screen;
>> (3) the local UA also displays these images in the original area of
>> that element;
>> (4) the remote UA sends user input back to the local UA;
>> (5) the local UA processes user input from both the remote and local
>> screens.
>
> That matches my definition of "very hard" but browser vendors may
> have different views! ;) It requires the local UA to always go
> through gymnastics that are not needed in the 2UA case. Even in the
> 1UA case, step (3) may not be that simple to achieve when screens do
> not have the same resolution, as you noted.

Ah, step (3) was actually not optimized: the local UA doesn't have to
reuse these images; it can directly draw the fullscreened element on
its "::mirror", the same as in the 1UA case. Current UAs can handle
zooming and CSS transforms pretty well, so I guess blurring is not an
issue here.

Maybe mirroring in the 2UA case is not trivial to implement, but I
think the fundamental technologies have been established for a long
time, e.g. X Window, RDP, VNC, and so on.
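The five steps above can be sketched roughly as follows. This is a
minimal illustration, not an implementation: the UA objects and method
names are all invented for the example, and real remote-display systems
(RDP, VNC) add heavy optimization such as damage tracking and
compression on top of this loop.

```javascript
// Hypothetical sketch of one iteration of the 2-UA mirroring loop.
// All names (render, display, takePendingInput, ...) are invented.
function mirrorFrame(localUA, remoteUA) {
  // (1) render the mirrored element at the remote screen's resolution
  const frame = localUA.render(remoteUA.screenWidth, remoteUA.screenHeight);
  // (2) transmit the rendered frame to the remote UA for display
  remoteUA.display(frame);
  // (3) also draw the element locally; as noted above, the local UA
  //     can redraw the element directly instead of reusing the
  //     remote-resolution frame
  localUA.displayLocal();
  // (4) collect user input sent back by the remote UA ...
  const events = remoteUA.takePendingInput();
  // (5) ... and process it together with local input
  events.forEach((ev) => localUA.processInput(ev));
}

// Minimal stub demo of one frame:
const log = [];
const localUA = {
  render: (w, h) => ({ width: w, height: h }),
  displayLocal: () => log.push('local-draw'),
  processInput: (ev) => log.push('input:' + ev),
};
const remoteUA = {
  screenWidth: 1920,
  screenHeight: 1080,
  display: (f) => log.push('remote-draw:' + f.width + 'x' + f.height),
  takePendingInput: () => ['click'],
};
mirrorFrame(localUA, remoteUA);
console.log(log); // → ['remote-draw:1920x1080', 'local-draw', 'input:click']
```

Rendering at the remote screen's resolution in step (1) is what avoids
the pixel-scaling blur discussed earlier in the thread.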
>
> The Presentation API leaves it up to the controlling and presenting
> page to do (3), (4) and (5), and does not require the controlling
> user agent to do (1) and (2) in the 2UA case.
>
> A future version of the API could indeed expose API primitives to
> make the last few steps (content mirroring in particular) easy to do.
> It would be great to experiment with prototypes here.
>

Yes, I think the current Presentation API is much easier for UAs to
implement, so let it be -- I don't mind having both APIs.

>
>> Of course, if the local device is not powerful enough or the network
>> is not fast enough, this approach is not feasible, and the current
>> Presentation API is more suitable.
>
> In many cases, the local device is going to be a constrained device
> such as a mobile phone or a tablet while the remote device will be a
> larger display. Isn't it preferable to let the remote device handle
> the parsing and rendering of the page to present on its own if that's
> possible, as that does not affect the local battery, CPU and memory
> usage?

I'm more optimistic because:
- CPU/GPU/memory follow Moore's law, but display resolutions don't.
- Mobile devices often have similar or even higher resolutions than
large displays (except 4K TVs).
- Mobile devices can handle large web pages pretty well these days,
and presentations are usually small pages.
- Presentations are usually short (no more than tens of minutes), so
battery life is not a big deal.

The 2UA case of the current Presentation API seems complicated because
of privacy considerations; however, this is not a problem in partial
mirroring because pages are always local.
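For comparison, the controlling side of the Presentation API draft
being discussed in this thread looks roughly like the sketch below.
This is a hedged approximation of the 2014 editor's draft shape
(navigator.presentation.requestSession), not a definitive API
reference; the point is that the two pages are separate documents that
can only talk over async messages, which is the round-trip that direct
DOM manipulation in the partial-mirroring proposal would avoid.

```javascript
// Hypothetical sketch of a controlling page using the 2014-era draft
// Presentation API; exact names may differ from any shipped version.
function startSession(url) {
  // Feature-detect; returns null where the API is unavailable
  // (including non-browser environments such as Node).
  if (typeof navigator === 'undefined' || !navigator.presentation) {
    return null;
  }
  var session = navigator.presentation.requestSession(url);
  session.onstatechange = function () {
    if (session.state === 'connected') {
      // All control flows through async messages to the presenting
      // page, unlike direct DOM access in the iframe-mirroring idea.
      session.postMessage(JSON.stringify({ cmd: 'next-slide' }));
    }
  };
  session.onmessage = function (evt) {
    console.log('from presenting page:', evt.data);
  };
  return session;
}
```

In the partial-mirroring proposal, the same "next slide" action would
be an ordinary DOM update on the fullscreened element (with the
hypothetical "::mirror" pseudo-element hidden if no local copy is
wanted), with no message round-trip at all.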
Received on Tuesday, 26 August 2014 15:49:50 UTC