
Re: [presentation-api] Consider use cases for Presentation API v2 with VR capable displays

From: Brandon Jones via GitHub <sysbot+gh@w3.org>
Date: Tue, 31 Oct 2017 23:06:58 +0000
To: public-secondscreen@w3.org
Message-ID: <issue_comment.created-340933177-1509491217-sysbot+gh@w3.org>
We talked about this in the WebVR group's call today. Wanted to communicate a couple of key points that came up:

Early in the lifetime of the WebVR API, using the Presentation API as the primary mechanism for presenting content to a headset was discussed but ultimately dropped. The biggest motivator was that we had seen it was highly desirable for developers to load and display their 3D content while browsing normally and then transition to showing the same content in VR without incurring a reload. Using the Presentation API would effectively force content to be reloaded on the headset every time, since the content to be shown is communicated via URL.
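To illustrate the contrast: with the Presentation API the headset content is identified by a URL, so the receiving side loads it fresh, whereas WebVR's eventual design lets the already-loaded page switch its own content into the headset. A rough sketch, where `scene.html` and the canvas are placeholders (browser-only APIs, not runnable outside one):

```javascript
// Presentation API route: the receiving display is handed a URL and loads
// it from scratch -- 3D assets already in the page's memory are lost.
async function presentViaUrl() {
  const request = new PresentationRequest(['scene.html']); // placeholder URL
  return request.start(); // resolves with a PresentationConnection
}

// WebVR 1.1 route: the page that already has the scene loaded hands its
// rendering canvas to the headset directly; no reload occurs.
async function presentInPlace(canvas) {
  const [display] = await navigator.getVRDisplays();
  await display.requestPresent([{ source: canvas }]);
  return display;
}
```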

At the same time we have many VR devices that use the same physical display for both 2D browsing and VR content. (This mostly refers to [mobile devices paired with a viewing harness](https://vr.google.com/daydream/smartphonevr/) today, but will soon be true for a variety of [standalone devices](https://www.oculus.com/go/) as well.) The Presentation API felt poorly suited for managing mode switches of a single device rather than issuing navigation commands to an external one.

For both of those reasons, we feel the Presentation API is not appropriate for launching VR content on most headsets. The other use cases called out in the original post seem appealing, however! It's not entirely clear to the WebVR group what would need to be done to support them.

Certainly, using the Presentation API from within VR to begin displaying content on an external screen would be nice. It seems to me, though, that such use should generally work with the API as specced today, without change? (Browsers may have to ensure that appropriate native UI can be shown in a VR context, of course.) It should be noted that this would probably NOT be used for mirroring VR content to an external screen, as most existing devices have native, optimized mechanisms for that. Controlling, say, presentation of a slide deck externally from within VR would be sensible, though. (Public speaking simulator?)
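For the slide-deck case, a minimal sketch of how this might look with the API as specced today. The page URL and the message format are hypothetical, and `PresentationRequest` is browser-only:

```javascript
// Hypothetical message format for driving a slide deck from within VR.
function makeSlideCommand(index) {
  return JSON.stringify({ type: 'goToSlide', index });
}

// Browser-only sketch: present the deck on an external screen, then send
// navigation commands over the resulting PresentationConnection.
async function presentDeckFromVR() {
  const request = new PresentationRequest(['slides.html']); // hypothetical URL
  const connection = await request.start(); // needs a user gesture + native UI
  connection.send(makeSlideCommand(0));     // jump to the first slide
  return connection;
}
```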

The flip side to that is probably the most appealing scenario: assume you have a desktop or laptop and, independently, a standalone VR headset (not connected to the PC). If a user encounters interesting-looking VR content while using the PC, they could presumably use the Presentation API to open the page they are looking at on the VR device, rather than being forced to put on the headset and re-navigate to the same location with an in-VR browser. (Assuming I understand the spec correctly.) Additionally, it could be beneficial to tell a page being opened that way that you'd like it to immediately start displaying its VR content rather than opening to the 2D version of the page first. Entering VR normally requires a user gesture, but the Presentation API could provide the appropriate permissions to bypass that requirement.
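One way the "start in VR immediately" hint could be communicated is via a query parameter on the launched URL. To be clear, the `start-in-vr` parameter name is an invention for this sketch, not part of any spec, and `PresentationRequest` is browser-only:

```javascript
// Hypothetical: hint to the receiving page that it should enter VR
// immediately. The "start-in-vr" parameter name is an assumption.
function withImmediateVRHint(pageUrl) {
  const url = new URL(pageUrl);
  url.searchParams.set('start-in-vr', '1');
  return url.toString();
}

// Browser-only sketch of launching the page on the standalone headset.
async function openOnHeadset(pageUrl) {
  const request = new PresentationRequest([withImmediateVRHint(pageUrl)]);
  return request.start(); // resolves with a PresentationConnection
}
```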

(Note that this would require the VR scene's resources to be re-loaded on the VR device, as outlined previously, but in this scenario that couldn't be avoided no matter what the user did.)

So those are our initial thoughts on ways the two APIs could interoperate. It would be helpful to get feedback on whether any of the above represents a fundamental misunderstanding of some aspect of the Presentation API.


GitHub Notification of comment by toji
Please view or discuss this issue at https://github.com/w3c/presentation-api/issues/444#issuecomment-340933177 using your GitHub account
Received on Tuesday, 31 October 2017 23:07:00 UTC
