8/6 meeting summary and proposal

Hey folks,
As promised, I’ve attempted to capture a summary of the concerns and opinions voiced about this contentious topic during Tuesday’s call on August 6th, as well as in several GitHub issues.  Also as promised, at the bottom of this mail I’ve written up the details of a proposal for how we can move forward.  This will be the primary topic of discussion during next Tuesday’s call on August 13th.  Given the contentious nature of this discussion, I’d like to remind everyone of the code of conduct, and to generally be respectful of each other and of the work that has already gone into getting us to this point.

Summary

Q: If ‘immersive-ar’ is moved to a module, does that mean user agents that support it will not be spec-compliant?
A: The short answer is no: a UA is not generally considered ‘non-compliant’ for having additional features.  The longer answer is that none of WebXR has reached CR (and most certainly not Rec), which means no WebXR features are “done”.  Part of the process of getting to “done” is multiple engine implementations, and during that time user agents will be aligned to various levels of completion.  The key thing is that end users and developers be able to understand when specific features have reached stability, and be ready to respond to breaking changes until then.

Q: Is there value in an ‘immersive-ar’ mode that only exposes an ‘unbounded’ reference space without device-aligned camera access or any additional real-world-related features (hit-testing, anchors, RWU, lighting estimation, etc.)?
A: I think we’re all eager to enable more complete AR experiences, but the specific question here is whether or not “immersive-ar” has intrinsic value on its own.  One example of such an experience would be a virtual object floating in space that the user can walk around and change properties of (ex: a product configurator).  I looked through the notes from Tuesday’s call and the conversations on GitHub, but I’m not actually clear if there is consensus on this topic.  If I’ve read it right, a number of companies believe it is valuable, including Microsoft, Mozilla, Magic Leap, and Amazon.  Are there any companies dissenting?

Q: Should ‘immersive-vr’ support be required on additive light devices?  Should it be prohibited?
A: In general, the Immersive Web Working Group has taken the approach that, where possible, each user agent should make their own determination about what they believe to be the best experience for their customers.  In the case of the various immersive modes, the spec was designed so developers query xr.supportsSession() to predictably build their site content, but it’s up to each user agent which session types it will support.  Based on my reading of the notes/issues, I believe there to be consensus around continuing to leave this choice to each UA.
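To make that concrete, here is a minimal sketch of the feature-detection pattern the spec enables, using the xr.supportsSession() shape referenced above (a promise that resolves when the mode is supported and rejects otherwise); the showEnter…Button helpers are hypothetical page functions, not part of any spec:

    // Ask the UA which immersive modes it offers before building mode-specific UI.
    function checkSupport(mode) {
      return navigator.xr.supportsSession(mode)
        .then(() => true)    // resolved: this UA supports the mode
        .catch(() => false); // rejected: this UA chose not to support it
    }

    Promise.all([checkSupport('immersive-vr'), checkSupport('immersive-ar')])
      .then(([vrOk, arOk]) => {
        // Only surface entry points for the modes this UA actually supports.
        if (vrOk) showEnterVrButton(); // hypothetical helper
        if (arOk) showEnterArButton(); // hypothetical helper
      });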

Q: Should the ‘additive’ environment blend mode ever be reported in ‘immersive-vr’?
A: Several companies expressed concern that if an ‘immersive-vr’ session always reported an ‘opaque’ blend mode (or perhaps a value renamed to ‘none’), their customers would end up with unpleasant experiences on additive light devices.  For example, skyboxes and other content that renders black would behave unexpectedly, since additive displays show black as transparent.  In addition, a concern was raised that, if an ‘additive’ blend mode could be reported in ‘immersive-vr’ sessions, developers might attempt to create a session and check the blend mode for ‘additive’, thereby creating fake AR sessions that would fail to run correctly on VR hardware.  Both of these concerns boil down to a risk of diluting the meaning of ‘immersive-vr’ and creating unpredictability for end users.  On the one hand, reporting ‘opaque’ in a VR session on an additive device would be a lie.  On the other, reporting an accurate value would entice bad behavior from developers.  This topic does not yet have any clear-cut solution, though a number of folks have ideas for approaches to investigate.  It must be addressed ASAP, as it may cause breaking changes to the spec as written today.
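For illustration, here is a hedged sketch of both halves of the concern, assuming the environmentBlendMode attribute as currently specified (drawSkybox() is a hypothetical helper):

    navigator.xr.requestSession('immersive-vr').then((session) => {
      // Legitimate use: skip the skybox when black would not actually read
      // as black (additive displays render black as transparent).
      if (session.environmentBlendMode === 'opaque') {
        drawSkybox(); // hypothetical helper
      }
      // The feared misuse: testing for 'additive' here and treating the VR
      // session as a make-shift AR session, which would then misbehave on
      // opaque (VR-only) hardware.
    });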

Q: Do developers need to be able to differentiate when to create screen-space UIs vs. world-space UIs?
A: Currently there is no mechanism for developers to decide where to draw their user interface elements, and concerns have been raised about the impact this could have on the usability of ‘immersive-ar’ experiences on different hardware form factors.  Specifically, if an experience is authored for a hand-held AR device (e.g. ARCore), UI elements are likely to be drawn in screen-space on top of all 3D rendered content.  When the same experience is run on a head-worn device, those UI elements would potentially be drawn uncomfortably close to the user’s eyes.  Conversely, content authored for a head-worn device is likely to have world-space UI elements which may be difficult for a user to tap effectively on a hand-held device.  Looking through the notes from Tuesday’s call, it appears that at least Microsoft, Mozilla, Google, and Amazon are significantly concerned about moving forward with standardizing ‘immersive-ar’ without this being addressed in the specification.  Given this overwhelming concern, it seems urgent to resolve so that ‘immersive-ar’ will be a good experience across all hardware.
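To make the gap concrete, here is a purely hypothetical sketch; the screenSpaceUi attribute does not exist in any spec, and is shown only to illustrate the kind of signal developers are asking for (both build… helpers are likewise hypothetical):

    navigator.xr.requestSession('immersive-ar').then((session) => {
      if (session.screenSpaceUi) { // hypothetical attribute, not in the spec
        // Hand-held form factor: draw menus flat on top of the 3D scene.
        buildScreenSpaceMenus();
      } else {
        // Head-worn form factor: place panels in world-space at a
        // comfortable distance rather than pinning them to the view.
        buildWorldSpacePanels();
      }
    });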

Q: Do hand-held AR devices require the ability to place DOM content on top of the session?
A: It is common for AR experiences on hand-held devices to place UI elements on top of the AR content.  Developers would strongly prefer to use HTML/CSS for these UIs; however, while not ideal, screen-space user interfaces can be created via other means.  Given the complexity of supporting DOM-based UI, we must evaluate the implications for deliverable timelines should we take a dependency on resolving this issue.

Q: Should developers be able to request restricting sessions based on blend mode or UI placement support?
A: For example, if a developer has written an experience that they only expect to work on a hand-held device, should an ‘immersive-ar’ session be created on unsuitable hardware only for the developer to immediately terminate it because the experience will not work correctly?  This specific question didn’t come up on the call, but it may need addressing depending on the answers to the previous two questions.

Q: What are the outstanding privacy concerns around ‘immersive-ar’ mode?
A:  Many of the threats initially outlined in the privacy and security repo have either been addressed in the core WebXR spec or are active issues there.  There are a number of threats more directly related to real-world understanding that still require further investigation.  There is one outstanding threat vector that is not covered by one of those two categories but which is introduced by the inclusion of ‘immersive-ar’.  This threat, the perception of camera access, is outlined here: https://github.com/immersive-web/privacy-and-security/blob/master/EXPLAINER.md#perception-of-camera-access.

Q: How long is this going to take?
A: I’m not sure.  But the same question can be asked about the remaining work to finish all VR and common functionality.  We haven’t reached CR yet and have only just now requested the first wide review of WebXR.  From what I can tell, we’re all dedicated to getting it done and doing so as quickly as possible.

Proposal
Based on everything outlined above, I propose the following and invite discussion on it during Tuesday’s call:

  *   User Agents continue implementing and iterating on both the core WebXR features and AR features
  *   The “core” of AR remains under the scope of the Working Group.  Additional incubations (ex: real-world understanding, lighting, etc.) are under the scope of the Community Group.
  *   Barring any new information, the “core” of AR will include:
     *   A way to request an AR session (aka ‘immersive-ar’)
     *   The ability for developers to differentiate between ‘additive’ and ‘alpha’ environment blend modes (and a decision on what to do about environment blend mode as currently specified for VR)
     *   The ability to differentiate whether UI elements have the option of being placed in ‘screen-space’ or must be placed in ‘world-space’
     *   Possibly allowing developers to restrict the creation of sessions based on the properties defined above
     *   Mitigations for all associated privacy and security issues introduced by these features
  *   We will continue to drive for concrete answers to the outstanding issues relating to the “core” of AR as quickly as possible.  These answers must be agreed upon and written down in a specification.  Doing so is Brandon’s and my top priority alongside resolving remaining issues in immersive-web/webxr.
  *   Work on the “core” of AR will continue in a separate AR repo and module.  This approach is aligned with the logistics already planned for the WebXR Gamepads Module, and it also follows a similar approach taken by the OpenXR group.
  *   In the best case, if we are able to agree upon and define a concrete “core” of AR quickly enough, we can submit it for CR at the same time as the main WebXR Device API.  In the worst case, the AR module will hopefully only trail the core spec to CR by a few months.
  *   All “non-core” AR features will be incubated in different repo(s), including anchors and any support for a DOM overlay.  As mentioned above, this work will be under the scope of the CG.  When these features are stable enough, they will likely be moved to the WG and a CR requested when appropriate.  If that happens to coincide with the “core” of AR reaching CR, we can discuss whether they should be included in the “core” of AR or submitted as a separate module.

See you all on Tuesday.
Nell
