Re: WebXR Device API call agenda, Apr 17th 2018

Here are the notes from today's call. We moved pretty quickly and I didn't
always get everything, so feel free to reply with corrections and
clarifications!


Brandon:
Jumping into the agenda. Sorry for sending it a bit late.
We have several current bugs to talk about, so it'll take up our time, I
think.
First item is the frame of reference / coordinate system conversation from
a couple of weeks ago. This came out of the discussion about making anchors
a core concept in the API, where people were worried that the name "anchor"
would imply things we don't want.
So I put up a PR that merges XRFrameOfReference and XRCoordinateSystem,
just using XRCoordinateSystem, so that when we add anchors we can call them
"anchors" but the API returns XRCoordinateSystems.
This seems well received, but there are concerns that in doing that we're
moving stage boundary info off of the frame of reference and onto the
session.
Alex Turner (from MS) had some comments on that.
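
(For context, a rough sketch of the shape the PR proposes - TFS. The names
and signatures below are my approximation, not the PR's exact text:

    // XRStageBounds roughly as in the current draft: a polygon of x/z points.
    interface XRStageBoundsPoint { readonly x: number; readonly z: number; }
    interface XRStageBounds {
      readonly geometry: ReadonlyArray<XRStageBoundsPoint>;
    }

    // One merged type instead of XRFrameOfReference plus XRCoordinateSystem.
    interface XRCoordinateSystem {
      // Column-major 4x4 matrix, or null if the transform isn't known.
      getTransformTo(other: XRCoordinateSystem): Float32Array | null;
    }

    // Stage boundary info moves off the frame of reference and onto the
    // session, which is the concern raised below.
    interface XRSession {
      readonly stageBounds: XRStageBounds | null;
    }
)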

Alex Turner:
The general idea was that we're starting to get stages that give you both a
pose and metadata, and stage is the tip of the iceberg. So, if it's the
only one then it's OK, but if there are more examples then maybe there
should be a defined way to access them other than putting them on the
session.
If you happen to know that you asked for a stage, then you'd get back an
XRStage that is a type of XRCoordinateSystem with extra information. Then
XRCoordinateSystem could be the base for things like anchors and the stage.

Brandon:
I feel like that flip-flops on what we had. The old plan was to build
Anchors on top of XRCoordinateSystem, but these changes were to reduce the
number of variants. I don't necessarily have anything against it, but then
we should probably leave things as they are.
Rather than have everything return a coordinate system, it feels like we
should return the actual type (FoR, Anchor, etc.) that derives from
XRCoordinateSystem.

Alex:
It feels like we're in a middle ground. Right now you create a FoR that
might have bounds, so it's sort of conditional. If we had an XRStage then
we could have a .createStage() that would return that type, making it more
complete. Right now a FoR doesn't promise anything, and you don't know what
kind it is. If we do ray casts for anchors, one might be a plane and one
might be a point, so we might still be in the case where we need to ask
what type it is.
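
(A rough sketch of the typed-creation idea as I heard it - TFS.
.createStage and XRStage are illustrative names, not proposed spec text,
with XRCoordinateSystem and XRStageBounds as sketched above:

    // The stage is a sub-type of the one common coordinate system type,
    // and it promises its extra data (the bounds) up front.
    interface XRStage extends XRCoordinateSystem {
      readonly bounds: XRStageBounds;
    }

    interface XRSession {
      // A typed creation method: you get back the specific type you asked
      // for, instead of a generic FoR you have to interrogate.
      createStage(): Promise<XRStage>;
    }
)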

Blair:
We made our system have one common thing, a pose, and then we sub-type off
of it. I thought that we'd go down to one thing, XRCoordinateSystem, and
then sub-type it from there. One place where I could see differentiating
is a coordinate system that doesn't change, which would be different from
something that we need to query every frame. I like having the stage be a
sub-type of whatever this central thing is, with additional attributes.

Brandon:
I agree with both sides to a certain degree. It's helpful to simplify the
API, but I do agree with the idea that we should allow for some
specialization. I don't know how well that meshes with the AR side, where
we want everyone to get used to working with Anchors from the beginning.

Blair:
I don't see this as conflicting. We just want people to stop using global
coordinate systems and putting everything at 0,0,0. If the API says there
are all sorts of ways to get XRCoordinateSystems that have various
properties, then we're saying that at the highest level people aren't just
using static 3D coordinates. I don't care about "anchors" per se, but there
will be variations of XRCoordinateSystems across the system.

Alex T:
I agree, the leap is getting away from one true coordinate system. And
there's another jump: we could get people very used to using anchors, but
it's going to be tough because there's a mental-model cost to getting used
to them moving. That's what the "room" coordinate system is for, to make it
easy to stick everything in the one coordinate system. So, we could have
XRCoordinateSystem and then add more and more sub-types as people want to
use them.

Blair:
If the stage bounds are associated with a sub-type of XRCoordinateSystem
then they have to make the leap from XRCoordinateSystem to stage and then
use it as dynamic data.

Brandon:
In Google's implementation, the coordinate systems (including stage) do
move as it refines its idea of where it is in the room. I'm in the middle
of fixing a bug where we just move the coordinate system. So, it's very
anchor-like. These changes should be about the messaging around how
developers use these concepts. So, as long as we don't start people off at
0,0,0 and then only talk about anchors down the road, we'll achieve the
goal regardless of any individual API decisions.

Alex:
If we make the coordinate systems explicit, then if the app makes a stage
and only uses the stage, even if under the covers it's a raw concept, the
app can still do all of the math relative to that.

Kip:
Now that I'm working in AR, the stage bounds could be where I can move
around right now, so in another way it's like a request for an anchor.

Brandon:
I'm going to abandon the PR I put up there. I don't feel like I have a
grand idea on how to evolve it to match these concepts. If anybody like
Blair, Kip, or Alex would like to put up a PR or a concrete proposal in an
Issue, then I'm more than happy to work through that. I think the more
appropriate approach is a documentation one, so until there's a more
specific proposal I'm going to work on the documentation and examples.

Alex:
I think the change will come when we have more than just the base
coordinate system and the stage, but when we have more attributes for AR.
In the anchors sub-repo I commented on Issue 4, playing out in prose how
the current is-a pattern compares to a has-a pattern. So, people could
comment on that.
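
(A rough illustration of the is-a vs. has-a comparison - TFS. These are
hypothetical shapes for the sake of the contrast; see the Issue for the
real write-up:

    // is-a: the stage IS a coordinate system, with extra attributes.
    interface StageIsA extends XRCoordinateSystem {
      readonly bounds: XRStageBounds;
    }

    // has-a: the stage HAS a coordinate system next to its metadata, so
    // XRCoordinateSystem itself never grows sub-types.
    interface StageHasA {
      readonly coordinateSystem: XRCoordinateSystem;
      readonly bounds: XRStageBounds;
    }
)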

Brandon:
So, there is another PR from Lewis about renaming pointer origin and enum
values. I haven't had a chance to look through this in depth, but if
there's anything you'd like to discuss then I'm happy to facilitate that
right now.

Lewis:
I just put it up yesterday. There are a couple of issues in this PR, so I
was hoping that it would be a stick to prod those conversations: how do we
want to deal with rendering controller models and session creation options?
Do we want events, callbacks? The best place to discuss is probably the PR
itself.
The reason I opened this PR is that we were talking about the Hololens
prototype and it felt strange to expose a physical hand as a ???, so we
wanted to rename the enum for what inputs are used for instead of what they
are. That means there will be more than one device, and that's where the
changes come from. And then there were additional renaming items that
happened along the way. So, let's have that conversation on the PR itself.

Brandon:
Thanks, I'll take some time to look through the PR.
There was an issue from Trevor Baron, I think from Microsoft, about trying
to avoid additional canvases when mirroring or using magic window. I think
this is a worthwhile discussion, though I take a different direction. What
Trevor's getting at is to use the WebGL canvas as the mirroring output.
I find it problematic; there is a reason why we went through this and ended
up where we are. It is a little awkward, when using a context like WebGL,
to say "in this circumstance we're going to override that behavior."
I feel like a lot of the problems with WebVR came from when we were
overriding how WebGL usually works. So, I don't see the benefit of this
proposal.
But if you have a magic window and spin up an exclusive session, then you
need two canvases and contexts. That is a little bit of a pain, and it sort
of feels like we should be able to use one canvas for mirroring and for
presentation. It's a nice idea in concept, but it runs into issues in
practice because it implies we can share a context between non-exclusive
and exclusive use but not between multiple non-exclusive contexts. So, we'd
have to describe that behavior, and it doesn't feel like it's worth the
effort.
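
(For context - TFS: the two-canvas pattern being discussed looks roughly
like this under the current explainer. This is a sketch from memory, so
the details may not match the spec text exactly:

    // Assumes an XRDevice obtained from navigator.xr.requestDevice().
    async function startExclusiveWithMirror(device: any): Promise<void> {
      // Canvas #1 holds the WebGL context we render with.
      const glCanvas = document.createElement('canvas');
      const gl = glCanvas.getContext('webgl', { compatibleXRDevice: device });

      // Canvas #2: its 'xrpresent' context receives the mirror (or magic
      // window) output; the WebGL canvas itself is never shown.
      const outputCanvas = document.createElement('canvas');
      const outputCtx = outputCanvas.getContext('xrpresent');
      document.body.appendChild(outputCanvas);

      const session = await device.requestSession({
        exclusive: true,
        outputContext: outputCtx,
      });
      session.baseLayer = new XRWebGLLayer(session, gl);
    }
)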
Does anyone else feel strongly about it?

Rafael (MSFT):
I agree with your assessment. The default framebuffer of your canvas no
longer belongs to you and mirroring should be separate.

Kip:
I tend to agree. The slight inconvenience would be outweighed by the
optimized path for the API.

Brandon:
Ok, I'll add remarks to the Issue.
Please tell Trevor I appreciate the Issue, but it sounds like we won't
change it.
Next on the agenda: Anna put together a sample test for the webxr-test-api
repo, and pushed it just yesterday.

Anna:
It's up so that we can discuss more how the API can be used.

Trevor F. Smith:
I'll poke some people at Mozilla to weigh in.

Brandon:
Deferred session creation. This is one of the last big outstanding issues
before we're back to feature parity with WebVR 1.1 and how we're going to
do system integrations. It would be nice if we could make progress on this.
Oh, Blair made a comment that I missed. I'll look over that. I want to
bring up that there were concepts (missed the concept - TFS) that didn't
sit well with Microsoft. I want to push back on that a bit more. It feels
like it's a valuable concept and I would like to get it in there if we can
communicate that it won't be appropriate in all cases. I don't share the
concern that the signal blocks out stand-alone headsets.

David:
I would hope that this would be less of a concern now that people are
shipping VR browsers.

Brandon:
This is a point that Mozilla has brought up several times: they want a
button in the user agent for entering the content. But, as it is now, this
is a behavior that devs have control over.

Blair:
I'm confused.

Brandon:
I should clarify: I'm conflating navigation and session initiation as the
same issue. I call it deferred session request. This PR (256) started out
activation-focused, and MS brought up concerns, so now it's
navigation-focused, with its own issues. So, I'm trying to move forward
with just the activation portion, which is no longer in the PR.

Blair:
There have been other discussions in other places, and it still feels like
a reasonable way to deal with all of this: we could make a leaveSession
method with no promise and add an event that is fired when the session
starts, regardless of who starts the session (UA, script, navigation, ...).
I would like to see us move away from those ugly little buttons in every
piece of content as the only way to enter WebXR content.
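
(One hypothetical reading of what Blair is proposing - TFS; none of these
names exist in the spec today:

    // Fired whenever a session starts, regardless of who initiated it:
    // a UA button, script, or a session carried across a navigation.
    (navigator as any).xr.addEventListener('sessionstart', (event: any) => {
      const session = event.session;
      // The page sets up rendering here, or declines the session.
    });

    // Ending a session becomes a plain method with no promise to track.
    function onExitClicked(session: any): void {
      session.leaveSession();
    }
)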

Brandon:
I have concerns about the event-based approach; part of that is from the
Daydream team. For the UA button, if you have an event and you are relying
on the dev to trigger a normal session from within that event...

Blair:
No, I'm imagining that the session would be handed to them via the event
callback.

Brandon:
So, you do need to call some session method to set up the session options.

Blair:
I don't think so. My sense would be that if there's UI elsewhere that lets
the user enter XR (whatever), then that action implicitly provides those
parameters.
If I'm in magic window on my phone and I navigate, then the only reasonable
thing is to hand the new page the exact same type of session. The page
should have a way of saying no.

Brandon:
You're saying that you view popping up a modal saying "no" as less
offensive than having a button that says "enter VR".

Blair:
Yes, because there are not that many pages that won't handle the various
options.
I'd like to move away from devs saying that they only support 6DoF VR, and
instead move back to the days when devs were expected to handle it when
they can.
And most people will be writing in higher-level tools, anyway.

David:
It's an admirable goal, but I'm not sure we can take away control from the
dev. We should explore what that means as there are more and more
combinations.

Blair:
A dev might specify that the app is willing to handle different session
types and then the UA would handle offering up the correct options.

Lewis:
I have a PR for that.

David:
We do need to consider the multiple ways to create sessions, but it feels
like we're conflating them. Maybe we can say it's experimental and we can
move forward.

Rafael:
In defense of the button: on many platforms they have the button, and
putting on the headset doesn't mean you want to load the web page. I hope
we don't take away the ability to start VR from the web page and make the
user figure out how to start it from the UA.

David:
There's not really browser UI for doing anything in a web page other than
navigation.

Kip:
Before the browser is launched, you might already be in the headset. You
might have three windows showing sketchpad, and opening a page doesn't
necessarily open it in
VR. The OS might offer something like a maximize button that signals the
page to go into XR. Explicit advertising of capabilities solves problems
like scraping.
It feels awkward to have a promise open for a long time, maybe resolved
multiple times.
I see value in the deferred session concept.

Blair:
One of the things about buttons in the content (instead of the UA) is that
devs have to create buttons for every variation instead of the UA handling
that. Could it be like video, where the DOM element can provide a scrub UI?
So, as a web page dev, could you ask for a UA-provided UI for starting a
session?

Brandon:
I think that for other reasons (accessibility, etc.) there could be cause
for the UA UI to offer controls for things like magic window mode. If
you're already providing UA UI for that, then offering a button to go into
a headset follows naturally; but that's something that's not too difficult
for JS to provide. So, I tend to lean toward letting the JS ecosystem pick
up the slack and then figuring out the right patterns.

John (Google):
I can see many contexts where the content determines the session type. Yes,
we want devs to support more modes, but the content will determine it.

Brandon:
Ok, we're out of time. Thank you for the discussion! Next week is the
normal AR call. I won't be able to make it.

*talk about canceling the AR call next week due to conflicts*

Blair:
Brandon, could you consolidate the different issues?

David:
I have a partially written explainer which I'll try to finish in the coming
week.

Brandon:
Thanks, everyone!


On Tue, Apr 17, 2018 at 10:17 AM, Brandon Jones <bajones@google.com> wrote:

> Apologies for the late agenda.
>
> *Call Agenda Items:*
>
>    - Convert XRFrameOfReference to XRCoordinateSystem (#340
>    <https://github.com/immersive-web/webxr/pull/340>)
>    - Renamed pointerOrigin and enum values. Updated ray to use vectors (
>    #342 <https://github.com/immersive-web/webxr/pull/342>)
>    - Related to: Representation of rays in WebXR (#339
>       <https://github.com/immersive-web/webxr/pull/339>)
>    - Trying to avoid additional canvas when mirroring (#341
>    <https://github.com/immersive-web/webxr/issues/341>)
>    - Sample test
>    <https://github.com/immersive-web/webxr-test-api/blob/master/sample_test.html>
>    in the webxr-test-api repo
>    - Deferred Session creation (#256
>    <https://github.com/immersive-web/webxr/pull/256>)
>
> Please reply with any additional items you'd like to see addressed.
>
> *Call date:* Tuesday April 17th (and every other Tuesday thereafter)
> *Call time:* 1:00 PM PST for one hour
>
> WebEx call details are posted to the internal-webvr@w3.org mailing list,
> accessible to any community group member.
>
> --Brandon
>



-- 
Trevor F. Smith
Reality Research Engineer
Emerging Technologies @ Mozilla
trsmith@mozilla.com

Received on Tuesday, 17 April 2018 21:06:45 UTC