Re: Notes from the WebXR Device API call, February 21st 2018

Formatted notes:

- Nell: We should set up a proper way to rotate note-taking duties from now on.

- Brandon: Someone reached out about the timing of the call. For some people who would like to join, the timing is difficult. There is no perfect time that fits everyone. We have gone over this before, but I would like to raise the issue again in case there are any new proposals.

- Nell: We should discuss this on the mailing list, because the people who could not join due to the time difference are probably the ones most interested in this conversation.

- Brandon: That is a very good point. Will review it and may send a notification over the mailing list.

- Brandon: Worked hard to try to get a PR for input out but failed :). Will try to push it EOD. (Update: Pull request is now up <https://github.com/immersive-web/webxr/pull/325>) While working on a preliminary implementation of the proposal, some things have come up. First, differentiating between a 3DOF and a 6DOF device makes a lot of sense (is useful). Added an emulated position flag to the input pose. Maybe this makes sense for the XR device pose too? This feels logical to me; I recall it was discussed in the past but cannot recall why it was dismissed.
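
(For context, a minimal sketch of how such a flag might be consumed per frame. The names here, getInputSources/getInputPose/emulatedPosition, follow the shape of the proposal in PR #325 at the time and may change; frameOfRef is an assumed, previously requested frame of reference.)

```js
// Sketch only: names follow the input proposal in PR #325 and may change.
function onXRFrame(time, frame) {
  const session = frame.session;
  for (const inputSource of session.getInputSources()) {
    const inputPose = frame.getInputPose(inputSource, frameOfRef);
    if (!inputPose) continue; // No tracking data for this source this frame.
    if (inputPose.emulatedPosition) {
      // 3DOF controller: the position is synthesized (e.g. by an arm
      // model), so avoid interactions that need a truly tracked position.
    }
    // ... render the controller / cast the pointing ray here ...
  }
  session.requestAnimationFrame(onXRFrame);
}
```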

- Nell: I recall that the question was: what difference does it make? Is the developer going to do something different because of this new attribute? Remember there was a conversation about adding a tracking-lost event.

- Brandon: I agree that the developer might not use it. Even though in theory it makes sense to have the same attributes on the input pose and the head pose, it is true that this info on the head pose might not be useful/used.

- Kip: I can think of one particular use case: shooting an arrow, where the pose is lost when reaching behind your back to take an arrow.

- Nell: My comments were about not needing this information/attribute for the head pose, not for the input pose.

- Brandon: The most consistent concern is the representation of the controller (3D model). Don't know how to include this without complicating the current API proposal. I do not see a way to resolve it without either exposing information that enables fingerprinting (an id) or exposing a more complex API that provides a model.

- Kip: Provide a simple API to query whether the UA is able to render the controllers, plus simple controls to show/hide them.
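
(A purely hypothetical sketch of what Kip's suggestion could look like; neither of these names exists in any current proposal:)

```js
// Hypothetical only: supportsUARenderedControllers and
// setControllerVisibility are illustrative names, not proposed API.
if (session.supportsUARenderedControllers) {
  // The UA owns the controller models; the page only toggles visibility.
  session.setControllerVisibility(true);
}
```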

- Brandon: This approach was discussed in the past, and the biggest issue is integrating the models realistically with the scene (depth buffer, lighting, …).

- David: Exposing a glTF model is also a fingerprinting issue.

- Iker: Could the information be provided only once an exclusive session is created/granted, to reduce the fingerprinting surface?

- Nell: What happens for non-exclusive sessions?

- Question: Is there any dialog needed to request exclusive sessions?

- Nell: Edge does.

- Brandon: Chrome requires user interaction. This could change to a
specific permission request.
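
(For reference, roughly the shape of the draft API at the time; requestDevice/requestSession with an exclusive flag come from the explainer, while enterVRButton is a hypothetical page element:)

```js
// Exclusive sessions are requested from inside a user gesture so that
// Chrome's user-interaction requirement (or a future permission
// prompt) can be satisfied.
navigator.xr.requestDevice().then((device) => {
  enterVRButton.addEventListener('click', () => {
    device.requestSession({ exclusive: true }).then((session) => {
      // ... set up the session's output layer and start the frame loop.
    });
  });
});
```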

- David: Will look at how other specs specify permissions in order to write the correct text in the WebXR spec too.

- Brandon: Think that a generic model could work for most cases.

- Kip: The models make sense for training muscle memory. Once users know where things are on the controller, they might focus more on the ray (solved by the current proposal) than on the specifics of where things are on the controller. There could be a permission request to get access to all the information about the controller, and the correct model could be provided there so it can be rendered.

- Brandon: The idea is not to continue supporting the Gamepad API approach. Maybe when we tackle the more verbose API that provides all the functionality of the controller, the Gamepad API could be revisited. The current proposals on the native side do not fit well into the Gamepad API.
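
(For reference, the WebVR-era pattern being moved away from: controllers surfaced through the Gamepad API, with pose data added by the Gamepad extensions spec:)

```js
// WebVR-era approach: controllers are just gamepads with a pose bolted on.
for (const gamepad of navigator.getGamepads()) {
  if (gamepad && gamepad.pose) {
    // gamepad.pose.position / gamepad.pose.orientation (when tracked);
    // button and axis indices carry no standard meaning across devices.
  }
}
```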

- Nell: Because there are a lot of unknowns, let's try to stabilize the things we have more information on/understand better. Getting rid of the current WebVR functionality might seem like a step backwards, but the current APIs make it really hard to build things. The approach here is to avoid having to come back to this later: what is proposed should still be functional 6 months or even years from now, even if more explicit ways to get the information are added.

- Brandon: Completely agreed; that is what the current simplified input approach is trying to achieve.

- Brandon: Wanted to ask about merging the deferred sessions PR <https://github.com/immersive-web/webxr/pull/256>, but David added a bunch of comments just prior to the call that should be addressed first. Please review the comments and the PRs (pushed an hour ago).

- Brandon: The last item I wanted to highlight is the possibility of using the WebXR API in a Worker.

- Raphael: If we would like to be able to use this in a worker, it should be the whole API rather than a very limited version of it.

- Brandon: The developer could start the API in a worker (with access to the whole API) and pass the information back to the main thread for final rendering.

- Kip: There could be use cases where complex computation is done in the worker and the output of that computation is passed back to the main thread for rendering. This could make sense in some scenarios.

- Brandon: The output of the rendering, as a transferable ImageBitmap, could always be transferred from one context to another, so one worker could continue the work of another. This is possible. But the main idea here is not to make the elements the API controls transferable (session, frame, …).
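
(A minimal sketch of the handoff being described, assuming OffscreenCanvas is available in the worker:)

```js
// worker.js -- render off the main thread, then transfer the result.
const canvas = new OffscreenCanvas(1024, 1024);
const gl = canvas.getContext('webgl');
// ... draw the frame with gl ...
const bitmap = canvas.transferToImageBitmap();
// The second argument transfers (not copies) ownership of the bitmap.
postMessage({ frame: bitmap }, [bitmap]);

// main.js -- receive the bitmap and composite it for presentation.
const worker = new Worker('worker.js');
worker.onmessage = (event) => {
  const bitmap = event.data.frame; // ImageBitmap, ownership transferred
  // ... e.g. paint it through an ImageBitmapRenderingContext ...
};
```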

- Raphael: Remind me, can the pose be obtained outside of the rAF callback?

- Brandon: No. Maybe the explanation in the spec could be revised, but no, the pose will only be available on the frame passed to the rAF callback.
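
(In other words, poses are frame-scoped; getDevicePose and getViewMatrix follow the draft spec's naming at the time:)

```js
// The pose can only be read off the frame object the rAF callback
// receives; there is no session-level "get the pose right now" call.
session.requestAnimationFrame(function onFrame(time, frame) {
  const pose = frame.getDevicePose(frameOfRef); // frameOfRef assumed set up earlier
  if (pose) {
    // Valid for this frame only: use pose.getViewMatrix(view) per view.
  }
  session.requestAnimationFrame(onFrame);
});
```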

- Raphael: Where is the commit conversation?

- Brandon: Not sure where this is at the moment.

- Nell: What happened to the concerns from the Emscripten folks? Would like to see if their concerns are being addressed.

- Ada: What is the purpose of using a worker then?

- Brandon: Mostly to avoid congestion on the main thread, so XR content
isn’t slowed down by things like ads. Some desirable scenarios, like video,
won’t work well in workers yet because they lack a way to synchronize video
frames with audio.

- Nell: Using different threads for simulation.

- Ada: There might be a need to be able to select which worker is executed on which core of the device.

- Brandon: There are plenty of specific scenarios. We should try to get the API to do the logical thing and then handle each specific situation as it comes up.

- Kip: A possible solution could be to force a context loss if the calls are not made in the required/expected way. But that will require that we all agree on what that means.
