Re: Modules split

On Sat, Aug 3, 2019 at 11:39 AM Klaus Weidner <klausw@google.com> wrote:

> On Sat, Aug 3, 2019 at 12:49 AM Rik Cabanier <rcabanier@magicleap.com>
> wrote:
>
>> On Sat, Aug 3, 2019 at 12:33 AM Klaus Weidner <klausw@google.com> wrote:
>>
>>> On Sat, Aug 3, 2019, 00:16 Klaus Weidner <klausw@google.com> wrote:
>>>
>>>> I'm not one of the spec editors, but is this really a blocking issue?
>>>> The spec already says that *"Future specifications or modules may
>>>> expand the definition of immersive session to include additional session
>>>> modes"*, and I think the initial AR module draft is starting
>>>> imminently. Presumably the browser police won't be confiscating headsets
>>>> for non-compliance if they implement a mode from a pending draft module
>>>> that isn't in the draft core spec?
>>>>
>>>
>>> Sorry, I didn't mean to imply that standards compliance is unimportant.
>>> It would be unfortunate if there were an extended gap where core WebXR is
>>> final but the AR module isn't ready yet even for minimal "poses only" use
>>> cases, but my impression is that the editors and working group are trying
>>> their best to avoid that. At this point it's all technically still in draft
>>> status.
>>>
>>
>> No worries! :-)
>> Yes, I'd prefer if it goes in the spec since we don't know how long the
>> AR module will take. We will be telling authors to use 'immersive-ar' and
>> they might (rightly) be concerned that this is not in the standard.
>>
>> I'm concerned that the explainer is encouraging authors to request VR on
>> AR devices and look at the environmentBlendMode attribute. We definitely
>> don't want to support this, and I suspect Microsoft will feel the same for
>> the HoloLens.
>>
>> What are '"minimal "poses only" use cases'?
>>
>
> (Standard disclaimer, these are my unofficial opinions and interpretations
> of the spec and process.)
>
> What I meant is that taking the current core spec and just adding an
> "immersive-ar" mode results in an AR mode that is extremely similar to
> "immersive-vr" with a transparent background. Basically the app is just
> getting poses relative to reference spaces but doesn't have any significant
> real-world understanding. At most it can get a floor level by using a
> "local-floor" reference space, but that's originally intended for a
> limited-size space,
>

Yes, and that is totally fine. We already have several people building out
XR experiences on our device and they are quite happy to work within the
current limitations.
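For concreteness, a poses-only session request under the proposed mode could be as small as this (a sketch only: "immersive-ar" is the mode under discussion, and treating "local-floor" as an optional extra is my assumption):

```javascript
// Build the XRSessionInit for a poses-only AR session. No
// real-world-understanding features are requested; at most we ask for a
// floor-relative reference space as an optional extra (assumption:
// a poses-only app can always fall back to plain 'local').
function arSessionInit(wantFloor) {
  return wantFloor ? { optionalFeatures: ['local-floor'] } : {};
}

// In a browser this would be used roughly as:
//   const session =
//     await navigator.xr.requestSession('immersive-ar', arSessionInit(true));
```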



> not walking-around AR ("*the user is not expected to move beyond their
> initial position much, if at all"*) and can't cope with not-quite-flat
> environments.
>

I suspect the vast majority of AR web experiences will not have the user
walk around a lot.
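One way such an app could cope with the local-floor/unbounded split is a simple preference-order fallback when picking its reference space. An illustrative sketch (the preference order is my assumption, not anything the spec mandates; it just prefers a floor-relative space since most experiences keep the user near their initial position):

```javascript
// Pick a reference space type from the ones the UA actually granted,
// preferring 'local-floor', then world-scale 'unbounded', then 'local'.
function pickReferenceSpace(supported) {
  const preference = ['local-floor', 'unbounded', 'local'];
  return preference.find((type) => supported.includes(type)) ?? null;
}

// Roughly: pass the result to session.requestReferenceSpace(...), and
// treat null as "no usable space, bail out of AR".
```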


> (I think issuing "reset" events when the floor level changes wouldn't be
> in the spirit of the spec). In "unbounded" space, there's no floor level
> available to the app. An app could request both reference spaces and assume
> that the "local-floor" level is valid globally, but that doesn't seem like
> a safe assumption.
>
> I was under the impression that some form of hit testing or other
> real-world understanding such as planes or meshes is fairly essential for
> AR applications, i.e. to support interacting with tables or walls, so I
> thought the AR module was aiming to have something along these lines
> included. If you think that it would already be very useful for AR headsets
> to have a minimal AR mode without real-world understanding to avoid being
> in unspecified territory, would it help to start with a very small
> "poses-only AR" module that basically just introduces "immersive-ar" and
> explanations around environment blending etc., but skipping XRRay and
> anything related to hit testing or other real-world understanding? If yes,
> I think this would be a useful discussion to have.
>

Correct. Some people just want an AR experience where they can set up a
scene and walk around it with potentially some simple click/hover
interactions.

As a user of our AR platform, I almost never interact with the real world
(i.e. snapping to or placing things on real-world objects).
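Click/hover against a purely virtual scene needs no real-world hit testing at all; the app can intersect the input ray with its own geometry. A minimal sketch (plain ray-sphere math; none of these names are WebXR API):

```javascript
// Test whether a pointer ray (origin o, unit direction d) hits a sphere.
// In a poses-only session, o and d would come from the targetRaySpace
// pose of an XRInputSource; the sphere is app-defined geometry.
function rayHitsSphere(o, d, center, radius) {
  const oc = center.map((c, i) => c - o[i]);              // origin -> center
  const t = oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2];   // distance along ray
  const perp = oc.map((c, i) => c - t * d[i]);            // perpendicular offset
  const dist2 = perp[0] ** 2 + perp[1] ** 2 + perp[2] ** 2;
  return t >= 0 && dist2 <= radius * radius;              // in front and close
}
```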


> It doesn't look good if a customer asks if we support WebXR and we say
>> that it only works if they use a non-standard extension...
>>
>
> That's kind of the question here - would apps really be able to work just
> with core WebXR + "immersive-ar" for poses, or would they need real-world
> understanding in some form also, in which case they'd potentially be back
> to using not-yet-standard extensions? Is your point that it would be
> important to distinguish this, i.e. by being able to say that it is indeed
> using standard core WebXR + a standard "poses-only AR" module that provides
> "immersive-ar", and there's also separate support for draft
> real-world-understanding modules such as hit testing, plane detection,
> anchors, etc.?
>

Correct.

Received on Saturday, 3 August 2019 19:44:41 UTC