Re: thoughts towards a draft AR WG charter

On 7/29/2010 5:17 PM, Rob Manson wrote:
>> I wish I could express this mathematically, or using code.
>> - an event or a trigger (as we described it earlier, the detection of a
>> set of conditions in the user's environment via the use of sensors), when
>>
>>     (a)  it matches a trigger (the set of conditions specified by a
>> publisher) in a database, and
>>     (b)  the AR support (software and in some cases hardware) is
>> available in the user's device,
>
> To me this maps very closely to sensory perception at a digital level
> and I think that's exactly what AR represents.

Good input.
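
Since I said above that I wished I could express this in code, here is a
rough, non-normative sketch of how I currently picture that matching step.
Every name below (SensedEvent, Trigger, match, etc.) is a placeholder of my
own, not anything taken from an existing spec:

# A rough sketch only; every name here is a placeholder, not from a spec.

from dataclasses import dataclass

@dataclass
class SensedEvent:
    """A set of conditions detected in the user's environment via sensors."""
    conditions: dict

@dataclass
class Trigger:
    """A set of conditions specified by a publisher and stored in a database."""
    conditions: dict
    content_ref: str  # the digital data previously associated with this trigger

def match(event, triggers, device_has_ar_support):
    """Return content references for every publisher trigger the sensed event
    satisfies, provided the device has the required AR support (b)."""
    if not device_has_ar_support:
        return []
    return [t.content_ref for t in triggers
            if all(event.conditions.get(k) == v
                   for k, v in t.conditions.items())]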

>
>> should return (send via an IP network if it is in the cloud, retrieve if
>> it is stored locally) to the user's device that digital data with which
>> the trigger was previously associated (by a publisher, in the broadest
>> sense of the word).
>
> Also "or access as a stream from a local sensor (e.g. camera or even a
> VOIP connection)."
>

Here, do you propose that the data returned or "played" can, optionally,
be a real-time dynamic data stream?

I had not thought through what might be supported beyond pre-compiled 
"static" and "interactive" assets in a database.

Use case: musicians performing a live concert (but not their background)
could be "superimposed" into the user's environment, with natural
occlusions, light/shadows and stereo acoustics matched to the local setting?
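
If so, perhaps it just means the associated data can be either a stored
asset or a reference to a live stream. A toy sketch, with type names that
are purely illustrative and mine alone:

# Illustrative only: the associated "data" could be a pre-compiled asset in a
# database or a reference to a real-time stream, as suggested above.

from dataclasses import dataclass
from typing import Union

@dataclass
class StoredAsset:
    media_type: str  # e.g. "model/x3d+xml", "audio/ogg"
    uri: str         # where the "static" or "interactive" asset is stored

@dataclass
class LiveStream:
    protocol: str    # e.g. "rtsp", or a local sensor such as the camera
    uri: str         # endpoint of the real-time dynamic stream

ARContent = Union[StoredAsset, LiveStream]

def delivery_mode(content):
    """Download once, or open and keep reading a live connection."""
    if isinstance(content, LiveStream):
        return "open %s stream at %s and keep reading" % (content.protocol,
                                                          content.uri)
    return "retrieve %s asset from %s" % (content.media_type, content.uri)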


>
>> The data would be "displayed" (visualized, but possibly also including an
>> audio file playing) in accordance with
>>
>>     (a)  the user preferences,
>>     (b) conditions of use (e.g., did the user pay a subscription to the
>> "history" database or to the "tourism" database, or to the "commerce"
>> database, or all three or more? this involves some rights management)
>>     (c) user's device capabilities (can render/view the 3D object live,
>> can only support 2D view, etc)
>>     (d) user's environment (time of day, noisy, quiet, light, dark, etc).
>
> Also "User preferences may be allowed to be overridden to artificially
> adjust a users context (e.g. exploring the past or other locations,
> etc.)."
>
>
>> The details of the "display," or representation step, are the "user
>> experience" and should (in my opinion) be defined entirely by the
>> developer of the browser or application.
>
> I think it's really important that both layers are modeled and
> discussed.  e.g. browser developer and application developer (of course
> as well as publisher and author).
>

Yes, I think my point is that the standards we currently have for
representing content to the user may suffice, or be extensible, without
creating a new viewing standard for AR services/experiences.
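
For what it's worth, here is roughly how I picture the (a)-(d) inputs quoted
above feeding into that representation step, with the policy itself left
entirely to the browser/application developer. The function and its inputs
are invented for illustration, not a proposed API:

# Not a proposal for an API; just how I picture inputs (a)-(d) combining.
# The policy itself would belong to the browser/application developer.

def choose_representation(preferences, entitlements, capabilities,
                          environment, source):
    """Decide how (or whether) to present the content on this device, now."""
    # (b) conditions of use: did the user subscribe to this publisher's database?
    if source not in entitlements:
        return "not shown (no rights to this publisher's data)"
    # (c) device capabilities: live 3D rendering vs. a flat 2D view
    mode = "3D" if "render_3d" in capabilities else "2D"
    # (a) user preferences and (d) environment: e.g. mute audio if the user
    # asked for it or the surroundings are quiet
    audio = preferences.get("audio", True) and environment.get("noise") != "quiet"
    return "%s view%s" % (mode, " with audio" if audio else ", audio muted")

# e.g. choose_representation({"audio": True}, {"tourism"}, {"render_3d"},
#                            {"noise": "quiet", "light": "dark"}, "tourism")
# -> "3D view, audio muted"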

>
>> I believe we need to develop the standard(s) such that when a trigger is
>> associated with a set of data (using the open AR Data format in a
>> database) by a publisher, any device with AR support and with which the
>> user has the rights to receive said publisher's data (see "b" above),
>> can "display" the data in the real world context.
>
> I think you should drop the "in the real world context" part.  The
> interaction between "real world" and "remote/online" is critical too and
> should not be excluded.
>

I'm not clear what you mean here by "remote/online is critical too."

Interacting with, or displaying, the data in the real-world context is
fundamentally what separates Augmented Reality from Virtual Reality.
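
Coming back to the association itself, a record in such a database might
carry something like the following. This is purely a strawman: the "open AR
Data format" does not exist yet, and the field names, URI and values are my
own invention for illustration only.

# Strawman only: invented field names, not part of any existing format.

association = {
    "trigger": {                    # the conditions a device must detect
        "location": {"lat": 48.8584, "lon": 2.2945, "radius_m": 50},
        "time_of_day": "any",
    },
    "content": {                    # the digital data the publisher attached
        "type": "model/x3d+xml",    # could equally be an audio file or a stream
        "uri": "https://example.org/assets/overlay.x3d",
    },
    "conditions_of_use": {          # rights management (see "b" above)
        "audience": "friends",      # e.g. a UGC AR post shown only to my friends
        "subscription": "tourism",  # or a paid "history"/"tourism"/"commerce" db
    },
    "publisher": "example-publisher",
}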

>
>> This would increase the content publisher's appetite for publishing once
>> to many audiences/viewers under conditions specified by the publisher
>> (e.g., show my UGC AR post only to my friends). I believe this is in the
>> spirit of the W3C and the Web.
>
> Strong +1
>
>
> Great to see such a good discussion so far 8)
>
>

Received on Friday, 30 July 2010 07:12:47 UTC