Re: thoughts towards a draft AR WG charter

On 7/29/2010 8:14 PM, Thomas Wrobel wrote:
> " (c) user's device capabilities (can render/view the 3D object live,
> can only support 2D view, etc)"
>
> Maybe I've just got this stuck in my head now too much but couldn't
> this *also* be a trigger type exactly like GPS/Image/Other?
>
> So the creator could specify one set of data when the user is in (GPS
> co-ordinates) & (user is running a 3D renderer).
> And a different set when the user is in the same set of co-ordinates,
> but using just a 2D viewer, like an over head map.
>
> A bit like "@media screen" and "@media print" in CSS. Only this would
> be specific to the data/element links.
>

Hi Thomas,

Yes, thank you for the clarification. I guess this is a question of how 
much it is efficient to store in a trigger before the whole system 
becomes cumbersome. These are precisely the reasons that the W3C 
requires recommendations to be implemented and "field tested"!

When you say above "creator," I think you are using the term 
interchangeably with what I called "publisher" of the trigger and 
associated data for an AR experience.

In some cases, the publisher of the content (e.g., a city tourism 
office, an interactive novelist or game company) could be different from 
the person who uses experience design and "production" tools for graphic 
artists to synthesize (there must be a better word) the digital objects. 
The second set of people has other labels, such as "interaction 
architect" and "creative professional."

Yes, the publisher should be able to designate different user 
experiences, depending on the capabilities of the device, even to the 
point of sending, when a field is empty, a message such as "the data 
associated with this trigger (these conditions) is not available for 
viewing on your current device from the publisher you have selected."
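To make the idea concrete, here is a minimal sketch of how a publisher could attach several content variants to one trigger and let the client pick the one matching the device's capabilities, in the spirit of the "@media" analogy. All names, the data layout, and the capability labels are hypothetical illustrations, not part of any proposed standard:

```python
# Hypothetical sketch: one trigger, several capability-conditional
# content variants, plus the publisher's fallback message for devices
# that cannot display any variant.
from dataclasses import dataclass, field

@dataclass
class ContentVariant:
    requires: set   # capabilities this variant needs, e.g. {"render3d"}
    payload: str    # URI or inline data for this variant

@dataclass
class Trigger:
    conditions: dict                       # e.g. {"gps": (46.52, 6.63)}
    variants: list = field(default_factory=list)
    fallback_message: str = ("The data associated with this trigger is "
                             "not available for viewing on your current "
                             "device from the publisher you have selected.")

def select_payload(trigger, device_caps):
    """Return the first variant the device can display, else the fallback."""
    for v in trigger.variants:
        if v.requires <= device_caps:      # device has every required capability
            return v.payload
    return trigger.fallback_message

t = Trigger(
    conditions={"gps": (46.52, 6.63)},
    variants=[
        ContentVariant(requires={"render3d"}, payload="model.glb"),
        ContentVariant(requires={"view2d"}, payload="overlay.png"),
    ],
)

print(select_payload(t, {"render3d", "view2d"}))  # 3D-capable -> model.glb
print(select_payload(t, {"view2d"}))              # 2D-only -> overlay.png
print(select_payload(t, set()))                   # no AR support -> fallback
```

The ordering of the variants encodes the publisher's preference (best experience first), and the empty-capability case yields exactly the kind of message described above.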

Christine


>
> 2010/7/29 Christine Perey<cperey@perey.com>:
>> Hi Raphael,
>>
>> I don't know enough about RDF (but Matt did mention it in our conversation)
>> to be able to reply to your question. I'm confident that there are RDF
>> experts on this list.
>>
>> I wish I could express this mathematically, or using code.
>>
>> In my view:
>>
>> - an event or a trigger (as we described it earlier, the detection of a set
>> of conditions in the user's environment via the use of sensors), when
>>
>>   (a)  it matches a trigger (the set of conditions specified by a publisher)
>> in a database, and
>>   (b)  the AR support (software and in some cases hardware) is available in
>> the user's device,
>>
>> should return (send via an IP network if it is in the cloud, retrieve if it
>> is stored locally) to the user's device that digital data with which the
>> trigger was previously associated (by a publisher, in the broadest sense of
>> the word).
>>
>> The data would be "displayed" (visualized, but possibly also including
>> playback of an audio file) in accordance with
>>
>>   (a)  the user preferences,
>>   (b) conditions of use (e.g., did the user pay a subscription to the
>> "history" database or to the "tourism" database, or to the "commerce"
>> database, or all three or more? this involves some rights management)
>>   (c) user's device capabilities (can render/view the 3D object live, can
>> only support 2D view, etc)
>>   (d) user's environment (time of day, noisy, quiet, light, dark, etc).
>>
>> The details of the "display," or representation step, are the "user
>> experience" and should (in my opinion) be defined entirely by the developer
>> of the browser or application.
>>
>> I would not suggest that the representation needs to be standardized further
>> (at this time) since there are already many widely-adopted display options
>> available.
>>
>> I believe we need to develop the standard(s) such that when a trigger is
>> associated with a set of data (using the open AR Data format in a database)
>> by a publisher, any device with AR support and with which the user has the
>> rights to receive said publisher's data (see "b" above), can "display" the
>> data in the real world context.
>>
>> This would increase the content publisher's appetite for publishing once to
>> many audiences/viewers under conditions specified by the publisher (e.g.,
>> show my UGC AR post only to my friends). I believe this is in the spirit of
>> the W3C and the Web.
>>
>> Further, an open standard for AR data formats reduces the need for (and any
>> benefits associated with) building proprietary (closed) silos (platforms)
>> for AR data publishing and viewing.
>>
>> --
>> Christine
>>
>> Spime Wrangler
>>
>> cperey@perey.com
>> mobile +41 79 436 68 69
>> VoIP (from US) +1 (617) 848-8159
>> Skype (from anywhere) Christine_Perey
>>
>> On 7/29/2010 1:29 PM, Raphaël Troncy wrote:
>>>
>>> +1 (as a lurker but also interested person and familiar with W3C groups
>>> and policy, still chairing other WG)
>>>
>>>> Matt and I also explored how the element of time (when was the trigger?
>>>> when did the event occur, is it during opening hours of a business?) can
>>>> be part of the data which is used to retrieve the resulting
>>>> output/linked data.
>>>
>>> I'm particularly interested in representing this type of event data to
>>> attach to a trigger. When you say "to retrieve the resulting
>>> output/linked data", should we understand "linked data" as the
>>> technology promoted by W3C (aka RDF)? If yes, then my +1 becomes a +1000
>>> :-)
>>> Cheers.
>>>
>>> Raphaël
>>>
>>
>
>

Received on Friday, 30 July 2010 06:57:03 UTC