Re: thoughts towards a draft AR WG charter

Hi,

> >> should return (send via an IP network if it is in the cloud, retrieve if
> >> it is stored locally) to the user's device that digital data with which
> >> the trigger was previously associated (by a publisher, in the broadest
> >> sense of the word).
> >
> > Also "or access as a stream from a local sensor (e.g. camera or even a
> > VOIP connection)."
> >
> 
> Here, do you propose that the data returned or "played" can, 
> optionally, be a real-time dynamic data stream?

Yes.


> I had not thought through what might be supported beyond pre-compiled 
> "static" and "interactive" assets in a database.

Sensor input is definitely the key distinction within the trigger set
(e.g. a certain combination of GPS lat/lon and compass-relative magnetic
orientation in the case of Mobile Geolocative AR).  But sensors are also
a valid "source" of content in themselves.  The most obvious example is
a video stream, but the Internet of Things suggests all sorts of
streamed sensor data may become useful.
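
To make that concrete, here is a minimal sketch (in TypeScript, with
every name invented for illustration rather than taken from any draft
spec) of how a sensor-based trigger might be paired with content that
is itself a live stream rather than a static asset:

  // A geolocative trigger: the sensor conditions that must match.
  interface GeoTrigger {
    kind: "geolocative";
    latitude: number;       // degrees, WGS84
    longitude: number;
    headingDeg: number;     // compass bearing the user must face
    toleranceDeg: number;   // how far off-heading still matches
  }

  // Content can be a pre-compiled asset, a live video stream, or a
  // streamed sensor feed.
  type ContentSource =
    | { kind: "static"; url: string }
    | { kind: "stream"; url: string }
    | { kind: "sensor"; feedUrl: string };

  // The publisher's association between a trigger and its content.
  interface Association {
    trigger: GeoTrigger;
    content: ContentSource;
    publisher: string;
  }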


> Use case: musicians performing live concert (but not their background) 
> could be "super imposed" into user's environment with natural 
> occlusions, light/shadows and stereo acoustics on local setting?

That's one option.  Another is watching heart-rate sensor data
overlaid on top of a competing athlete (e.g. in the Olympics).  Lots of
privacy issues here 8) but I believe the scenario is valid.
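
Continuing the sketch above (again, all names are invented), the
athlete's feed would just be another content source, this time bound
to a visual-recognition trigger rather than a geolocative one:

  // The content here is the live heart-rate feed itself, not a
  // pre-compiled asset.
  const heartRateOverlay = {
    trigger: { kind: "visual", targetId: "athlete-42" },
    content: { kind: "sensor",
               feedUrl: "wss://example.org/athletes/42/heart-rate" },
    publisher: "broadcaster.example",
  };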


> >> I believe we need to develop the standard(s) such that when a trigger is
> >> associated with a set of data (using the open AR Data format in a
> >> database) by a publisher, any device with AR support and with which the
> >> user has the rights to receive said publisher's data (see "b" above),
> >> can "display" the data in the real world context.
> >
> > I think you should drop the "in the real world context" part.  The
> > interaction between "real world" and "remote/online" is critical too and
> > should not be excluded.
> >
> I'm not clear what you mean here by "remote/online is critical too."
> 
> Interacting with or displaying the data in the real world context is 
> fundamentally what separates Augmented Reality from Virtual Reality.

Agreed, that's what distinguishes AR...but that doesn't mean data
from the AR layers/augmented data sets shouldn't be able to leak back
into the traditional internet too.

It may appear obvious that this is the case...but "in the real world
context" suggests this is excluded.

Use case: A user sitting at their PC can watch representations of AR
user actions.  These can be presented in any web format (e.g. maps,
lists, dynamic UI, streams).

Inverse use case: A user sitting at their PC can add or update content
that is then visible to the AR user.
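
A rough sketch of both use cases, assuming a hypothetical REST-style
service (the endpoints and payload shapes below are illustrative only,
not a proposal):

  async function demo() {
    // Use case: fetch representations of AR user actions as plain
    // web data, ready to render as maps, lists, dynamic UI, etc.
    const actions = await fetch(
      "https://ar.example.org/layers/demo/actions"
    ).then((res) => res.json());
    console.log(actions);

    // Inverse use case: publish content from the PC that AR users
    // will then see in place.
    await fetch("https://ar.example.org/layers/demo/content", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        trigger: { kind: "geolocative", latitude: -33.87,
                   longitude: 151.21, headingDeg: 0, toleranceDeg: 360 },
        content: { kind: "static", url: "https://example.org/note.png" },
      }),
    });
  }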

A lot of the diffusion modeling we've done for our different AR projects
has relied on this type of extended interaction.

Hope this is a little clearer...happy to refine the language if not.


roBman
