[Fwd: POI based Open AR proposal]

Hi,

First, let me apologise for the cross-posting, but as you'll see from the
content below, this proposal is directly related to your SSN group's work.

I'm obviously coming at this from a different angle/perspective but I
think there's a natural role for the SSN XG in shaping this new AR
standard.

I'd love to hear your thoughts on this approach and how you think it
could benefit from or integrate with the existing work you've done.


-- 
Rob Manson
Managing Director

MOB - start something!

The Mobile & Online Business innovation lab
http://mob-labs.com

m: +61423215731
e: roBman@mob-labs.com
l: http://www.linkedin.com/in/robertmanson
t: http://twitter.com/nambor
s: http://slideshare.net/robman 


----------------------------------------------------------------------------------
I'm co-chairing the International Augmented Reality Standards Workshop in Seoul 
on Oct the 11th and 12th http://ht.ly/2nnEH 
If you're interested in the future of AR come along and participate.  
Attend the MobileAR Summit, http://ismar10.org and http://iswc.net too. 
----------------------------------------------------------------------------------

Forwarded message 1

Hi,

Great to see we're onto the "Next Steps" and that we're now discussing
pretty detailed structures 8)  So I'd like to submit the following
proposal for discussion.  It's based on our discussion so far and the
ideas I think we've reached some resolution on.

I'll look forward to your replies...

roBman

PS: I'd be particularly interested to hear ideas from the linked data
and SSN groups on which parts of their existing work could improve this
model and how they think it could be integrated.



What is this POI proposal?
A simple extension to the "request-response" nature of the HTTP protocol
to define a distributed Open AR (Augmented Reality) system.
This sensory-based pattern recognition system is simply a structured
"request-response-link-request-response" chain.  In this chain, the link
is a specific form of transformation.

It aims to extend the existing web to be sensor-aware and automatically
event-driven, while encouraging the presentation layer to adapt to
support dynamic spatialised information more fluidly.

One of the great achievements of the web has been the separation of data
and presentation. The proposed Open AR structure extends this to
separate out: sensory data, triggers, response data and presentation.

NOTE1: There are a wide range of serialisation options that could be
supported and many namespaces and data structures/ontologies that can be
incorporated (e.g. Dublin Core, geo, etc.).  The focus of this proposal
is purely at a systemic "value chain" level.  It is assumed that the
definition of serialisation formats, namespace support and common data
structures would make up the bulk of the work that the working group
will collaboratively define.  The goal here is to define a structure
that enables this to be easily extended in defined and modular ways.

NOTE2: The example JSON-like data structures outlined below are purely
to convey the proposed concepts.  They are not intended to be realised
in this format at all and there is no attachment at this stage to JSON,
XML or any other representational format.  They are purely conceptual.

This proposal is based upon the following structural evolution of
devices and client application models:

  PC Web Browser (Firefox, MSIE, etc.):
    mouse      -> sensors -> dom      -> data
    keyboard   ->                     -> presentation

  Mobile Web Browser (iPhone, Android, etc.):
    gestures   -> sensors -> dom      -> data
    keyboard   ->                     -> presentation

  Mobile AR Browser (Layar, Wikitude, Junaio, etc.):
    gestures   -> sensors -> custom app            -> presentation [*custom]
    keyboard   ->                                  -> data [*custom]
    camera     ->
    gps        ->
    compass    ->

  Open AR Browser (client):
    mouse      -> sensors -> triggers ->  dom      -> presentation
    keyboard   ->                                  -> data
    camera     ->
    gps        ->
    compass    ->
    accelerom. ->
    rfid       ->
    ir         ->
    proximity  ->
    motion     ->

NOTE3: The key next step from Mobile AR to Open AR is the addition of
many more sensor types, the migration of presentation and data to open,
web-based standards, and the addition of triggers.  Triggers are explicit
links from a pattern to 0 or more actions (web requests).
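
As a rough, non-normative illustration of that link, a trigger could be
typed something like this (TypeScript is used here purely for sketching;
the field names are my own assumptions, not part of the proposal):

// Sketch only: a trigger links one pattern to 0 or more actions (web requests).
// Field names are illustrative assumptions, not defined by this proposal.
interface Action {
  url: string;
  method: "GET" | "POST";
  data?: Record<string, unknown>;
}

interface Trigger {
  pattern: Record<string, unknown>;  // e.g. a GPS area or an image feature description
  actions: Action[];                 // 0 or more web requests to fire on a match
}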

Here is a brief description of each of the elements in this high level
value chain.

clients:
- handle events and request sensory data, then filter and link it to 0 or
more actions (web requests); a rough sketch of this loop follows below
- clients can cache trigger definitions locally or request them from one
or more services that match one or more specific patterns.
- clients can also cache response data and presentation states.
- since sensory data, triggers and response data are simply HTTP
responses, all of the normal cache control structures are already in
place.
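
Here's a minimal sketch of that client loop in TypeScript, reusing the
Trigger/Action shapes sketched above.  The function names, and the idea
that the client is handed a URL for its sensory data, are assumptions for
illustration only:

// Minimal, illustrative client sketch: on an event, request sensory data over
// plain HTTP, filter it against cached triggers, and fire the matching actions.
async function handleEvent(sensorUrl: string, triggers: Trigger[]): Promise<void> {
  const sensorData = await (await fetch(sensorUrl)).json();   // sensory data response

  for (const trigger of triggers) {
    if (!matchesPattern(sensorData, trigger.pattern)) continue;

    for (const action of trigger.actions) {                   // 0 or more actions per trigger
      const response = await fetch(action.url, {
        method: action.method,
        body: action.data ? JSON.stringify(action.data) : undefined,
      });
      present(await response.json());                         // hand response data to presentation
    }
  }
}

// Placeholders for client-specific behaviour (pattern filtering, rendering).
declare function matchesPattern(sensorData: unknown, pattern: Record<string, unknown>): boolean;
declare function present(responseData: unknown): void;

Caching of trigger definitions, response data and presentation state would
sit around this loop using the normal HTTP cache controls mentioned above.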

infrastructure (The Internet Of Things):
- networked and directly connected sensors and devices that support the
Patterns Of Interest specification/standard


patterns of interest:
The standard HTTP request response processing chain can be seen as:

  event -> request -> response -> presentation

The POI (Pattern Of Interest) value chain is slightly extended.
The most common Mobile AR implementation of this is currently:

  AR App event -> GPS reading -> get nearby info request -> Points Of Interest response -> AR presentation

A more detailed view clearly splits events into two to create possible
feedback loops. It also splits the request into sensor data and trigger:

                +- event -+               +-------+-- event --+
  sensor data --+-> trigger -> response data -> presentation -+

- this allows events that happen at both the sensory and presentation
ends of the chain.
- triggers are bundles that link a pattern to one or more actions (web
requests).
- events at the sensor end request sensory data and filter it to find
patterns that trigger or link to actions.
- these triggers or links can also fire other events that load more
sensory data that is filtered and linked to actions, etc.
- actions return data that can then be presented.  As per standard web
interactions, supported formats can be defined by the requesting client.
- events on the presentation side can interact with the data or the
presentation itself.
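
To make the common Mobile AR case above concrete, one pass through the
chain might look like the following sketch (the endpoint, parameters and
function names are invented for illustration and are not part of any spec):

// Illustrative only: GPS reading -> "nearby" request -> Points Of Interest -> presentation.
async function onLocationEvent(latitude: number, longitude: number): Promise<void> {
  const response = await fetch(
    `https://example.com/nearby?lat=${latitude}&lon=${longitude}&radius=250`
  );
  const pointsOfInterest = await response.json();  // response data
  render(pointsOfInterest);                        // AR presentation layer

  // Presentation-side events (e.g. the user selecting a POI) can feed back into
  // the chain by requesting more sensory data or firing further actions.
}
declare function render(data: unknown): void;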

sensory data:
Simple (xml/json/key-value) representations of sensors and their values
at a point in time.  These are available via URLs/HTTP requests;
e.g. sensors can update these files on change, at regular intervals, or
serve them dynamically.
{
  HEAD : {
    date_recorded : "Sat Aug 21 00:10:39 EST 2010",
    source_url : "url"
  },
  BODY : {
    gps : {  // based on standard geo data structures
      latitude : "n.n",
      longitude : "n.n",
      altitude : "n"
    },
    compass : {
      orientation : "n"
    },
    camera : {
      image : "url",
      stream : "url"
    }
  }
}
NOTE: All sensor values could be presented inline or externally via a
source URL which could then also reference streams.
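
Purely as a non-normative sketch, a client might type such a sensory data
document along these lines (the field names follow the conceptual example
above; nothing here is intended to fix a serialisation):

// Sketch of the sensory data document above as TypeScript types (non-normative).
interface SensoryData {
  HEAD: {
    date_recorded: string;
    source_url?: string;        // values may be inline or referenced externally
  };
  BODY: {
    gps?: { latitude: string; longitude: string; altitude: string };
    compass?: { orientation: string };
    camera?: { image?: string; stream?: string };   // URLs, possibly to streams
  };
}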

trigger:
A structured (xml/json/key-value) filter that defines a pattern and links
it to 0 or more actions (web requests).
{
  HEAD : {
    date_created : "Sat Aug 21 00:10:39 EST 2010",
    author : "roBman@mob-labs.com",
    last_modified : "Sat Aug 21 00:10:39 EST 2010"
  },
  BODY : {
    pattern : {
      gps : [
        {
          name : "iphone",
          id : "01",
          latitude : {
            value : "n.n"
          },
          longitude : {
            value : "n.n"
          },
          altitude : {
            value : "n.n"
          }
        },
        // NOTE: GPS value patterns could have their own ranges defined
        //       but usually the client will just set its own at the filter level
        // range : "n",
        // range_format : "metres"
        // This is an area where different client applications can add their unique value
      ],
      cameras : [
        {
          name : "home",
          id : "03",
          type : "opencv_haar_cascade",
          pattern : {
            ...
          }
        }
      ]
    },
    actions : [
      {
        url : "url",
        data : {..},  // Support for referring to sensor values $sensors.gps.latitude & $sensors.compass.orientation
        method : "POST"
      }
    ]
  }
}
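
As a sketch of how a client might substitute those sensor references into
the action data before firing the request (the "$sensors.<sensor>.<field>"
convention is assumed here purely for illustration; nothing of the sort is
specified yet):

// Illustrative sketch: resolve "$sensors.gps.latitude"-style references in the
// action data against the current sensory data document before the request fires.
// The "$sensors." convention is an assumption for this example, not a spec.
function resolveSensorRefs(
  data: Record<string, unknown>,
  sensorBody: Record<string, Record<string, unknown>>
): Record<string, unknown> {
  const resolved: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(data)) {
    if (typeof value === "string" && value.startsWith("$sensors.")) {
      const [, sensor, field] = value.split(".");
      resolved[key] = sensorBody[sensor]?.[field];
    } else {
      resolved[key] = value;
    }
  }
  return resolved;
}

For example, resolveSensorRefs({ lat: "$sensors.gps.latitude" },
sensoryData.BODY) would fill in the current latitude before the POST.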

data:
HTTP responses

presentation:
client-rendered HTML/CSS/JS/rich media (e.g. images, 3D, video, audio,
etc.)



At least the following roles are supported as extensions of today's
common "web value chain" roles.

publishers:
- define triggers that map specific sensor data patterns to useful
actions (web requests)
- manage the ACL to drive traffic in exchange for value creation
- customise the client apps and content to create compelling experiences

developers:
- create sensor bundles people can buy and install in their own
environment
- create server applications that allow publishers to register and
manage triggers
- enable the publishers to make their triggers available to an open or
defined set of clients
- create the web applications that receive the final actions (web
requests)
- create the client applications that handle events and map sensor data
to requests through triggers (Open AR browsers)
        
        

Received on Tuesday, 24 August 2010 02:59:59 UTC