Re: The WG's Three Letters

On Fri, 2010-08-06 at 19:34 +0200, Thomas Wrobel wrote:
> [Real object]  << [Image look up] >> [Virtual Data]   would be the
> systemic link needed. And then you have an (optional) output of a AR
> object being aligned to the images position.
> 
> Its the link between the real and virtual that seems the "core" of
> whats being done here, rather then a just link to space/time points.

I really strongly agree with this point, Thomas...and I think that's
what's been causing me the most discombobulation 8)

And after much thought I have a cunning plan I'd like to propose.

First let me state my underlying assumptions.

        1. "Point of Interest" is too limited.  It implies just one type
        of geometry - a point.
        
        2. The term POI has stuck in people's minds and is now the
        default entry point for a lot of AR.
        
        3. AR is a REALLY broad domain that encompasses all sorts of
        things - including many we probably won't get agreement on or
        just plain haven't thought of yet.

In summary, AR is too broad and Point of Interest is too limited.

So...here's my proposal...

=======================================================================
I believe we should set up a POI WG where the term POI stands for
"Patterns Of Interest".
=======================================================================

The scope of this group would then be to define standards/specifications
for linking sensor/y data patterns to content.

The starting point for this group would be the example pattern of
GPS-derived lat, lon & optional orientation/altitude/time.  That is what
we all currently seem to accept as the heart of a Point of Interest.
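
Just to make that starting point concrete, here's a rough, purely
illustrative sketch of what such a pattern-to-content record might look
like.  Every name and field below is my own placeholder for discussion,
not a proposed syntax:

        // Hypothetical sketch only - all names here are placeholders.
        // A GPS-derived pattern: lat/lon plus the optional extras.
        interface GeoPattern {
          kind: "gps";            // the type of sensor pattern
          lat: number;            // decimal degrees
          lon: number;            // decimal degrees
          altitude?: number;      // metres, optional
          orientation?: number;   // degrees from north, optional
          time?: string;          // ISO 8601 timestamp, optional
        }

        // The link from a pattern to the content it should trigger.
        interface PatternOfInterest {
          pattern: GeoPattern;    // what is sensed/matched
          content: string;        // URI of the content to present
        }

        // Example instance (values are made up):
        const poi: PatternOfInterest = {
          pattern: { kind: "gps", lat: -33.87, lon: 151.21 },
          content: "http://example.org/ar/some-overlay"
        };

The actual serialisation (RDF, JSON, XML, etc.) is exactly the kind of
thing the WG would need to work out.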

This would then clearly position our work as a specific integration task
built upon the work of the Semantic Sensor Networks group [1] and the
Linked Data group [2], etc.

We would not be re-inventing the wheel from these groups'
perspective...just integrating their existing work into the very
specific perspective of AR.
NOTE: I'm not suggesting this will be trivial...just clarifying our
position relative to these other groups and their work.

Our goal would not be to define a broad standard that covered all of AR
(probably impossible).  However, we would be creating an open-ended
specification that would be a key enabler for AR in general.

We would be able to start with concrete, commonly accepted examples and
use cases right now, yet still leave the door open to a much broader and
richer direction.

So the 3 key elements of the standards/specifications we would be
working towards would be [sensor/y data], [content] and the [links]
between them.  The [sensor/y data] would be based upon and informed by
the work of the SSN and Capture API [3] groups.  The [links] would be
based upon the work of the LLD and related groups.  And the [content]
would be the open-ended presentation layer built upon the wealth of
existing web standards and, more specifically, 3D and audio content
standards.
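
Again, purely as a strawman to anchor the discussion (none of these
names are proposals), here's how I picture those 3 elements hanging
together - note that both the GPS case above and Thomas's image-lookup
case fit naturally as kinds of [sensor/y data]:

        // Strawman only - placeholder names, not a proposed vocabulary.

        // [sensor/y data] - the kinds of patterns we might match on
        type SensoryPattern =
          | { kind: "gps"; lat: number; lon: number; radius?: number }
          | { kind: "image"; descriptorUri: string }   // image look up
          | { kind: "audio"; fingerprintUri: string }; // audio pattern

        // [content] - the open-ended presentation layer
        interface Content {
          uri: string;            // web/3D/audio resource
          mimeType?: string;
        }

        // [links] - the relationship binding a pattern to content
        interface Link {
          pattern: SensoryPattern;
          content: Content;
          relation?: string;      // e.g. a linked-data predicate URI
        }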

NOTE: I have some existing analysis of current systems and structural
modes that I'd be happy to contribute to this discussion too if we agree
to head in this direction.  I think some of this clearly highlights how
our data processing/value chain creates a very different perspective
from how these other groups and web technologies in general currently
see the world.


So, to some extent this is a bit of linguistic sleight of hand/tongue.
However, by just changing one word I think it would allow us to embrace
the broad appeal of the term POI while still addressing the even broader
needs of a future-facing, AR-related standard.  It would allow us to be
more focused than the overwhelming term AR suggests, while enabling a
really key pillar that supports it.


I hope I've communicated this idea clearly and I look forward to
hearing everyone's feedback/thoughts.


roBman

[1] http://www.w3.org/2005/Incubator/ssn/charter
[2] http://www.w3.org/2005/Incubator/lld/ 
[3] http://www.w3.org/TR/2010/WD-capture-api-20100401/ 
