Re: POI based Open AR proposal

On Sep 4, 2010, at 4:46 AM, Christine Perey wrote:

> 
> On 9/3/2010 4:51 PM, Jens de Smit wrote:
> <snip>
>> I think for the foreseeable future this issue will extend to any sensor
>> input we collect: it is known that almost any measuring device has a
>> certain inaccuracy, and this accuracy can vary with environmental
>> conditions such as light, temperature, humidity, amount of GPS
>> satellites etc. It may be prudent for content providers and/or AR
>> clients to adjust the experience they offer based on the level of
>> accuracy that has been registered.
>> 
>> My question is: do we need to take this issue, that I believe is
>> inherent in any digital-reality mixing application and therefore very
>> relevant to AR, into account when doing our work?
> 
> Hi Jens,
> 
> This is not an answer to your very, very excellent question but a tangent which might shed some light on a future feature.
> 
> I think, with some sensors in some situations, there is an option of having the user interact with the client (user agent) to assist in improving accuracy. This is a feature in the application "Swiss Peaks" which has been released by a group in Information Management at the ETH Zurich on Android and iPhone. http://peaks-app.ch/
> 
> They also have a Layar layer.
> 
> I'm aware of the fact that there are many of these Peak finder apps out now but the point I want to make is simply that there is an option in the application where the user can touch and drag the summit right or left to help with the geo-positioning when the GPS is "off".
> 
> Is this common place?
I would argue that AR standards must include some means for authors to understand and respond to changing accuracy information.
Manually calibrating the content against the visual scene could certainly crowd-source corrections to the GPS readings at that location (like the calibration maps that magnetic tracker systems use) or corrections to the location/position of the POI/content itself. Two crowd-sourced corrections to an object location, taken from two different angles, are all that is required to lock down its depth.
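To make the two-angle point concrete, here is a rough sketch (the types and field names are mine, purely illustrative) of intersecting two user-contributed bearing rays in a local east/north frame:

  // Illustrative only: estimate a POI position from two crowd-sourced
  // bearing observations by intersecting the two sight lines.
  interface BearingObservation {
    east: number;        // observer position, metres east of a local origin
    north: number;       // observer position, metres north of a local origin
    bearingDeg: number;  // user-corrected bearing to the POI, degrees clockwise from north
  }

  function triangulate(a: BearingObservation, b: BearingObservation) {
    const rad = Math.PI / 180;
    const dirA = { e: Math.sin(a.bearingDeg * rad), n: Math.cos(a.bearingDeg * rad) };
    const dirB = { e: Math.sin(b.bearingDeg * rad), n: Math.cos(b.bearingDeg * rad) };
    const denom = dirA.e * dirB.n - dirA.n * dirB.e;   // 2-D cross product
    if (Math.abs(denom) < 1e-9) return null;           // sight lines nearly parallel
    const t = ((b.east - a.east) * dirB.n - (b.north - a.north) * dirB.e) / denom;
    return { east: a.east + t * dirA.e, north: a.north + t * dirA.n };
  }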
I wouldn't advocate trying to include it in any early specification for open AR because the interplay between uncertain device accuracy and uncertain object location is complex.
There is no notion of inaccuracy in web presentation standards today (unless you consider width:"80%"), but I think it should be a part of an Open AR Standard.
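As a purely hypothetical sketch of what that could look like (none of these field names exist in any current specification), a POI placement might carry its own uncertainty and provenance alongside the coordinates:

  // Hypothetical accuracy-aware POI metadata; illustrative, not a real format.
  interface PoiPlacement {
    lat: number;
    lon: number;
    altitude?: number;              // metres above the ellipsoid, if known
    horizontalAccuracyM?: number;   // estimated horizontal error of this placement, metres
    source: "surveyed" | "gps" | "crowd-corrected";  // how the position was obtained
  }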
Our KHARMA framework [1] uses manual calibration: we publish a list of known surveyed points around the user.
The user can move to one of these points (found via picture and description) and then override the GPS.
The same technique can obviously be applied indoors.
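In rough pseudocode (the field names here are guesses, not the actual KHARMA schema), the override amounts to swapping the live GPS fix for the surveyed point once the user confirms they are standing on it:

  // Illustrative GeoSpot record and GPS override; not the real KHARMA data model.
  interface GeoSpot {
    id: string;
    lat: number;
    lon: number;
    photoUrl: string;       // picture that helps the user find the exact spot
    description: string;
  }

  interface Fix {
    lat: number;
    lon: number;
    accuracyM: number;      // reported horizontal accuracy, metres
    source: "gps" | "geospot";
  }

  function overrideWithGeoSpot(spot: GeoSpot): Fix {
    // A surveyed point is treated as far more accurate than a raw GPS fix;
    // the 0.5 m figure is only a placeholder for "survey-grade".
    return { lat: spot.lat, lon: spot.lon, accuracyM: 0.5, source: "geospot" };
  }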
Once GPS is locked down, orientation errors remain; we compensate for these by replacing the live video at that location with a static panorama. [2]
While this approach is crude, it does give the user a manual override and, most importantly, it allows us to feed changing accuracy information back to the author.
When only using native GPS, the author can show floating labels (perhaps with visual indications of accuracy).
When GPS has been locked down by the user, the author can be more aggressive about showing content nearby the user.
And when the orientation has been locked down, the author may decide that content aligned with buildings is appropriate (occlusions with buildings are likely to look terrible until we have accurate registration). [3]
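Sketched as author-facing logic (the state names are illustrative, not from the KHARMA API), that progression might look like:

  // Illustrative tiers of tracking confidence and the presentation an author
  // might choose for each; not an existing API.
  type TrackingState = "gps-only" | "position-locked" | "orientation-locked";

  function selectPresentation(state: TrackingState): string {
    switch (state) {
      case "gps-only":
        return "floating-labels";    // loose labels, perhaps with an accuracy indicator
      case "position-locked":
        return "nearby-content";     // content placed close to the user
      case "orientation-locked":
        return "building-aligned";   // geometry registered to real facades
    }
  }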
One feature we like is that authors can view the database of these surveyed GeoSpots offline and tailor their authoring to known locations/conditions.
The same applies for the panoramic backdrops.
> 
> If there are registration problems with other sensors, can there be a user option to make a manual adjustment? The really, really compelling feature would be for the client application to then bring some information back to the AR metadata so that, in effect, crowds help the information to become more accurate over time (this is not the case with the GPS and peaks example, since, presumably, the peaks don't move a great deal, but could be valuable for some interior navigation or other AR visual overlay applications).
> 
> -- 
> Christine
> 
> Spime Wrangler
> 
> cperey@perey.com
> mobile +41 79 436 68 69
> VoIP (from US) +1 (617) 848-8159
> Skype (from anywhere) Christine_Perey
[1] https://research.cc.gatech.edu/kharma/
[2] https://research.cc.gatech.edu/kharma/content/centenial-park-granite
[3] https://research.cc.gatech.edu/kharma/content/infrastructure-service

Alex Hill Ph.D.
Postdoctoral Fellow
Augmented Environments Laboratory
Georgia Institute of Technology
http://www.augmentedenvironments.org/lab

Received on Monday, 6 September 2010 14:14:19 UTC