- From: J. Andrew Rogers <andrew@jarbox.org>
- Date: Wed, 23 May 2012 12:18:37 -0700
- To: Martin Lechner <martin.lechner@wikitude.com>
- Cc: discussion@arstandards.org, public-ar@w3.org
On May 23, 2012, at 7:14 AM, Martin Lechner wrote:

> Hi Andrew!
>
> Agreed, and it is on our radar. We haven't had a deep discussion on streaming capabilities in the group yet, though.
> Depends what someone would expect from a streaming mechanism. It's obvious that streaming of 3D models should be available, as well as fetching AR data on demand.
>
> A simple use case might be: when clicking on one of the virtual objects, a new scene is constructed, with a higher LOD for that particular region.
> As we allow scripting components in ARML, streaming could be handled with AJAX calls (which would come pretty much out of the box when a fully fledged JavaScript engine is used in the implementation). However, streaming might also make sense in the descriptive part of the language. KML allows for Regions and NetworkLinks, which enable streaming when used together, but I agree that they are sometimes troublesome to use.

Yeah, I am familiar with these client-pull models, but what I need is more along the lines of efficient server-push. Reality is more like a movie than a document. :-)

When you mention a JavaScript engine, are you referring to client-controlled JavaScript engines embedded on the server side?

> Do you have any additional input from your side on what you would expect from a streaming mechanism in ARML? We will take this as an input to the group and discuss.

Sure. I do not have any specific recommendations, but I can give concrete examples of how high-end systems are actually architected and how that impacts interfaces.

We have a "mirror world" (MW) that is distributed over a large cluster of servers. This is a real-time model of reality, involving data fusion of diverse sensing, Internet, machine-generated, and other data sources in a seamless geodetic space-time context. To keep this MW up to date, we are ingesting and indexing millions of complex geometries every *second*. This will likely be scaled up by a few more orders of magnitude in the future. This creates challenges for user interfaces, and AR is an obvious and natural UI for some apps: the MW can drive AR with rich, very live updates from analysis of what is in your view.

The efficient way to interact with the MW is to create a constraint that lives in the MW, defining where you are and what you can see, with the client steering the constraint. That constraint captures and manages all of the updates that apply to a client's view, and those updates are pushed to the client. Polling a constraint (e.g. a conventional query) in the MW to maintain a client view is quite inefficient for both client and server, and it also limits the capabilities of the client relative to what can be done by streaming from a client-controlled constraint in the MW. The server side can potentially do a ton of intelligent heavy lifting as the MW is updated.

Obviously it is possible to support simple polling AR by building a proxy server that manages the streaming feed from the constraint. That is far from ideal and is a lossy layer of indirection; it would be better for a client to work with the stream directly and push as much of the work as possible into the server side. I am not sure how we would fit this into an existing standard, but I suspect that over the long term AR systems will look more like this architecture.
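To make the contrast with the AJAX/polling approach concrete, here is a rough sketch of what a client-controlled constraint might look like from the client side. The message shapes, field names, and the WebSocket transport are all made up for illustration; they are not part of ARML or of the system described above.

```typescript
// Sketch only: types, message formats, and the WebSocket transport are
// assumptions for illustration, not ARML and not the MW protocol above.

// The client registers a view "constraint" (position + frustum) once and
// then steers it with small updates; the server pushes every change that
// intersects the constraint, so there is no client polling loop.

interface ViewConstraint {
  id: string;
  position: { lat: number; lon: number; alt: number };
  headingDeg: number;
  fovDeg: number;
  rangeMeters: number;
}

interface SceneUpdate {
  constraintId: string;
  added: unknown[];    // new geometries entering the view
  changed: unknown[];  // geometries updated in place
  removed: string[];   // ids of geometries leaving the view
}

class ConstraintStream {
  private ws: WebSocket;

  constructor(serverUrl: string, private constraint: ViewConstraint,
              private onUpdate: (u: SceneUpdate) => void) {
    this.ws = new WebSocket(serverUrl);
    this.ws.onopen = () => {
      // Register the constraint; it now "lives" on the server side.
      this.ws.send(JSON.stringify({ type: "register", constraint }));
    };
    this.ws.onmessage = (ev) => {
      // The server decides when to push; the client only reacts.
      this.onUpdate(JSON.parse(ev.data) as SceneUpdate);
    };
  }

  // Steer the constraint as the user moves; the server re-evaluates what
  // is visible and keeps pushing deltas for the new view.
  move(position: ViewConstraint["position"], headingDeg: number) {
    this.constraint = { ...this.constraint, position, headingDeg };
    this.ws.send(JSON.stringify({ type: "steer", constraint: this.constraint }));
  }
}
```

The point is simply that the client sends the constraint once and small steering updates afterwards, while the server owns the decision of when there is something new worth pushing.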
Tangentially, and I am probably not the first person to make this observation, XML is a very expensive representation for real-time or streaming data models of this nature. Pretty un-"green" in terms of how it impacts computational costs. The above system had a standards-based XML protocol option in the original design, but it damaged the overall performance and efficiency numbers so badly relative to other representations that we eventually dropped it from the core altogether.
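As a purely illustrative sketch of why (the record layout and the numbers below are invented, not taken from any actual protocol), compare one position update encoded as XML text with the same fields packed as a fixed-width binary record:

```typescript
// Sketch only: a made-up "position update" record, to illustrate why a
// verbose text encoding hurts when you ship millions of updates per second.

// XML-ish text encoding of one update:
const xmlUpdate =
  '<update id="4221097" t="1337775517123">' +
  '<pos lat="47.8095" lon="13.0550" alt="424.0"/>' +
  '</update>';

// The same fields packed as fixed-width binary (id, timestamp, lat, lon, alt):
const buf = new ArrayBuffer(8 + 8 + 3 * 8); // 40 bytes total
const view = new DataView(buf);
view.setBigUint64(0, 4221097n);
view.setBigUint64(8, 1337775517123n);
view.setFloat64(16, 47.8095);
view.setFloat64(24, 13.0550);
view.setFloat64(32, 424.0);

console.log(new TextEncoder().encode(xmlUpdate).length, "bytes as XML");
console.log(buf.byteLength, "bytes packed");
// Beyond the size difference, the packed form needs no tokenizing, tag
// matching, or string-to-number parsing on either end of the stream.
```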
Cheers,

--
J. Andrew Rogers
Twitter: @jandrewrogers

Received on Thursday, 24 May 2012 13:23:14 UTC