[whatwg] Dealing with Stereoscopic displays

Hi again,

Thanks for the replies and informed feedback. I wasn't fully aware of
the significance of the <replicate> proposal, and I like what it
potentially offers for my use case, but the hardware control problem
remains. Essentially the use case is this:

USE CASE: A web developer wants to utilise the next generation of
stereoscopic displays (for argument's sake, we assume these will become
ubiquitous as quickly as LCD flat-screens did) for UIs which create an
impression of depth ("coverflows", "time-machines", head-up displays,
etc.)

SCENARIOS:

    * A user visits the National Museum site and wants to see a
time-machine view of objects in the collection with a sense of 3D
depth based on their age
    * Her PC is connected to a stereoscopic screen but the web
application can't know the details of the implementation: anaglyph
glasses, polarising glasses, a lenticular overlay, etc.
    * The web page has a <device> selector with type = stereo_display
(?) which detects and gives access to the stereo functions of the
display, i.e. it turns on whatever feature gives stereopsis (a rough
sketch follows after this list)
    * The UA has awareness of a left and a right render path for two
windows / documents but "knows" that these are stereoscopically linked
(is this sensible?)
    * The web application now has two render targets
    * The web application now generates slightly different left-eye
and right-eye views
    * The UA renders each document to the correct window
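
To make this flow concrete, here is a rough sketch of what the markup
and script might look like. To be clear, none of this exists anywhere:
the type value, the event handler and the leftTarget / rightTarget
properties are invented here purely to illustrate the idea.

    <!-- hypothetical markup: ask the user to grant access to a stereo display -->
    <device type="stereo_display" onchange="stereoReady(event)">

    <canvas id="left" width="800" height="600"></canvas>
    <canvas id="right" width="800" height="600"></canvas>

    <script>
      function stereoReady(event) {
        var display = event.target.device;  // hypothetical handle to the display
        if (!display) return;               // no stereo hardware was granted
        display.enable();                   // turn on whatever feature gives stereopsis
        // tell the UA that these two render targets are stereoscopically linked
        display.leftTarget  = document.getElementById('left');
        display.rightTarget = document.getElementById('right');
      }
    </script>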

REQUIREMENTS:

    * Stereo displays should be discoverable (through <device> ?)
    * Stereo displays should be controllable by the UA (again through
<device> ?).
    * Scripts should have access to both render targets (see the
second sketch below)
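
The last requirement needs no new API at all: generating the two views
is ordinary 2D drawing with a per-eye horizontal offset. A minimal
sketch, assuming the two hypothetical canvases from the sketch above:

    // Draw the same scene twice with a horizontal disparity per item.
    // Positive depth pushes an item behind the screen plane, negative
    // depth pulls it in front; depth 0 sits exactly on the screen.
    function drawStereoViews(items, maxDisparity) {
      var left  = document.getElementById('left').getContext('2d');
      var right = document.getElementById('right').getContext('2d');
      for (var i = 0; i < items.length; i++) {
        var item = items[i];
        var disparity = maxDisparity * item.depth;  // depth in the range -1..1
        left.drawImage(item.image,  item.x - disparity / 2, item.y);
        right.drawImage(item.image, item.x + disparity / 2, item.y);
      }
    }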

Any suggestions / criticisms?

Regards,

Eoin.


On Tue, Apr 27, 2010 at 3:15 AM, ddailey <ddailey at zoominternet.net> wrote:
> No it isn't simple. Allied issues have been discussed here before.
>
> As the nature of input devices become richer (e.g. eye movement glasses that
> give binocular disparity data to the display device) then the nature of the
> convergence data that defines the scene becomes more relevant to its primary
> "semantics". As SVG and 3D technologies begin to bridge the gap between 2
> and 3D (cf. the <replicate> proposal [1] or [2] ) the distinction between
> styling and markup so tenaciously held in HTML may cease to be so clearcut.
>
> cheers
> David
>
>
> [1]
> http://old.nabble.com/A-proposal-for-declaritive-drawing-(%3Creplicate%3E)-to-be-added-into--SVG-td28155426.html
> [2] http://srufaculty.sru.edu/david.dailey/svg/SVGOpen2010/replicate.htm
>
>
> ----- Original Message ----- From: "David Singer" <singer at apple.com>
> To: <whatwg at lists.whatwg.org>
> Sent: Monday, April 26, 2010 8:02 PM
> Subject: Re: [whatwg] Dealing with Stereoscopic displays
>
>
>
> I agree that this probably means that web elements that are 'flat' would be
> styled by CSS with a depth. This is important if other material presented
> to the user really is stereo (e.g. a left/right eye coded movie). The movie
> will be set so that the eyes are expected to have a certain 'convergence'
> (i.e. they are looking slightly inward towards some point) and it's
> important that if material is overlaid on that, it has the same convergence.
> Obviously, this is unlike the real world where focus distance and
> convergence distance are the same (focus distance is fixed at the screen
> distance), but the brain can get very confused if two things that are both
> in focus are at different convergence distances.
>
> This is not a simple question, as I expect you are beginning to realize.
>
> David Singer
> Multimedia and Software Standards, Apple Inc.
>
>
>
>



-- 
Eoin Kilfeather
Digital Media Centre
Dublin Institute of Technology
Aungier Street
Dublin 2
m. +353 87 2235928
skype:ekilfeather
