
accessibility: vision as input device

From: ~:'' Thank you very much. <j.chetwynd@btinternet.com>
Date: Sat, 29 Dec 2007 23:38:51 +0000
Message-Id: <DD7B7C2D-5B42-43AB-BF52-27D7BCED4AB8@btinternet.com>
To: public-webapi@w3.org


In 2001 a group of students with severe learning disabilities tried out a pre-alpha version of the Sony EyeToy, with dramatic success.

Has the SVGWG, WebAPIWG or others considered the potential accessibility benefits of enabling video capture as an input device?

If so, can anyone contribute pointers or further suggestions to this list for an initial 'proof of concept' test suite:
- cursor control via motion detection?
- embedding video of the user in a local (SVG) context, such as a gaming or other environment?
- embedding video of the user in a social (SVG) context?
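The first item could be prototyped without any new APIs: sample a video frame to a canvas, difference it against the previous frame, and move the cursor toward the centroid of changed pixels. A minimal sketch of that idea, operating on plain grayscale arrays so the camera and DOM parts are left out; the function names and thresholds here are illustrative, not from any spec:

```javascript
// Motion-detection cursor control sketch (illustrative).
// Frames are grayscale arrays of width*height byte values, e.g. sampled
// from a <video> element via canvas getImageData in a real page.

// Return the centroid of pixels that changed by more than `threshold`
// between two frames, or null if nothing moved.
function motionCentroid(prev, curr, width, height, threshold = 32) {
  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      if (Math.abs(curr[i] - prev[i]) > threshold) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count === 0) return null;
  return { x: sumX / count, y: sumY / count };
}

// Map a centroid in camera coordinates to cursor coordinates.
function toCursor(centroid, width, height, screenW, screenH) {
  return {
    x: (centroid.x / width) * screenW,
    y: (centroid.y / height) * screenH,
  };
}
```

In a real test suite the cursor position would be smoothed over several frames, since frame differencing alone is noisy.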

I've read/seen, for instance, that the Wii can be controlled by up to 4 fingers wrapped in silver foil...

'intelligent' developments to Google's hybrid maps

How might the special properties of SVG be integrated?

To what extent are the specific and peculiar properties of human vision being incorporated into the SVG spec as our technical ability to mimic them is achieved?
e.g. colour gamut is not part of the SVG 1.1 spec.
Mapping of symbols to regions in real time?

Is it possible, at least, that our failure to engage with (SVG) authoring tool development is hindering understanding of the process of representing vision?


Jonathan Chetwynd
Accessibility Consultant on Media Literacy and the Internet

Might this, for instance, require the user to tune a 'blue screen' mask?
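The 'blue screen' idea amounts to simple chroma keying: treat a pixel as background when its blue channel clearly dominates, and let the user adjust the margin until only the backdrop is keyed out. A hypothetical sketch (the `margin` parameter and function names are my own, not from any proposal):

```javascript
// 'Blue screen' mask sketch (illustrative). A pixel is classified as
// background if its blue channel exceeds both red and green by `margin`;
// tuning the mask means adjusting `margin` for the user's backdrop.
function isBackground(r, g, b, margin = 40) {
  return b - Math.max(r, g) > margin;
}

// Build a boolean mask (true = background) over an array of [r, g, b] pixels.
function blueScreenMask(pixels, margin = 40) {
  return pixels.map(([r, g, b]) => isBackground(r, g, b, margin));
}
```

The masked-out region could then be made transparent before compositing the user's video into an SVG scene.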

A bug outlining this concept has been filed with Opera and Safari; responses varied.


Received on Saturday, 29 December 2007 23:39:13 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:16:24 UTC