- From: Jonathan Chetwynd <j.chetwynd@btinternet.com>
- Date: Wed, 26 Dec 2007 08:51:44 +0000
- To: www-svg List <www-svg@w3.org>
accessibility: vision as an input device

In 2001 a group of students with severe learning disabilities tried out a pre-alpha version of the Sony EyeToy, with dramatic success.

Has the SVG WG or others considered the potential accessibility benefits of enabling video capture as an input device? If so, can anyone contribute pointers or further suggestions to this list for an initial 'proof of concept' test suite:

- cursor control via motion detection? (a minimal sketch follows at the end of this message)
- embedding video of the user in a local SVG context, such as a gaming or other environment?
- embedding video of the user in a social SVG context?

I've read/seen, for instance, that the Wii can be controlled by up to 4 fingers wrapped in silver foil....

'intelligent' developments to Google's hybrid maps: how might the special properties of SVG be integrated?

To what extent are the specific and peculiar properties of human vision being incorporated into the SVG spec as our technical ability to mimic them is achieved? E.g. colour gamut is not part of the SVG 1.1 spec.

Mapping of symbols to regions in real time?

Is it possible, at least, that our failure to engage with SVG authoring tool development is hindering understanding of the process of representing vision?

regards

Jonathan Chetwynd
Accessibility Consultant on Media Literacy and the Internet

Might this, for instance, require the user to tune a 'blue screen' mask? A bug outlining this concept has been filed with Opera, Safari and Mozilla; responses varied.

http://www.peepo.co.uk/peepo2/authoringTool.html
http://www.cs.cmu.edu/~johnny/projects/wii/
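[Editor's sketch] The first proof-of-concept item (cursor control via motion detection) can be illustrated with a minimal, self-contained page: a hidden video and canvas do frame-differencing on the camera image, and the centroid of changed pixels drives an SVG "cursor". This is not the author's implementation; it assumes a modern browser with navigator.mediaDevices.getUserMedia (an API that postdates this 2007 message), and the element ids, thresholds, and coordinate mapping are illustrative choices.

<!-- Cursor control via motion detection: a minimal sketch.
     Assumes getUserMedia support; ids and thresholds are illustrative. -->
<video id="cam" autoplay playsinline style="display:none"></video>
<canvas id="frame" width="160" height="120" style="display:none"></canvas>
<svg id="scene" width="640" height="480" xmlns="http://www.w3.org/2000/svg">
  <rect width="640" height="480" fill="#eee"/>
  <circle id="cursor" cx="320" cy="240" r="12" fill="crimson"/>
</svg>
<script>
const video  = document.getElementById('cam');
const canvas = document.getElementById('frame');
const ctx    = canvas.getContext('2d', { willReadFrequently: true });
const cursor = document.getElementById('cursor');
let previous = null;                       // pixel data from the previous frame

navigator.mediaDevices.getUserMedia({ video: true }).then(stream => {
  video.srcObject = stream;
  requestAnimationFrame(step);
});

function step() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const current = ctx.getImageData(0, 0, canvas.width, canvas.height);
  if (previous) {
    // Centroid of pixels that changed noticeably between frames.
    let sumX = 0, sumY = 0, count = 0;
    for (let i = 0; i < current.data.length; i += 4) {
      const diff = Math.abs(current.data[i] - previous.data[i]);  // red channel only
      if (diff > 40) {                                            // arbitrary threshold
        const p = i / 4;
        sumX += p % canvas.width;
        sumY += Math.floor(p / canvas.width);
        count++;
      }
    }
    if (count > 30) {                      // ignore sensor noise
      // Map the low-resolution grid onto SVG user units; mirror x so that
      // moving right moves the cursor right.
      const x = (1 - sumX / count / canvas.width) * 640;
      const y = (sumY / count / canvas.height) * 480;
      cursor.setAttribute('cx', x.toFixed(1));
      cursor.setAttribute('cy', y.toFixed(1));
    }
  }
  previous = current;
  requestAnimationFrame(step);
}
</script>

The same frame-differencing loop could in principle feed any SVG scene (a game, a map, an authoring tool); only the last two setAttribute calls bind it to a cursor.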
Received on Wednesday, 26 December 2007 08:52:03 UTC