- From: Marja-Riitta Koivunen <marja@w3.org>
- Date: Thu, 16 Jan 2003 15:57:52 -0500
- To: "Mark D. Urban" <docurban@nc.rr.com>, <public-wai-rd@w3.org>
Here are some accessibility questions related to video conferencing tools that come to my mind:

1) How to help people who cannot see be aware that the camera is on, and how to help them understand what is in the image from the other participants, perhaps with an image analyzer program.

2) How to help people who cannot see check that they are in the camera view as intended, e.g. that their face is shown but not much else. This could be done automatically by the camera, or the user could get help from an image analyzer program in adjusting it the right way (see the sketch below). The user could also specify that the camera may be adjusted by the other participants, but that only the face or the keyboard may be shown (at least with kids, teachers sometimes want to see what they are doing while talking to them).

3) How to help people who cannot hear get a larger view of the mouth area so they can read lips.

4) What kind of support is available for automatically transcribing the spoken language, in addition to other participants perhaps doing it manually in IRC, and what is the best way to combine the different information sources.

I wonder if there is some research/software related to these or other accessibility questions.

Marja
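[A minimal sketch, not part of the original message, of what an "image analyzer program" for question 2 might do: detect whether the user's face is visible and roughly centered in the webcam frame and report the result as text, so a screen reader could speak it. It assumes OpenCV (cv2) and its bundled Haar frontal-face cascade; the function and message names are illustrative only.]

```python
import cv2

def describe_face_position(frame):
    """Return a short text description of where the face is in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "No face detected; the camera may need adjusting."
    # Use the largest detected face and describe its horizontal position.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    frame_h, frame_w = gray.shape
    cx = x + w / 2
    horizontal = ("toward the left" if cx < frame_w / 3 else
                  "toward the right" if cx > 2 * frame_w / 3 else
                  "roughly centered")
    return f"Face detected, {horizontal} in the frame."

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)  # default webcam
    ok, frame = capture.read()
    if ok:
        print(describe_face_position(frame))
    capture.release()
```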
Received on Thursday, 16 January 2003 15:58:10 UTC