- From: Bert Bos <bert@w3.org>
- Date: Thu, 29 Sep 2011 22:52:27 +0200
- To: W3C style mailing list <www-style@w3.org>
- Cc: Robert Brown <Robert.Brown@microsoft.com>
Begin forwarded message:

> From: Robert Brown <Robert.Brown@microsoft.com>
> Date: September 29, 2011 19:15:32 GMT+02:00
> Subject: RE: Reminder: deadline for last call comments on css3-speech
>
> Thanks for the nudge Bert,
>
> A few questions come to mind:
>
> 1. Other than screen reading, what other use cases are there for
>    implementing the speech synthesis component of a webapp's user
>    interface as style attributes?
>
> 2. If screen reading is the key scenario, who is the target user? I
>    can't speak on behalf of the visually impaired, but feedback I've
>    heard in the past is that the ability for the user to explicitly
>    select the TTS voice and playback speed is highly desirable in
>    this scenario.
>
> 3. How is the user envisaged to interact with a webapp that uses this
>    capability? For example, how do they interrupt to select a
>    recently spoken element (e.g. to select an item from a list)? Does
>    the webapp have any shuttle control (pause/resume, skip
>    forward/back, etc.), or is that exclusively provided by the UA?
>
> 4. How is the playback of rendered speech coordinated with the visual
>    display? For example, it's common for words or groups of words to
>    be highlighted as they're spoken (presumably by applying a
>    different style).
>
> 5. I'm curious to know which user agents are actively interested in
>    implementing this.
>
> (I'm involved in the htmlspeech group, but don't speak on their
> behalf)
>
> /Rob
>
> (Robert Brown, Microsoft)

Bert
--
 Bert Bos                                ( W 3 C ) http://www.w3.org/
 http://www.w3.org/people/bos                               W3C/ERCIM
 bert@w3.org                             2004 Rt des Lucioles / BP 93
 +33 (0)4 92 38 76 92            06902 Sophia Antipolis Cedex, France
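For readers unfamiliar with what question 1 refers to, the css3-speech draft expresses speech-synthesis rendering as ordinary CSS properties. The following is a minimal, hypothetical sketch of such a stylesheet (the selectors and values are illustrative assumptions, not taken from the thread; the property names are from the css3-speech Last Call draft):

```css
/* Hypothetical screen-reading styles using css3-speech properties.
   Selectors and values are illustrative only. */
h1 {
  voice-family: female;     /* request a female synthesis voice */
  voice-rate: slow;         /* slow down for headings */
  pause-after: 500ms;       /* brief silence after the heading */
}
li {
  cue-before: url("item-tone.wav"); /* auditory icon before each item */
}
em {
  voice-volume: loud;       /* emphasize by raising volume */
}
```

This is the sense in which speech output becomes a styling concern: the UA's speech renderer consumes these properties much as a visual renderer consumes `font-family` or `margin`.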
Received on Thursday, 29 September 2011 20:53:00 UTC