- From: <bugzilla@jessica.w3.org>
- Date: Thu, 06 Jun 2013 07:02:18 +0000
- To: public-html-a11y@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=10941

Silvia Pfeiffer <silviapfeiffer1@gmail.com> changed:

           What      |Removed                       |Added
----------------------------------------------------------------------------
           Component |pre-LC1 HTML5 spec (editor:   |HTML5 spec
                     |Ian Hickson)                  |
           Assignee  |silviapfeiffer1@gmail.com     |dave.null@w3.org

--- Comment #15 from Silvia Pfeiffer <silviapfeiffer1@gmail.com> ---

Update with some new information. There are several approaches possible here (rough JS sketches of 1. and 3. follow at the end of this comment):

1. The Web Speech API makes speech synthesis an integral part of Web browsers [1]. That API (once implemented) can synthesize <track> elements of @kind=descriptions, so JS developers can implement support for description tracks, including pausing the video and restarting it when the voicing is finished. Once we have some experience implementing description support this way, we may include the functionality in browsers and add any additional features it requires, such as events to raise and states to report.

2. As for how to do this with audio-only descriptions and video, I'd like to see some implemented examples in JS first before we even attempt to have the browser auto-pause the video when audio files run past certain times, etc.

3. I can imagine an implementation with a <video> element and a WebVTT track that has speech phrases instead of text in the cues (e.g. in a data URI). That would essentially lead back to the same need as in 1., where pausing and resuming are determined by the duration of a cue and the time it takes to finish playing the audio.

[1] https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html#tts-section
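To make approach 1 concrete, here is a minimal JS sketch of how a page script might voice a descriptions track with the Web Speech API and pause/resume the video around each spoken cue. The element ID and the always-pause behaviour are assumptions for illustration, not anything the spec or this bug prescribes.

```js
// Sketch only: voice a <track kind="descriptions"> with the Web Speech API.
// Assumes <video id="movie"> with a child <track kind="descriptions">.
const video = document.getElementById('movie');
const track = video.textTracks[0];
track.mode = 'hidden';                     // load cues without rendering them

track.addEventListener('cuechange', () => {
  const cue = track.activeCues[0];
  if (!cue) return;

  const utterance = new SpeechSynthesisUtterance(cue.text);

  // Pause so the description is not cut short by ongoing playback.
  video.pause();

  // Resume playback once the synthesized speech has finished.
  utterance.onend = () => video.play();

  speechSynthesis.speak(utterance);
});
```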
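A similar sketch for approach 3, assuming each WebVTT cue's payload is a data URI (or URL) for a pre-recorded description clip; comparing the clip's duration with the cue's duration decides whether the video needs to pause. Again, the names and behaviour are illustrative only.

```js
// Sketch only: cues carry audio data URIs instead of text to be synthesized.
const video = document.getElementById('movie');
const track = video.textTracks[0];
track.mode = 'hidden';

track.addEventListener('cuechange', () => {
  const cue = track.activeCues[0];
  if (!cue) return;

  const clip = new Audio(cue.text);        // cue text assumed to be a data: URI

  clip.addEventListener('loadedmetadata', () => {
    const cueDuration = cue.endTime - cue.startTime;

    // If the recorded description is longer than the cue, pause the video
    // and resume only when the clip has finished playing.
    if (clip.duration > cueDuration) {
      video.pause();
      clip.onended = () => video.play();
    }
    clip.play();
  });
});
```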
Received on Thursday, 6 June 2013 07:02:46 UTC