- From: Nigel Megitt via GitHub <sysbot+gh@w3.org>
- Date: Mon, 18 Feb 2019 15:53:11 +0000
- To: public-web-and-tv@w3.org
nigelmegitt has just created a new issue for https://github.com/w3c/me-media-timed-events:

== Add more drivers for better synchronisation ==

§3.4 currently mentions only video shot synchronisation as a driver for better synchronisation. There is another big one: reducing the chance of wide variance in the required reading rate (words per minute) of the authored subtitles.

For a slightly edge-case example, to demonstrate the point, say subtitles for a fast-dialogue programme arrive at 240 words per minute, i.e. one word every quarter of a second on average. At this cadence, being 250ms late is 100% out of register (a small sketch of this arithmetic follows at the end of this message). Yet guidelines such as the BBC's on [matching subtitle to speech onset](http://bbc.github.io/subtitle-guidelines/#Match-subtitle-to-speech-onset) note:

> When two or more people are speaking, it is particularly important to keep in sync. Subtitles for new speakers must, as far as possible, come up as the new speaker starts to speak. Whether this is possible will depend on the action on screen and rate of speech.

With the synchronisation capability as currently specified, whether this is possible also depends on the user agent implementation, which is something in our gift to influence.

Please view or discuss this issue at https://github.com/w3c/me-media-timed-events/issues/36 using your GitHub account
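As a rough illustration of the reading-rate arithmetic in the issue above, here is a minimal sketch; the function names and the choice of TypeScript are assumptions made for illustration, not part of the original issue or the specification under discussion.

```typescript
// Sketch (illustrative, not from the issue): express a subtitle
// presentation delay as a fraction of the average word cadence
// implied by a given reading rate.

/** Average interval between words, in milliseconds, at a given reading rate. */
function wordCadenceMs(wordsPerMinute: number): number {
  return 60_000 / wordsPerMinute;
}

/** Delay as a fraction of the word cadence (1.0 = one whole word late). */
function delayAsFractionOfCadence(delayMs: number, wordsPerMinute: number): number {
  return delayMs / wordCadenceMs(wordsPerMinute);
}

// The issue's example: 240 wpm means one word every 250 ms on average,
// so a 250 ms delay is 100% of the cadence ("100% out of register").
console.log(wordCadenceMs(240));                 // 250
console.log(delayAsFractionOfCadence(250, 240)); // 1
```

At lower reading rates the same absolute delay is proportionally smaller (e.g. 250 ms at 120 wpm is 50% of the cadence), which is why a fixed synchronisation tolerance affects fast-dialogue content disproportionately.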
Received on Monday, 18 February 2019 15:53:12 UTC