- From: John Glauert <J.Glauert@sys.uea.ac.uk>
- Date: Mon, 17 Feb 2003 16:37:25 +0000
- To: Johnb@screen.subtitling.com
- Cc: singer@apple.com, public-tt@w3.org
A lurker writes:

At 15:50 +0000 2003-02-17, Johnb@screen.subtitling.com wrote:
>> In both cases, I believe that the synchronization 'layup' can and
>> should be represented at the level that provides that (e.g. SMIL).
>
> Wholeheartedly agree: BUT can SMIL represent such an externally
> synchronised timeline (scenario b) in an appropriate manner?
>
> 1 hour of broadcast = 1000 subtitles (ROT). Each subtitle is approximately
> 10 words (culturally dependent!). For snake or add-on subtitles each word
> is timed individually, so 20,000 timings (in cue and out cue).

A new par-like construct could be designed in which the second stream (TT)
is locked to the first (the video). The video can be stopped, started,
rewound, played at variable speed, have advertisements inserted, and so on,
and the corresponding parts of the TT stream are displayed in sync (cued in
and cued out as required).

This could probably be seen as syntactic sugar for "pure" SMIL, but it
would be much more compact, since all the triggering is implicit.

> John Birch

--
John Glauert            http://www.cmp.uea.ac.uk/People/jrwg
eSIGN Project at UEA    http://www.visicast.sys.uea.ac.uk
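For concreteness, a minimal sketch of what such a locked par-like construct
might look like. The lockedPar and tt:cue names, their attributes and the
timings are invented for illustration only; they are not existing SMIL 2.0
or Timed Text syntax.

  <!-- Hypothetical construct: the video's own media clock drives all of
       the contained text cues. -->
  <lockedPar>
    <video src="broadcast.mpg"/>
    <!-- Each cue is expressed against the video's timeline, so pausing,
         rewinding, variable-speed play or ad insertion implicitly carries
         the cues along with the video. -->
    <tt:cue begin="0:00:12.04" end="0:00:14.88">First subtitle text</tt:cue>
    <tt:cue begin="0:00:15.20" end="0:00:17.96">Second subtitle text</tt:cue>
    <!-- ... further cues, roughly 1000 per broadcast hour ... -->
  </lockedPar>

The compactness comes from the cue-to-video binding being implicit in the
construct, rather than spelled out as explicit per-cue (or per-word, for
snake and add-on subtitles) begin/end triggering in "pure" SMIL.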
Received on Monday, 17 February 2003 11:36:59 UTC