- From: Glenn Maynard <glenn@zewt.org>
- Date: Thu, 26 Apr 2012 18:43:00 -0500
- To: Ian Hickson <ian@hixie.ch>
- Cc: "public-texttracks@w3.org" <public-texttracks@w3.org>
- Message-ID: <CABirCh_mEEP0eSgsgdpxF7E3f_kkv+b68Ycj0MMCVoDzDA9nyA@mail.gmail.com>
On Thu, Apr 26, 2012 at 5:54 PM, Ian Hickson <ian@hixie.ch> wrote:
> I do not believe that it will. Mere tutorial-level information will be
> sufficient for this kind of thing. That is, whatever mechanism people use
> to learn the language will be fine.

People aren't likely to read tutorials for VTT when they're already comfortable with SRT.

> It's not one in a few thousand.

One in a few thousand is generous; I can't even think of a single example. The specific example above is a single bitmap from a DVD caption track.

(First hit on Google for that cue's text identifies the source correctly. It happened to be what I had playing -- with captions enabled, coincidentally -- when I replied to the earlier e-mail.)

(Bitmap DVD captions, from what I vaguely recall--it's been a decade or so since I wrote a decoder for them--only *support* showing a single bitmap at a time. This has no connection to how things should be authored in VTT.)

> I see no reason to suggest that authors should use two WebVTT cues for
> such a case.

On the contrary, I think such cases are quite common. They're lines spoken by two different people at different times. Why would you put them in the same caption? The only way that could even be done would be with a timestamp tag, which just seems pointless and harder to edit.

-- 
Glenn Maynard
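[Editor's sketch, not part of the original message: the distinction being argued can be illustrated with a hypothetical WebVTT fragment. The speaker names and timings are invented; the `<v>` voice span and `<hh:mm:ss.fff>` timestamp tag are standard WebVTT syntax. Two lines spoken by different people at different times are naturally two cues:]

```
WEBVTT

1
00:00:01.000 --> 00:00:04.000
<v Alice>Where did you put the keys?

2
00:00:04.000 --> 00:00:07.000
<v Bob>On the table, next to the lamp.
```

[Collapsing them into a single caption, as the quoted text contemplates, would require a mid-cue timestamp tag -- the construct the reply calls pointless and harder to edit:]

```
WEBVTT

1
00:00:01.000 --> 00:00:07.000
<v Alice>Where did you put the keys?
<00:00:04.000><v Bob>On the table, next to the lamp.
```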
Received on Thursday, 26 April 2012 23:43:24 UTC