- From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
- Date: Wed, 14 May 2014 22:43:20 +1000
- To: Aaron Colwell <acolwell@google.com>
- Cc: "Clift, Graham" <Graham.Clift@am.sony.com>, Pat Ladd <Pat_Ladd2@cable.comcast.com>, Bob Lund <b.lund@cablelabs.com>, "public-inbandtracks@w3.org" <public-inbandtracks@w3.org>
On Wed, May 14, 2014 at 9:32 PM, Aaron Colwell <acolwell@google.com> wrote:
> On Wed, May 14, 2014 at 11:20 AM, Silvia Pfeiffer
> <silviapfeiffer1@gmail.com> wrote:
>>
>> On Tue, May 13, 2014 at 7:34 AM, Clift, Graham <Graham.Clift@am.sony.com>
>> wrote:
>> > The problem I identified related to passing a caption service of type
>> > line 21 as an audio or video track
>>
>> If line21 captions are burnt into a video track, then exposing them as
>> a video track makes sense. I don't understand how they would be
>> exposed as an audio track.
>
> Why does it make sense to expose captions data as a video track?

Only where captions are actually burnt into the video.

> ISTM that these should be exposed as TextTracks since this is actually
> text/metadata right?

Yes, where it's text and can be extracted, it should go into a TextTrack.

> I would expect a video with line 21 data to create a VideoTrack for
> the video and a TextTrack for the line21 caption data. That seems like
> the most natural mapping to me.

I don't know how the line21 data is encoded into MPEG - is it text or bitmaps or burnt in? I agree, you should choose the most natural mapping.

HTH,
Silvia.
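[Editor's sketch, not part of the original message: a minimal TypeScript example of how page script would observe the mapping Aaron describes, assuming a browser that exposes the MPEG video stream as a VideoTrack and extracted line 21 caption text as an in-band TextTrack of kind "captions". Both videoTracks and textTracks are specified in HTML, though videoTracks was gated behind flags in several browsers at the time.]

    // Assumes a <video> element whose source carries line 21 caption data.
    const video = document.querySelector("video") as HTMLVideoElement;

    video.addEventListener("loadedmetadata", () => {
      // The video essence itself surfaces as a VideoTrack.
      // (Cast because videoTracks is absent from some DOM type libraries.)
      const videoTracks = (video as any).videoTracks;
      for (let i = 0; i < videoTracks.length; i++) {
        console.log("video track:", videoTracks[i].id, videoTracks[i].kind);
      }

      // Extracted line 21 caption data surfaces as an in-band TextTrack.
      for (let i = 0; i < video.textTracks.length; i++) {
        const track = video.textTracks[i];
        if (track.kind === "captions") {
          track.mode = "showing"; // ask the browser to render the captions
        }
      }
    });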
Received on Wednesday, 14 May 2014 12:44:07 UTC