W3C home > Mailing lists > Public > public-tt@w3.org > February 2003

Re: Narration and Transcription

From: Gerry Field <gerry_field@wgbh.org>
Date: Fri, 07 Feb 2003 10:59:58 -0500
To: <Johnb@screen.subtitling.com>
CC: "public-tt@w3.org" <public-tt@w3.org>
Message-ID: <BA69422D.A6B2%gerry_field@wgbh.org>

Probably also important to note that in US NTSC (analog) and ATSC (DTT)
broadcasts, video description is delivered as a complete audio service,
containing the full program audio plus description.

It is carried on the SAP (Second Audio Program) channel in analog broadcast, and as a separate audio PID in ATSC DTT.

In ATSC PSIP, metadata about each audio PID is provided in the
"AC3_descriptor", located in the EIT and PMT tables.
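[Editor's sketch: to make the descriptor concrete, here is a minimal parse of the fixed leading fields of the ATSC AC-3 audio descriptor (descriptor_tag 0x81), with the bit layout as given in ATSC A/52 Annex A. The field positions below are stated from memory and should be verified against the spec before use; the bsmod field is what distinguishes a visually-impaired (description) audio service.]

```python
# Hedged sketch of the fixed leading payload bytes of an ATSC
# AC3_descriptor (tag 0x81), per ATSC A/52 Annex A -- verify bit
# positions against the spec before relying on this.

# bsmod values identify the bit stream (service) mode; bsmod == 2
# is the "visually impaired" service, i.e. video description audio.
BSMOD_NAMES = {
    0: "complete main (CM)",
    1: "music and effects (ME)",
    2: "visually impaired (VI)",
    3: "hearing impaired (HI)",
    4: "dialogue (D)",
    5: "commentary (C)",
    6: "emergency (E)",
    7: "voice over / karaoke",
}

def parse_ac3_descriptor_prefix(data: bytes) -> dict:
    """Decode the first three payload bytes of an AC3_descriptor."""
    assert data[0] == 0x81, "not an ATSC AC-3 audio descriptor"
    b0, b1, b2 = data[2], data[3], data[4]  # skip tag and length bytes
    return {
        "sample_rate_code": b0 >> 5,    # 0 = 48 kHz
        "bsid": b0 & 0x1F,
        "bit_rate_code": b1 >> 2,
        "surround_mode": b1 & 0x03,
        "bsmod": b2 >> 5,               # service type, see BSMOD_NAMES
        "num_channels": (b2 >> 1) & 0x0F,
        "full_svc": b2 & 0x01,          # 1 = complete, self-contained service
    }
```

A receiver would look for an audio PID whose descriptor has bsmod == 2 and full_svc == 1 to find a complete description service of the kind discussed above.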

And, getting back to the captioning/subtitle discussion: in ATSC PSIP,
metadata about caption services is provided in the
"caption_service_descriptor", carried in the EIT (required) and the PMT (optional).

Since it seems to me the timed text work could have application in authoring
and contribution for broadcast systems, it would be useful to be aware of
the structures of these descriptors, and the fields of data each requires.
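[Editor's sketch: as an illustration of the second descriptor mentioned, here is a parse of a caption_service_descriptor (descriptor_tag 0x86) following the field layout in ATSC A/65 as I recall it; treat the exact bit positions as assumptions to check against the spec. Each described service carries an ISO 639 language code and either a CEA-708 digital caption service number or legacy line-21 field information.]

```python
# Hedged sketch of parsing an ATSC A/65 caption_service_descriptor
# (tag 0x86); field layout stated from memory -- verify against A/65.

def parse_caption_service_descriptor(data: bytes) -> list[dict]:
    """Return one dict per caption service described."""
    assert data[0] == 0x86, "not a caption_service_descriptor"
    length = data[1]
    body = data[2:2 + length]
    number_of_services = body[0] & 0x1F  # low 5 bits; top 3 reserved
    services = []
    pos = 1
    for _ in range(number_of_services):
        language = body[pos:pos + 3].decode("ascii")  # ISO 639-2 code
        flags = body[pos + 3]
        digital_cc = bool(flags & 0x80)
        svc = {"language": language, "digital_cc": digital_cc}
        if digital_cc:
            # CEA-708 (digital) caption service number
            svc["caption_service_number"] = flags & 0x3F
        else:
            # legacy CEA-608 line-21 captions: which field they ride in
            svc["line21_field"] = flags & 0x01
        misc = (body[pos + 4] << 8) | body[pos + 5]
        svc["easy_reader"] = bool(misc & 0x8000)
        svc["wide_aspect_ratio"] = bool(misc & 0x4000)
        services.append(svc)
        pos += 6  # each service entry is 6 bytes
    return services
```

For timed-text authoring and contribution, the useful point is the per-service metadata this carries: language, easy-reader flag, and aspect-ratio intent, which a timed-text format would need somewhere to express.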


On 2/7/03 10:10 AM, "Johnb@screen.subtitling.com"
<Johnb@screen.subtitling.com> wrote:

> FYI: 
> In the UK the BBC currently transmit a proportion of their programming on DTT
> with Description (as audio).
> In the UK this is called audio description

I'll amend my grammar to:

> In the US, when applied to television broadcast or other media, the generally
> accepted term is "video description". This is the term used by Congress and
> the Federal Communications Commission. 
Received on Friday, 7 February 2003 11:14:30 UTC
