
Re: [whatwg] <audio> metadata

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Mon, 24 Apr 2017 10:04:13 +1000
Message-ID: <CAHp8n2kYcqzY5fo6fZs1Dp19NY+nWgqA57ZmTQ1XanCpNkG-LQ@mail.gmail.com>
To: Kevin Marks <kevinmarks@gmail.com>
Cc: WHAT Working Group Mailing List <whatwg@whatwg.org>, Andy Valencia <ajv-cautzeamplog@vsta.org>
On Mon, Apr 24, 2017 at 5:04 AM, Kevin Marks <kevinmarks@gmail.com> wrote:
> On Sun, Apr 23, 2017 at 5:58 PM, Andy Valencia
> <ajv-cautzeamplog@vsta.org> wrote:
>> === Dynamic versus static metadata
>> Pretty much all audio formats have at least one metadata format.  While
>> some can apparently embed metadata at time points, this is not used by
>> any players I can find.  The Icecast/SHOUTcast "metastream" format is
>> the only technique I've ever encountered.  The industry is quickly
>> shifting to the so-called "Shoutcast v2" format due to:
>>     https://forums.developer.apple.com/thread/66586
>> Metadata formats as applied to static information are, of course, of
>> great interest.  Any dynamic technique should fit into the existing
>> approach.
> There are lots of models for dynamic metadata - look at SoundCloud's
> timed comments, and YouTube's captions and overlays. Historically there
> have been chapter list markers in MPEG, QuickTime and MPEG-4 (m4a, m4v)
> files too.
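For context, the "metastream" technique Andy describes interleaves metadata into the audio bytes themselves: the server advertises an interval via `icy-metaint`, then after every interval of audio sends one length byte followed by length * 16 bytes of metadata text (e.g. `StreamTitle='...';`, NUL-padded). A minimal sketch of parsing that framing in Node.js (function name and details are illustrative, not from any particular player):

```javascript
// Sketch of ICY (Icecast/SHOUTcast) metastream framing: `metaint` bytes
// of audio, then one length byte L, then L * 16 bytes of metadata text.
// Assumes `buf` starts exactly at an audio-chunk boundary.
function parseIcyStream(buf, metaint) {
  const audio = [];
  const titles = [];
  let pos = 0;
  while (pos < buf.length) {
    // `metaint` bytes of audio payload first...
    const audioEnd = Math.min(pos + metaint, buf.length);
    audio.push(buf.subarray(pos, audioEnd));
    pos = audioEnd;
    if (pos >= buf.length) break;
    // ...then one length byte: the metadata block is length * 16 bytes.
    const metaLen = buf[pos] * 16;
    pos += 1;
    if (metaLen > 0) {
      // Blocks look like: StreamTitle='Song Name';StreamUrl='';\0\0...
      const meta = buf.subarray(pos, pos + metaLen).toString('latin1');
      const m = meta.match(/StreamTitle='([^']*)'/);
      if (m) titles.push(m[1]);
      pos += metaLen;
    }
  }
  return { audio: Buffer.concat(audio), titles };
}
```

A zero length byte (the common case) means no metadata for that chunk, so the per-chunk overhead is normally a single byte.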

A different method is used on the Web for dynamic metadata: TextTracks
have been standardised to expose such time-aligned metadata. I don't
think this is the core of the discussion here though.
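For reference, the standardised mechanism here is a text track of kind "metadata": a track whose cues carry arbitrary time-aligned payloads that script can read. An illustrative WebVTT file might look like this (the JSON cue payload is just an assumed convention; cue text for metadata tracks is free-form):

```
WEBVTT

00:00:00.000 --> 00:03:45.000
{"title": "First Song", "artist": "Some Artist"}

00:03:45.000 --> 00:07:10.000
{"title": "Second Song", "artist": "Another Artist"}
```

A page would reference this with `<track kind="metadata" src="...">`, set the track's mode to "hidden", and read each cue's text from a "cuechange" event handler.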

Received on Monday, 24 April 2017 00:05:05 UTC
