[whatwg] Codecs for <audio> and <video>

Do you have an idea of how to introduce fallback support for browsers that
don't even support <canvas>? How would they be expected to handle a base64
string when they skip the element's attributes?
Might an <img> tag work with its src="" set to the same base64 string?
 Or would that defeat the point of giving <video>, <audio>, and
<canvas> the extra abilities?  Because if people can just keep using
the <img> tag they're more comfortable with, why would they feel
the urge to switch?
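
For example, a minimal sketch of that kind of fallback (the base64 payload
here is truncated and purely illustrative):

    <canvas id="clip" width="320" height="240">
      <!-- Shown only by user agents that don't render <canvas> -->
      <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."
           alt="Still-frame fallback">
    </canvas>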

On Thu, Jul 2, 2009 at 8:51 PM, Charles Pritchard <chuck at jumis.com> wrote:

> I'd like to see some progress on these two tags.
>
> I'd like people to consider that Vorbis can be implemented
> in virtual machines (Java, Flash) which support raw PCM data.
> Theora is no different.
>
> I'd like to see <canvas> support added to the <video> tag (it's as natural
> as <img>), and to see the <audio> tag accept raw data (LPCM),
> just as the <canvas> tag accepts raw data (bitmap).
>
> Then you can support any codec you create, as well as use system codecs.
>
> You can't make the impossible happen (no HD video on an old 300 MHz
> machine), but you'd have the freedom to do the improbable.
>
> Add raw PCM and sound font support to <audio>,
> and raw pixel support to <video> (via CanvasRenderingContext2D).
>
> And add an event handler when subtitles are enabled / disabled.
>
> I have further, more specific comments below, and, at the end of the
> e-mail, two proposed additions to the standard.
>
>  Ian Hickson wrote:
>> I understand that people are disappointed that we can't require Theora
>> support. I am disappointed in the lack of progress on this issue also.
>>
>>
>> On Tue, 30 Jun 2009, Dr. Markus Walther wrote:
>>
>>
>>>> Having removed everything else in these sections, I figured there wasn't
>>>> that much value in requiring PCM-in-Wave support. However, I will continue
>>>> to work with browser vendors directly and try to get a common codec at least
>>>> for audio, even if that is just PCM-in-Wave.
>>>>
>>>>
>>>
> I'd think that FLAC would make more sense than PCM-in-Wave,
> as a PNG analog.
>
> Consider the <canvas> element. PNG implementations may be broken.
> Internally, <canvas> accepts a raw byte array (a 32-bit bitmap), and
> allows a string-based export of a compressed bitmap
> as a base64-encoded 32-bit PNG.
>
> The <audio> element should accept a raw byte array (32-bit-per-sample LPCM)
> and allow a similar export of a base64-encoded file, perhaps using FLAC.
>
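> As a rough sketch, the canvas half of that exists today; the <audio> calls
> below are hypothetical analogs (writeSamples and toDataURL on <audio> are
> not part of any draft):
>
>   <canvas id="c" width="320" height="240"></canvas>
>   <audio id="a"></audio>
>   <script>
>     var ctx = document.getElementById('c').getContext('2d');
>     var frame = ctx.createImageData(320, 240);    // raw 32-bit RGBA buffer
>     ctx.putImageData(frame, 0, 0);                // raw bytes in
>     var png = ctx.canvas.toDataURL('image/png');  // base64-encoded PNG out
>
>     // Hypothetical <audio> analog:
>     // var a = document.getElementById('a');
>     // a.writeSamples(rawLpcmSamples, 44100);     // raw LPCM in
>     // var flac = a.toDataURL('audio/flac');      // base64 FLAC out
>   </script>
>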
> Canvas can currently be used to render unsupported image formats (and
> mediate unsupported image containers); it's been proven with ActionScript
> that a virtual machine can likewise support otherwise unsupported audio
> codecs.
>
> I'd like to see a font analog in audio as well. Canvas supports the font
> attribute; audio could certainly support sound fonts, with a generated
> pitch used when the platform can't or doesn't store sound fonts.
>
>
>  Please, please do so - I was shocked to read that PCM-in-Wave as the
>>> minimal 'consensus' container for audio is under threat of removal, too.
>>>
>>>
> There seems to be some confusion between codecs and containers.
> WAV, OGG, AVI, and MKV are containers; OSC is another.
>
> Codecs are a completely separate matter.
>
> It's very clear that Apple will not distribute the Vorbis and Theora codecs
> with their software packages.
>
> It's likely that Apple would prefer a library they don't have to document,
> as most open source licenses require, and they see no current reason to
> invest money in writing a new one. Apple supports many chipsets and many
> content agreements; it would be costly.
>
> I see no reason why Apple could not support the OGG container.
> That said, I see no reason why a list of containers needs to be in the HTML
> 5 spec.
>
>  On Thu, 2 Jul 2009, Charles Pritchard wrote:
>>
>>
>>> Can the standard simply address video containers (OGG, MKV, AVI)?
>>> Each container is fairly easy to implement, and codecs can be identified
>>> within the container.
>>> Vendors can decide on their own what to do with that information.
>>>
>>>
>>
>> The spec does document how to distinguish containers via MIME type. Beyond
>> that I'm not sure what we can do.
>>
>> <video> does support fallback, so in practice you can just use Theora and
>> H.264 and cover all bases.
>>
>>
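> For example, a markup sketch of that dual-encoding fallback (file names
> illustrative); note the inner text is only shown by user agents that don't
> support <video> at all:
>
>   <video controls>
>     <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
>     <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
>     <p>This browser does not support the video element.</p>
>   </video>
>
> Script can also probe support up front with canPlayType(), which returns
> "", "maybe", or "probably" for a given container/codec string.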
>
> I'd like to see this added to <audio> and <video>:
>
> "User agents should provide controls to enable the manual selection of
> fallback content."
>
> "User agents should provide an activation behavior, when fallback content
> is required, detailing why the primary content could not be used."
>
> Many non-technical users will want to know why there is a black screen (or
> still image), even though they can hear the audio.
>
>
> -Charles
>
>
>


-- 
- Adam Shannon ( http://ashannon.us )