Re: Integer PCM sample formats to Web Audio API?

Swizzling is not a format conversion; it is a layout conversion. I
mentioned layout conversions in particular ("actual arrangement"). Are you
referring to something else?


On Tue, Jan 14, 2014 at 8:41 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Wed, Jan 15, 2014 at 5:01 PM, K. Gadd <kg@luminance.org> wrote:
>
>> The most obvious analogue here is the way textures work in OpenGL and
>> Direct3D. You allocate a texture of a particular size in a particular
>> format, and that's what you get. The GPU is certainly free to take
>> liberties with the actual arrangement of the texture in video memory (and
>> in fact, most do), but the format you ask for is (IIRC) always the format
>> you get. This is important because having extra format conversions
>> introduced at the driver's discretion could result in unexpected
>> performance consequences or even behavioral differences (due to too
>> much/too little precision).
>>
>
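(For illustration, a minimal WebGL analogue of the OpenGL/Direct3D pattern
described above. This is only a sketch; it assumes `gl` is an existing
WebGLRenderingContext and `pixels` is a Uint8Array of 256*256*4 bytes.)

    // You ask for a specific format/type up front; whatever layout the
    // driver uses internally, the data you upload (and later read back)
    // is interpreted as 8-bit RGBA.
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixels);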
> If only that were true :-).
> https://bugzilla.mozilla.org/show_bug.cgi?id=890950#c0
>
>
>> I don't see how audio could really differ dramatically in this area,
>> unless I've overlooked something important. I'd love to see examples of
>> how audio is somehow special in this regard.
>>
>
> I think the main difference is that audio generally has much lower
> bandwidth/memory usage, so conversions are more palatable. I know this is
> not universally true.
>
>> P.S. In graphics scenarios we've been relying heavily on compressed
>> storage of texel data in memory for over a decade, because it turns out we
>> never have enough memory to store all our data. Given that the size of
>> these float32 audiobuffers is problematic in reality for existing game
>> demos, perhaps it could be worthwhile to use efficient in-memory
>> compression for audio? It is certainly the case that lots of real-world
>> games do streaming decompression for some of their audio (i.e. music and
>> voiced dialogue) instead of decoding it up front into enormous buffers.
>> Note that I am not advocating for streaming *from storage*; I am advocating
>> for streaming *from memory*. The Xbox 360 actually has support for this in
>> the southbridge, if memory serves.
>>
>
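(A rough sketch of the "keep compressed bytes in memory" idea using what the
API offers today: decode on demand instead of holding float32 AudioBuffers
for everything. It assumes an existing AudioContext `ctx`, a promise-returning
decodeAudioData, and a `compressedAssets` map filled elsewhere with encoded
file bytes; this approximates, rather than implements, true streaming decode.)

    // name -> ArrayBuffer of encoded audio (e.g. Ogg or MP3 file bytes)
    var compressedAssets = {};

    function playSound(name) {
      // Copy the buffer, since decodeAudioData may detach its input.
      var bytes = compressedAssets[name].slice(0);
      return ctx.decodeAudioData(bytes).then(function (buffer) {
        var src = ctx.createBufferSource();
        src.buffer = buffer;
        src.connect(ctx.destination);
        src.start();
      });
    }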
> You can create a Blob from a typed array and feed it through an <audio>
> element connected to a MediaElementSourceNode. The Blob API does need to be
> extended with an option to take ownership of (neuter) the typed array data.
>
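(A minimal sketch of the Blob route described above, assuming `encodedBytes`
is a typed array holding a complete encoded audio file and `ctx` is an
AudioContext. Note that today the Blob constructor copies the bytes, which is
exactly the ownership-transfer gap mentioned above.)

    // Wrap the in-memory encoded audio in a Blob and let <audio> stream it.
    var blob = new Blob([encodedBytes], { type: 'audio/ogg' });
    var url = URL.createObjectURL(blob);

    var audioEl = new Audio();
    audioEl.src = url;

    // Route the element's output into the Web Audio graph.
    var source = ctx.createMediaElementSource(audioEl);
    source.connect(ctx.destination);
    audioEl.play();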
> Personally I support having decodeAudioData take an optional format
> argument. You'd set it to "float" when you know you're going to call
> getChannelData() anyway, and you'd set it to "short" when you want to
> minimize memory usage, but by default it's up to the UA. We need a
> parameter because the UA can't know which situation it's in. I'm guessing
> it makes sense for UAs to default to "short" in most situations but I could
> be wrong about that.
>
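(A purely hypothetical sketch of what such a hint might look like; the
`format` parameter below does not exist in any spec, and its name and shape
are made up here for illustration.)

    // Hypothetical: ask the UA to keep the decoded data as 16-bit integers
    // ("short") to minimize memory; pass "float" if you know you will call
    // getChannelData() anyway.
    ctx.decodeAudioData(arrayBuffer, { format: 'short' }).then(function (buffer) {
      // Use the AudioBuffer as usual; getChannelData() would still return
      // Float32Arrays, converting on demand.
      var samples = buffer.getChannelData(0);
    });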
> Of course the rule still applies that the UA can do anything it wants
> that's observably equivalent to what you requested, so I don't support
> writing into the spec some sort of implementation requirement that that
> format actually be used internally, sorry.
>
> Rob
>

Received on Wednesday, 15 January 2014 04:52:52 UTC