Re: How to play back synthesized 22kHz audio in a glitch-free manner?

start() is already defined to be sample-accurate. I think the main issue here is the stitching together of resampled buffers.

I'd like to point out that looping of resampled buffers with variable sample rates is glitch-free, and it seems reasonable that general concatenation should work at least as well as looping.
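For what it's worth, the usual way to stitch buffers today is to schedule each AudioBufferSourceNode at an accumulated context time, relying on start() being sample-accurate. A minimal sketch (makeChunkQueue and its names are illustrative, mono only, assuming an existing AudioContext `ctx`):

```javascript
// Sketch: queue synthesized 22050 Hz chunks back-to-back using
// sample-accurate start() times. `ctx` is a Web Audio AudioContext.
function makeChunkQueue(ctx, sampleRate = 22050) {
  let nextStartTime = 0;
  return function enqueueChunk(samples /* Float32Array */) {
    // Copy the synthesized samples into an AudioBuffer at the source rate;
    // the implementation resamples to the context rate on playback.
    const buffer = ctx.createBuffer(1, samples.length, sampleRate);
    buffer.getChannelData(0).set(samples);

    const src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(ctx.destination);

    // Schedule at the exact moment the previous chunk ends; clamp to
    // "now" so the very first chunk is not scheduled in the past.
    nextStartTime = Math.max(nextStartTime, ctx.currentTime);
    src.start(nextStartTime);
    nextStartTime += buffer.duration;
    return nextStartTime;
  };
}
```

Whether the seams are audible then comes down to whether the implementation's resampler carries its filter state across the two independently resampled sources — which is exactly the stitching question above.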

Note that if resampling is broken out into its own node, that creates difficulties with the way time units and sample rates operate upstream of the resampler node. This problem of mixing sample rates within the same audio context has come up on the list before and was, I think, dismissed; I don't have a citation for that discussion handy.
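If continuity across the resampler is the concern, one workaround available today is to merge the pending chunks into a single AudioBuffer before playback, so the implementation resamples one continuous signal rather than several short ones. A minimal sketch (concatChunks is a hypothetical helper, mono for brevity):

```javascript
// Sketch: merge several same-rate chunks into one AudioBuffer so the
// resampler sees a continuous stream instead of chunk boundaries.
// `ctx` is assumed to be a Web Audio AudioContext; names illustrative.
function concatChunks(ctx, chunks /* Float32Array[] */, sampleRate = 22050) {
  const total = chunks.reduce((sum, c) => sum + c.length, 0);
  const buffer = ctx.createBuffer(1, total, sampleRate);
  const data = buffer.getChannelData(0);
  let offset = 0;
  for (const c of chunks) {
    data.set(c, offset);   // copy each chunk end-to-end
    offset += c.length;
  }
  return buffer;
}
```

The trade-off is latency: you can only merge chunks that have already been synthesized, so this doesn't replace a real queueing contract like the proposed startImmediatelyAfter.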

.            .       .    .  . ...Joe

Joe Berkovitz
President
Noteflight LLC
+1 978 314 6271
www.noteflight.com
"Your music, everywhere."

On Jun 17, 2013, at 6:15 PM, Kevin Gadd <kevin.gadd@gmail.com> wrote:

> Could one simply define a ResamplerNode/PlaybackRateAdjustmentNode? Then, in cases where you want to stitch together smaller buffers and adjust the playback rate of all of them, you give them all the resampler node as a shared destination.
> 
> This would allow removing the .playbackRate attribute of AudioBufferSourceNode entirely, and it would probably be more generally useful anyway - for example, resampling ScriptProcessorNode outputs, adjusting the playback rate of audio from an <audio> element, etc. I'd argue that such a change would have a good symmetry with the removal of .gain and provide benefits for developers.
> 
> Separate from this, though, we still ultimately need a way to schedule buffers in a sample-precise manner - whether it's changes to the definition of start()/etc in order to enable sample-precise start times, or a startImmediatelyAfter method. But splitting playback rate adjustment out would at least let people realistically use ScriptProcessorNode in these scenarios, which would be great!
> 
> -kg
> 
> 
> On Mon, Jun 17, 2013 at 2:36 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>> On Tue, Jun 18, 2013 at 7:25 AM, Jukka Jylänki <jujjyl@gmail.com> wrote:
>>> If the Web Audio API had explicit support for buffer queueing/stitching with AudioBufferSourceNodes, and the user could express that contract to the Web Audio implementation via a 'startImmediatelyAfter' function, then the implementation could perform audio resampling on the stream as a whole, rather than on each discontinuous source node individually.
>> 
>> Only if they have the same set of destinations. I suppose that could be done but it's not trivial. Then again, it would solve use cases for which ScriptProcessorNode is not a very good fit.
>> 
>> Rob
> 

Received on Monday, 17 June 2013 22:33:30 UTC