Re: Input buffer's time coordinate

> For an AudioWorker, I would think your originalPlaybackTime is the
> context currentTime.  Not sure what we're doing with this currently.

Agreed, and it would be nice if this were clearly written in the
spec ;)
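
For the record, here is a minimal sketch of the gap as I see it,
assuming `context` is a running AudioContext ( `playbackTime` is in the
spec; `originalPlaybackTime` is the hypothetical field I am wishing
for, not an existing API ):

    var processor = context.createScriptProcessor(1024, 1, 1);
    processor.onaudioprocess = function (e) {
      // e.playbackTime says when e.outputBuffer will be played,
      // in the context's time coordinate.
      console.log('output plays at', e.playbackTime);

      // But nothing says which context times the samples in
      // e.inputBuffer were captured at -- hence the wish for
      // something like e.originalPlaybackTime.
    };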

> Really neat demo, BTW.  We could have used something like this last week
> at TPAC.

Thanks! I was at the meet-up too ( though I was not at TPAC itself ).
Thank you all for the fun time!
And AFAIK you missed another Web Audio precompiler demo of mine[1],
performed near the very end of the meet-up. ... oops, off-topic again ;)

thanks

Akira

[1] http://aklaswad.github.io/alligata/dsp-circuit-compiler.html



2015-11-03 3:44 GMT+09:00 Raymond Toy <rtoy@google.com>:

>
>
> On Mon, Nov 2, 2015 at 12:30 AM, Akira Sawada <aklaswad@gmail.com> wrote:
>
>> Hi Raymond,
>>
>> Sorry that my question sounded as if it were only about app
>> implementation. I am also wondering whether the AudioProcessing event
>> (and the AudioProcess event in the current AudioWorker draft) should
>> carry some information about the original playback time.
>> `e.originalPlaybackTime` is a missing feature for me, though I don't
>> know how many use cases for it exist in the wild.
>>
>
> For an AudioWorker, I would think your originalPlaybackTime is the context
> currentTime.  Not sure what we're doing with this currently.
>
>
>>
>>
>> And the reason I don't use OfflineAudioContext is very clear: Chrome
>> has a bug around its thread pool and crashes once many (around 1024 or
>> 2048?) OfflineAudioContexts have been created.
>> https://code.google.com/p/chromium/issues/detail?id=433479#c3
>>
>
> Oops.  I forgot about that one.  Yes, it should be fixed.  But do you
> really expect to create 1000 contexts?  I played around with your example,
> and it looks like you draw the curves as you drag the points around but
> don't get the actual output until you finish dragging.  That would be only
> one context after each drag.  I played around for a bit and probably would
> have generated only 10-20 contexts.
>
> But yeah, I understand if you don't want to crash the browser.
>
> Really neat demo, BTW.  We could have used something like this last week
> at TPAC.
>
>
>>
>> Sébastien,
>>
>> JFYI, I implemented the timecode channel as you suggested, and it works
>> very well on both Chrome and Firefox. Thanks again!
>>
>>
>> Akira
>>
>> 2015-11-02 13:06 GMT+09:00 Raymond Toy <rtoy@google.com>:
>>
>>> This has drifted somewhat off-topic for the public-audio list, which is
>>> about the spec itself.  You should try the public-audio-dev list instead.
>>>
>>> But for your particular case, why not use an OfflineAudioContext with
>>> an AudioBufferSourceNode playing a constant signal followed by a
>>> GainNode?  This is how the automation diagram in the spec
>>> <http://webaudio.github.io/web-audio-api/#example1-AudioParam> was
>>> produced.  The source for the diagram is
>>> https://googlechrome.github.io/web-audio-samples/samples/audio/timeline.html
>>> .
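
[ A minimal sketch of the approach Raymond describes; the parameters
below ( sample rate, ramp target, duration ) are illustrative: ]

    // Render one second of an automated gain applied to a constant signal.
    var sampleRate = 44100;
    var offline = new OfflineAudioContext(1, sampleRate, sampleRate);

    // A buffer full of 1s acts as the constant source.
    var buffer = offline.createBuffer(1, sampleRate, sampleRate);
    var data = buffer.getChannelData(0);
    for (var i = 0; i < data.length; i++) data[i] = 1;

    var source = offline.createBufferSource();
    source.buffer = buffer;

    var gain = offline.createGain();
    gain.gain.setValueAtTime(0, 0);
    gain.gain.linearRampToValueAtTime(1, 0.5);  // the automation under test

    source.connect(gain);
    gain.connect(offline.destination);
    source.start(0);

    offline.oncomplete = function (e) {
      // e.renderedBuffer now holds the exact automation curve,
      // one value per sample frame.
      var curve = e.renderedBuffer.getChannelData(0);
    };
    offline.startRendering();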
>>>
>>> On Mon, Nov 2, 2015 at 12:13 AM, Akira Sawada <aklaswad@gmail.com>
>>> wrote:
>>>
>>>> Hi Sébastien,
>>>>
>>>> Thanks for the advice!
>>>> ChannelMerger was completely off my radar! Yes, your code looks like
>>>> what I want.
>>>> I will try this idea, and also tackle a simpler way (e.g. using
>>>> param.linearRampToValueAtTime(LENGTH_OF_A_DAY, LENGTH_OF_A_DAY) to
>>>> generate the timestamp instead).
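
[ A sketch of that simpler ramp-via-automation idea: feed a constant-1
signal through a GainNode whose gain grows by 1 per second, so each
output sample equals its own context time. `constantOneSource` ( e.g. a
looping AudioBufferSourceNode over a buffer of 1s ) is assumed: ]

    var DAY = 24 * 60 * 60;  // LENGTH_OF_A_DAY, in seconds

    var clock = context.createGain();
    clock.gain.setValueAtTime(0, 0);
    // Gain value at time t is t, so constant-1 input times the gain
    // yields the context time of each sample.
    clock.gain.linearRampToValueAtTime(DAY, DAY);

    constantOneSource.connect(clock);
    // clock's output is now a sample-accurate timestamp signal.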
>>>>
>>>> P.S.
>>>> I'm also a bit sad to learn there's no non-hacky way to do this ;)
>>>>
>>>> thanks so much!
>>>>
>>>> Akira
>>>>
>>>>
>>>> 2015-11-02 0:10 GMT+09:00 s p <sebpiq@gmail.com>:
>>>>
>>>>> Hi Akira,
>>>>>
>>>>> I had the same problem a while ago. I found a hack for this, which
>>>>> works well in Chrome but not in Firefox (at the time there was, I
>>>>> think, a bug with ChannelMergerNode causing a problem there; maybe it
>>>>> is fixed by now?). You can find the code here:
>>>>> https://github.com/WhojamLab/WAARecorderNode/blob/master/lib/TimeTaggedScriptProcessorNode.js
>>>>>
>>>>> The basic idea is to use a BufferSourceNode playing a ramp - which is
>>>>> basically your timeline - and merge that audio signal with the signal
>>>>> you want to send to your ScriptProcessorNode. That way you get both
>>>>> the signal you are interested in AND the timeline, which gives you a
>>>>> precise time tag for each sample of your signal.
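
[ In rough outline - a sketch of the idea, not the actual code at the
link; `signalSource` and `rampSource` stand in for whatever produces
your signal and the ramp: ]

    var merger = context.createChannelMerger(2);
    signalSource.connect(merger, 0, 0);  // channel 0: the signal
    rampSource.connect(merger, 0, 1);    // channel 1: the timeline ramp

    var processor = context.createScriptProcessor(1024, 2, 1);
    merger.connect(processor);
    processor.onaudioprocess = function (e) {
      var signal = e.inputBuffer.getChannelData(0);
      var times  = e.inputBuffer.getChannelData(1);
      // times[i] is the context time tag for signal[i].
    };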
>>>>>
>>>>> I could also package this as a separate library, but considering
>>>>> that things are going to change soon enough - ScriptProcessorNode
>>>>> will disappear, and these problems of unpredictable latency should
>>>>> disappear with it (will they?) - it is probably not worth it ...
>>>>>
>>>>> Cheers
>>>>>
>>>>> On Sun, Nov 1, 2015 at 7:26 AM, Akira Sawada <aklaswad@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi group,
>>>>>>
>>>>>> I'm writing an audioParam automation editor (
>>>>>> http://aklaswad.github.io/alligata/envelope/ ) and using a
>>>>>> scriptProcessor as a simple peak meter in it.
>>>>>>
>>>>>> So I want to know the exact time the samples in the inputBuffer were
>>>>>> processed for.
>>>>>> Those input samples could carry information about the context-based
>>>>>> time coordinate, because they are also processed on top of that time
>>>>>> coordinate.
>>>>>>
>>>>>> Is there a way to know that?
>>>>>>
>>>>>> thanks.
>>>>>>
>>>>>> --
>>>>>> 澤田 哲 / Akira Sawada / aklaswad
>>>>>>
>>>>>> email me: aklaswad@gmail.com
>>>>>> visit me: http://blog.aklaswad.com/
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> *Sébastien Piquemal*
>>>>>
>>>>>  -----* @sebpiq*
>>>>>  ----- http://github.com/sebpiq
>>>>>  ----- http://funktion.fm
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> 澤田 哲 / Akira Sawada / aklaswad
>>>>
>>>> email me: aklaswad@gmail.com
>>>> visit me: http://blog.aklaswad.com/
>>>>
>>>
>>>
>>
>>
>> --
>> 澤田 哲 / Akira Sawada / aklaswad
>>
>> email me: aklaswad@gmail.com
>> visit me: http://blog.aklaswad.com/
>>
>
>


-- 
澤田 哲 / Akira Sawada / aklaswad

email me: aklaswad@gmail.com
visit me: http://blog.aklaswad.com/

Received on Wednesday, 4 November 2015 02:01:31 UTC