W3C home > Mailing lists > Public > public-audio@w3.org > October to December 2015

Re: Input buffer's time coordinate

From: Raymond Toy <rtoy@google.com>
Date: Mon, 2 Nov 2015 13:06:11 +0900
Message-ID: <CAE3TgXE717Wa6tHDnpupHw8cfCffV-Ce_yeQak+U8NsStkqrvw@mail.gmail.com>
To: Akira Sawada <aklaswad@gmail.com>
Cc: s p <sebpiq@gmail.com>, "public-audio@w3.org" <public-audio@w3.org>

This has drifted somewhat off-topic for the public-audio list, which is
about the spec itself.  You should try the public-audio-dev list instead.

But for your particular case, why not use an OfflineAudioContext with an
AudioBufferSourceNode playing a constant signal, followed by a GainNode?
This is how the automation diagram in the spec
<http://webaudio.github.io/web-audio-api/#example1-AudioParam> was
produced.  The source for the diagram is
https://googlechrome.github.io/web-audio-samples/samples/audio/timeline.html.
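The approach above can be sketched as follows. This is a minimal sketch, not the actual source of the spec diagram: the automation curve chosen here is illustrative, and the Web Audio calls are guarded since they only exist in a browser. The `linearRampValue` helper just restates the spec's interpolation formula for linearRampToValueAtTime.

```javascript
// Per the Web Audio spec, linearRampToValueAtTime interpolates as:
//   v(t) = V0 + (V1 - V0) * (t - T0) / (T1 - T0)
function linearRampValue(v0, t0, v1, t1, t) {
  if (t <= t0) return v0;
  if (t >= t1) return v1;
  return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}

if (typeof OfflineAudioContext !== 'undefined') {
  const sampleRate = 44100;
  // 1 channel, 1 second of frames, rendered offline (no real-time playback).
  const ctx = new OfflineAudioContext(1, sampleRate, sampleRate);

  // A one-second buffer holding a constant 1.0 — the "constant signal".
  const buf = ctx.createBuffer(1, sampleRate, sampleRate);
  buf.getChannelData(0).fill(1);

  const src = ctx.createBufferSource();
  src.buffer = buf;

  // The gain automation we want to capture (illustrative curve).
  const gain = ctx.createGain();
  gain.gain.setValueAtTime(0, 0);
  gain.gain.linearRampToValueAtTime(1, 1);

  src.connect(gain);
  gain.connect(ctx.destination);
  src.start(0);

  // Because the input is constant 1.0, the rendered samples ARE the
  // automation curve: rendered[n] ≈ linearRampValue(0, 0, 1, 1, n / sampleRate).
  ctx.startRendering().then((rendered) => {
    console.log(rendered.getChannelData(0).slice(0, 4));
  });
}
```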

On Mon, Nov 2, 2015 at 12:13 AM, Akira Sawada <aklaswad@gmail.com> wrote:

> Hi Sébastien,
>
> Thanks for the advice!
> ChannelMerger was completely off my radar! Yes, your code looks like what
> I want.
> I will try this idea, and also tackle a simpler way (e.g. use
> param.linearRampToValueAtTime(LENGTH_OF_A_DAY, LENGTH_OF_A_DAY) to
> generate the timestamp instead).
>
> P.S.
> I'm also sad to learn that there's no non-hacky way to do this ;)
>
> thanks so much!
>
> Akira
>
>
> 2015-11-02 0:10 GMT+09:00 s p <sebpiq@gmail.com>:
>
>> Hi Akira,
>>
>> I had the same problem a while ago. I found a hack for this, which works
>> well in Chrome but not in Firefox (at the time there was, I think, a bug
>> with ChannelMergerNode that caused a problem there, but maybe it is
>> fixed by now?). You can find the code here:
>> https://github.com/WhojamLab/WAARecorderNode/blob/master/lib/TimeTaggedScriptProcessorNode.js
>>
>> The basic idea is to use an AudioBufferSourceNode playing a ramp - which
>> is basically your timeline - and merge that audio signal with the signal
>> you want to send to your ScriptProcessorNode. That way you get both the
>> signal you are interested in AND the timeline, which gives you a precise
>> time tag for each sample of your signal.
>>
>> I could also package this as a separate library, but considering that
>> things are going to change soon enough, that ScriptProcessorNode will
>> disappear, and that these problems of unpredictable latency should
>> disappear with it (will they?)
>>  ... it is probably not worth it.
>>
>> Cheers
>>
>> On Sun, Nov 1, 2015 at 7:26 AM, Akira Sawada <aklaswad@gmail.com> wrote:
>>
>>> Hi group,
>>>
>>> I'm writing an AudioParam automation editor (
>>> http://aklaswad.github.io/alligata/envelope/ ) and using a
>>> ScriptProcessorNode as a simple peak meter in it.
>>>
>>> So I want to know the exact time the samples in the inputBuffer were
>>> processed for.
>>> Those input samples should carry information about the context-based
>>> time coordinate, because they are also processed on top of that time
>>> coordinate.
>>>
>>> Is there a way to know that?
>>>
>>> thanks.
>>>
>>> --
>>> 澤田 哲 / Akira Sawada / aklaswad
>>>
>>> email me: aklaswad@gmail.com
>>> visit me: http://blog.aklaswad.com/
>>>
>>
>>
>>
>> --
>>
>> *Sébastien Piquemal*
>>
>>  -----* @sebpiq*
>>  ----- http://github.com/sebpiq
>>  ----- http://funktion.fm
>>
>
>
>
> --
> 澤田 哲 / Akira Sawada / aklaswad
>
> email me: aklaswad@gmail.com
> visit me: http://blog.aklaswad.com/
>
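The ramp-and-merge trick described in the quoted messages can be sketched as follows. This is a sketch under assumptions, not sebpiq's actual library code: `someSource` is an illustrative name for whatever signal is being metered, the timeline length and buffer size are arbitrary, and ScriptProcessorNode (current in 2015) is guarded since it is browser-only and now deprecated. The pure helper builds the timeline buffer where sample n holds the time n / sampleRate.

```javascript
// Build the timeline data: sample n holds the time n / sampleRate, so the
// ramp's sample values ARE time in seconds. (Float32 precision limits this
// to roughly sub-sample accuracy for timelines up to a few minutes.)
function makeTimelineData(lengthInSamples, sampleRate) {
  const data = new Float32Array(lengthInSamples);
  for (let n = 0; n < lengthInSamples; n++) data[n] = n / sampleRate;
  return data;
}

if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const seconds = 60; // illustrative timeline length; loop or extend as needed
  const frames = seconds * ctx.sampleRate;

  const timeline = ctx.createBuffer(1, frames, ctx.sampleRate);
  timeline.copyToChannel(makeTimelineData(frames, ctx.sampleRate), 0);

  const rampSource = ctx.createBufferSource();
  rampSource.buffer = timeline;

  // Merge the signal (channel 0) with the timeline ramp (channel 1).
  const merger = ctx.createChannelMerger(2);
  const processor = ctx.createScriptProcessor(1024, 2, 1);

  // someSource is whatever node carries the signal you are metering:
  // someSource.connect(merger, 0, 0);   // channel 0: the signal
  rampSource.connect(merger, 0, 1);      // channel 1: the timeline

  merger.connect(processor);
  processor.connect(ctx.destination);
  rampSource.start(0);

  processor.onaudioprocess = (e) => {
    const signal = e.inputBuffer.getChannelData(0);
    const times = e.inputBuffer.getChannelData(1);
    // times[i] is the ramp-playback time of signal[i]: a per-sample time
    // tag that survives whatever latency the processor callback adds.
  };
}
```

The design point is that the time tag travels *through* the audio graph alongside the signal, so it stays sample-accurate no matter when the onaudioprocess callback actually fires.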
Received on Monday, 2 November 2015 04:06:40 UTC

This archive was generated by hypermail 2.3.1 : Friday, 18 December 2015 09:00:35 UTC