
Re: Sync lost when seeking

From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Sat, 17 Aug 2013 20:49:56 +0300
Message-ID: <CAJhzemWMvBoYJvv--TYri0W5MwWZ9DtOHwzHC38dQJgrsD2Wnw@mail.gmail.com>
To: Eduardo Bouças <mail@eduardoboucas.com>
Cc: Josh Nielsen <josh@joshontheweb.com>, "public-audio@w3.org" <public-audio@w3.org>
Hi Eduardo,

On Mon, Aug 12, 2013 at 10:06 PM, Eduardo Bouças <mail@eduardoboucas.com> wrote:

> Josh,
>
> Thanks a lot for your reply. I will browse that code thoroughly, but let
> me just ask you a few questions about the audio buffers. The specification
> says that it would be expected that the loaded sounds would be fairly short
> (less than one minute), saying that longer sounds should be loaded using
> HTML5 audio elements.
>

The spec gives good advice here. On a well-equipped desktop machine this usually isn't a problem, but you can't count on users having one. One minute of decoded audio held in memory is 4 bytes (Web Audio represents samples as 32-bit floats) * 60 seconds * 48,000 samples per second * 2 channels (the usual stereo setup), so about 22 MB. That doesn't sound like much, but if you keep complete songs decoded in memory, in a tracker for example, it will quickly add up to more than the user's available memory.
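For reference, the arithmetic above written out directly (assuming the typical 48 kHz stereo setup):

```javascript
// Memory footprint of one minute of decoded audio in an AudioBuffer.
// Web Audio stores samples as 32-bit floats, i.e. 4 bytes each.
const bytesPerSample = 4;
const seconds = 60;
const sampleRate = 48000; // common hardware rate
const channels = 2;       // stereo

const bytes = bytesPerSample * seconds * sampleRate * channels;
console.log((bytes / (1024 * 1024)).toFixed(1) + " MB"); // ≈ 22.0 MB
```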

My suggestion is that you file a bug against the browser where this behavior occurs. I suspect the decoder is being "lazy" about seeking: most audio formats store audio in frames, and the seek only lands on the nearest frame boundary instead of being sample-accurate, which gives you unpredictable results when syncing. That shouldn't be a very difficult problem to fix on the decoder side, though.
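In the meantime, one way to get sample-accurate positioning is to decode into an AudioBuffer and "seek" by starting a fresh AudioBufferSourceNode at an offset. A minimal sketch (the `ctx` AudioContext and decoded `buffer` are assumed to exist already):

```javascript
// Sample-accurate "seek": since AudioBufferSourceNodes are one-shot,
// each seek/play creates a new node and starts it at the wanted offset.
function seekTo(ctx, buffer, offsetSeconds) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  // start(when, offset): begin playing now, offsetSeconds into the buffer
  source.start(ctx.currentTime, offsetSeconds);
  return source; // keep a reference so it can be stopped later
}
```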

> Do you have any performance problems when loading multiple tracks
> containing longer sounds? What do you think would be the limit in number of
> tracks VS. track length?
> Also, the documentation says that start() and stop() methods should be
> used just once. Was this an issue? (I'm pretty sure I will answer this
> question myself as soon as I look through the code).
>

You can reuse the same AudioBuffer instance across multiple
AudioBufferSourceNodes, which handle the actual playback; only the source
nodes are single-use.
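For example, a sketch of that pattern (the `ctx` AudioContext and a decoded `audioBuffer` are assumed):

```javascript
// Decode once, then play the same AudioBuffer as many times as needed.
// Each playback gets its own AudioBufferSourceNode, since start() may
// only be called once per node.
function makePlayer(ctx, audioBuffer) {
  return function play(when = 0, offset = 0) {
    const source = ctx.createBufferSource();
    source.buffer = audioBuffer; // shared buffer, decoded once
    source.connect(ctx.destination);
    source.start(when, offset);
    return source;
  };
}
```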

Cheers,
Jussi


> Thanks again,
> --
> Eduardo Bouças
>
>
> On Mon, Aug 12, 2013 at 12:33 AM, Josh Nielsen <josh@joshontheweb.com> wrote:
>
>> Eduardo,
>> I built something similar for Soundkeep. You can see an example at
>> http://soundkeep.com/joshontheweb/dah.  We had endless issues trying to
>> sync the audio using the audio tags for playback.  I recommend not using
>> audio tags and decoding all the audio data yourself and using buffer and
>> source nodes for playback.  The code is unminified at the moment and you
>> should be able to get an idea of how it works if you browse the track.js
>> and track_view.js files.
>>
>>
>> On Fri, Aug 9, 2013 at 2:59 PM, Eduardo Bouças <mail@eduardoboucas.com> wrote:
>>
>>> Hi everyone,
>>>
>>> As a final project for my masters degree in Web Development, I'm
>>> developing a collaborative audio recording platform for musicians
>>> (something like a cloud DAW married with GitHub).
>>> In a nutshell, a session (song) is made of a series of audio tracks,
>>> encoded in AAC and played through HTML5 <audio> elements. Each track is
>>> connected to the Web Audio API through a MediaElementAudioSourceNode and
>>> routed through a series of nodes (gain and pan, at the moment) until the
>>> destination. So far so good. I am able to play them in sync, pause, stop
>>> and seek with no problems at all, and successfully implemented the usual
>>> mute, solo functionalities of the common DAW, as well as waveform
>>> visualization and navigation. This is the playback part.
>>>
>>> As for the recording part, I connected the output from getUserMedia() to
>>> a MediaStreamAudioSourceNode, which is then routed to a ScriptProcessorNode
>>> that writes the recorded buffer to an array, using a web worker — I had to
>>> come up with a sort of delay compensation mechanism, because I was getting
>>> a slight latency when playing back the recorded audio.
>>> When the recording process ends, the recorded buffer is written into a
>>> PCM wave file and uploaded to the server, but at the same time hooked up to
>>> an <audio> element for immediate playback (otherwise I would have to wait
>>> for the wav file to be uploaded to the server to be available). Here is the
>>> problem: I can play the recorded track in perfect sync with the previous
>>> ones, but I can't seek properly. If I change the currentTime property of
>>> the newly recorded track, it becomes messy and terribly out of sync.
>>>
>>> Does anyone have any idea of what may be causing this? Is there any
>>> other useful information I can provide?
>>>
>>> Thank you in advance and congratulations for your wonderful effort of
>>> bringing audio to the web.
>>>
>>> --
>>> Eduardo Bouças
>>>
>>
>>
>>
>> --
>> Thanks,
>> Josh Nielsen
>> @joshontheweb <http://twitter.com/joshontheweb>
>> joshontheweb.com
>>
>
>
Received on Saturday, 17 August 2013 17:50:23 UTC
