- From: Eduardo Bouças <mail@eduardoboucas.com>
- Date: Mon, 12 Aug 2013 20:06:38 +0100
- To: Josh Nielsen <josh@joshontheweb.com>
- Cc: "public-audio@w3.org" <public-audio@w3.org>
- Message-ID: <CAGTW=VFuMUmVYW=pDBjQ3+NpjZRpXZCYp_Gced21D8gkSU_deA@mail.gmail.com>
Josh,

Thanks a lot for your reply. I will browse that code thoroughly, but let me just ask you a few questions about the audio buffers. The specification says that loaded sounds are expected to be fairly short (less than one minute), and that longer sounds should be played through HTML5 audio elements. Have you run into performance problems when loading multiple tracks containing longer sounds? What do you think the limit is in terms of number of tracks vs. track length?

Also, the documentation says that the start() and stop() methods should be used only once. Was this an issue? (I'm pretty sure I will answer this question myself as soon as I look through the code.)

Thanks again,

--
Eduardo Bouças


On Mon, Aug 12, 2013 at 12:33 AM, Josh Nielsen <josh@joshontheweb.com> wrote:

> Eduardo,
>
> I built something similar for Soundkeep. You can see an example at
> http://soundkeep.com/joshontheweb/dah. We had endless issues trying to
> sync the audio using audio tags for playback. I recommend not using
> audio tags, and instead decoding all the audio data yourself and using
> buffer and source nodes for playback. The code is unminified at the
> moment, and you should be able to get an idea of how it works if you
> browse the track.js and track_view.js files.
>
>
> On Fri, Aug 9, 2013 at 2:59 PM, Eduardo Bouças <mail@eduardoboucas.com> wrote:
>
>> Hi everyone,
>>
>> As a final project for my master's degree in Web Development, I'm
>> developing a collaborative audio recording platform for musicians
>> (something like a cloud DAW married with GitHub).
>>
>> In a nutshell, a session (song) is made up of a series of audio tracks,
>> encoded in AAC and played through HTML5 <audio> elements. Each track is
>> connected to the Web Audio API through a MediaElementAudioSourceNode and
>> routed through a series of nodes (gain and pan, at the moment) to the
>> destination. So far so good. I am able to play them in sync, pause, stop
>> and seek with no problems at all, and I have successfully implemented the
>> usual mute and solo functionality of the common DAW, as well as waveform
>> visualization and navigation. This is the playback part.
>>
>> As for the recording part, I connect the output from getUserMedia() to
>> a MediaStreamAudioSourceNode, which is then routed to a ScriptProcessorNode
>> that writes the recorded buffer to an array, using a web worker. I had to
>> come up with a sort of delay compensation mechanism, because I was getting
>> a slight latency when playing back the recorded audio.
>>
>> When the recording process ends, the recorded buffer is written into a
>> PCM wave file and uploaded to the server, but at the same time it is hooked
>> up to an <audio> element for immediate playback (otherwise I would have to
>> wait for the wav file to be uploaded to the server before it became
>> available). Here is the problem: I can play the recorded track in perfect
>> sync with the previous ones, but I can't seek properly. If I change the
>> currentTime property of the newly recorded track, it becomes messy and
>> terribly out of sync.
>>
>> Does anyone have any idea of what may be causing this? Is there any other
>> useful information I can provide?
>>
>> Thank you in advance, and congratulations on your wonderful effort to
>> bring audio to the web.
>>
>> --
>> Eduardo Bouças
>
>
> --
> Thanks,
> Josh Nielsen
> @joshontheweb <http://twitter.com/joshontheweb>
> joshontheweb.com
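[Editor's note: Josh's suggestion — decoding the audio yourself and playing it through buffer and source nodes — could be sketched roughly as below. The helper names `startOffset` and `playAllFrom` are illustrative, not from Soundkeep's code. It also answers Eduardo's start()/stop() question: each AudioBufferSourceNode is indeed one-shot, so seeking means discarding the old sources and creating fresh ones at the new offset.]

```javascript
// Pure helper: clamp a seek position to a track's length (in seconds),
// so seeking past the end of a short track simply plays nothing from it.
function startOffset(position, duration) {
  return Math.min(Math.max(position, 0), duration);
}

// Create one fresh AudioBufferSourceNode per decoded track and start them
// all at the same context time, so the tracks stay sample-synchronous.
// Keep the returned sources so the next seek can stop() and discard them.
function playAllFrom(ctx, buffers, position) {
  const when = ctx.currentTime + 0.1; // small scheduling margin
  return buffers.map((buffer) => {
    const source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start(when, startOffset(position, buffer.duration));
    return source;
  });
}
```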
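[Editor's note: Eduardo doesn't describe his delay compensation mechanism, but one common approach is to trim a measured latency's worth of samples from the front of the recording so the take lines up with the backing tracks. A minimal sketch, with an assumed latency figure and a helper name of my own:]

```javascript
// Drop the first `latencySeconds` of recorded samples (illustrative, not
// Eduardo's actual code). Returns a view into the same underlying buffer.
function trimLatency(samples, sampleRate, latencySeconds) {
  const skip = Math.min(Math.round(latencySeconds * sampleRate), samples.length);
  return samples.subarray(skip);
}

// e.g. at 44.1 kHz, compensating 20 ms drops the first 882 samples:
// const aligned = trimLatency(recorded, 44100, 0.02);
```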
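[Editor's note: writing the recorded buffer into a PCM wave file, as Eduardo describes, amounts to prepending a 44-byte RIFF header to the samples converted to 16-bit integers. A minimal mono sketch — the function name is mine, and real code would handle stereo interleaving:]

```javascript
// Minimal 16-bit mono PCM WAV encoder (illustrative sketch).
function encodeWav(samples, sampleRate) {
  const buffer = new ArrayBuffer(44 + samples.length * 2);
  const view = new DataView(buffer);
  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  writeString(0, "RIFF");
  view.setUint32(4, 36 + samples.length * 2, true); // RIFF chunk size
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // audio format: PCM
  view.setUint16(22, 1, true);              // channels: mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate (mono, 16-bit)
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeString(36, "data");
  view.setUint32(40, samples.length * 2, true);
  // Clamp the Float32 samples to [-1, 1] and scale to signed 16-bit.
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return buffer;
}
```

The resulting ArrayBuffer can be wrapped in a Blob and handed to an `<audio>` element via an object URL for immediate local playback while the upload proceeds.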
Received on Monday, 12 August 2013 19:07:36 UTC