- From: Eric Wescott <wescotte@gmail.com>
- Date: Tue, 23 Sep 2014 08:06:23 -0500
- To: public-audio@w3.org
- Message-ID: <5421704F.6010709@gmail.com>
I'm creating a subtitle creation tool <https://sourceforge.net/projects/quicksubtitles/> that allows the user to enter subtitles and sync them to video as it plays. I would like to add a feature that displays the next 5 or so seconds of audio data as a waveform, to help the user fine-tune the syncing process and anticipate when to set an in point.

I've tried using an AnalyserNode, but it seems to be limited to 2048 samples, which is about 120x too small. If I use a ScriptProcessorNode and listen for "audioprocess" events, I can increase this to 16k, but again that's far too small a buffer to be useful for my case.

I've also experimented with creating the entire waveform up front <http://jsfiddle.net/daofw1g3/8/>, which works but unfortunately has issues with large files or longer audio clips. I found that if the video file is > 1 GB, the File API dispatches an "error" event when I call readAsArrayBuffer(). So I thought perhaps I have to have the user isolate the audio from the video, but that has its own issues as well. I tested it with an 80-minute, 35 MB MP3; the ArrayBuffer is created, but then decodeAudioData() never invokes its callback. I assume both cases involve a memory limit?

Can anybody point me in the right direction on how I can access a large buffer of samples to provide the user with a 5-second-or-so waveform preview?

Thanks
Eric
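For what it's worth, once you do have decoded PCM in hand (e.g. a Float32Array from AudioBuffer.getChannelData), rendering the 5-second preview is just a matter of slicing out the window ahead of the playhead and reducing it to one min/max pair per horizontal pixel. Here's a minimal sketch of that reduction; the function name `waveformPeaks` and the parameters `windowSec` / `pixelWidth` are my own illustrative names, not part of any API:

```javascript
// Sketch: given decoded PCM samples (a Float32Array, as returned by
// AudioBuffer.getChannelData), extract the `windowSec` seconds starting
// at `currentTime` and reduce them to [min, max] peak pairs, one pair
// per horizontal pixel of the preview canvas.
function waveformPeaks(samples, sampleRate, currentTime, windowSec, pixelWidth) {
  const start = Math.floor(currentTime * sampleRate);
  const end = Math.min(samples.length, start + Math.floor(windowSec * sampleRate));
  const window = samples.subarray(start, end); // view, no copy
  const bucketSize = Math.max(1, Math.floor(window.length / pixelWidth));
  const peaks = [];
  for (let px = 0; px < pixelWidth; px++) {
    const from = px * bucketSize;
    if (from >= window.length) break;
    const to = Math.min(window.length, from + bucketSize);
    let min = Infinity, max = -Infinity;
    for (let i = from; i < to; i++) {
      if (window[i] < min) min = window[i];
      if (window[i] > max) max = window[i];
    }
    // Draw a vertical line from min to max at x = px on the canvas.
    peaks.push([min, max]);
  }
  return peaks;
}
```

At 44.1 kHz, 5 seconds is 220,500 samples, so an 800-pixel-wide canvas works out to roughly 275 samples per pixel; the loop above is cheap enough to rerun on every video timeupdate. Of course, this still presumes you can get the decoded samples in the first place, which is the part I'm stuck on for large files.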
Received on Tuesday, 23 September 2014 19:41:57 UTC