[mediacapture-main] Proposal: DecodeConcatVideoData (input multiple files or streams) => Output: single webm file (#575)

guest271314 has just created a new issue for https://github.com/w3c/mediacapture-main:

== Proposal: DecodeConcatVideoData (input multiple files or streams) => Output: single webm file ==

Web Audio API provides the ability to [`decodeAudioData`](https://webaudio.github.io/web-audio-api/#dom-baseaudiocontext-decodeaudiodata), where the result is a single [`AudioBuffer`](https://webaudio.github.io/web-audio-api/#audiobuffer). `AudioBuffer`s can be concatenated into a single `AudioBuffer`; see [merging / layering multiple ArrayBuffers into one AudioBuffer using Web Audio API](https://stackoverflow.com/a/18920291):

> I'm not sure I totally understand your scenario - don't you want these to be playing simultaneously? (i.e. bass gets layered on top of the drums).
> 
> Your current code is trying to concatenate an additional audio file whenever you hit the button for that file. You can't just concatenate audio files (in their ENCODED form) and then run it through decode - the decodeAudioData method is decoding the first complete sound in the arraybuffer, then stopping (because it's done decoding the sound).
> 
> What you should do is change the logic to concatenate the buffer data from the resulting AudioBuffers (see below). Even this logic isn't QUITE what you should do - this is still caching the encoded audio files, and decoding every time you hit the button. Instead, you should cache the decoded audio buffers, and just concatenate it.
> 
> ```
> function startStop(index, name, isPlaying) {
> 
>     // Note we're decoding just the new sound
>     context.decodeAudioData( bufferList[index], function(buffer){
>         // We have a decoded buffer - now we need to concatenate it
>         // onto any previously decoded audio
>         if (!audioBuffer) {
>             audioBuffer = buffer;
>         } else {
>             audioBuffer = concatenateAudioBuffers(audioBuffer, buffer);
>         }
> 
>         play();
>     })
> }
> 
> function concatenateAudioBuffers(buffer1, buffer2) {
>     if (!buffer1 || !buffer2) {
>         console.log("no buffers!");
>         return null;
>     }
> 
>     if (buffer1.numberOfChannels != buffer2.numberOfChannels) {
>         console.log("number of channels is not the same!");
>         return null;
>     }
> 
>     if (buffer1.sampleRate != buffer2.sampleRate) {
>         console.log("sample rates don't match!");
>         return null;
>     }
> 
>     var tmp = context.createBuffer(buffer1.numberOfChannels, buffer1.length + buffer2.length, buffer1.sampleRate);
> 
>     for (var i=0; i<tmp.numberOfChannels; i++) {
>         var data = tmp.getChannelData(i);
>         data.set(buffer1.getChannelData(i));
>         data.set(buffer2.getChannelData(i),buffer1.length);
>     }
>     return tmp;
> }
> ```

The resulting concatenated `AudioBuffer` can then be played back.
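
The `play()` called in the quoted snippet is not defined in the answer; a minimal version, assuming the `context` and `audioBuffer` variables from the quote, could be:

```
// Minimal playback sketch; `context` and `audioBuffer` are the variables
// from the quoted answer above.
function play() {
  const source = context.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(context.destination);
  source.start(0);
}
```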

Concatenating multiple `MediaStream`s into a single resulting `webm` file using `MediaRecorder` is not straightforward, though it is possible using `canvas.captureStream()`, `AudioContext.createMediaStreamDestination()`, and `MediaRecorder()`; see [MediaStream Capture Canvas and Audio Simultaneously](https://stackoverflow.com/a/39302994), [How to use Blob URL, MediaSource or other methods to play concatenated Blobs of media fragments?](https://stackoverflow.com/a/45343042), and https://github.com/guest271314/MediaFragmentRecorder/blob/canvas-webaudio/MediaFragmentRecorder.html; and/or using `MediaSource()`, see https://github.com/guest271314/MediaFragmentRecorder/tree/master (there is a Chromium bug with this approach, see https://github.com/w3c/media-source/issues/190).
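
A rough sketch of that workaround, assuming a hypothetical list of same-origin input `urls`, VP8/Opus support in `MediaRecorder`, and that autoplay policies permit `video.play()`:

```
(async () => {
  // Each clip is painted onto one canvas and its audio mixed into one
  // MediaStreamAudioDestinationNode, so MediaRecorder sees a single
  // continuous MediaStream across all inputs.
  const urls = ["a.webm", "b.mp4"]; // hypothetical inputs
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  const audioContext = new AudioContext();
  const destination = audioContext.createMediaStreamDestination();
  const stream = canvas.captureStream(60);
  stream.addTrack(destination.stream.getAudioTracks()[0]);
  const recorder = new MediaRecorder(stream, {
    mimeType: "video/webm;codecs=vp8,opus"
  });
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  const stopped = new Promise(resolve => {
    recorder.onstop = () => resolve(new Blob(chunks, {type: "video/webm"}));
  });
  recorder.start();
  for (const url of urls) {
    const video = document.createElement("video");
    video.src = url;
    await new Promise(r => video.onloadedmetadata = r);
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    const source = audioContext.createMediaElementSource(video);
    source.connect(destination);
    await video.play();
    // Paint frames until this clip ends, then move to the next input
    await new Promise(resolve => {
      (function draw() {
        ctx.drawImage(video, 0, 0);
        if (video.ended) return resolve();
        requestAnimationFrame(draw);
      })();
    });
    source.disconnect();
  }
  recorder.stop();
  console.log(await stopped); // single `webm` `Blob`
})();
```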

This proposal is for an API which accepts an `Array` of multiple static files (potentially having different encodings/file extensions) and/or `MediaStream`s, and outputs a single `webm` file (as a `Blob`), potentially "transcoded" ([the process of converting a media asset from one codec to another](https://w3c.github.io/webmediaguidelines/#dfn-transcoding)) in sequence to a "Mezzanine" file ([4.1 Create a Mezzanine File](https://w3c.github.io/webmediaguidelines/#create-a-mezzanine-file); see also [Scalable Video Coding (SVC) Extension for WebRTC](https://w3c.github.io/webrtc-svc/) and https://github.com/w3c/mediacapture-record/issues/4) that is seekable (see https://bugs.chromium.org/p/chromium/issues/detail?id=642012).

For example:

```
(async () => {
  try {
    // Resolves with a single `webm` `Blob` concatenating the inputs in sequence
    const webmVideo = await DecodeConcatVideoData([
      "file.webm#t=5,10" // media fragment: seconds 5 through 10
      , video.captureStream()
      , canvas.captureStream()
      , audioContextDestination.stream.getAudioTracks()[0]
    ]);
    console.log(webmVideo); // `Blob`
  } catch (e) {
    console.error(e);
  }
})();
```
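
In terms of a signature, the proposal amounts to something like the following (the parameter name is illustrative, not an existing API):

```
/**
 * Hypothetical signature for the proposed API; implemented by the user agent.
 * @param {Array<string|MediaStream|MediaStreamTrack>} inputs - URLs (optionally
 *   with media-fragment identifiers), `MediaStream`s, and/or tracks
 * @returns {Promise<Blob>} a single, seekable `video/webm` `Blob`
 */
async function DecodeConcatVideoData(inputs) {}
```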

Please view or discuss this issue at https://github.com/w3c/mediacapture-main/issues/575 using your GitHub account

Received on Monday, 25 March 2019 20:16:43 UTC