- From: guest271314 via GitHub <sysbot+gh@w3.org>
- Date: Tue, 26 Mar 2019 18:52:17 +0000
- To: public-webrtc-logs@w3.org
@Pehrsons

> it seems to me that you want to encode the three streams separately and concat them into one webm file.

Yes. That is the concept. The reason that `MediaRecorder` is used at all is the lack of an API for video similar to `AudioContext.decodeAudioData()`, which returns an `AudioBuffer` that can be concatenated with other `AudioBuffer`s. Using `OfflineAudioContext()`, the audio media does not need to be played back (audibly) in order to get and concatenate the `AudioBuffer`s.

> Now that can be done by recording them separately, and remuxing in js, no?
> This assumes of course, that all the recordings have the same number and type of tracks, and codecs.

That is another reason for using `MediaRecorder`: to create uniform `.webm` files. Notice that the media files in the `urls` variable at https://github.com/guest271314/MediaFragmentRecorder/blob/master/MediaFragmentRecorder.html have different extensions, which is intentional. **The concept itself (concatenating media fragments) was inspired by [A Shared Culture](https://creativecommons.org/about/videos/a-shared-culture) and [Jesse Dylan](https://mirrors.creativecommons.org/movingimages/webm/ScienceCommonsJesseDylan_240p.webm).**

Ideally, media playback should not be necessary at all, if there were a `decodeVideoData()` function that performed similarly to `.decodeAudioData()`, and an `OfflineVideoContext()` similar to the `startRendering()` functionality of `OfflineAudioContext()` (potentially incorporating `OffscreenCanvas()`), which

> doesn't render the [video] to the device hardware; instead, it generates it, as fast as it can

Relevant to

> Looking at your proposal as a way to support multiple tracks, I'm not sure it's the right fix. For one, it doesn't handle tracks that start or end in parallel to other tracks.
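The `AudioBuffer` concatenation described above can be reduced to joining per-channel `Float32Array` data; a minimal sketch of that core step (the `AudioBuffer` copy-back shown in comments is what one would do in a browser after `decodeAudioData()` — it is not part of the runnable core):

```javascript
// Join two decoded mono channels end to end. In a browser, `a` and `b`
// would come from AudioBuffer.getChannelData(0) after
// AudioContext.decodeAudioData(); here they are plain Float32Arrays.
function concatChannel(a, b) {
  const out = new Float32Array(a.length + b.length);
  out.set(a, 0);
  out.set(b, a.length);
  return out;
}

// Browser-side copy-back (sketch, assuming a live AudioContext `ctx`):
//   const buf = ctx.createBuffer(1, out.length, ctx.sampleRate);
//   buf.copyToChannel(out, 0);
// The joined buffer can then be rendered with OfflineAudioContext
// without audible playback.
```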
The concept is to create the necessary file structure - in parallel - then, if necessary, re-"scan" the file structure to insert the timestamps (cues); Chromium does not include cues in a recorded `webm` file, while Firefox does. Consider an array of integers or decimals that is "equalized".

The functionality of the feature request should "work" for both streams and static files, or combinations of the two, where the resulting `webm` file is in sequence irrespective of when the discrete stream or file is decoded -> read -> encoded, similar to the functionality of `Promise.all()`, where the `Promise` at index N could be fulfilled or "settled" before the `Promise` at index `0`.

The feature request is somewhat challenging to explain, as there is more than one use case and more than one way in which the API could be used. Essentially, it is a recorder that records (streams and/or static files, e.g., acquired using `fetch()`) in parallel, without becoming inactive, while writing the data, then (and/or "on the fly") "equalizing" the data, resulting in a single output: a single `webm` file.

Perhaps the feature request/proposal could be divided into several proposals:

- `decodeVideoData()` => `VideoBuffer`
- `OfflineVideoContext()` and [`startRendering()`](https://webaudio.github.io/web-audio-api/#OfflineAudioContext-methods) for video => without using `HTMLMediaElement` playback, generating a `VideoBuffer` "as fast as it can"
- `OfflineMediaRecorder()` => combining the functionality of the above
- Extending `MediaRecorder()` and `MediaStream()` => to perform the functionality above, essentially similar to `MediaSource` functionality

though since all of those features surround the same subject matter, a single API could be created which incorporates all of that functionality.

> Since there's so little consensus on supporting multiple tracks (other than if they're there at the start), I think this kind of fairly specific use-case fixes will have an even harder time to fly.
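The `Promise.all()` analogy above can be sketched directly: the result array preserves input order even when a later entry settles first, which is the ordering guarantee the proposal wants for concatenated media fragments (the `delayed` helper and the timings are illustrative, not from the original):

```javascript
// Hypothetical helper: resolve `value` after `ms` milliseconds.
const delayed = (value, ms) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function demo() {
  // The promise at index 1 settles well before the one at index 0,
  // yet Promise.all() reports results in input order.
  const results = await Promise.all([
    delayed("fragment 0", 50),
    delayed("fragment 1", 5),
  ]);
  return results; // ["fragment 0", "fragment 1"]
}
```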
Yes, I gather that there is no consensus on supporting multiple tracks. The presentation "Real time front-end alchemy" by Soledad Penadés may have touched on that topic. She also made several important points:

> so we need people to have weird new ideas ... we need more ideas to break it and make it better
>
> Use it
> Break it
> File bugs
> Request features

What this proposal is attempting to posit is that improvements can be made as to concatenating media streams and static files having differing codecs. If that means a new proposal for a `webm` writer that can decode => read => encode any input stream or static file, that is what this feature request proposes.

--
GitHub Notification of comment by guest271314
Please view or discuss this issue at https://github.com/w3c/mediacapture-record/issues/166#issuecomment-476797195 using your GitHub account
Received on Tuesday, 26 March 2019 18:52:18 UTC