W3C home > Mailing lists > Public > public-webrtc-logs@w3.org > June 2019

Re: [mediacapture-record] Why is start 5.3 in the specification? (#168)

From: guest271314 via GitHub <sysbot+gh@w3.org>
Date: Fri, 07 Jun 2019 00:00:18 +0000
To: public-webrtc-logs@w3.org
Message-ID: <issue_comment.created-499711480-1559865617-sysbot+gh@w3.org>
@alvestrand For clarity, the use case is not necessarily to record and _write_ multiple individual tracks, but to write a single video track, which is already possible for audio using `createMediaStreamSource(new MediaStream([audioTrack])).connect(destination)` or `createMediaStreamTrackSource(audioTrack).connect(destination)`. The same functionality should be possible for video tracks, without having to use `requestAnimationFrame`, the Web Animations API, `ReadableStream`, etc.
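As a minimal sketch of the existing audio path (browser-only; `recordSingleAudioTrack` and its callback are illustrative names, not part of any specification):

```javascript
// Hypothetical helper: route a single audio MediaStreamTrack through
// Web Audio and record only that track with MediaRecorder.
// AudioContext, MediaStream, MediaStreamAudioDestinationNode, and
// MediaRecorder are assumed to exist at call time (i.e., in a browser).
function recordSingleAudioTrack(audioTrack, onStop) {
  const ctx = new AudioContext();
  // Wrap the lone track in a MediaStream for createMediaStreamSource().
  const source = ctx.createMediaStreamSource(new MediaStream([audioTrack]));
  const destination = ctx.createMediaStreamDestination();
  source.connect(destination);
  const recorder = new MediaRecorder(destination.stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => onStop(new Blob(chunks, { type: recorder.mimeType }));
  recorder.start();
  return recorder; // caller invokes recorder.stop() when done
}
```

The point of the issue is that no analogous single-call path exists for a lone video track.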


Re:

> I am not aware of any change in the landscape of container formats that seems to indicate that varying the number of tracks is a generally available option. If you know of such changes, please provide references.

Is the concern variable video width and height, audio channel count, playback rate, etc.?

What exactly do you mean by "varying the number of tracks"?

From a cursory point of view, one of the WebM VOD patterns (http://wiki.webmproject.org/adaptive-streaming/webm-vod-baseline-format) should be able to meet the requirement?

Have been considering composing a very basic media container format (not necessarily with compression as the model, but creation in modern FOSS browsers using only the APIs shipped with those browsers), where every 1 second of media is a "packet" of _N_ (e.g., 25; 30; 60) images and 1 or more audio segments:

- the images can have any `width` or `height`;
- any individual image "packet" playback rate can vary relative to the adjacent "packet" or "media chunk";
- each audio segment can have any playback rate;
- the individual media segments can be re-arranged in any order, with `currentTime` reflected in output relative to the adjacent media chunks, if any, while a note about the original source segment slice `currentTime` can also be included in the data;
- either can be rendered without breaking the arbitrary playback sequence, in essence allowing the possibility of _multiple_ `currentTime` and/or `duration` values;
- all created using the Web Audio API, MediaStream Recording API (as a pipe to create media chunks and files), and Media Capture and Streams API;
- capable of being "streamed" as a file, i.e., `video/x-browser-created-media` (`.browsercreatedmedia`), over HTTP, from the browser, to the browser, using various means;
- at its simplest, an array of arrays `[audio, video]`, with the ability to add text, etc.
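A rough sketch of that array-of-arrays packet idea (every name here — `makePacket`, `timeline`, the field names, the 1-second grouping — is an illustrative assumption, not a proposed standard):

```javascript
// Hypothetical packet builder for the sketched container: each packet holds
// one second of media as N image frames plus one or more audio segments.
// Frames and audio segments are treated as opaque payloads (e.g., Blobs in
// a browser); only the bookkeeping fields matter for this sketch.
function makePacket(images, audioSegments, { playbackRate = 1, sourceCurrentTime = 0 } = {}) {
  return {
    // [audio, video] ordering, per the simple array-of-arrays idea
    media: [audioSegments, images],
    frameCount: images.length,   // e.g., 25, 30, or 60 frames per second
    playbackRate,                // may vary relative to adjacent packets
    sourceCurrentTime,           // note of the original segment's currentTime
  };
}

// Packets can be re-arranged in any order; the output currentTime is then
// just the running sum of each preceding packet's scaled duration.
function timeline(packets) {
  let t = 0;
  return packets.map((p) => {
    const entry = { packet: p, currentTime: t };
    t += 1 / p.playbackRate; // each packet nominally spans 1 second
    return entry;
  });
}
```

Re-ordering the input array and re-running `timeline` is all that "arbitrary playback sequence" would require at this level of abstraction.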

Ultimately trying to do something like 

`$ mkvmerge -w -o int_all.webm int.webm + int1.webm`

for static files and/or live "media streams".

Not a novel concept.

This would be unnecessary if the media encoder, media decoder, WebM writer, and web media player internals were exposed as APIs, letting users decode, encode, and write their own media without the boundaries of the specifications or implementations of the day.

FWIW, the WebRTC and Media Capture and Streams APIs are ingenious :+1:

-- 
GitHub Notification of comment by guest271314
Please view or discuss this issue at https://github.com/w3c/mediacapture-record/issues/168#issuecomment-499711480 using your GitHub account
Received on Friday, 7 June 2019 00:00:20 UTC

This archive was generated by hypermail 2.3.1 : Friday, 7 June 2019 00:00:20 UTC