Re: [whatwg/fetch] Expose Stream Watermarks (#689)

It is currently possible to store `ImageData` or `ImageBitmap` (video frames) and `AudioBuffer` (audio) objects at the server as raw data, request N chunks of `ImageBitmap` and `AudioBuffer` pairs (as "media segments"; see https://github.com/whatwg/html/pull/2814, https://github.com/w3c/ServiceWorker/issues/913, and https://stackoverflow.com/q/45024915), and stream the presentation by painting each image onto a `<canvas>` and playing the audio through the Web Audio API, with both combined into a single `MediaStream` set as the `srcObject` of a single `<video>` element. This requires either creating the media segments which will be requested beforehand, or parsing the file client side to extract the video and audio tracks, then setting those tracks at a `MediaStream` or `MediaSource` (https://dev.w3.org/html5/html-sourcing-inband-tracks/).
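
A minimal sketch of the canvas plus Web Audio approach, assuming a hypothetical `fetchSegment(n)` request, a segment shape of `{ imageBitmap, audioBuffer, duration }`, and a segment count `N` (none of these names come from the thread):

```js
// Minimal sketch: paint fetched ImageBitmap frames to a <canvas>, play the
// decoded AudioBuffers through Web Audio, and merge both into one
// MediaStream set as srcObject of a single <video>.
// fetchSegment(), N, and the segment shape are assumptions for illustration.
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");
const ac = new AudioContext();
const audioDestination = ac.createMediaStreamDestination();

// One MediaStream carrying the canvas video track and the Web Audio track
const mediaStream = new MediaStream([
  ...canvas.captureStream().getVideoTracks(),
  ...audioDestination.stream.getAudioTracks(),
]);
document.querySelector("video").srcObject = mediaStream;

async function playSegment({ imageBitmap, audioBuffer, duration }) {
  const source = ac.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioDestination);
  source.start();
  ctx.drawImage(imageBitmap, 0, 0, canvas.width, canvas.height);
  // Hold the painted frame for the segment's duration (seconds)
  await new Promise((resolve) => setTimeout(resolve, duration * 1000));
}

(async () => {
  const N = 10; // number of media segments to request (assumption)
  for (let n = 0; n < N; n++) {
    await playSegment(await fetchSegment(n)); // hypothetical chunk request
  }
})();
```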

Have not yet achieved creating a `.webm` file containing both audio and video tracks from the video frames and audio buffers as a distinct media segment at the client side alone, without first using `MediaRecorder` to create a single `.webm` file from one or more of the media types which the particular browser can play at a `<video>` element (https://github.com/guest271314/MediaFragmentRecorder). It should, though, also be possible to add audio tracks to the `.webm` file created at https://github.com/thenickdude/webm-writer-js, where the resulting file can be passed as an `ArrayBuffer` to `appendBuffer()` of a `MediaSource` `SourceBuffer`.
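
A minimal sketch of that `MediaSource` path, where `webmBlob` stands in for the `Blob` produced by `MediaRecorder` or webm-writer-js, and the VP8/Vorbis codec string is an assumption that must match the tracks actually written into the file:

```js
// Minimal sketch: feed a client-created .webm file to a <video> via
// MediaSource. webmBlob and the codec string are assumptions; the codecs
// must match the actual file contents.
const video = document.querySelector("video");
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
  const arrayBuffer = await new Response(webmBlob).arrayBuffer();
  sourceBuffer.addEventListener("updateend", () => mediaSource.endOfStream(), {
    once: true,
  });
  sourceBuffer.appendBuffer(arrayBuffer);
});
```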

-- 
https://github.com/whatwg/fetch/issues/689#issuecomment-379501971