Re: [mediacapture-record] Ability to record non-realtime / frame-by-frame (#213)

My use case is rendering video projects in a video editing app that is based on canvas and WebGL.

In general the flow is: prepare the WebGL stage and render it, capture the frame somehow, go to the next frame, repeat. (This can, and should, happen faster than the duration of the final video itself, i.e. it should be possible to export a 2-minute-long video in 10 seconds if performance allows it.)

Currently I use the readPixels API, but it is the biggest bottleneck of the rendering pipeline, as it requires pixels to be sent from the GPU to the CPU. It takes ~90% of the render time, which is quite surprising: all the WebGL effects, blur filters, etc. take 10% of the time, and 90% of it is needed only to capture the pixel data.
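For reference, a minimal sketch of what this capture step looks like (the `renderFrame` and `encodeFrame` callbacks are hypothetical placeholders for the app's own render step and encoder hand-off; only the `gl.readPixels` call is the point here):

```ts
// Sketch of the current per-frame capture. renderFrame and encodeFrame are
// placeholders for the app's own pipeline, not real APIs from this project.
function captureWithReadPixels(
  gl: WebGL2RenderingContext,
  renderFrame: (frame: number) => void,
  encodeFrame: (rgba: Uint8Array) => void,
  frame: number,
) {
  renderFrame(frame); // WebGL effects, blur filters, etc. (~10% of render time)

  // readPixels stalls until all queued GPU work for this frame has finished,
  // then copies the framebuffer from GPU to CPU (~90% of render time).
  const rgba = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
  gl.readPixels(
    0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight,
    gl.RGBA, gl.UNSIGNED_BYTE, rgba,
  );

  encodeFrame(rgba); // hand the raw RGBA frame to the encoder
}
```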

Thus I was trying to find another method that avoids that, and https://stackoverflow.com/questions/58907270/record-at-constant-fps-with-canvascapturemediastream-even-on-slow-computers/58969196#58969196 looked very promising: I hoped to create a MediaRecorder that I manually feed frame by frame, as quickly as I possibly can, where each frame is 1/FPS long.

I created my recorder like this:

```ts
import { waitForElementEvent } from "@s/shared/dom/events"; // project helper: promise that resolves when the target fires the given event
import { wait } from "@s/shared/time"; // project helper: promise that resolves after the given number of milliseconds

const waitForRecorderEvent = waitForElementEvent<keyof MediaRecorderEventMap>;

export function createCanvasRecorder(source: HTMLCanvasElement, fps: number) {
  // Mirror the source canvas into a 2D canvas we fully control, so frames are
  // only pushed into the stream when we explicitly ask for them.
  const target = source.cloneNode() as HTMLCanvasElement;
  const ctx = target.getContext("2d")!;
  ctx.drawImage(source, 0, 0);

  // frameRequestRate = 0: frames are only emitted via track.requestFrame().
  const stream = target.captureStream(0);
  const track = stream.getVideoTracks()[0] as CanvasCaptureMediaStreamTrack;

  const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=H264" });

  const dataChunks: Blob[] = [];
  recorder.ondataavailable = (evt) => dataChunks.push(evt.data);

  // Start recording but keep it paused; we only resume for the 1/fps window
  // around each captured frame.
  recorder.start();
  recorder.pause();

  return {
    async captureFrame() {
      const timer = wait(1000 / fps);

      recorder.resume();
      console.log("did resume");

      // Copy the current source frame into the recorded canvas and push it.
      ctx.clearRect(0, 0, target.width, target.height);
      ctx.drawImage(source, 0, 0);
      track.requestFrame();

      // The recorder encodes in real time, so the frame has to stay live for
      // its full 1/fps duration before we pause again.
      await timer;
      recorder.pause();
      console.log("did pause");
    },
    async finish() {
      recorder.stop();
      stream.getTracks().forEach((track) => track.stop());
      await waitForRecorderEvent(recorder, "stop");
      return new Blob(dataChunks);
    },
  };
}
```

and it seems to work. The problem is that MediaRecorder captures in real time, so I have to wait one frame duration (1/FPS) before going to the next frame, and now this is the biggest bottleneck: pure waiting takes the majority of the time.
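To illustrate, here is a hedged sketch of how the recorder ends up being driven (assuming a hypothetical `renderFrameToCanvas` callback standing in for the app's own WebGL render step):

```ts
// Hypothetical usage sketch: exporting a 2-minute project at 30 fps.
async function exportProject(
  canvas: HTMLCanvasElement,
  renderFrameToCanvas: (frame: number) => void, // the app's own render step
) {
  const fps = 30;
  const recorder = createCanvasRecorder(canvas, fps);

  for (let frame = 0; frame < 2 * 60 * fps; frame++) {
    renderFrameToCanvas(frame);    // a few ms of actual rendering work
    await recorder.captureFrame(); // but always ~33 ms of wall-clock waiting
  }

  return recorder.finish(); // webm Blob, after at least 2 minutes of waiting
}
```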

As a result, exporting a 2-minute-long video will never be faster than 2 minutes, even if rendering all the frames is easily doable in far less time.

-- 
GitHub Notification of comment by pie6k
Please view or discuss this issue at https://github.com/w3c/mediacapture-record/issues/213#issuecomment-1376264577 using your GitHub account

