Re: Experimenting with video processing pipelines on the web

------ Original message ------
From: "Dale Curtis" <dalecurtis@google.com>
To: "Francois Daoust" <fd@w3.org>
Cc: "public-media-wg@w3.org" <public-media-wg@w3.org>; "Dominique 
Hazaël-Massieux" <dom@w3.org>; "Bernard Aboba" 
<Bernard.Aboba@microsoft.com>; "Peter Thatcher" 
<pthatcher@microsoft.com>
Date: 04/04/2023 20:02:28

>Thanks for this list! I've tagged a couple issues for prioritization.
>
>- dale
>
>On Wed, Mar 29, 2023 at 2:14 AM Francois Daoust <fd@w3.org> wrote:
>>Thanks, Dale!
>>
>>I cannot think of possible changes to WebCodecs itself that are not
>>already captured in open issues. Main ones that come to mind being:
>>
>>[...]
>>
>>2. The possibility to create a VideoFrame out of a GPUBuffer directly,
>>instead of having to create a canvas which is not really needed in
>>theory, and/or perhaps to know when a VideoFrame is GPU backed, 
>>tracked
>>in:
>>   https://github.com/w3c/webcodecs/issues/83
>
>Can you elaborate on the use case for this? I can see it being useful 
>for readback or drawing, but I thought WebGPU already has its own 
>mechanisms for that.
The use case I have in mind is when an application wants to process a 
VideoFrame with WebGPU and get a VideoFrame back out of it, for further 
processing, encoding, transport, or the like, just not directly for 
rendering.

Unlike WebGL, a WebGPU device can be operated without being 
attached to a canvas, see e.g.:
https://gpuweb.github.io/gpuweb/explainer/#canvas-output

As such, a streamlined approach could be:
1. Request a WebGPU device and initialize it (done only once).
2. Set up shaders and the like, and call importExternalTexture to import 
the VideoFrame to the GPU.
3. Run the GPU command queue.
4. Create a new VideoFrame from the resulting GPUBuffer or GPUTexture.
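To make the idea concrete, here is a rough sketch of what those steps could look like. Note that step 4 relies on a VideoFrame constructor overload accepting a GPUTexture, which does not exist today (that is precisely what issue #83 tracks); names such as inputFrame, outputTexture and commandEncoder are illustrative:

```javascript
// Steps 1-3: request a device, import the frame, run GPU commands.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// Import the source VideoFrame as an external texture (step 2).
const external = device.importExternalTexture({ source: inputFrame });

// ... encode a compute or render pass that writes into outputTexture ...
device.queue.submit([commandEncoder.finish()]);

// Step 4 (HYPOTHETICAL): wrap the GPU result directly, no canvas needed.
// This constructor overload is what issue #83 would enable.
const processedFrame = new VideoFrame(outputTexture, {
  timestamp: inputFrame.timestamp,
});
inputFrame.close();
```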

This is not possible today because the VideoFrame constructor only 
accepts a canvas or a BufferSource. Creating a BufferSource from a 
GPUBuffer or GPUTexture is doable but would force a copy to CPU memory, 
if I understand things correctly. The approach thus needs to be:
1. Request a WebGPU device and initialize it (done only once).
2. Create a canvas and attach the WebGPU device to it (done only once).
3. Set up shaders and the like, and call importExternalTexture to import 
the VideoFrame to the GPU.
4. Run the GPU command queue, which will render the result to the 
canvas.
5. Create a new VideoFrame from the canvas.
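The canvas-based workaround above can be sketched as follows (browser-only code, assuming WebGPU and WebCodecs support; inputFrame, width, height and the runShaderPass helper are illustrative):

```javascript
// Steps 1-2: request a device and attach it to an (offscreen) canvas.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// The canvas is needed only as an intermediary between WebGPU and WebCodecs.
const canvas = new OffscreenCanvas(width, height);
const context = canvas.getContext('webgpu');
context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
});

// Step 3: import the source VideoFrame as an external texture.
const external = device.importExternalTexture({ source: inputFrame });

// Step 4: runShaderPass is an illustrative helper that encodes a render
// pass writing into the canvas's current texture and submits it.
runShaderPass(device, context.getCurrentTexture(), external);

// Step 5: the canvas (a CanvasImageSource) can seed a new VideoFrame.
const processedFrame = new VideoFrame(canvas, {
  timestamp: inputFrame.timestamp,
});
inputFrame.close();
```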

Since the goal is not to render the processed VideoFrame immediately, 
the canvas is needed only as an intermediary interface to connect WebGPU 
and WebCodecs. Now, creating a canvas is really not a complex task, and 
going through a canvas probably has no significant performance 
implications, so I don't know whether extending the VideoFrame 
constructor is worth the effort.

Francois.

Received on Wednesday, 5 April 2023 10:23:45 UTC