[w3ctag/design-reviews] [wg/media] Media Working Group Charter (Issue #1082)

w3cbot created an issue (w3ctag/design-reviews#1082)

This issue was created because the 'horizontal review requested' label was added to
§ https://github.com/w3c/strategy/issues/504

This review is requested prior to the [Advisory Committee Review](https://www.w3.org/guide/process/charter.html#ac-review).

New charter proposal, reviewers please take note.

# Charter Review

- [Charter](https://w3c.github.io/charter-media-wg/)
- [Diff from charter template](https://services.w3.org/htmldiff?doc1=https://w3c.github.io/charter-drafts/charter-template.html&doc2=https%3A%2F%2Fw3c.github.io%2Fcharter-media-wg%2F)
- [Diff from previous charter](https://services.w3.org/htmldiff?doc1=https%3A%2F%2Fwww.w3.org%2F2023%2F06%2Fmedia-wg-charter.html&doc2=https%3A%2F%2Fw3c.github.io%2Fcharter-media-wg%2F)
- [Chair dashboard](https://www.w3.org/PM/Groups/chairboard.html?gid=wg/media)

This is an **existing WG recharter**.

## Communities suggested for outreach

None in particular. Through group participation, agenda topics, and joint meetings, the Media WG has regular exchanges with the WebRTC Working Group, the Timed Text Working Group, and the Media & Entertainment Interest Group (which, in turn, liaises on media topics with a number of external organizations).

## Known or potential areas of concern

Where would charter proponents like to see issues raised? On the [`w3c/charter-media-wg` issue tracker](https://github.com/w3c/charter-media-wg/issues)

## Anything else we should think about as we review? 

The draft charter mentions two additional issues in the description of the work planned on the Encrypted Media Extensions deliverable. These are more "maintenance" than "new features", but it seemed worth making them explicit given that the scope of work on EME remains restricted.

The WebCodecs section also describes planned work to give applications the ability to use an `HTMLMediaElement` and buffering mechanisms defined in Media Source Extensions (MSE) to achieve adaptive playback of media that originates from WebCodecs structures, including creating a way to represent encrypted audio/video chunks within WebCodecs so that playback can leverage EME.

By itself, the ability to represent encrypted audio/video chunks in WebCodecs does not extend EME in any way (and EME will not need to be updated at all for that). In a typical scenario, an encrypted encoded media chunk (`EncodedAudioChunk` or `EncodedVideoChunk`) will be passed to MSE and connected to a `<video>` element. Playback of the encrypted content will then be handled by EME, as for any other encrypted content that reaches a `<video>` element.
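
For reviewers less familiar with the split of responsibilities, here is a minimal sketch of the existing EME + MSE path that this typical scenario builds on. The key system, codec strings, and licence exchange are placeholders, and the final step where an encrypted `EncodedAudioChunk`/`EncodedVideoChunk` would be buffered is precisely the part the charter proposes to define; it does not exist today.

```ts
// Minimal sketch of today's EME + MSE path (placeholder key system and codecs).
async function setUpProtectedPlayback(video: HTMLVideoElement): Promise<SourceBuffer> {
  // 1. EME handshake: acquire MediaKeys and react to `encrypted` events.
  const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);
  await video.setMediaKeys(await access.createMediaKeys());

  video.addEventListener('encrypted', async (event) => {
    const session = video.mediaKeys!.createSession();
    session.addEventListener('message', () => {
      // Exchange the licence request with a licence server, then call
      // session.update(licenceResponse) with the server's answer.
    });
    await session.generateRequest(event.initDataType, event.initData!);
  });

  // 2. MSE buffering: appendBuffer() takes containerized (e.g. fMP4) segments today.
  // The planned WebCodecs work would allow an (encrypted) EncodedAudioChunk or
  // EncodedVideoChunk to be buffered here instead; that part does not exist yet.
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  await new Promise<void>((resolve) =>
    mediaSource.addEventListener('sourceopen', () => resolve(), { once: true }));
  return mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
}
```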

By definition, encrypted media chunks cannot be decoded as-is into a raw `VideoFrame`, although the spec does not preclude creating an "opaque" `VideoFrame`. If implementations supported this, such a `VideoFrame` could in turn perhaps be used in other scenarios. Here is an analysis of those scenarios to help reviewers grasp the ins and outs of the proposal:
1. A `VideoFrame` can be injected into a `<video>` element through a [`VideoTrackGenerator`](https://w3c.github.io/mediacapture-transform/#video-track-generator). With encrypted chunks, this would achieve the same result as connecting the content to a `<video>` element through MSE+EME (apps would have to handle the buffering themselves, but that is orthogonal to encryption); a code sketch of this path follows the list.
2. A `VideoFrame` can be drawn directly onto a `<canvas>` element. Drawing an encrypted `VideoFrame` onto a `<canvas>` will not work out of the box since that would de facto reveal the bytes. The concept of a "tainted canvas", used for cross-origin content, could perhaps be extended to encrypted `VideoFrame`s, but for that to work EME would also need to be integrated with `<canvas>`. If all of that happens, it would make it easier to apply content protection to still images, which is actually already doable in practice; see the discussion in https://github.com/w3c/webcodecs/issues/41#issuecomment-2739895730
3. A `VideoFrame` can be imported as an external texture into WebGL and WebGPU pipelines. As with `<canvas>`, importing an encrypted `VideoFrame` will simply not work today because these pipelines have no provisions for encrypted content; adding them would require significant work on the APIs and implementations. Should WebGL and WebGPU decide to add support for content-protected textures, they would likely want to leverage encrypted structures that map more directly to textures than `VideoFrame` does in any case.
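
To make scenario 1 concrete, here is a rough sketch of the decode-and-inject path using WebCodecs and `VideoTrackGenerator` from mediacapture-transform (Chromium currently ships an equivalent `MediaStreamTrackGenerator`). The codec string is a placeholder, and whether `decode()` could ever accept an encrypted chunk and emit an "opaque" frame is exactly the open question; today this path only works with clear content.

```ts
// Minimal typing for VideoTrackGenerator (mediacapture-transform); Chromium
// exposes an equivalent MediaStreamTrackGenerator({ kind: 'video' }).
declare class VideoTrackGenerator {
  readonly track: MediaStreamTrack;
  readonly writable: WritableStream<VideoFrame>;
}

// Scenario 1: decode EncodedVideoChunks with WebCodecs and feed the resulting
// VideoFrames to a <video> element through a VideoTrackGenerator.
async function playChunks(video: HTMLVideoElement, chunks: EncodedVideoChunk[]) {
  const generator = new VideoTrackGenerator();
  const writer = generator.writable.getWriter();

  const decoder = new VideoDecoder({
    output: (frame) => { void writer.write(frame); },  // the track sink consumes the frame
    error: (e) => console.error('decode error', e),
  });
  decoder.configure({ codec: 'vp09.00.10.08' });       // placeholder codec string

  video.srcObject = new MediaStream([generator.track]);
  await video.play();

  for (const chunk of chunks) {
    decoder.decode(chunk);  // with an *encrypted* chunk, this is where support would be needed
  }
  await decoder.flush();
}
```

For scenario 3, the corresponding entry point on the WebGPU side would be `device.importExternalTexture({ source: frame })`, which likewise has no notion of protected content today.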

Cc @chrisn, @marcoscaceres.

## Charter facilitator(s)

cc @tidoust

