[webrtc-encoded-transform] Introduce a RTCEncodedFrameMetadata dictionary to share more definitions between audio and video metadata (#245)

youennf has just created a new issue for https://github.com/w3c/webrtc-encoded-transform:

== Introduce a RTCEncodedFrameMetadata dictionary to share more definitions between audio and video metadata ==
We could introduce:
```
dictionary RTCEncodedFrameMetadata {
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
    unsigned long rtpTimestamp;
    DOMHighResTimeStamp receiveTime;
    DOMHighResTimeStamp captureTime;
    DOMHighResTimeStamp senderCaptureTimeOffset;
    DOMString mimeType;
};
dictionary RTCEncodedAudioFrameMetadata : RTCEncodedFrameMetadata {
    ...
};
dictionary RTCEncodedVideoFrameMetadata : RTCEncodedFrameMetadata {
    ...
};
```
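One practical effect of sharing a base dictionary is that transform code can read the common fields without branching on frame kind. A minimal sketch, assuming the proposed field names above (the helper name is hypothetical, not part of the proposal):

```javascript
// Sketch: format the metadata fields common to audio and video encoded
// frames. Works with any object whose getMetadata() returns the proposed
// shared RTCEncodedFrameMetadata members.
function formatCommonMetadata(frame) {
  const { synchronizationSource, payloadType, rtpTimestamp, mimeType } =
    frame.getMetadata();
  return `${mimeType ?? "unknown"} ssrc=${synchronizationSource} ` +
    `pt=${payloadType} ts=${rtpTimestamp}`;
}
```

In a worker-side transform this would be called on each frame coming out of the readable side, regardless of whether the track is audio or video.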

Similarly, for RTCEncodedAudioFrame and RTCEncodedVideoFrame, we could introduce:
```
interface mixin RTCEncodedFrame {
    attribute ArrayBuffer data;
};
RTCEncodedAudioFrame includes RTCEncodedFrame;
RTCEncodedVideoFrame includes RTCEncodedFrame;
```
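With a shared mixin exposing `data`, a single payload-processing step could likewise serve both frame types. A hedged sketch (the function name is illustrative; a real transform would also need to enqueue the frame):

```javascript
// Sketch: invert every payload byte of an encoded frame in place.
// Relies only on the shared `data` attribute from the proposed
// RTCEncodedFrame mixin, so the same code handles audio and video.
function invertPayloadBits(frame) {
  const bytes = new Uint8Array(frame.data);
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] ^= 0xff; // flip all bits of each byte
  }
  frame.data = bytes.buffer;
}
```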

Please view or discuss this issue at https://github.com/w3c/webrtc-encoded-transform/issues/245 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Friday, 18 April 2025 16:17:48 UTC