- From: Cullen Jennings <fluffy@iii.ca>
- Date: Tue, 13 Mar 2018 16:21:58 -0600
- To: Harald Alvestrand <harald@alvestrand.no>
- Cc: public-webrtc@w3.org
- Message-Id: <D156463D-B72E-45E5-9D0E-FE517CA57C7A@iii.ca>
> On Mar 6, 2018, at 12:28 AM, Harald Alvestrand <harald@alvestrand.no> wrote:
>
> On 03/06/2018 12:10 AM, Peter Thatcher wrote:
>>
>> On Mon, Mar 5, 2018 at 3:06 PM Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com> wrote:
>> More flexibility, more work, more bug surface.. ;)
>>
>> Anyway, I am not particularly against having access to RTP packets from the encoders,
>>
>> Encoders do not emit RTP packets. They emit encoded video frames. Those are then packetized into RTP packets. I'm suggesting the JS/wasm have access to the encoded frame before the packetization.
>
> Actually, encoders usually take raw framebuffers (4:2:0, 4:4:4 or other formats) + metadata and emit video frames + metadata. It may be crucially important to get a handle on what the metadata looks like, in order to make sure we are able to transport not just the bytes of the frame, but the metadata too.
>
> Metadata includes things like timing information (carried through the encoding process), interframe dependencies (an output from the encoding process) and preferences for encoding choices (input to the encoding process).

Good point - so to list some of these:

Metadata video encoders produce:

* whether it is a reference frame or not
* resolution
* frame rate
* capture time of the frame

Metadata video encoders need:

* capture timestamp
* source and target resolution
* source and target frame rate
* target bitrate
* max bitrate
* max pixel rate

Is that the type of thing you were thinking about? What needs to be added?
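As a rough sketch only, the two lists above might map onto TypeScript shapes like the following; every name here is illustrative and not taken from any spec:

```typescript
// Hypothetical shapes for the metadata discussed above.
// All names are illustrative, not from any specification.

// Metadata a video encoder produces alongside each encoded frame.
interface EncodedFrameMetadata {
  isReferenceFrame: boolean; // whether later frames may depend on this one
  width: number;             // encoded resolution
  height: number;
  frameRate: number;         // frames per second
  captureTime: number;       // capture timestamp, ms since epoch
}

// Metadata and knobs a video encoder needs as input.
interface EncoderConfig {
  captureTimestamp: number;  // carried through the encoding process
  sourceWidth: number;
  sourceHeight: number;
  targetWidth: number;
  targetHeight: number;
  sourceFrameRate: number;
  targetFrameRate: number;
  targetBitrate: number;     // bits per second
  maxBitrate: number;        // bits per second
  maxPixelRate: number;      // pixels per second ceiling
}

// Example: encoding a 1080p/30fps source down to 720p.
const config: EncoderConfig = {
  captureTimestamp: 1520954518000,
  sourceWidth: 1920, sourceHeight: 1080,
  targetWidth: 1280, targetHeight: 720,
  sourceFrameRate: 30, targetFrameRate: 30,
  targetBitrate: 1_500_000,
  maxBitrate: 2_500_000,
  maxPixelRate: 1280 * 720 * 30,
};

// The frame-level metadata the encoder might hand back for a key frame.
const frame: EncodedFrameMetadata = {
  isReferenceFrame: true,
  width: config.targetWidth,
  height: config.targetHeight,
  frameRate: config.targetFrameRate,
  captureTime: config.captureTimestamp,
};
```

The point of splitting the two interfaces is the same asymmetry Harald notes: some fields are inputs to the encoding process, some are outputs, and the capture timestamp is carried through both.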
Received on Tuesday, 13 March 2018 22:22:35 UTC