- From: Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>
- Date: Tue, 6 Mar 2018 10:43:01 +0100
- To: public-webrtc@w3.org
- Message-ID: <28bad407-920a-5587-f75a-eaa4a901678b@gmail.com>
On 06/03/2018 8:28, Harald Alvestrand wrote:
> On 03/06/2018 12:10 AM, Peter Thatcher wrote:
>> On Mon, Mar 5, 2018 at 3:06 PM Sergio Garcia Murillo
>> <sergio.garcia.murillo@gmail.com
>> <mailto:sergio.garcia.murillo@gmail.com>> wrote:
>>
>>     More flexibility, more work, more bug surface.. ;)
>>
>>     Anyway, I am not particularly against having access to RTP
>>     packets from the encoders,
>>
>> Encoders do not emit RTP packets. They emit encoded video frames.
>> Those are then packetized into RTP packets. I'm suggesting the
>> JS/wasm have access to the encoded frame before the packetization.
>
> Actually, encoders usually take raw framebuffers (4:2:0, 4:4:4 or
> other formats) + metadata and emit video frames + metadata. It may be
> crucially important to get a handle on what the metadata looks like,
> in order to make sure we are able to transport not just the bytes of
> the frame, but the metadata too.
>
> Metadata includes things like timing information (carried through the
> encoding process), interframe dependencies (an output from the
> encoding process) and preferences for encoding choices (input to the
> encoding process).
>
> We need to make sure we're not imagining things to be simpler than
> they actually are.

True, but the good thing is that we don't have to reinvent the wheel:
there are multiple encoder/decoder API abstractions that we can use as
input for our design. Even libwebrtc has an API+metadata design
internally.. ;)

Best regards
Sergio
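[Editor's illustration] To make the shape of the API under discussion concrete, here is a minimal TypeScript sketch of the separation the thread argues for: raw frames + encoding preferences go in, encoded frames + metadata come out, and RTP packetization is a deliberately separate stage. Every interface and field name below is invented for this sketch; none of it is taken from libwebrtc or any real API.

```typescript
// Hypothetical sketch only: illustrates an encoder surface that emits
// encoded frames + metadata, with packetization as a separate stage.

// A raw input frame (e.g. 4:2:0 or 4:4:4 planar data) plus capture time.
interface RawFrame {
  data: ArrayBuffer;
  width: number;
  height: number;
  captureTimestampUs: number; // timing info carried through encoding
}

// Preferences for encoding choices: input to the encoding process.
interface EncodeOptions {
  forceKeyFrame?: boolean;
  targetBitrateBps?: number;
}

// Metadata emitted alongside each encoded frame.
interface EncodedFrameMetadata {
  captureTimestampUs: number; // timing, carried through the encoder
  frameId: number;
  dependsOn: number[];        // interframe dependencies: encoder output
  isKeyFrame: boolean;
  spatialLayer?: number;      // layering info an SVC codec might add
  temporalLayer?: number;
}

// The encoder: raw frames + options in, encoded frames + metadata out.
interface VideoFrameEncoder {
  encode(frame: RawFrame, options?: EncodeOptions): void;
  onEncodedFrame: (payload: ArrayBuffer, meta: EncodedFrameMetadata) => void;
}

// Packetization consumes both the payload bytes and the metadata, so
// JS/wasm sitting between the two stages can observe or transform the
// encoded frame before it is split into RTP packets.
interface RtpPacketizer {
  packetize(payload: ArrayBuffer, meta: EncodedFrameMetadata): Uint8Array[];
}
```

Splitting the metadata out of the payload bytes is exactly the concern raised above: timing, dependency, and layering information has to survive the trip from encoder to packetizer even when application code sits in between.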
Received on Tuesday, 6 March 2018 09:43:20 UTC