Encoders (Re: Getting rid of SDP)

On 03/06/2018 12:10 AM, Peter Thatcher wrote:
> On Mon, Mar 5, 2018 at 3:06 PM Sergio Garcia Murillo
> <sergio.garcia.murillo@gmail.com
> <mailto:sergio.garcia.murillo@gmail.com>> wrote:
>     More flexibility, more work, more bug surface.. ;)
>     Anyway, I am not particularly against having access to RTP packets
>     from the encoders,
> Encoders do not emit RTP packets.  They emit encoded video frames. 
> Those are then packetized into RTP packets.  I'm suggesting the
> JS/wasm have access to the encoded frame before the packetization. 

Actually, encoders usually take raw framebuffers (4:2:0, 4:4:4 or other
formats) + metadata and emit encoded video frames + metadata. It may be
crucially important to get a handle on what that metadata looks like, in
order to make sure we are able to transport not just the bytes of the
frame, but the metadata too.

Metadata includes things like timing information (carried through the
encoding process), interframe dependencies (an output of the encoding
process) and preferences for encoding choices (an input to the encoding
process).
We need to make sure we're not imagining things to be simpler than they
actually are.
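
To make the point concrete, here is a rough sketch of what an encoded
frame plus its metadata could look like. All names here are hypothetical
illustrations, not any proposed or existing API; the point is only that
the metadata (timing, dependencies) travels with the encoded bytes and
matters to whatever transports the frame:

```typescript
// Hypothetical shapes -- illustration only, not a proposed API.
interface EncodedFrameMetadata {
  captureTimestampUs: number;  // timing info carried through the encoder
  frameId: number;             // identifies this frame
  dependencies: number[];      // frameIds this frame depends on (encoder output)
  isKeyframe: boolean;
}

interface EncodedFrame {
  data: Uint8Array;            // the encoded bytes
  metadata: EncodedFrameMetadata;
}

// Example of why the metadata matters to the transport layer:
// a receiver can only decode a frame once all of its dependencies
// have arrived, which it learns from metadata, not from the bytes.
function isDecodable(frame: EncodedFrame, received: Set<number>): boolean {
  return frame.metadata.dependencies.every(id => received.has(id));
}
```

A keyframe (no dependencies) is immediately decodable; a delta frame is
only decodable after the frames it references have been received.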

>     just wanted to check if it was a must-have requirement or a
>     nice-to-have. Full frame access is already used by MPEG-DASH
>     (IIRC), so that should be supported, and I would also like to have
>     a stream-pipeline-like API in which RTP packets don't need to go
>     via the app to be forwarded from the encoder to the RTP transport
>     to the DTLS transport to the ICE transport.
>     Best regards
>     Sergio
>     On 05/03/2018 23:53, Peter Thatcher wrote:
>>     However the JS/wasm wants :).  It could use one frame == one QUIC
>>     stream or one RTP packet == one QUIC stream or even one big RTP
>>     packet == one frame == one QUIC stream.  I prefer the first, but
>>     the API should allow any of these.
>>     On Mon, Mar 5, 2018 at 2:50 PM Sergio Garcia Murillo
>>     <sergio.garcia.murillo@gmail.com
>>     <mailto:sergio.garcia.murillo@gmail.com>> wrote:
>>         On 05/03/2018 23:22, Peter Thatcher wrote:
>>         > If you want to send media over QUIC or do your own crypto
>>         > between encode and network (perhaps over some low-level RTP
>>         > transport), then you need access to media after it's encoded
>>         > and before it's decoded.
>>         Peter, one side question: how do you envision that media over
>>         QUIC should be used? Do you plan to encapsulate RTP over QUIC
>>         or just send each full frame over a single QUIC stream? Just
>>         trying to imagine what API surface the encoders/decoders
>>         should expose.
>>         Best regards
>>         Sergio
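
Peter's preferred mapping from the quoted exchange above (one encoded
frame == one QUIC stream) can be sketched as follows. `QuicStreamWriter`
is a hypothetical stand-in for a real QUIC transport stream, assumed
only for illustration; the design point is that closing the stream
delimits the frame, so no extra length-prefixing or framing protocol is
needed:

```typescript
// Hypothetical stand-in for a unidirectional QUIC stream writer.
class QuicStreamWriter {
  readonly chunks: Uint8Array[] = [];
  private closed = false;

  write(bytes: Uint8Array): void {
    if (this.closed) throw new Error("stream already finished");
    this.chunks.push(bytes);
  }

  // Closing the stream is what marks the end of the frame.
  finish(): void {
    this.closed = true;
  }

  get isFinished(): boolean {
    return this.closed;
  }
}

// "One frame == one QUIC stream": open a fresh stream per encoded
// frame, write the frame's bytes, and close the stream to delimit it.
function sendFrame(
  openStream: () => QuicStreamWriter,
  frame: Uint8Array,
): QuicStreamWriter {
  const stream = openStream();
  stream.write(frame);
  stream.finish();
  return stream;
}
```

Under this mapping the transport never needs to understand RTP
packetization at all, which is one reason to expose whole encoded
frames (plus metadata) rather than packets.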

Received on Tuesday, 6 March 2018 07:28:54 UTC