
Re: Getting rid of SDP

From: Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>
Date: Tue, 6 Mar 2018 00:06:14 +0100
To: Peter Thatcher <pthatcher@google.com>
Cc: public-webrtc@w3.org
Message-ID: <3e2d5ba7-3673-15da-ddf4-942019bac12b@gmail.com>
More flexibility, more work, more bug surface... ;)

Anyway, I am not particularly against having access to RTP packets from 
the encoders; I just wanted to check whether it was a must-have 
requirement or a nice-to-have. Full frame access is already used in 
MPEG-DASH (IIRC), so that should be supported, and I would also like to 
have a stream-pipeline-like API in which RTP packets don't need to go 
through the app to be forwarded from the encoder to the RTP transport 
to the DTLS transport to the ICE transport.
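
To make the idea concrete, here is a minimal sketch of what such a 
pipeline could look like. All names (Stage, Pipeline, the toy 
packetize/protect/send stages) are purely illustrative assumptions, 
not part of any proposed WebRTC API; the point is only that a packet 
traverses encoder -> RTP -> DTLS -> ICE in one chained call, without 
surfacing to application JS at each hop:

```typescript
// Hypothetical stage in a media pipeline: consumes one unit, emits one.
// (Illustrative only -- not a real or proposed WebRTC interface.)
interface Stage<T> {
  process(input: T): T;
}

// A pipeline chains stages so data flows straight through, instead of
// bouncing back to the app between each transport layer.
class Pipeline<T> {
  constructor(private stages: Array<Stage<T>>) {}
  push(unit: T): T {
    return this.stages.reduce((data, stage) => stage.process(data), unit);
  }
}

// Toy stand-ins for RTP packetization, DTLS protection, and ICE send.
const rtpPacketize: Stage<string> = { process: (f) => `rtp(${f})` };
const dtlsProtect: Stage<string> = { process: (p) => `dtls(${p})` };
const iceSend: Stage<string> = { process: (p) => `ice(${p})` };

const pipeline = new Pipeline([rtpPacketize, dtlsProtect, iceSend]);
console.log(pipeline.push("frame0")); // "ice(dtls(rtp(frame0)))"
```

In a real design each stage would of course be a transport object 
(RTP, DTLS, ICE) rather than a string transformer, but the wiring 
would be set up once and the packets would never need to pass through 
application code.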

Best regards
Sergio

On 05/03/2018 23:53, Peter Thatcher wrote:
> However the JS/wasm wants :).  It could use one frame == one QUIC 
> stream or one RTP packet == one QUIC stream or even one big RTP packet 
> == one frame == one QUIC stream.  I prefer the first, but the API 
> should allow any of these.
>
>
> On Mon, Mar 5, 2018 at 2:50 PM Sergio Garcia Murillo 
> <sergio.garcia.murillo@gmail.com 
> <mailto:sergio.garcia.murillo@gmail.com>> wrote:
>
>     On 05/03/2018 23:22, Peter Thatcher wrote:
>     > If you want to send media over QUIC or do your own crypto between
>     > encode and network (perhaps over some low-level RTP transport), then
>     > you need access to media after it's encoded and before it's decoded.
>
>     Peter, one side-question, how do you envision that media over quic
>     should be used? Do you plan to encapsulate RTP over quic or just send
>     each full frame over a single quic stream? Just trying to imagine what
>     api surface should the encoders/decoders expose.
>
>     Best regards
>
>     Sergio
>
>
Received on Monday, 5 March 2018 23:06:33 UTC

This archive was generated by hypermail 2.3.1 : Monday, 5 March 2018 23:06:34 UTC