[webrtc-nv-use-cases] A little thought about the one way media use case (#93)

bdrtc has just created a new issue for https://github.com/w3c/webrtc-nv-use-cases:

== A little thought about the one way media use case ==
I'm very excited to see that the one-way media use case has been added to the latest WebRTC-NV use cases document.
I have a few questions about the scenarios it contains.
1. Regarding N40 and N41: it seems that what we want to add is support for pre-encoded video frames,
   but not pre-encoded audio frames. Also, N40 does not say whether it refers to an audio encoder
   or a video encoder.
2. Live streaming is a typical one-way media scenario, and it usually requires B-frame support for
   video and AAC encoding for audio. Should we add these two capability requirements to the one-way
   media use case? The advantage of adding them is that when pre-encoded audio and video are injected
   through the one-way media interface, streamed to a media server via RTP, and then distributed over
   RTMP/HTTP-FLV/HLS/DASH, the media distribution side does not need to transcode the audio
   (Opus -> AAC), and B-frame support can be used to improve video quality. Both are common practice
   in streaming media scenarios. Supporting these capabilities would require some modification of
   WebRTC's protocol layer (RTP, SDP), but it would make WebRTC more suitable for live streaming
   and more compatible, on the media side, with other live streaming protocols.
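For illustration, signalling AAC over RTP is already defined outside of WebRTC by RFC 3640
(mpeg4-generic). A rough sketch of the SDP media description such a modification might negotiate
is below; the payload type 96 is arbitrary, and the `config` value (0x1190) is an example
AudioSpecificConfig for AAC-LC, 48 kHz, stereo:

```
m=audio 49170 RTP/AVP 96
a=rtpmap:96 mpeg4-generic/48000/2
a=fmtp:96 streamtype=5; profile-level-id=15; mode=AAC-hbr; sizeLength=13; indexLength=3; indexDeltaLength=3; config=1190
```

This is only a sketch of what the SDP side could look like, not a proposal text; the actual
requirement wording would be up to the working group.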

Please view or discuss this issue at https://github.com/w3c/webrtc-nv-use-cases/issues/93 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Tuesday, 17 January 2023 03:23:33 UTC