- From: Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>
- Date: Wed, 11 Jul 2018 10:38:43 +0200
- To: public-webrtc@w3.org
- Message-ID: <ddc78c15-7404-9220-4da6-39a835121010@gmail.com>
On 11/07/2018 8:48, Harald Alvestrand wrote:
> On 11 July 2018 02:26, Iñaki Baz Castillo wrote:
>>
>> The problem of having each SVC layer defined in a RTCRtpEncodingParameters entry (within the encodings array) is that all those entries belonging to the same stream must share a LOT of fields (such as pt, ssrc, etc.). Super error prone IMHO.
>
> RTCRtpEncodingParameters doesn't contain either pt or ssrc. Perhaps it should for the simulcast case (didn't we have an issue on that?), but at present it doesn't.

Could we at least agree that we will only cover the SRST case with no PT-multiplexing? That would simplify things a lot.

If the simple approach is not accepted (just specifying the number of spatial and temporal layers per stream), we could add an SVC layer definition within each of the encodings. For example, for VP9 SVC with SL3TL2 it would look something like this (each layer encoding could have other attributes like max bitrate, protection priority, etc.):

var encodings = [
  {
    //..other encoding parameters..
    layers: [
      { id: "sl0tl0", resolutionScale: 4.0, framerateScale: 2.0 },
      { id: "sl0tl1", resolutionScale: 4.0, framerateScale: 1.0, dependencies: ["sl0tl0"] },
      { id: "sl1tl0", resolutionScale: 2.0, framerateScale: 2.0, dependencies: ["sl0tl0"] },
      { id: "sl1tl1", resolutionScale: 2.0, framerateScale: 1.0, dependencies: ["sl1tl0", "sl0tl1"] },
      { id: "sl2tl0", resolutionScale: 1.0, framerateScale: 2.0, dependencies: ["sl1tl0"] },
      { id: "sl2tl1", resolutionScale: 1.0, framerateScale: 1.0, dependencies: ["sl1tl0", "sl2tl0"] }
    ]
  }
];

And for simulcast with two simulcast encodings, the first with two temporal layers and the high-res one with three:

var encodings = [
  {
    rid: "lowres",
    resolutionScale: 2.0,
    //..other encoding parameters..
    layers: [
      { id: "tl0", framerateScale: 2.0 },
      { id: "tl1", dependencies: ["tl0"] }
    ]
  },
  {
    rid: "highres",
    encodingId: "H0",
    resolutionScale: 1.0,
    //..other encoding parameters..
    layers: [
      { id: "tl0", framerateScale: 4.0 },
      { id: "tl1", framerateScale: 2.0, dependencies: ["tl0"] },
      { id: "tl2", dependencies: ["tl1"] }
    ]
  }
];

If we decide to get rid of the layer dependency definition, since for example VP8 and VP9 require that each layer depends only on the lower temporal and spatial layer, we could have something like this instead:

var encodings = [
  {
    //..other encoding parameters..
    layers: [
      { sid: 0, tid: 0, resolutionScale: 4.0, framerateScale: 2.0 },
      { sid: 0, tid: 1, resolutionScale: 4.0, framerateScale: 1.0 },
      { sid: 1, tid: 0, resolutionScale: 2.0, framerateScale: 2.0 },
      { sid: 1, tid: 1, resolutionScale: 2.0, framerateScale: 1.0 },
      { sid: 2, tid: 0, resolutionScale: 1.0, framerateScale: 2.0 },
      { sid: 2, tid: 1, resolutionScale: 1.0, framerateScale: 1.0 }
    ]
  }
];

And for simulcast:

var encodings = [
  {
    rid: "lowres",
    resolutionScale: 1.0,
    //..other encoding parameters..
    layers: [
      { tid: 0, framerateScale: 2.0 },
      { tid: 1 }
    ]
  },
  {
    rid: "highres",
    encodingId: "H0",
    resolutionScale: 1.0,
    //..other encoding parameters..
    layers: [
      { tid: 0, framerateScale: 4.0 },
      { tid: 1, framerateScale: 2.0 },
      { tid: 2 }
    ]
  }
];

Best regards
Sergio
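For what it's worth, here is a minimal sketch (plain JavaScript; the helper name and its exact output shape are mine, not part of any proposal) of how the implicit sid/tid form could be mechanically expanded into the explicit dependency form, assuming the rule stated above that each VP8/VP9 layer depends only on the next lower temporal layer of the same spatial layer and the next lower spatial layer at the same temporal layer:

// Hypothetical helper, not part of any spec or proposal: expands the
// {sid, tid} form (no explicit dependencies) into the {id, dependencies}
// form, assuming each layer depends only on the next lower temporal layer
// of the same spatial layer and the next lower spatial layer at the same
// temporal layer.
function expandLayerDependencies(layers) {
  return layers.map(function (layer) {
    var deps = [];
    if (layer.tid > 0) deps.push("sl" + layer.sid + "tl" + (layer.tid - 1));
    if (layer.sid > 0) deps.push("sl" + (layer.sid - 1) + "tl" + layer.tid);
    return {
      id: "sl" + layer.sid + "tl" + layer.tid,
      resolutionScale: layer.resolutionScale,
      framerateScale: layer.framerateScale,
      dependencies: deps
    };
  });
}

// For example, expanding the SL3TL2 layout above yields ids "sl0tl0" through
// "sl2tl1" with the corresponding dependency lists.

Something along those lines is what a browser could derive internally when the codec fixes the layer structure, which is essentially the argument for dropping the explicit dependencies field in the common case.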
Received on Wednesday, 11 July 2018 08:37:43 UTC