- From: Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>
- Date: Fri, 29 May 2015 12:53:35 +0000
- To: Shijun Sun <shijuns@microsoft.com>, Bernard Aboba <Bernard.Aboba@microsoft.com>, Peter Thatcher <pthatcher@google.com>, "Adam Roach" <adam@nostrum.com>
- CC: "public-webrtc@w3.org" <public-webrtc@w3.org>
On 27/05/15 22:37, Shijun Sun wrote:
> On Wednesday, May 27, 2015 3:29 AM Stefan Håkansson LK
> [mailto:stefan.lk.hakansson@ericsson.com] wrote:
>>
>>> On 26/05/15 22:50, Bernard Aboba wrote: [BA] Looking through the
>>> Media Capture specification, I do not see mention of a track
>>> having an encoding constraint or codec attribute. Can you point
>>> out to me where this is described?
>>
>> It is not described. I tried to clarify the use case described by
>> Adam in
>> https://lists.w3.org/Archives/Public/public-webrtc/2015May/0130.html
>>
>> I think it is valid, don't you agree?
>
> Based on my knowledge of the media capture design, the captured data
> format is a device/platform internal decision, for example whether
> the video data are uncompressed (e.g. RGB, YUV 4:2:0, etc.) or
> compressed (MJPEG is typical). It is up to the browser/platform
> implementation to make sure the consumer objects can consume the
> streams internally. In the case of the RtpSender object, how the data
> are consumed according to the encoding params can be internal
> logic/implementation. In practice, it is very likely we have to
> re-encode (or transcode) anyway to meet network bandwidth conditions,
> error recovery, etc. - all based on the encoding params and RTP
> runtime decisions.
>
> Please let me know if I'm missing anything. Thanks!

I don't think you have missed anything. But the question is whether we
should exclude the use of camera codecs that are not supported by the
platform; to me that seems like a limitation we should not impose.
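To make the discussion concrete, here is a rough sketch of where an application can observe or influence codec and encoding decisions. It uses the RTCRtpSender/RTCRtpTransceiver API shape found in today's browsers rather than anything defined in this thread, and the function name and codec choice are purely illustrative.

```typescript
// Illustrative sketch (not from the thread): the application never sees the
// capture format; it can only inspect the codecs the platform can negotiate
// and steer the encoding via sender parameters.
// Assumes a browser environment with RTCRtpSender/RTCRtpTransceiver support.

async function sendCameraVideo(pc: RTCPeerConnection): Promise<void> {
  // Whatever the camera delivers internally (RGB, YUV 4:2:0, MJPEG, ...)
  // remains a browser/platform decision; the track exposes no codec attribute.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  const transceiver = pc.addTransceiver(track, { direction: "sendonly" });

  // The codecs the platform can send are discoverable, and the application
  // can express a preference order among the negotiable codecs.
  console.log("sendable codecs:",
    RTCRtpSender.getCapabilities("video")?.codecs.map(c => c.mimeType));
  const recvCaps = RTCRtpReceiver.getCapabilities("video");
  if (recvCaps) {
    const vp8First = [...recvCaps.codecs].sort(
      (a, b) =>
        Number(b.mimeType === "video/VP8") - Number(a.mimeType === "video/VP8"));
    transceiver.setCodecPreferences(vp8First);
  }

  // Encoding parameters drive any re-encoding the sender performs to match
  // network conditions (bitrate, resolution scaling, ...).
  const params = transceiver.sender.getParameters();
  if (params.encodings.length > 0) {
    params.encodings[0].maxBitrate = 1_000_000;
    params.encodings[0].scaleResolutionDownBy = 1;
  }
  await transceiver.sender.setParameters(params);
}
```

Note that nothing in this sketch lets the application require that a camera's native compressed format be forwarded without transcoding, which is essentially the gap the question above touches on.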
Received on Friday, 29 May 2015 12:54:03 UTC