Re: Media Source draft proposal

On Apr 19, 2012, at 4:45 PM, Robert O'Callahan wrote:

> On Fri, Apr 20, 2012 at 6:58 AM, Maciej Stachowiak <mjs@apple.com> wrote:
> It seems to me that this spec has some conceptual overlap with WebRTC and the Web Audio API, which both involve some direct manipulation and streaming of media data.
> 
> WebRTC: http://dev.w3.org/2011/webrtc/editor/webrtc.html
> Web Audio API: https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
> 
> 
> I actually think those are fairly well separated from this proposal. This proposal is all about manipulating the data that goes into a decoder; WebRTC and the Audio WG are all about manipulating decoded data. The latter two need to be carefully coordinated, but this doesn't.
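
That is a fair description of the Media Source proposal's model: the page hands encoded bytes to the user agent, which does its own demuxing and decoding. A rough sketch of that model in script (illustrative only; the MediaSource/addSourceBuffer/appendBuffer names and the segment URL are mine and may not match the draft exactly):

    // Sketch of the Media Source model: the page supplies *encoded*
    // bytes, and the user agent demuxes and decodes them itself.
    // Names and the segment URL are illustrative, not quoted from
    // the draft.
    const video = document.querySelector<HTMLVideoElement>('video')!;
    const mediaSource = new MediaSource();

    mediaSource.addEventListener('sourceopen', async () => {
      const sb = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
      const segment = await fetch('/media/segment0.webm'); // hypothetical URL
      sb.appendBuffer(await segment.arrayBuffer()); // still-encoded data
    });

    video.src = URL.createObjectURL(mediaSource);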

From what I can tell, it's not the case that WebRTC manipulates only decoded data. Two examples: First, PeerConnection lets you send a MediaStream to a remote peer and receive a MediaStream from a remote peer. Surely media data sent over PeerConnection is not always decoded; such data would have to be encoded at least in transit. Likewise, the getRecordedData method in WebRTC generates "a file containing data in a *format supported by the user agent* for use in audio and video elements" (emphasis added), which is surely encoded data, not decoded data. In fact, I cannot find any case where the WebRTC spec offers access to decoded media data.
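
To make that concrete, here is roughly what the receiving side looks like (a sketch only; the ontrack/srcObject shape shown here is illustrative and may not match the current draft). The page only ever holds an opaque MediaStream handle; the encoded data that crossed the wire stays inside the user agent:

    // Sketch: a remote stream arrives over a PeerConnection and is
    // handed to a <video> element as an opaque MediaStream. The page
    // never sees the encoded bytes in transit; the user agent decodes
    // them. API names here are illustrative, not quoted from the draft.
    const pc = new RTCPeerConnection();
    pc.ontrack = (event: RTCTrackEvent) => {
      const video = document.querySelector<HTMLVideoElement>('video')!;
      video.srcObject = event.streams[0];
    };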

There may be a good reason why receiving a media stream over a peer-to-peer connection and playing it via a video element should use a completely different API than receiving a stream from a server and playing it via a video element. But if such a reason exists, it is not documented in either spec, and it is certainly not the difference between encoded and decoded data.
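
Put side by side (a hypothetical comparison, reusing the sketches above), the two paths end at the same element and the same decoder; only the attachment API differs:

    // Hypothetical comparison; mediaSource and remoteStream are as in
    // the sketches above.
    declare const mediaSource: MediaSource;
    declare const remoteStream: MediaStream;
    const videoEl = document.querySelector<HTMLVideoElement>('video')!;

    // (a) Encoded data appended by the page (Media Source model):
    videoEl.src = URL.createObjectURL(mediaSource);
    // (b) Encoded data delivered peer-to-peer, handled opaquely (WebRTC):
    videoEl.srcObject = remoteStream;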

Regards,
Maciej

Received on Friday, 20 April 2012 03:19:54 UTC