- From: Lachlan Hunt <lachlan.hunt@lachy.id.au>
- Date: Thu, 17 Mar 2011 10:09:08 +0100
On 2011-03-17 04:22, Robert O'Callahan wrote:
> On Thu, Mar 17, 2011 at 4:36 AM, Lachlan Hunt <lachlan.hunt at lachy.id.au> wrote:
>> I'm not entirely sure I understand your proposal, but are you suggesting
>> that the input streams from the camera/microphone would first go to <video>
>> and <audio> elements, allowing the existing HTMLMediaElement API on those
>> elements to be used to control those streams, the output of which can then
>> be encoded and recorded to a file or streamed remotely?
>
> Yes.
>
>> In fact, of all the properties that are on HTMLMediaElement, the only ones
>> that seem to have any real use for streaming media are volume, muted,
>> paused and ended. So I'm not convinced that it's a good idea to try and
>> reuse existing APIs simply for the sake of reusing them, when they aren't
>> really a good fit.
>
> As Olli said, <video> and <audio> are designed to support streaming media;
> streaming over a low-latency network is very much like streaming from a
> local device.

Yes, for playback that's fine, and that's what we've already implemented in
our experimental implementation of the device element.

But you seemed to be suggesting that it was also sufficient for controlling
and encoding the video stream, even though those controls are not at all
suited to that. Such control needs to occur before the video element in the
chain, not after it.

 ----------     -------------------             -----------
| Camera | --> | GeneratedStream | --+-------> | <video> |
 ----------     -------------------  |          -----------
                                     |    ---------     -----------------
                                     +-> | Codec | --> | Recorded blob |
                                     |    ---------     -----------------
                                     |
                                     |    ------------------
                                     +-> | PeerConnection |
                                          ------------------

The state of the stream, in terms of what gets streamed over P2P or recorded
locally, must be controlled at the GeneratedStream and given as input to the
codec. This includes things like controlling the input microphone volume,
video height and width, etc. In particular, the encoded height and width for
streaming may differ significantly from the rendered height and width in the
local video preview, so this is not something that can be controlled by the
video element itself.

> In Gecko, we allow seeking within cached segments of streamed video, and we
> could easily allow that for local devices too --- user-controlled "instant
> replay".

We don't buffer any streamed data in our initial device implementation and
seeking is not possible.

-- 
Lachlan Hunt - Opera Software
http://lachy.id.au/
http://www.opera.com/
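[A minimal sketch of the pipeline described above. The <device> element and
GeneratedStream from this thread never shipped, so this is an assumption-laden
illustration using the APIs that did ship later (getUserMedia, MediaRecorder,
RTCPeerConnection); it is not the API under discussion. The point it shows is
the same: capture/encode parameters are applied to the stream, upstream of the
codec and PeerConnection, while the <video> element only drives the local
preview.]

  // Sketch only: modern stand-ins for the GeneratedStream pipeline.
  async function startCapture() {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: true,
      audio: true
    });

    // Local preview: the <video> element controls rendering and playback
    // (volume, muted, paused), not what gets encoded.
    const preview = document.querySelector('video');
    preview.srcObject = stream;
    preview.width = 320;               // preview size only
    await preview.play();

    // Capture/encode parameters are set on the stream's tracks, before any
    // codec, so the encoded size can differ from the preview size.
    await stream.getVideoTracks()[0].applyConstraints({ width: 1280, height: 720 });

    // Record locally ...
    const recorder = new MediaRecorder(stream);
    recorder.start();

    // ... and/or send the same stream to a peer.
    const pc = new RTCPeerConnection();
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
  }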
Received on Thursday, 17 March 2011 02:09:08 UTC