Re: Describing recording by means of the Media Source interface

On 08/23/2012 07:10 PM, Rich Tibbett wrote:
> Harald Alvestrand wrote:
>> On 08/23/2012 06:52 PM, Josh Soref wrote:
>>> Rich wrote:
>>>> How is this not possible with the following existing pipeline:
>>>>
>>>> MediaStream -> HTMLAudioElement -> Web Audio API [1] -> WebSockets ->
>>>> ASR Service
>>> Technically, I think you can do something similar for video:
>>>
>>> MediaStream -> HTMLVideoElement -> HTMLCanvas.drawImage() ->
>>> HTMLCanvas.toDataURL()
>> at 30 FPS?
>>
>> The result would be akin to a Motion JPEG, only in PNG, I think...
>
> But that's the use case being requested in the Media Recording API
> too, right? Except that you can't control how much data you get or
> how many samples of the video you take per second.
That's where we need the constraints... Estimating how much data you
get at a given framerate and resolution is pretty routine, once you know
the codec.
>
> You can pull data from the canvas at whatever interval you like,
> typically less than 30 times per second. You can also sample more or
> less frequently depending on performance, of course.
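Concretely, the loop you're describing is roughly this (a sketch only;
`stream` stands in for a live MediaStream, and the 10-per-second rate is
arbitrary):

    function sampleFrames(stream, fps) {
      var video = document.createElement('video');
      // newer engines: video.srcObject = stream
      video.src = URL.createObjectURL(stream);
      video.play();

      var canvas = document.createElement('canvas');
      var ctx = canvas.getContext('2d');

      setInterval(function () {
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        ctx.drawImage(video, 0, 0);
        var frame = canvas.toDataURL('image/png');  // one standalone PNG per tick
        // ship `frame` over a WebSocket, etc.
      }, 1000 / fps);
    }

    // e.g. sampleFrames(stream, 10);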
Motion JPEG, and even simpler schemes, are used on some USB cameras. One
reason why HD cameras frequently use on-camera H.264 encoding is that
simpler encodings create too much data to carry over a USB 2.0
interface.

The difference between the amount of data you need to handle for
uncompressed video and for compressed video is dramatic: two orders of
magnitude or more.
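To put rough numbers on it (assumed but typical figures):

    var raw  = 1280 * 720 * 24 * 30;  // uncompressed 720p30 RGB: ~660 Mbit/s
    var usb2 = 480 * 1000 * 1000;     // USB 2.0 signalling rate: 480 Mbit/s
    var h264 = 4 * 1000 * 1000;       // common H.264 webcam target: ~4 Mbit/s

    raw > usb2;   // true: raw 720p30 alone exceeds the bus
    raw / h264;   // ~166: two orders of magnitude more data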

Another thing you need, and don't get with these interfaces, is 
audio/video synchronization. That's a reason why the MediaSource spec 
talks about the WebM (Matroska) file format, not an audio stream or a 
video stream.
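As a sketch of what the container buys you, using the MediaSource API
shape (the /recording.webm URL is hypothetical): a single SourceBuffer
carries the muxed WebM stream, so the container's interleaving and
timestamps, not the page, keep the two tracks in sync.

    var video = document.querySelector('video');
    var ms = new MediaSource();
    video.src = URL.createObjectURL(ms);

    ms.addEventListener('sourceopen', function () {
      // One buffer for the muxed (audio + video) WebM stream; the
      // container keeps the tracks synchronized.
      var buf = ms.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/recording.webm');  // hypothetical endpoint
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () { buf.appendBuffer(xhr.response); };
      xhr.send();
    });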

Received on Thursday, 23 August 2012 17:17:33 UTC