RE: Describing recording by means of the Media Source interface

> From: Rich Tibbett [mailto:richt@opera.com]
> 
> I've been trying to figure out exactly the purpose of having access to _real-
> time_ buffer data via any type of MediaStream Recording API. It is fairly clear
> that byte-level access to recorded data could be solved with existing
> interfaces, albeit not in real-time as the media is being recorded to a file but
> once a file has already been recorded in its entirety and returned to the web
> app.
> 
> If we could simply start recording of a MediaStream with e.g. .start(), then
> stop it at some arbitrary point, thereby returning a File object [1] then we
> could then pass that object through the existing FileReader API [2] to chunk it
> and apply anything we wish to at the byte-level after the recording has been
> completed.

It seems like if the start/stop APIs were sufficiently fast, you could just use them
as a timeslicing mechanism to get the recording broken up into smaller chunks in 
an encoded format. 

For example:
recorder.start();
setTimeout(getChunk, 1000);

function getChunk() {
    recorder.stop();
    // get the data (likely asynchronously)
    recorder.start();
    setTimeout(getChunk, 1000);
}

The main drawback of this approach is that any media captured between the call to 
stop() and the subsequent call to start() is lost.

To avoid this problem, you can instead introduce an API that simply slices the recording
at some arbitrary point in time, allowing the application to extract everything recorded 
up to that point while the recorder continues recording into a new segment (the recorder 
won't need to keep the prior slice, since it's now the application's responsibility). 
While this technique is not the same as real-time access to the stream, it does allow the 
application to choose whatever chunking interval it needs. It's more of a polling model 
than a push model, in which the application would simply be handed a chunk of data 
when it's ready.
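To make the slicing idea concrete, here is a minimal sketch of such a recorder in plain JavaScript. The `Recorder` class, its `slice()` method, and the in-memory sample buffer are all hypothetical stand-ins; a real implementation would hand back encoded media data rather than raw samples:

```javascript
// Hypothetical recorder demonstrating the "slice" polling model.
// Recording never stops, so no media is lost between chunks.
class Recorder {
  constructor() {
    this.buffer = [];      // data captured since the last slice
    this.recording = false;
  }
  start() { this.recording = true; }
  stop()  { this.recording = false; }
  // Called by the capture pipeline as media arrives.
  push(sample) {
    if (this.recording) this.buffer.push(sample);
  }
  // Cut the recording at this instant: return everything recorded so
  // far and keep recording into a fresh, empty segment. The prior
  // slice becomes the application's responsibility.
  slice() {
    const chunk = this.buffer;
    this.buffer = [];
    return chunk;
  }
}

// Polling usage: the application picks the chunking interval.
const recorder = new Recorder();
recorder.start();
// ...capture pipeline calls recorder.push(...) as data arrives...
// setInterval(() => handleChunk(recorder.slice()), 1000);
```

The key difference from the stop/start loop above is that `slice()` is atomic from the application's point of view: the cut and the start of the next segment happen at the same instant, so there is no window in which media can be dropped.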

Received on Thursday, 23 August 2012 20:07:36 UTC