RE: Describing recording by means of the Media Source interface

> From: Harald Alvestrand [mailto:harald@alvestrand.no]
> > The only potential drawback with this approach is losing the streaming
> > content that occurs between the time that stop() is called and start() is
> > called.
> >
> > To avoid this problem, you can just introduce an API that simply
> > slices the recording at some arbitrary time, allowing the application
> > to extract the recording up to that point, while the recorder
> > continues recording into a new segment (it won't need to keep the
> > prior slice since it's now the application's responsibility). While
> > this technique is not the same as real-time access to the stream, it
> > does allow the application to choose whatever chunking interval it needs.
> > It's more of a polling model than a push model, where the application is
> > simply handed a chunk of data when it's ready.
>
> At which point you've invented a block-emitting API. Why not just do **that**?

Sorry, I couldn't tell what you meant by **that** (above). Do you mean "invent a block-emitting API", or do you mean "go with what you just proposed"?

Either way works for me, but I suspect putting the developer in control is better for app performance and accommodates more use cases: provide some kind of API that says "slice the video stream now, give me the encoded data since the last time I sliced, and keep recording".
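As a rough sketch of that polling model (`SegmentingRecorder` and `slice()` are hypothetical names for illustration, not a proposed or existing API):

```javascript
// Hypothetical sketch of the "slice and keep recording" polling model.
// SegmentingRecorder and slice() are invented names, not a real API.
class SegmentingRecorder {
  constructor() {
    this.chunks = []; // encoded data accumulated since the last slice
  }

  // Called by the (simulated) encoder as data becomes available.
  push(chunk) {
    this.chunks.push(chunk);
  }

  // "Slice the stream now": hand back everything recorded since the last
  // slice, then continue recording into a fresh segment. The recorder no
  // longer keeps the prior slice; it's the application's responsibility.
  slice() {
    const segment = this.chunks;
    this.chunks = [];
    return segment;
  }
}

// The app polls at whatever chunking interval it needs.
const rec = new SegmentingRecorder();
rec.push("frame1");
rec.push("frame2");
const first = rec.slice();  // ["frame1", "frame2"]
rec.push("frame3");
const second = rec.slice(); // ["frame3"]
```

The contrast with a push model: here nothing is delivered until the application asks, so the app, not the recorder, decides the segment boundaries.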

Received on Tuesday, 28 August 2012 16:08:21 UTC