- From: Bjartur Thorlacius <svartman95@gmail.com>
- Date: Tue, 15 Mar 2011 22:11:33 +0000
On 3/15/11, Robert O'Callahan <robert at ocallahan.org> wrote:
> Instead of creating new state signalling and control API for streams, what
> about the alternative approach of letting <video> and <audio> use sensors as
> sources, and a way to connect the output of <video> and <audio> to encoders?
> Then we'd get all the existing state machinery for free. We'd also get
> sensor input for audio processing (e.g. Mozilla or Chrome's audio APIs), and
> in-page video preview, and using <canvas> to take snapshots, and more...

That would make sense, assuming you are suggesting reusing objects representing media elements, rather than using HTML elements.
Received on Tuesday, 15 March 2011 15:11:33 UTC