Re: First draft available

On Wed, Nov 30, 2011 at 11:38 AM, Robin Berjon <robin@berjon.com> wrote:

> We will need to sort out the boilerplate



> maybe Media Capture API, or Local Media Capture API?
>

Media Capture API is definitely a catchier title than getUserMedia!

> getUserMedia() uses the success/error callback approach, which is
> increasingly disliked. I would suggest either returning an object on which
> handlers can be set:
>
> var um = navigator.getUserMedia(options);
> um.onsuccess = function (stream) { ... };
> um.onerror = function (err) { ... };
> um.start();
>

This seems more consistent with other APIs such as IndexedDB and FileReader.
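To make the comparison concrete, the handler-object pattern can be sketched in plain JavaScript. Everything here is illustrative rather than from any spec (the `createMediaRequest` name, the option shape, the error object), and a real user agent would of course fire the handlers asynchronously after prompting the user:

```javascript
// Illustrative mock of the handler-object pattern, not a real API.
// A real user agent would invoke these handlers asynchronously after
// prompting the user; dispatch is synchronous here only for brevity.
function createMediaRequest(options) {
  return {
    onsuccess: null,
    onerror: null,
    start: function () {
      if (options && (options.video || options.audio)) {
        // Hand back a stand-in for the media stream.
        if (this.onsuccess) this.onsuccess({ options: options });
      } else if (this.onerror) {
        this.onerror({ message: "no media requested" });
      }
    }
  };
}

var um = createMediaRequest({ video: true });
um.onsuccess = function (stream) { console.log("success"); };
um.onerror = function (err) { console.log("error: " + err.message); };
um.start();
```

The appeal is that handlers can be attached (or swapped) before anything starts, exactly as with an IndexedDB request or a FileReader.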

> One issue that remains fully open is the capture of stills. A typical use
> case would be to acquire the video feed from a camera in order for it to
> serve as a viewfinder in the application, and then be able to trigger a
> still capture from the device. Note that frame-grabbing is not at all an
> option here since you will never get anywhere near the quality that the
> device can produce.
>

I'm very interested in this use case because I'm in the process of
developing a camera web app.

Perhaps this could be achieved via a "capture" method, similar to the
"record" method of a MediaStream (
http://www.w3.org/TR/2011/WD-webrtc-20111027/#methods-3). An object similar
to MediaStreamRecorder (
http://www.w3.org/TR/2011/WD-webrtc-20111027/#mediastreamrecorder) could be
returned ("MediaStreamCapture"?) whose blob is an image file in a format
supported by the user agent, rather than an audio or video file.
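In use it might look something like the rough sketch below. To be clear, capture(), MediaStreamCapture and getCapturedImage are all hypothetical names chosen by analogy with record()/MediaStreamRecorder/getRecordedData; none of them appear in the draft:

```javascript
// Hypothetical sketch only: a mock of the proposed "MediaStreamCapture",
// modelled on MediaStreamRecorder. None of these names are in the draft.
function mockMediaStreamCapture() {
  return {
    // By analogy with MediaStreamRecorder.getRecordedData(callback).
    getCapturedImage: function (callback) {
      // A real user agent would pass an image-file Blob here (e.g. a
      // full-quality JPEG); a plain descriptor stands in for it.
      callback({ type: "image/jpeg" });
    }
  };
}

// In the proposal this object would come from something like stream.capture().
var capture = mockMediaStreamCapture();
capture.getCapturedImage(function (blob) {
  console.log(blob.type); // an image format, not an audio or video file
});
```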

Alternatively, for increased privacy, an "image" boolean attribute could be
added to the MediaStreamOptions interface (
http://dev.w3.org/2011/webrtc/editor/getusermedia.html#mediastreamoptions).
In this case a video stream could be displayed to the user in the user
agent UI along with a "capture" button. Pressing it would return only the
still image data, inside a MediaStream object, in the callback, after
which the MediaStream immediately transitions to an ENDED state. This
means that content only ever gets access to the still image the user
selected, never to a live video stream, but it seems a little messier.
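From content's point of view that alternative might look like the sketch below. Again, the "image" option, the mock user agent object and the stream shape are all hypothetical, just to show the flow:

```javascript
// Hypothetical sketch only: a mock user agent implementing the proposed
// "image" flag. The real UA would show its own viewfinder and "capture"
// button; content would only ever receive the chosen still.
var uaMock = {
  getUserMedia: function (options, success) {
    if (options.image) {
      // Deliver a stream that carries only the captured still and is
      // already ended, so no live video ever reaches the page.
      success({ readyState: "ENDED", still: { type: "image/png" } });
    }
  }
};

uaMock.getUserMedia({ image: true }, function (stream) {
  console.log(stream.readyState); // ENDED; content never saw live video
});
```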

I'm also wondering whether camera settings (such as flash on/off and
exposure time) and image processing (such as brightness and contrast)
should be exposed through the API, or left down to the user agent in order
to keep the API surface small.

Ben

-- 
Ben Francis
http://tola.me.uk
http://krellian.com

Received on Thursday, 1 December 2011 09:34:07 UTC