RE: Image Capture Proposal, third version

Thanks for the comments.

> I think to make life easy for authors the ImageCapture object should look as much as possible like MediaRecorder. I guess we can't reuse MediaRecorder (I know you just removed it from there, which I think was the right thing to do) but I think it would make sense to make the API match MediaRecorder as much as possible. For example, setPhotoSettings() should just be setOptions().

A proposed goal for the group is to have all 3 specs (Recording, Media Capture & Streams, Image Capture) move away from settings to a constrainable interface.  Do you have an opinion as to the way constraints are currently defined in the Media Capture & Streams spec?

> What's the use-case for the frameGrabber stuff? If it's real-time video processing, I think we need a completely different API that's based on Web Workers. I'll ignore the frameGrabber stuff for now.

It was decided during the last conf. call to remove this method.

> For example, are multiple takePhoto() calls allowed? What happens if the author makes multiple takePhoto() calls?

Multiple takePhoto() invocations are allowed.  However, if takePhoto() is called again before the image capture device is ready to capture a fresh image, then the onphotoerror event handler should be invoked.  The current text stating “If the UA is unable to execute the takePhoto() method for any other reason, then the UA must fire a PhotoErrorEvent event at the ImageCapture object with a new PhotoError object whose code is set to PHOTO_ERROR.” was intended to cover this requirement, but I can be more explicit in the text.
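
To make the expected behaviour concrete, here is a rough TypeScript-style sketch of what an author would see, using the handler names from the current draft (the interface declaration is just a stand-in for illustration, not a shipping API):

    // Stand-in for the draft surface under discussion; not a shipping API.
    interface DraftImageCapture {
      takePhoto(): void;
      onphoto: ((ev: { data: Blob }) => void) | null;
      onphotoerror: ((ev: { data: { code: number } }) => void) | null;
    }

    function rapidCaptureDemo(capture: DraftImageCapture): void {
      capture.onphoto = (ev) => {
        // ev.data is the captured Blob in the current draft.
        console.log('captured', ev.data.size, 'bytes');
      };
      capture.onphotoerror = (ev) => {
        // Fired when the device was not ready to capture a fresh image.
        console.warn('capture failed, error code', ev.data.code);
      };
      capture.takePhoto();
      capture.takePhoto(); // likely too soon; expected to surface onphotoerror
    }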

As another point of reference, the Camera API my organization implemented for the old Android browser (see https://developer.qualcomm.com/books/qualcomm-html5-device-apis/camera-module) defined a captureImage() method that behaved this way.


>"videoStreamTrack" should just be "track". "onphotoerror" can probably just be "onerror".

Will do.  It has already been commented that there are too many event handlers.  Probably one “onerror” event handler would make sense, along with a richer error object being returned.  I’ll simplify this in the next version of the spec.
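
Roughly what I have in mind (a sketch only; the error codes and field names below are placeholders, not agreed API):

    // One onerror handler replacing the per-feature error handlers, carrying a
    // richer error object. The code values are hypothetical.
    interface CaptureError { code: number; message: string; }
    interface DraftCaptureTarget {
      onerror: ((ev: { error: CaptureError }) => void) | null;
    }

    function wireErrors(capture: DraftCaptureTarget): void {
      capture.onerror = (ev) => {
        console.warn(`capture error ${ev.error.code}: ${ev.error.message}`);
      };
    }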

> There is no point in firing events synchronously to indicate the success or failure of methods such as setPhotoSettings. You can raise an exception instead if necessary.

Actually, I proposed an onphotosettingschange event handler to also indicate when photo settings are changed directly by the end user.  It wasn’t meant to indicate only the success/failure of a setPhotoSettings() call.
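
In other words, something along these lines (a sketch only; the dictionary members are illustrative):

    // onphotosettingschange reports any change to the effective photo settings,
    // whether it came from setPhotoSettings() or from the end user adjusting
    // the camera out of band.
    interface DraftPhotoSettings { imageWidth?: number; imageHeight?: number; }
    interface DraftSettingsTarget {
      setPhotoSettings(settings: DraftPhotoSettings): void;
      onphotosettingschange: ((ev: { settings: DraftPhotoSettings }) => void) | null;
    }

    function watchSettings(capture: DraftSettingsTarget): void {
      capture.onphotosettingschange = (ev) => {
        console.log('effective photo settings are now', ev.settings);
      };
      capture.setPhotoSettings({ imageWidth: 1920, imageHeight: 1080 });
    }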

>Does setPhotoSettings/setOptions affect the contents returned in the video track? Or just the data returned on the onphoto callback?

Photo settings should not affect the constraints applied to the VideoStreamTrack.  They only affect the object (Blob, ImageData) returned to the appropriate event handler.  The mechanism for modifying VideoStreamTrack constraints is currently defined in http://dev.w3.org/2011/webrtc/editor/archives/20130320/getusermedia.html#methods-1.
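
To illustrate the separation (a sketch only; the still-capture dictionary members below are illustrative):

    // setPhotoSettings() shapes only the Blob handed to onphoto; the live video
    // track is adjusted through the constraint machinery in the Media Capture
    // and Streams draft, not through ImageCapture.
    interface DraftStillCapture {
      setPhotoSettings(settings: { imageWidth: number; imageHeight: number }): void;
      takePhoto(): void;
      onphoto: ((ev: { data: Blob }) => void) | null;
    }

    function captureHiRes(capture: DraftStillCapture, track: MediaStreamTrack): void {
      capture.setPhotoSettings({ imageWidth: 2592, imageHeight: 1944 }); // still image only
      capture.onphoto = (ev) => console.log('still image size:', ev.data.size);
      capture.takePhoto();
      // The preview/recorded video keeps whatever the track constraints say:
      console.log('video track unaffected:', track.label);
    }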


>'whiteBalanceModeSetting' can just be "whiteBalance'. Instead of an enum which simply maps to Kelvin, why not make it an unsigned long in Kelvin and have the UA map that to the nearest value it supports?

Fine – I can change this in the next version.  However, I believe the current enum approach is consistent with this setting’s counterpart in native OS camera APIs (e.g. http://developer.android.com/reference/android/hardware/Camera.Parameters.html), which do not provide Kelvin mappings.
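
For comparison, the Kelvin-based alternative would behave something like this (the device table and snapping logic below are purely illustrative of UA behaviour):

    // The app requests a colour temperature in Kelvin; the UA maps it to the
    // nearest value the device actually supports.
    const SUPPORTED_KELVIN = [2850, 3800, 5500, 6500, 7500]; // hypothetical device table

    function nearestSupportedKelvin(requested: number): number {
      return SUPPORTED_KELVIN.reduce((best, k) =>
        Math.abs(k - requested) < Math.abs(best - requested) ? k : best);
    }

    console.log(nearestSupportedKelvin(6200)); // 6500 on this hypothetical device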

>'contrast', 'brightness', 'exposureCompensation', 'iso', 'saturation', and 'sharpness' seem to be completely undefined. That's bad.

I can certainly add better definitions than those currently in the spec.  I wanted to get some level of agreement on the proposed settings first.  I’ll address this in the next version.

> imageHeight and imageWidth are especially problematic. Generally these can't be set independently so instead of providing a min and max for each one, you probably want to either export a list of allowed (width,height) pairs, or better still just expose maxImageWidth and maxImageHeight and let the app specify any smaller size when setting the options.

I don’t disagree with the suggestion – in fact I think it is a “must-have” for both the Recording and Image Capture APIs.  It also maps more cleanly to some native OS implementations (see Android’s getSupportedPictureSizes: http://developer.android.com/reference/android/hardware/Camera.Parameters.html#getSupportedPictureSizes%28%29).  Moreover, it is consistent with the Camera API from QuIC that I referred to before.

I’d like to keep the API conventions the same for both Recording and Image Capture.  Recording currently has a width/height setting, which is why I carried over the approach to this spec.   But I agree that this has to change.
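
For reference, the two shapes under discussion might look roughly like this (all names below are illustrative; none are in the current draft):

    // Option A: enumerate allowed pairs, as Android's getSupportedPictureSizes() does.
    interface PictureSize { width: number; height: number; }
    declare const supportedImageSizes: ReadonlyArray<PictureSize>; // hypothetical

    // Option B: expose only the maxima and accept any smaller request.
    declare const maxImageWidth: number;  // hypothetical
    declare const maxImageHeight: number; // hypothetical

    function clampRequestedSize(width: number, height: number): PictureSize {
      // With Option B the app simply clamps its preferred size:
      return {
        width: Math.min(width, maxImageWidth),
        height: Math.min(height, maxImageHeight),
      };
    }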

> Firing events synchronously to indicate success or failure of API calls seems unnecessary. If the call fails, throw an exception (or silently ignore). If it succeeds, do nothing. That's the general rule for Web APIs.

As mentioned before - I’d like to keep the API conventions consistent with the Recording API, which currently has an onerror event handler.

When I survey other W3C specs regarding event targets, it is a mixed bag as to whether onerror/onsuccess event handlers are defined.  For instance, the current version of FileReader defines an onerror event handler (http://www.w3.org/TR/FileAPI/#dfn-onerror).

From: rocallahan@gmail.com [mailto:rocallahan@gmail.com] On Behalf Of Robert O'Callahan
Sent: Friday, March 29, 2013 12:33 AM
To: Mandyam, Giridhar
Cc: Travis Leithead; public-media-capture@w3.org
Subject: Re: Image Capture Proposal, third version

On Mon, Mar 25, 2013 at 3:00 PM, Mandyam, Giridhar <mandyam@quicinc.com> wrote:
Yes, a VideoStreamTrack constructor makes sense, but simply having it will not obviate the need for validation.  This is because the sourceType of the VideoStreamTrack would have to be verified as to whether it is ‘camera’ or ‘photo-camera’ (see Sec. 4.3.3 of the latest Media Capture and Streams spec), and takePhoto() is prohibited on a ‘camera’ sourceType currently.

I don't see why distinguishing "photo-camera" from "camera" is useful for authors. Video cameras could support the takePhoto() API just fine. So I suggest simply allowing ImageCapture for any VideoStreamTrack.

I think to make life easy for authors the ImageCapture object should look as much as possible like MediaRecorder. I guess we can't reuse MediaRecorder (I know you just removed it from there, which I think was the right thing to do) but I think it would make sense to make the API match MediaRecorder as much as possible. For example, setPhotoSettings() should just be setOptions().

What's the use-case for the frameGrabber stuff? If it's real-time video processing, I think we need a completely different API that's based on Web Workers. I'll ignore the frameGrabber stuff for now.

A lot needs to be clarified here. For example, are multiple takePhoto() calls allowed? What happens if the author makes multiple takePhoto() calls?

"videoStreamTrack" should just be "track". "onphotoerror" can probably just be "onerror".

There is no point in firing events synchronously to indicate the success or failure of methods such as setPhotoSettings. You can raise an exception instead if necessary.

Does setPhotoSettings/setOptions affect the contents returned in the video track? Or just the data returned on the onphoto callback?

'whiteBalanceModeSetting' can just be "whiteBalance'. Instead of an enum which simply maps to Kelvin, why not make it an unsigned long in Kelvin and have the UA map that to the nearest value it supports?

In fact all members of PhotoSettings (which should probably be called ImageCaptureOptions if we make MediaRecorder.setOptions take a RecorderOptions) can lose their "Setting" suffix.

'autoExposureMode' should be declared ExposureModeEnum (which should be called ExposureMode, no need for "Enum"). It could use a more precise definition.

'contrast', 'brightness', 'exposureCompensation', 'iso', 'saturation', and 'sharpness' seem to be completely undefined. That's bad.

PhotoSettingsOptions seems over-abstracted. What's the point of MediaSettingsItem anyway? Attributes like whiteBalanceMode can just be declared as values of the correct type. The value of MediaSettingsRange is also unclear. As in MediaRecorder, I think it would be much simpler to examine these options on a case-by-case basis and either define the allowed range in the spec, or if the supported range of an option must be device dependent *and* exposed to apps, add individual attributes for the min and/or max limit of each option.

imageHeight and imageWidth are especially problematic. Generally these can't be set independently so instead of providing a min and max for each one, you probably want to either export a list of allowed (width,height) pairs, or better still just expose maxImageWidth and maxImageHeight and let the app specify any smaller size when setting the options.

Firing events synchronously to indicate success or failure of API calls seems unnecessary. If the call fails, throw an exception (or silently ignore). If it succeeds, do nothing. That's the general rule for Web APIs.

Rob
--
“If you love those who love you, what credit is that to you? Even sinners love those who love them. And if you do good to those who are good to you, what credit is that to you? Even sinners do that.”
