RE: Image Capture Proposal, third version

Hello Travis,
Thanks for the feedback.


Ø  Now that the getUserMedia spec is updated to include derived VideoStreamTrack objects, perhaps you can narrow down your Constructor's type to be a VideoStreamTrack instead of the generic MediaStreamTrack? (This would also ease some of your validation steps in various APIs.) Same feedback for the readonly attribute "videoStreamTrack" (which could be renamed to just "track" or "src" for brevity).

Yes, a VideoStreamTrack constructor makes sense, but simply having it will not obviate the need for validation.  This is because the sourceType of the VideoStreamTrack would have to be verified as being either 'camera' or 'photo-camera' (see Sec. 4.3.3 of the latest Media Capture and Streams spec), and takePhoto() is currently prohibited on a 'camera' sourceType.
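A rough sketch of the validation described above (the sourceType values follow Sec. 4.3.3 of the draft; the helper function itself is purely illustrative, not part of any spec):

```javascript
// Hypothetical validation sketch: takePhoto() is only permitted when the
// track's source is a 'photo-camera', so a constructor or method would
// need to check sourceType even with a VideoStreamTrack-typed argument.
function validateTrackForTakePhoto(videoStreamTrack) {
  if (videoStreamTrack.sourceType !== "photo-camera") {
    throw new TypeError(
      "takePhoto() requires a 'photo-camera' source, got '" +
        videoStreamTrack.sourceType + "'");
  }
  return true;
}
```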



Ø  I'm not sure if BlobEvent will stay in the current MediaCapture&Streams document. Either way, one of those specs shouldn't need to re-define it. Not sure which spec it belongs in...?

It should be in the Recording API spec because it precedes the Image Capture spec.  http://dev.w3.org/2009/dap/ReSpec.js/biblio.html doesn't have the Recording API referenced yet, so I just reproduced BlobEvent in the Image Capture spec.  I can manually add in the reference to the Recording API spec.
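For readers unfamiliar with the interface being discussed, this is roughly the shape of BlobEvent as reproduced in the drafts: an event whose "data" attribute carries the captured Blob. A plain-JS stand-in, not the normative IDL:

```javascript
// Minimal stand-in for the BlobEvent shape duplicated across the drafts:
// an event type string plus a readonly-style "data" member holding the
// captured Blob (here any Blob-like object, for illustration only).
class BlobEvent {
  constructor(type, data) {
    this.type = type;   // e.g. "photo" or "frame"
    this.data = data;   // the captured Blob
  }
}
```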



Ø  I think the new Event object definitions for errors are overkill. Can we just move the error state info to an attribute on the ImageCapture interface, remove FrameGrabErrorEvent, PhotoErrorEvent, and SettingsErrorEvent, and instead fire generic events, using event.target to read the state?

Will take this into consideration for the next version.
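The alternative being suggested could look roughly like this (an assumed design sketch, not anything in the current draft — the class and member names are invented for illustration):

```javascript
// Sketch of the generic-event alternative: a single "error" attribute on
// the ImageCapture-like object, with handlers reading the state off
// event.target instead of receiving specialized error event types.
class SimpleImageCapture {
  constructor() {
    this.error = null;        // last error state, readable by any handler
    this.handlers = {};       // event type -> list of callbacks
  }
  addEventListener(type, cb) {
    (this.handlers[type] = this.handlers[type] || []).push(cb);
  }
  _fail(code) {
    this.error = code;        // record the state on the interface...
    (this.handlers.error || []).forEach(
      (cb) => cb({ type: "error", target: this }));  // ...then fire a generic event
  }
}
```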



Ø  It seems like we need to make PhotoSettings match the constraints/state/capabilities model between the Recording API, this spec, and the Media Capture and Streams spec.

I'll reference Section 3 of the latest version of the Media Capture and Streams Specification:  "Constraints are stored on the track object, not the source."  I take this to mean that constraints are attributes of the MediaStreamTrack.  PhotoSettings are not supposed to modify the MediaStreamTrack (VideoStreamTrack) constraints.  Since we don't have a MediaStreamTrack that is photo-specific, I am not sure the constraints model makes sense.

In addition, please refer to the source/track distinction of Section 3 in the Media Capture and Streams spec:  "When a source is connected to a track, it must conform to the constraints present on that track (or set of tracks)."  PhotoSettings are usually source settings, and since there are no equivalent (photo-specific) constraints currently defined for the VideoStreamTrack, it would not be possible for the source's PhotoSettings to conform to any constraints on the track.
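The source/track distinction being argued can be illustrated with a small sketch (all names and the helper are hypothetical; the point is only that PhotoSettings is a one-shot dictionary applied to the source, leaving the track's constraints untouched):

```javascript
// Illustrative contrast: constraints live on the track and the source
// must conform to them, whereas PhotoSettings is a plain dictionary
// applied to the source for a single capture. Applying PhotoSettings
// never modifies the track's constraints.
function applyPhotoSettings(source, photoSettings) {
  // Merge the dictionary into a copy of the source's photo state only.
  return Object.assign({}, source, photoSettings);
}

const track = { constraints: { width: { min: 640 } } };   // stored on the track
const source = { iso: 100, redEyeReduction: false };      // source-side settings
const configured = applyPhotoSettings(source, { iso: 400 });
```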





Ø  o   I don't see why the individual state attributes couldn't be put directly on the ImageCapture object (rather than living in isolation)?

Can you expand on what you had in mind?  This isn't a track, so it doesn't have state comparable to a VideoStreamTrack's.



Ø  o   You'll want to be able to apply constraints (of these same names) onto the Image Capture object (in order to select or scope various settings) using the common constraint read/write APIs defined for Media Capture and Streams.

In addition to what I wrote above regarding the definition of constraints, if I follow the model of the Media Capture and Streams Spec then I may also have to define the constraints for the IANA registry (see Sec. 10 of that document).  This would also apply to the Recording API specification.  That seems like overkill to me.



Ø  o   You'll want to be able to read the capabilities of a given implementation's ImageCapture settings - I think that's what photoSettingsOptions gives you at the moment.

Yes, that was my intention with photoSettingsOptions.  I think this works roughly in the same manner as the constraints() method defined on MediaStreamTrack (Sec. 4.3 of the latest Media Capture and Streams spec).
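A sketch of how a page might use such a capabilities attribute before choosing a setting (the "zoom" member with min/max/initial is an assumption for illustration, as is the clamping helper):

```javascript
// Hypothetical use of a photoSettingsOptions-style capabilities object:
// read the advertised range for a setting, then keep a requested value
// inside that range before applying it.
function clampToCapability(requested, option) {
  return Math.min(option.max, Math.max(option.min, requested));
}

const photoSettingsOptions = { zoom: { min: 1, max: 4, initial: 1 } };
const chosenZoom = clampToCapability(8, photoSettingsOptions.zoom);
```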

-Giri



From: Travis Leithead [mailto:travis.leithead@microsoft.com]
Sent: Wednesday, March 20, 2013 3:40 PM
To: Mandyam, Giridhar; public-media-capture@w3.org
Subject: RE: Image Capture Proposal, third version

Nice Work!

Additional feedback:

*         Now that the getUserMedia spec is updated to include derived VideoStreamTrack objects, perhaps you can narrow down your Constructor's type to be a VideoStreamTrack instead of the generic MediaStreamTrack? (This would also ease some of your validation steps in various APIs.) Same feedback for the readonly attribute "videoStreamTrack" (which could be renamed to just "track" or "src" for brevity).

*         I'm not sure if BlobEvent will stay in the current MediaCapture&Streams document. Either way, one of those specs shouldn't need to re-define it. Not sure which spec it belongs in...?

*         I think the new Event object definitions for errors are overkill. Can we just move the error state info to an attribute on the ImageCapture interface, remove FrameGrabErrorEvent, PhotoErrorEvent, and SettingsErrorEvent, and instead fire generic events, using event.target to read the state?

*         It seems like we need to make PhotoSettings match the constraints/state/capabilities model between the Recording API, this spec and the Media Capture and Streams spec. Along those lines:

o   I don't see why the individual state attributes couldn't be put directly on the ImageCapture object (rather than living in isolation)?

o   You'll want to be able to apply constraints (of these same names) onto the Image Capture object (in order to select or scope various settings) using the common constraint read/write APIs defined for Media Capture and Streams.

o   You'll want to be able to read the capabilities of a given implementation's ImageCapture settings - I think that's what photoSettingsOptions gives you at the moment.

From: Mandyam, Giridhar [mailto:mandyam@quicinc.com]
Sent: Tuesday, March 19, 2013 4:55 PM
To: public-media-capture@w3.org<mailto:public-media-capture@w3.org>
Subject: Image Capture Proposal, third version

Hello All,
Thanks for all the good feedback on the second version of this spec.  I have modified the spec accordingly and am enclosing as an attachment.  It can also be found at http://gmandyam.github.com/image-capture/.

The first version of the spec is accessible at: http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0059.html (in attachment)
The second version of the spec is accessible at: http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0090.html (in attachment)

Summary of changes:


1.       Johannes' comments: http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0093.html

Have made PhotoSettings a dictionary



Travis's follow-up on the prohibition of a dictionary being used as an attribute:  http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0100.html

I think I'm covered here because the readonly attribute on the Image Capture object is a non-dictionary object called PhotoSettingsOptions.
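The WebIDL distinction being made can be sketched as follows (the factory wrapper and its internals are hypothetical stand-ins; only the two names, PhotoSettings and PhotoSettingsOptions, come from the draft):

```javascript
// Rough illustration: a WebIDL dictionary (PhotoSettings) is just a plain
// object passed into an operation per call, while the readonly attribute
// exposes a persistent, non-dictionary object (PhotoSettingsOptions).
function makeImageCapture(options) {
  const settingsOptions = Object.freeze({ ...options }); // interface-like, read-only
  return {
    get photoSettingsOptions() { return settingsOptions; },
    setOptions(photoSettings /* plain dictionary, consumed per call */) {
      return Object.keys(photoSettings).length > 0;
    },
  };
}
```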



As per Johannes' follow-up in http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0104.html

I will modify the example when the W3C Canvas spec is changed accordingly.  I agree with you that we need to resolve whether MediaStreamTracks are in fact visible in workers or not.  Currently, there is at least one multimedia processing API (WebAudio) that is not available in a worker.  It would seem strange to be able to do video processing in a worker but not audio processing, so maybe this issue should be resolved more holistically.



2.       Adam's comments:  http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0096.html

a.       Reworded the event firing text wherever it occurs in the document as per your suggestions.  Please take a look.

b.      Added a reference to the Canvas spec at the first mention of ImageData.

c.       Changed examples as per your suggestions.  Thanks for the corrections!



3.       Tim T.'s suggestion (http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0099.html) regarding an asynchronous frame capture API that would work more like the way requestAnimationFrame does for repaint is IMO a good one.  I've tried to accomplish this by defining a frameGrabber() method.  This is modeled after the watchPosition method in the Geolocation API spec, but it is not callback-based.  In this way, a single event handler can be defined for both the one-shot frame request (getFrame) and frameGrabber.  I don't know if this is the best approach, so I am certainly open to any feedback or suggestions.  I will add an example in the examples section if I receive feedback from the group that this is the best way to go forward.
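The single-handler idea can be sketched like this (the getFrame/frameGrabber names mirror the draft, but the class, the timer-free "tick" driver, and the event shape are invented for illustration):

```javascript
// Sketch: both the one-shot getFrame() and the repeating frameGrabber()
// deliver frames through the same onframe handler, analogous to
// getCurrentPosition/watchPosition in Geolocation but event-based.
class FrameSource {
  constructor() { this.onframe = null; this._watching = false; }
  _deliver(frame) { if (this.onframe) this.onframe({ type: "frame", data: frame }); }
  getFrame() { this._deliver("single-frame"); }              // one-shot request
  frameGrabber() { this._watching = true; }                  // start repeated delivery
  tick(frame) { if (this._watching) this._deliver(frame); }  // driven externally here
}
```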


-Giri

Received on Monday, 25 March 2013 02:00:51 UTC