- From: Travis Leithead <travis.leithead@microsoft.com>
- Date: Wed, 20 Mar 2013 22:39:54 +0000
- To: "Mandyam, Giridhar" <mandyam@quicinc.com>, "public-media-capture@w3.org" <public-media-capture@w3.org>
- Message-ID: <9768D477C67135458BF978A45BCF9B3853C4DB80@TK5EX14MBXW602.wingroup.windeploy.ntde>
Nice work! Additional feedback:

* Now that the getUserMedia spec is updated to include derived VideoStreamTrack objects, perhaps you can narrow your constructor's argument type to VideoStreamTrack instead of the generic MediaStreamTrack? (This would also ease some of your validation steps in various APIs.) Same feedback for the readonly attribute "videoStreamTrack" (which could be renamed to just "track" or "src" for brevity).

* I'm not sure if BlobEvent will stay in the current Media Capture and Streams document. Either way, one of those specs shouldn't need to re-define it. Not sure which spec it belongs in...?

* I think the new Event object definitions for errors are overkill. Can we just move the error state info to an attribute on the ImageCapture interface and remove FrameGrabErrorEvent, PhotoErrorEvent, and SettingsErrorEvent, instead firing generic events and using event.target to read the state?

* It seems like we need to make PhotoSettings match the constraints/state/capabilities model shared between the Recording API, this spec, and the Media Capture and Streams spec. Along those lines (a rough sketch of what this could look like follows this list):

  o I don't see why the individual state attributes couldn't be put directly on the ImageCapture object (rather than living in isolation).

  o You'll want to be able to apply constraints (of these same names) to the ImageCapture object (in order to select or scope various settings) using the common constraint read/write APIs defined for Media Capture and Streams.

  o You'll want to be able to read the capabilities of a given implementation's ImageCapture settings - I think that's what photoSettingsOptions gives you at the moment.
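For concreteness, here is a rough JavaScript sketch of what that combined model could look like. The attribute and method names on ImageCapture used below (whiteBalanceMode, getCapabilities, applyConstraints, lastError) are illustrative assumptions patterned on the Media Capture and Streams constraint APIs, not text from either draft.

    // Illustrative sketch only: the ImageCapture attribute/method names
    // below are assumptions, not taken from the Image Capture draft.
    navigator.getUserMedia({ video: true }, function (stream) {
      var track = stream.getVideoTracks()[0];   // video track for the capturer
      var capture = new ImageCapture(track);

      // Per-setting state read directly off the ImageCapture object,
      // rather than through a separate PhotoSettings attribute.
      console.log("White balance mode:", capture.whiteBalanceMode);

      // Capabilities query, mirroring the capabilities model in
      // Media Capture and Streams.
      var caps = capture.getCapabilities();
      console.log("Zoom range:", caps.zoom.min, "to", caps.zoom.max);

      // Constraints applied through the common constraint read/write API.
      capture.applyConstraints({ advanced: [{ zoom: caps.zoom.max }] });

      // A single generic error event; error state is read from event.target
      // instead of from FrameGrabErrorEvent / PhotoErrorEvent /
      // SettingsErrorEvent objects.
      capture.onerror = function (event) {
        console.log("Capture error:", event.target.lastError);
      };

      capture.takePhoto();
    }, function (err) {
      console.log("getUserMedia failed:", err);
    });

The point of the sketch is only that settings state, capabilities, constraints, and error state all hang off the one ImageCapture object, so no per-error event interfaces are needed.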
From: Mandyam, Giridhar [mailto:mandyam@quicinc.com]
Sent: Tuesday, March 19, 2013 4:55 PM
To: public-media-capture@w3.org
Subject: Image Capture Proposal, third version

Hello All,

Thanks for all the good feedback on the second version of this spec. I have modified the spec accordingly and am enclosing it as an attachment. It can also be found at http://gmandyam.github.com/image-capture/.

The first version of the spec is accessible at: http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0059.html (in attachment)
The second version of the spec is accessible at: http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0090.html (in attachment)

Summary of changes:

1. Johannes' comments: http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0093.html
   Have made PhotoSettings a dictionary. Regarding Travis's follow-up on the prohibition of a dictionary being used as an attribute (http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0100.html): I think I'm covered here because the readonly attribute on the ImageCapture object is a non-dictionary object called PhotoSettingsOptions. As per Johannes' follow-up in http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0104.html, I will modify the example when the W3C Canvas spec is changed accordingly. I agree with you that we need to resolve whether MediaStreamTracks are in fact visible in workers or not. Currently, there is at least one multimedia processing API (Web Audio) that is not available in a worker. It would seem strange to be able to do video processing in a worker but not audio processing, so maybe this issue should be resolved more holistically.

2. Adam's comments: http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0096.html
   a. Reworded the event-firing text wherever it occurs in the document as per your suggestions. Please take a look.
   b. Added a reference to the Canvas spec at the first mention of ImageData.
   c. Changed the examples as per your suggestions. Thanks for the corrections!

3. Tim T.'s suggestion (http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0099.html) regarding an asynchronous frame capture API that would work more like requestAnimationFrame does for repaint is IMO a good one. I've tried to accomplish this by defining a frameGrabber() method. It is modeled after the watchPosition method in the Geolocation API spec, but it is not callback based. In this way, a single event handler can be defined for both the one-shot frame request (getFrame) and frameGrabber (a rough sketch of this pattern follows below). I don't know if this is the best way to do things, so I am certainly open to any feedback/suggestions that anyone may have. I will add an example in the examples section if I receive feedback from the group that this is the best way to go forward.

-Giri
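For concreteness, a rough JavaScript sketch of the single-handler idea described in item 3 above. The onframe handler name and the event's imageData attribute are assumptions for illustration, not text from the draft, and getFrame()/frameGrabber() are shown without arguments since their exact signatures are not spelled out here.

    // Illustrative sketch only: onframe and event.imageData are assumed names.
    navigator.getUserMedia({ video: true }, function (stream) {
      var capture = new ImageCapture(stream.getVideoTracks()[0]);

      // One handler receives frames from both the one-shot getFrame()
      // and the repeating frameGrabber(), so no per-call callbacks are needed.
      capture.onframe = function (event) {
        var frame = event.imageData;   // assumed to be a Canvas ImageData
        console.log("Got frame " + frame.width + "x" + frame.height);
      };

      capture.getFrame();      // one-shot frame request

      capture.frameGrabber();  // repeating frame requests, analogous to
                               // watchPosition() in the Geolocation API
    }, function (err) {
      console.log("getUserMedia failed:", err);
    });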
Received on Wednesday, 20 March 2013 22:41:12 UTC