- From: Rich Tibbett <richt@opera.com>
- Date: Mon, 27 Aug 2012 17:08:40 +0200
- To: Harald Alvestrand <harald@alvestrand.no>
- CC: "public-media-capture@w3.org" <public-media-capture@w3.org>
Harald Alvestrand wrote:
>
> On 08/27/2012 02:47 PM, Rich Tibbett wrote:
>> Harald Alvestrand wrote:
>>> Changing the subject line, since this has absolutely nothing to do
>>> with the syntax or semantics of the resolution constraints....
>>
>> I believe this is a discussion on the merits of either having a
>> responsive approach to constraint management vs. specifying up-front
>> constraints at web application start-up.
>>
>> That really boils down to a question of user experience when using
>> getUserMedia and is not a decision between mandatory vs optional
>> constraints. Both are still supported either way.
>
> That's what I don't understand in this discussion.
>
> Given that we both agree that it's better for the user if the
> developer writes responsive applications, and given that we both
> agree that the application writer can write both responsive and
> non-responsive applications whether getUserMedia supports mandatory
> constraints or not, what is it in this discussion that matters to
> whether a browser should support mandatory constraints in
> getUserMedia?

It's because the app developer is requesting constraints blindly at
invocation. Being able to query the capability ranges available once a
camera has been provided allows a developer to make a much more
informed choice.

>
>> Agreed. But giving developers up-front constraints as an 'easy-out'
>> to avoid responsibility for providing responsive service to as many
>> users as possible in a heterogeneous web environment is going to
>> fragment and damage the long term health of the web.
>>
>> Making the browser responsible for rejection of service up-front is
>> also a convenient way to defer developer responsibility here. That
>> would be bad.
>
> That part totally doesn't parse for me. All the difference I see is
> that the developer has to put code into the error handler instead of
> in the success handler; the developer is still 100% responsible in
> all cases, there is no automatic UI of rejection.

Up-front constraints are going to invoke the error handler much more
often than a responsive approach would, because they are a blind
request for ideal capabilities. Responsive design would avoid that and
push a lot more camera streams through the success handler.

>
> Out of sequence quote:
>
>>> What I'd like to therefore see instead is something similar to the
>>> following:
>>>
>>> getUserMedia({ video: true
>>> }, successCallback, errorCallback);
>>>
>>> function successCallback(stream) {
>>>   // attempt to scale up to HD resolution (if it is supported
>>>   // on this device)
>>>   stream.videoTracks[0].width = 1024;
>>>   stream.videoTracks[0].height = 768;
>>
>> This is, of course, totally user-hostile.
>>
>> If the user has just bought a 4k camera, and his browser, his
>> computer and his network all support that, he has a reasonable
>> expectation that he should just plug it in in place of his old
>> camera, and it should Just Work - because the app has (correctly)
>> told the browser that there are *minimum* requirements, but no
>> *maximum* requirements.
>>
>> Your code example, if approached in this fashion, is actively
>> hurting the future-proofing of the Web.
>
> (quote from Rich commenting on the above below, copy/paste didn't
> quite work right)
>>
>> You may have to explain this statement as I don't quite follow.
>>
>> In your example, the old camera was flat out rejected up-front
>> without much ado.
>> In the responsive approach the old camera worked, albeit at a lower
>> resolution, because the developer was incentivized to make it work
>> to improve the user experience of their web application (because
>> otherwise the user just got a 'Can't do' message). The new camera
>> just happened to provide higher resolution than the old camera but
>> still worked in the same way.
>
> This is not an argument about constraints, it is an argument against
> the particular API you seem to be suggesting as an alternative.
>
> What your code does (and I think it's a very likely scenario that
> many people will write their code this way, if constraints aren't
> easily available) is that it *clamps* the resolution to 1024x768, as
> long as the camera supports it at all. In the first year of that
> code's deployment, it will effectively mean "1024x768 or lower",
> because 1024x768 is a "high end camera", and better cameras are very
> rare.
>
> Most cameras that support better resolution than 1024x768 are also
> capable of supporting 1024x768. So three years later, the meaning is
> "1024x768, and no higher".

It should be noted that having .width and .height doesn't mean we
can't report the web camera's capabilities to the developer via
read-only attributes on each media stream track object, e.g.
.maxHeight, .minHeight, .maxWidth and .minWidth. I had these types of
properties in mind in the same way as they appear in the original
proposal for manipulating camera stream tracks [1] (e.g. maxZoom,
where minZoom is always 0 and therefore not included as an attribute).
A developer would set the width and height of a media stream track via
the .width and .height attributes, within the minimum and maximum
bounds reported.

So you could have any variation of the following:

function successCallback(stream) {
  for (var i = 0, l = stream.videoTracks.length; i < l; i++) {
    stream.videoTracks[i].width  = stream.videoTracks[i].maxWidth;
    stream.videoTracks[i].height = stream.videoTracks[i].maxHeight;
  }
}

This is good because:

a.) the developer can set an absolute resolution, instead of supplying
up-front constraints such as minWidth: 360, minHeight: 240, maxWidth:
2048, maxHeight: 1536. In that case, what would the resolution of the
video provided actually be set to? Any rounding of absolute
resolutions up or down is left for the specification to say.

b.) the developer can set specific resolutions per device, instead of
applying up-front constraints such as minWidth: 360, minHeight: 240,
maxWidth: 2048, maxHeight: 1536 to all devices. What's the resolution
of each track if there are two cameras with different ranges?

c.) the developer can change video resolution at run-time based on the
reported .minWidth/.minHeight and .maxWidth/.maxHeight ranges per
device.

d.) the developer can up-scale or down-scale service based on
different cameras being used or changes to the user's environment
(e.g. network connection speeds).

e.) the application can always stay up to date with the latest
capabilities of a user's camera. If one day I change my camera then
the app could always return the best quality resolution possible,
instead of having hard-coded limits at getUserMedia invocation.

>
> Building in obsolescence in this way is not good for the user.

No obsolescence is intended here. Being responsive means the same
application up-scales or down-scales service over time.
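For concreteness, here is a rough sketch of the contrast I have in
mind. The mandatory/optional dictionary shape in the first call is
only my reading of the constraints syntax currently being proposed,
the per-track .width/.height/.minWidth/.maxWidth/.minHeight/.maxHeight
attributes are the ones proposed above, and connectionIsCongested() is
a made-up application-level check, so treat all of it as illustrative
rather than definitive:

// Up-front approach: ideal capabilities are requested blindly at
// invocation. If no available camera satisfies the mandatory part,
// the error handler fires and the user gets no stream at all.
getUserMedia({
  video: {
    mandatory: { minWidth: 360, minHeight: 240 },
    optional:  [{ maxWidth: 2048 }, { maxHeight: 1536 }]
  }
}, successCallback, errorCallback);

// Responsive approach: accept whatever camera the user grants, start
// at the best resolution the track reports, then step down within the
// reported range if the environment degrades.
getUserMedia({ video: true }, function (stream) {
  var track = stream.videoTracks[0];
  track.width  = track.maxWidth;
  track.height = track.maxHeight;

  setInterval(function () {
    if (connectionIsCongested()) { // hypothetical app-level check
      track.width  = Math.max(track.minWidth,  Math.round(track.width / 2));
      track.height = Math.max(track.minHeight, Math.round(track.height / 2));
    }
  }, 5000);
}, errorCallback);

The second form never rejects a working camera up-front, and the same
code automatically takes advantage of a better camera plugged in
later.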
I didn't include min/max width/height in my previous descriptions but
they should be supported on each camera stream track, along with the
ability to set the preferred resolution within those reported ranges.

[1] http://lists.w3.org/Archives/Public/public-media-capture/2012Aug/0032.html
Received on Monday, 27 August 2012 15:09:16 UTC