
[mediacapture-depth] Changing how we define cameraParameters

From: Rob Manson <roBman@buildAR.com>
Date: Thu, 09 Oct 2014 07:39:04 +1100
Message-ID: <5435A0E8.3030206@buildAR.com>
To: "public-media-capture@w3.org" <public-media-capture@w3.org>

Hi all,

Here's a simple proposal for an update to where we place the 
cameraParameters, and to its name too.

If there are no concerns, the editors will move on and update the 
Editor's Draft[1] per this proposal.

This proposal is based on the assumption that you only want to get the 
cameraParameters once, when the stream is set up, and that in that case 
they should be more intimately bound to the MediaStreamSource/Track.

However, I do also accept it will be most usefully used at the DepthData 
level for post-processing (as I think Benjamin pointed out earlier).

Looking through the gUM API in detail it seems like this might be better 
proposed as an extension of the existing states()/MediaSourceStates 
functionality. These already include width and height etc. (and 
aspectRatio which seems redundant but meh). So I'd propose we extend 
this with:

   partial dictionary MediaSourceStates {
     unsigned long focalLength;
     unsigned long horizontalViewAngle;
     unsigned long verticalViewAngle;
   };

And these would only be relevant when sourceType = video.
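To make the shape concrete, here's a minimal sketch of how a page might read the proposed fields, assuming the states() accessor from the then-current gUM draft returns a MediaSourceStates-like object (the helper name readIntrinsics is mine, purely for illustration):

```javascript
// Sketch only: pull the proposed intrinsics fields out of a
// MediaSourceStates-like object, as returned by track.states() in the
// then-current gUM draft. Returns null for non-video sources, since the
// proposal says the fields are only relevant when sourceType is "video".
function readIntrinsics(states) {
  if (states.sourceType !== "video") {
    return null;
  }
  return {
    focalLength: states.focalLength,
    horizontalViewAngle: states.horizontalViewAngle,
    verticalViewAngle: states.verticalViewAngle
  };
}
```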

I think this may also then need a matching constraint so we could update 
our proposal to be:

   partial dictionary MediaStreamConstraints {
     (boolean or MediaTrackConstraints) depth = false;
     (boolean or MediaTrackConstraints) cameraIntrinsics = false;
   };

BTW: As above, it would be good if we could also stick with the term 
cameraIntrinsics, as that's the more standard term in computer vision 
than cameraParameters.

So with this new cameraIntrinsics constraint it would be optional on 
gUM({video:true, ...}) and mandatory on gUM({depth:true, ... }).
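For illustration, a small sketch of how a caller might assemble those constraints under the proposal (hypothetical: depth and cameraIntrinsics are proposed here, not standard constraints, and buildConstraints is just an illustrative helper):

```javascript
// Hypothetical helper building getUserMedia constraints per the
// proposal above: cameraIntrinsics is mandatory alongside depth:true,
// but only opt-in for a plain video stream.
function buildConstraints(wantDepth, wantIntrinsics) {
  if (wantDepth) {
    // mandatory on gUM({depth:true, ...})
    return { depth: true, cameraIntrinsics: true };
  }
  // optional on gUM({video:true, ...})
  return { video: true, cameraIntrinsics: !!wantIntrinsics };
}

// In a browser this would then be passed to getUserMedia, e.g.
// navigator.getUserMedia(buildConstraints(true), onStream, onError);
```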

Perhaps there's some other plumbing in there that would need tweaking 
too (AllVideoCapabilities?).

The key point here is that this will minimise the number of calls to get 
cameraIntrinsics, and in the future this could also then open up 
cameraIntrinsics to uses other than those directly related to the 
DepthData. This would allow better quality Augmented Reality and other 
kinds of computer vision just using a standard video media stream.


Received on Wednesday, 8 October 2014 20:33:34 UTC
