- From: Rob Manson <roBman@buildAR.com>
- Date: Fri, 27 Jun 2014 18:12:38 +1000
- To: public-media-capture@w3.org
Hi Harald,

> Just to make sure I understand .... it seems to me that if these
> values are the same for the depth sensor and for the image sensor,
> this is essentially a no-op - important if you want to match the image
> to Real Life or world models, but not for matching a depth map to the
> image map.
>
> Is it common that these properties are significantly different for the
> depth sensor and the image sensor?

They are separate sensors, so it is logically possible for them to differ. For example, the Kinect can definitely deliver an RGB stream at one aspect ratio/FOV and a depth stream at another. But our proposal is that we enforce the calibration at the native/SDK level, so we can generally assume that the FOV for both sensors will be the same.

However, exposing at least the focal length (or the elements needed to calculate it, as discussed below) is critical for mapping 2D image-plane points to the real world for scene reconstruction or for calibrating virtual overlays/AR.

> Again asking the naive questions .... if you know the horizontal view
> angle and the aspect ratio, isn't the vertical view angle given?
>
> And from my understanding of focal length, if you have a known size of
> the sensor, the focal length will give you the view angles (and vice
> versa - which means that you can compute the sensor size from the view
> angle and the focal length, if anyone thinks that's useful....)

Not naive at all...they're great questions 8)

Yep, calculating FOVs is possible using simple trig based on the focal length (adjacent) and half the image width or height (opposite)...and vice versa.

> I have no problems with making all these variables available, but if
> some of them are computable from the others, I'd like to make sure
> that's part of the definition of those variables.

Totally agree...as long as we have a minimum set that makes the general pinhole camera model calculable then we're good.
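To make the trig concrete, here is a small sketch of the FOV calculation: half the image extent is the opposite side, the focal length (in the same pixel units) is the adjacent side. The numeric values are illustrative only, not official Kinect specs.

```python
import math

def fov_deg(focal_length_px: float, image_extent_px: float) -> float:
    """Full field of view (degrees) of a pinhole camera.

    focal_length_px: focal length expressed in pixels (adjacent side).
    image_extent_px: image width (for horizontal FOV) or height
    (for vertical FOV) in pixels; half of it is the opposite side.
    """
    return math.degrees(2.0 * math.atan((image_extent_px / 2.0) / focal_length_px))

# Illustrative values: a 640x480 image with a ~525 px focal length.
horizontal_fov = fov_deg(525.0, 640.0)
vertical_fov = fov_deg(525.0, 480.0)
```

And "vice versa" is the same triangle solved the other way: `focal_length_px = (image_extent_px / 2) / tan(fov / 2)`.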
Our current view is that the focal length, the principal point (the centre of the FOV - usually the centre of the image) and the image width/height are the minimum required. Of course, the units of the focal length need to be relatable to the image width/height in some way - either by using the image's pixel density per mm if the focal length is in mm, or by converting everything to the same pixel units.

Hope that answers your questions and that it all makes sense.

roBman
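As a sketch of why that minimum set suffices: with the focal length, principal point and image dimensions (all in pixel units), the standard pinhole model maps any pixel plus a depth value to a 3D point in camera space. The parameter names below (`fx`, `fy`, `cx`, `cy`) are the usual intrinsic-matrix conventions, not anything defined by the proposal itself.

```python
def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Map pixel (u, v) to a ray direction in camera space (pinhole model).

    fx, fy: focal length in pixels; cx, cy: principal point in pixels.
    The returned direction has z = 1, so scaling it by a depth value
    in metres yields a 3D point in metres.
    """
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

def pixel_depth_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a known depth to a 3D camera-space point."""
    x, y, z = pixel_to_ray(u, v, fx, fy, cx, cy)
    return (x * depth, y * depth, z * depth)

# Illustrative intrinsics for a 640x480 image:
# the principal point maps straight down the optical axis.
p = pixel_depth_to_point(320.0, 240.0, 2.0, 525.0, 525.0, 320.0, 240.0)
```

This is exactly the mapping needed for scene reconstruction from a depth map, and it is only computable if the focal length and principal point are exposed alongside the image dimensions.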
Received on Friday, 27 June 2014 08:11:46 UTC