[mediacapture-depth] Accessing Camera Calibrations

bbernhar has just created a new issue for 
https://github.com/w3c/mediacapture-depth:

== Accessing Camera Calibrations ==
I think it is important that we consider allowing the UA to access 
calibration data. It would enable many useful computer vision 
scenarios, and it calls for exposing at least three kinds of calibration:

* Spatial coordinate system: this gives us the ability to project and 
unproject between 2D image space and 3D camera space, and to map 
locations in the depth image to the corresponding locations in the 
color image. Ideally this wouldn't have to be done on a per-pixel 
basis, and a coordinate system could also represent the UA's world 
position (see the sketch after this list).

* Lens distortion: given the distortion model and coefficients, the UA 
can remove distortion from the image, since some cameras use 
high-distortion lenses.

* Camera re-sectioning: this allows unprojecting a 2D point into 3D 
space as a ray, enabling 3D surface reconstruction from the image or 
"skeletal tracking".
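
To make the spatial-mapping and re-sectioning items concrete, here is a 
minimal TypeScript sketch of the operations calibration data would 
unlock. The intrinsics layout (fx, fy, cx, cy), the Brown-Conrady 
distortion coefficients, and the depth-to-color extrinsics are 
illustrative assumptions, not anything the `v1` spec or the SDKs cited 
below mandate.

```ts
// Hypothetical calibration shapes; nothing here is defined by mediacapture-depth v1.
interface Intrinsics {
  fx: number; fy: number; // focal lengths in pixels
  cx: number; cy: number; // principal point in pixels
  k1: number; k2: number; // radial distortion coefficients (assumed Brown-Conrady model)
  p1: number; p2: number; // tangential distortion coefficients
}

interface Extrinsics {
  rotation: number[];    // 3x3 row-major rotation, depth camera -> color camera
  translation: number[]; // 3-element translation, in meters
}

type Vec3 = [number, number, number];

// Unproject a depth pixel (u, v) with depth z (meters) to a 3D point in the
// depth camera's coordinate system; with z = 1 the result is a ray direction,
// which is the "camera re-sectioning" use case above.
function unproject(i: Intrinsics, u: number, v: number, z: number): Vec3 {
  return [((u - i.cx) / i.fx) * z, ((v - i.cy) / i.fy) * z, z];
}

// Project a 3D point in camera space back to 2D pixel coordinates.
function project(i: Intrinsics, [x, y, z]: Vec3): [number, number] {
  return [(i.fx * x) / z + i.cx, (i.fy * y) / z + i.cy];
}

// Map a depth pixel to the corresponding color pixel: unproject with the depth
// intrinsics, apply the depth-to-color extrinsic transform, reproject with the
// color intrinsics. This is the per-pixel mapping the first bullet would rather
// not have to do in script.
function depthToColor(depth: Intrinsics, color: Intrinsics, ext: Extrinsics,
                      u: number, v: number, z: number): [number, number] {
  const [x, y, zc] = unproject(depth, u, v, z);
  const r = ext.rotation, t = ext.translation;
  const p: Vec3 = [
    r[0] * x + r[1] * y + r[2] * zc + t[0],
    r[3] * x + r[4] * y + r[5] * zc + t[1],
    r[6] * x + r[7] * y + r[8] * zc + t[2],
  ];
  return project(color, p);
}

// Forward Brown-Conrady distortion on normalized image coordinates; removing
// distortion (the second bullet) inverts this map, typically by fixed-point
// iteration or a precomputed per-pixel lookup table.
function distort(i: Intrinsics, xn: number, yn: number): [number, number] {
  const r2 = xn * xn + yn * yn;
  const radial = 1 + i.k1 * r2 + i.k2 * r2 * r2;
  return [
    xn * radial + 2 * i.p1 * xn * yn + i.p2 * (r2 + 2 * xn * xn),
    yn * radial + i.p1 * (r2 + 2 * yn * yn) + 2 * i.p2 * xn * yn,
  ];
}
```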

The `v1` spec, unlike the more verbose SDK-specific APIs [1] [2], does 
not mention how or which calibration is used. It seems necessary to 
expose these intrinsic properties, and the operations they enable, on 
streams that belong to the same group of 3D devices; a purely 
hypothetical shape for this is sketched below.
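
Purely as a strawman to anchor the discussion, and not a proposal of 
concrete WebIDL: one conceivable shape (TypeScript, all calibration 
members hypothetical) would hang calibration data off tracks that share 
a groupId:

```ts
// Strawman only: none of these calibration members exist in mediacapture-depth
// v1 or mediacapture-main; they illustrate what "exposing calibration" could mean.
interface DepthCalibration {
  intrinsics: { fx: number; fy: number; cx: number; cy: number };
  distortion?: number[];            // model-specific coefficients, if any
  depthToColorTransform?: number[]; // 4x4 row-major, relating tracks in the same group
}

// Hypothetical usage: correlate the color and depth tracks of one 3D device via
// groupId (which mediacapture-main does define), then read the invented
// calibration member from the track.
async function getCalibratedTrack() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  const { groupId } = track.getSettings(); // a depth track from the same device would share this
  const calibration = (track as any).calibration as DepthCalibration | undefined; // hypothetical
  return { groupId, calibration };
}
```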

@anssiko @huningxin WDYT?

[1] https://msdn.microsoft.com/en-us/library/dn785530.aspx
[2] https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/doc_essential_coordinates_mapping.html

Please view or discuss this issue at 
https://github.com/w3c/mediacapture-depth/issues/110 using your GitHub account

Received on Wednesday, 16 March 2016 00:22:21 UTC