
[mediacapture-main] Add device calibration and position (intrinsics and extrinsics) to MediaTrack

From: Aleksandar Stojiljkovic via GitHub <sysbot+gh@w3.org>
Date: Fri, 11 Nov 2016 13:04:22 +0000
To: public-media-capture@w3.org
Message-ID: <issues.opened-188755960-1478869461-sysbot+gh@w3.org>
astojilj has just created a new issue:

== Add device calibration and position (intrinsics and extrinsics) to MediaTrack ==
In the mediacapture-depth extension, we need access to the capture 
device's (color or depth camera's) intrinsics and extrinsics.
This is the data that enables compensating for camera distortion and 
mapping pixels between different cameras' captures.
Further explanation is in the linked comment. Naming and semantics of 
the data are fairly standardized among different device vendors 
(e.g., Kinect and Intel RealSense).

double focalLengthX;            // used to calculate horizontal field of view
double focalLengthY;            // used to calculate vertical field of view
double principalPointX;         // coordinate on the image, in pixels
double principalPointY;         // coordinate on the image, in pixels
string distortionModel;         // name of distortion model; several are in use, but with similar logic
double[5] distortionParameters; // Kinect names 3 parameters
// width and height are already in MediaTrackSettings
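As an aside, a short sketch of how such intrinsics are typically used under a pinhole camera model. All values and function names below are illustrative assumptions (a 640x480 image with 540 px focal lengths), not from any spec:

```typescript
// Sketch: using intrinsics under a pinhole camera model. The 640x480
// resolution and 540 px focal lengths are made-up example values.
interface Intrinsics {
  focalLengthX: number;    // in pixels
  focalLengthY: number;    // in pixels
  principalPointX: number; // in pixels
  principalPointY: number; // in pixels
}

const depthIntrinsics: Intrinsics = {
  focalLengthX: 540,
  focalLengthY: 540,
  principalPointX: 320,
  principalPointY: 240,
};

// Field of view from focal length: fov = 2 * atan(extent / (2 * f)).
function fieldOfViewDeg(extentPx: number, focalLengthPx: number): number {
  return (2 * Math.atan(extentPx / (2 * focalLengthPx)) * 180) / Math.PI;
}

// Deproject pixel (u, v) with depth z (e.g. meters) into camera space.
function deproject(i: Intrinsics, u: number, v: number, z: number): [number, number, number] {
  return [
    ((u - i.principalPointX) / i.focalLengthX) * z,
    ((v - i.principalPointY) / i.focalLengthY) * z,
    z,
  ];
}

const hFov = fieldOfViewDeg(640, depthIntrinsics.focalLengthX); // ~61.3 degrees
```

Deprojected 3-D points are what the extrinsics below then carry from one camera's space into another's.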

Position of each camera relative to a common reference position 
(a vector with origin), used for projecting from one camera space to 
another:
double[4] rotation;    // quaternion defining rotation
double[3] translation; // vector defining translation
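A hedged sketch of how the rotation quaternion and translation vector would be applied; the [x, y, z, w] component order and the sample values are assumptions (vendors differ on conventions):

```typescript
// Sketch: apply extrinsics (rotation quaternion + translation) to map a
// point from one camera's space into another's. Quaternion order [x, y, z, w]
// is an assumption; vendors differ.
type Vec3 = [number, number, number];
type Quat = [number, number, number, number]; // x, y, z, w (unit quaternion)

function transformPoint(p: Vec3, rotation: Quat, translation: Vec3): Vec3 {
  const [qx, qy, qz, qw] = rotation;
  const [px, py, pz] = p;
  // Rotate via p' = p + 2*qw*(q_xyz x p) + 2*(q_xyz x (q_xyz x p)),
  // with t = 2*(q_xyz x p) computed first.
  const tx = 2 * (qy * pz - qz * py);
  const ty = 2 * (qz * px - qx * pz);
  const tz = 2 * (qx * py - qy * px);
  return [
    px + qw * tx + (qy * tz - qz * ty) + translation[0],
    py + qw * ty + (qz * tx - qx * tz) + translation[1],
    pz + qw * tz + (qx * ty - qy * tx) + translation[2],
  ];
}

// Identity rotation plus a 25 mm baseline along x (illustrative values):
const mapped = transformPoint([0, 0, 1], [0, 0, 0, 1], [0.025, 0, 0]);
// mapped is [0.025, 0, 1]
```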

MediaTrackSettings looks like it could fit these too, but we could 
also define an additional MediaTrackCalibration or 
MediaTrackCaptureInfo dictionary.
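For illustration only, one possible shape such a dictionary could take, written here as a TypeScript interface; every name and value below is hypothetical, not part of mediacapture-main:

```typescript
// Hypothetical shape of a MediaTrackCalibration dictionary. Every name
// and value here is illustrative only; nothing below is specified.
interface MediaTrackCalibration {
  focalLengthX: number;
  focalLengthY: number;
  principalPointX: number;
  principalPointY: number;
  distortionModel: string;        // vendor-named distortion model
  distortionParameters: number[]; // up to 5 coefficients
  rotation: [number, number, number, number]; // quaternion vs. reference
  translation: [number, number, number];      // offset vs. reference
}

// Example of what a depth track might report (made-up values):
const example: MediaTrackCalibration = {
  focalLengthX: 540,
  focalLengthY: 540,
  principalPointX: 320,
  principalPointY: 240,
  distortionModel: "none",
  distortionParameters: [0, 0, 0, 0, 0],
  rotation: [0, 0, 0, 1],     // identity
  translation: [0.025, 0, 0], // 25 mm baseline to the color camera
};
```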

Please view or discuss this issue at 
https://github.com/w3c/mediacapture-main/issues/416 using your GitHub 
account.
Received on Friday, 11 November 2016 13:04:30 UTC
