[mediacapture-depth] (16-bit) Grayscale conversion of (16-bit) depth map is wrong

astojilj has just created a new issue for 
https://github.com/w3c/mediacapture-depth:

== (16-bit) Grayscale conversion of (16-bit) depth map is wrong ==
Over time, "depth map" changed its meaning from an abstraction of
real-world distance to a 16-bit representation of depth. This part of
the spec was not updated and is now wrong: as written, it converts the
16-bit depth map to a 16-bit grayscale value.
Grayscale should be 8-bit, not 16-bit.

The text states:

> The **data type of a depth map is 16-bit** unsigned integer. The
> algorithm to convert the depth map value to grayscale, given a depth
> map value d, is as follows:
> 
> Let near be the near value.
> Let far be the far value.
> Apply the rules to convert using range linear to d to obtain
> quantized value **d16bit.**
> Return **d16bit.**
> 
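
For illustration only (not spec text): a minimal sketch of the two conversions being contrasted, assuming the "range linear" rule simply normalizes d to [0, 1] over [near, far]; the function names and clamping here are mine.

```typescript
// Current spec text: quantize to 16 bits and return that value as "grayscale".
function depthTo16Bit(d: number, near: number, far: number): number {
  const normalized = Math.min(Math.max((d - near) / (far - near), 0), 1);
  return Math.round(normalized * 65535); // d16bit, as the spec currently returns
}

// What the issue argues a grayscale conversion should produce: an 8-bit value.
function depthTo8BitGray(d: number, near: number, far: number): number {
  const normalized = Math.min(Math.max((d - near) / (far - near), 0), 1);
  return Math.round(normalized * 255); // 8-bit grayscale value
}
```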

From the video element section:

> 6.6 The video element
> 
> When a video element is potentially playing and its assigned media
> provider object is a depth-only stream, the user agent must, for each
> pixel of the media data that is represented by a depth map, given a
> depth map value d, convert the depth map value to grayscale and render
> the returned value to the screen.
> 
> NOTE
> It is an implementation detail how the frames are rendered to the
> screen. For example, the implementation may use a grayscale or red
> representation, with either 8-bit or 16-bit precision.
> 
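
Again purely illustrative (assuming the depth frame is exposed as a Uint16Array and rendered through a 2D canvas; reuses the depthTo8BitGray sketch above), this is roughly the per-pixel grayscale rendering the quoted text describes:

```typescript
// Hypothetical rendering loop: fills a canvas with 8-bit grayscale pixels
// computed from a 16-bit depth frame.
function renderDepthFrame(
  depth: Uint16Array,
  width: number,
  height: number,
  near: number,
  far: number,
  ctx: CanvasRenderingContext2D,
): void {
  const image = ctx.createImageData(width, height);
  for (let i = 0; i < depth.length; i++) {
    const gray = depthTo8BitGray(depth[i], near, far);
    image.data[i * 4 + 0] = gray; // R
    image.data[i * 4 + 1] = gray; // G
    image.data[i * 4 + 2] = gray; // B
    image.data[i * 4 + 3] = 255;  // opaque alpha
  }
  ctx.putImageData(image, 0, 0);
}
```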


Please view or discuss this issue at 
https://github.com/w3c/mediacapture-depth/issues/142 using your GitHub
 account

Received on Sunday, 13 November 2016 18:48:41 UTC