
Re: [depth] depth value encoding proposal

From: Benjamin Schwartz <bemasc@google.com>
Date: Tue, 19 Aug 2014 12:01:49 -0400
Message-ID: <CAHbrMsADtT24SGzPgaAWeWSnmikWN4fw9Xw5mALNceHZzGq7dA@mail.gmail.com>
To: "Kostiainen, Anssi" <anssi.kostiainen@intel.com>
Cc: "Hu, Ningxin" <ningxin.hu@intel.com>, "public-media-capture@w3.org" <public-media-capture@w3.org>, Rob Manson <roBman@buildar.com>
On Tue, Aug 19, 2014 at 10:58 AM, Kostiainen, Anssi <
anssi.kostiainen@intel.com> wrote:

> Hi Benjamin,
> Thanks for your comments, and sorry for the late reply due to the vacation
> period.
> I noticed the following comments (1) and (2) you’ve made, and would like
> to check the status, and ask your help to fill in the gaps if any:
> (1) "I do not think this kind of arbitrary mapping is appropriate in a W3C
> standard.  We should arrange for a natural representation instead, with
> modifications to Canvas and WebGL if necessary.” [1]
> To make sure I understood this correctly:
> Firstly, you're proposing we patch the CanvasRenderingContext2D and make
> ImageData.data of type ArrayBufferView instead of Uint8ClampedArray to
> allow the Uint16Array type, correct? Better suggestions?

I would suggest going all the way to Float32 as well.
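To illustrate the precision argument: a minimal sketch in plain TypeScript (the 11-bit millimeter sample is an invented example value, not from any particular camera) of what an 8-bit clamped buffer does to a depth sample, versus a Uint16Array or Float32Array view:

```typescript
// Hypothetical raw depth sample: an 11-bit value in millimeters (0..2047).
const rawDepthMm = 1234;

// Today's ImageData.data is a Uint8ClampedArray: storing the sample
// directly clamps it to 255 and destroys the measurement entirely.
const clamped = new Uint8ClampedArray([rawDepthMm]);
// clamped[0] is 255 -- clamped, not quantized.

// Even rescaling into 8 bits keeps only 256 distinct depth levels:
const scaled8 = Math.round((rawDepthMm / 2047) * 255);
const recovered8 = Math.round((scaled8 / 255) * 2047); // off by ~2 mm

// A Uint16Array holds the full raw value exactly, and a Float32Array
// can carry a physical unit (meters) without an arbitrary mapping:
const depth16 = new Uint16Array([rawDepthMm]);
const depthF32 = new Float32Array([rawDepthMm / 1000]);
```

The quantization error above is small only because the example range is 2 m; over a longer sensor range the 256-level loss grows proportionally, which is the case for a natural 16-bit or float representation.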

> Secondly, extend the WebGLRenderingContext along the lines of the
> LUMINANCE16 extension proposal [2] -- or would you prefer to use the depth
> component of a GL texture as you previously suggested? Known issues? I’d
> like to hear your latest thoughts on this and if possible a concrete
> proposal how you’d prefer this to be spec’d to be practical for developers
> and logical for implementers.

I'd strongly prefer to use a depth component (and also have an option to
use 32-bit float).  This would raise the GLES version requirement for
WebGL, but I think this is not an unreasonable requirement for a feature
that also requires a depth camera!
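For what a depth-component texture buys us: under the GLES fixed-point rules, an unsigned normalized 16-bit texel stores a raw value v as v / 65535 and a shader reads that normalized value back, so every 16-bit depth level survives the round trip. A sketch of just that conversion, in plain TypeScript arithmetic with no GL context (the function names are invented for illustration):

```typescript
// Unsigned-normalized 16-bit conversion, as a DEPTH_COMPONENT16
// texture implies: raw 16-bit value <-> normalized [0, 1] sample.
function toNormalized16(v: number): number {
  return v / 0xffff; // value a shader would sample from the texture
}
function fromNormalized16(n: number): number {
  return Math.round(n * 0xffff); // recover the raw 16-bit value
}

const raw = 40000; // hypothetical raw 16-bit depth sample
const sampled = toNormalized16(raw);
const roundTripped = fromNormalized16(sampled); // equals raw: no 8-bit loss
```

A 32-bit float depth component skips the normalization entirely and can carry physical units directly, which is why I'd want it available as an option.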

> (2) "I don't think getDepthTracks() should return color-video.  If you
> want to prototype using depth-to-color mappings, the logical way is to
> treat the depth channel as a distinct color video camera, accessible by the
> usual device selection mechanisms.” [3]
> This is what the spec currently says re getDepthTracks():
> [[
> The getDepthTracks() method, when invoked, must return a sequence of
> MediaStreamTrack objects representing the depth tracks in this stream.
> The getDepthTracks() method must return a sequence that represents a
> snapshot of all the MediaStreamTrack objects in this stream's track set
> whose kind is equal to "depth". The conversion from the track set to the
> sequence is user agent defined and the order does not have to be stable
> between calls.
> ]]
> Do you have a concrete proposal how you’d tighten the prose to clear the
> confusion?

I would not include |getDepthTracks| until we have a depth datatype.
Instead, I would add each depth camera as an input device of kind: 'video'
in the list returned by enumerateDevices(), with its label containing a
human-readable indication that its color video data is computed from depth
via a mapping defined by the user agent.
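Concretely, a page would then discover such a device through the ordinary enumeration path. A sketch of that selection logic, assuming the user agent labels the synthetic device as suggested (the label text and the `findDepthAsColorDevices` helper are invented for illustration; in a browser one would pass the result of navigator.mediaDevices.enumerateDevices()):

```typescript
// Minimal shape of MediaDeviceInfo, sufficient for this sketch.
interface DeviceInfo {
  kind: string;
  label: string;
  deviceId: string;
}

// Hypothetical convention: the UA marks the depth-derived color
// device in its human-readable label.
function findDepthAsColorDevices(devices: DeviceInfo[]): DeviceInfo[] {
  return devices.filter(
    (d) => d.kind === "videoinput" && /depth/i.test(d.label)
  );
}

// Example device list of the kind enumerateDevices() would return:
const sample: DeviceInfo[] = [
  { kind: "videoinput", label: "Integrated Camera", deviceId: "a" },
  { kind: "videoinput", label: "Depth Camera (UA-mapped to color)", deviceId: "b" },
];
const depthDevices = findDepthAsColorDevices(sample); // selects "b" only
```

The point is that no new track-accessor API is needed for prototyping: the existing device-selection mechanism carries the depth-to-color mapping as just another video input.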

[Btw. we’ll track open issues for this spec using GH issues [4] to make
> sure we don’t miss any of the feedback.]
> Thanks,
> -Anssi
> [1]
> http://lists.w3.org/Archives/Public/public-media-capture/2014Jul/0087.html
> [2] https://www.khronos.org/bugzilla/show_bug.cgi?id=407
> [3]
> http://lists.w3.org/Archives/Public/public-media-capture/2014Jul/0091.html
> [4] https://github.com/w3c/mediacapture-depth/issues
Received on Tuesday, 19 August 2014 16:02:24 UTC
