RE: RGBD and just D cameras

Sounds great!

Happy holidays!

-ningxin

> -----Original Message-----
> From: Rob Manson [mailto:robman@mob-labs.com]
> Sent: Monday, December 16, 2013 6:07 AM
> To: Hu, Ningxin; Kostiainen, Anssi; public-media-capture@w3.org
> Subject: Re: RGBD and just D cameras
> 
> Hi Ningxin,
> 
> yep I was using RGBD to refer to RGB+Depth and D to refer to Depth-only.
> 
> My initial outline was only talking about the channel structures too.
> 
> I absolutely agree with your points that there are encoding/matching issues in
> the RGBD context related to image frame size, frame rate, bit depth and
> time synchronisation between the RGB and the D channels.
> 
> e.g.
> 
> - D usually has a smaller image frame size than RGB
> - D usually has a slower or different frame rate than RGB
> - D almost always has a lower bit depth than RGB
> - D always has a different timestamp from RGB as they are separate sensors
>   (see the pairing sketch below)
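> 
> e.g. a rough sketch of pairing each D frame with the nearest RGB frame by
> timestamp (the frame objects and their .timestamp field are hypothetical):
> 
> function nearestRgbFrame(depthFrame, rgbFrames) {
>   var best = rgbFrames[0];
>   for (var i = 1; i < rgbFrames.length; i++) {
>     if (Math.abs(rgbFrames[i].timestamp - depthFrame.timestamp) <
>         Math.abs(best.timestamp - depthFrame.timestamp)) {
>       best = rgbFrames[i];
>     }
>   }
>   return best;
> }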
> 
> And there is also the issue of possibly different unit scales across different
> vendor sensors as you describe.
> 
> I'm definitely interested in discussing your WebGL/shader-based mapping too!
> 
> I'll contact you and Anssi off list and we can start working on the extension
> proposal early in the new year.
> 
> Happy holidays 8)
> 
> roBman
> 
> 
> On 13/12/13 8:39 PM, Hu, Ningxin wrote:
> >> -----Original Message-----
> >> From: Rob Manson [mailto:robman@mob-labs.com]
> >> Sent: Friday, December 13, 2013 2:38 AM
> >> To: Kostiainen, Anssi; public-media-capture@w3.org
> >> Cc: Hu, Ningxin
> >> Subject: Re: RGBD and just D cameras
> >>
> >> Hi Anssi,
> >>
> >> thanks for the reply 8)
> >>
> > Thanks!
> >
> >>>> I'll get feedback and input from people interested and then make a
> >>>> proposal based on that.
> >>> It'd be great if you could share the feedback with the group.
> >> I'd be interested to hear Ningxin's thoughts on this... but I believe
> >> the input can simplistically be broken into three groups:
> >>
> >> - depth only (equiv. to a single-channel grayscale video track)
> >> - rgbd (equiv. to a standard video track with an extra channel)
> > So "depth only" is for a depth-only (D) camera and "rgbd" is for a
> > color-depth (RGB-D) camera. Is my understanding correct?
> >
> > One open issue is the depth image format. Currently, as far as I know, both
> > the Kinect sensor and the Creative Senz3D camera use 16 bits to represent a
> > pixel in the depth image. And what about the unit: how many micrometers does
> > one unit represent?
> >
> > Another open issue might be the synchronization between RGB and D capture;
> > existing RGB-D cameras usually have different frame rates and capture sizes
> > for the RGB sensor and the D sensor.
> >
> > Besides, another open issue is the mapping of coordinates from the depth
> > image to the color image. As you know, the RGB sensor and D sensor usually
> > have different coordinate systems. So the hardware uses a calibration
> > pattern to align them and provides this info via a dedicated channel, e.g.
> > the uvmap channel of the Creative Senz3D camera. This info is important for
> > some use cases, such as 3D reconstruction.
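> >
> > e.g. a rough sketch of using such a map (this assumes the uvmap is exposed
> > to script as a Float32Array holding one normalized (u, v) pair per depth
> > pixel, row-major; the actual layout is vendor-specific):
> >
> > function depthToColor(x, y, uvmap, depthWidth, colorWidth, colorHeight) {
> >   var i = (y * depthWidth + x) * 2;
> >   var u = uvmap[i], v = uvmap[i + 1];  // normalized [0..1] coordinates
> >   return { cx: Math.round(u * (colorWidth - 1)),    // column in color image
> >            cy: Math.round(v * (colorHeight - 1)) }; // row in color image
> > }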
> >
> >> - metadata (e.g. OpenNI-type data after post-processing/analysis)
> >>
> >> This obviously excludes camera intrinsics, which is a whole other
> >> discussion that still needs to be had at some point and is beyond the
> >> current constraints/capabilities discussions.
> >>
> >> This also excludes the discussion about programmatically generated
> >> localstreams/tracks which is also a separate thread.
> >>
> >> I think we only really need to focus on the first two as the third is
> >> probably most easily delivered via WebSockets and DataChannels, e.g.
> >> http://js.leapmotion.com/tutorials/creatingConnection and
> >> https://github.com/buildar/awe_kinect
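> >>
> >> e.g. a minimal sketch of consuming such metadata over a WebSocket (the
> >> endpoint and the message shape here are hypothetical):
> >>
> >> var ws = new WebSocket('ws://localhost:8080/');
> >> ws.onmessage = function (event) {
> >>   // e.g. post-processed skeleton/joint data serialized as JSON
> >>   var joints = JSON.parse(event.data);
> >> };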
> >>
> > Agree with you.
> >
> >>>>> If anyone is interested in this they can contact me off the list.
> >>> Ningxin, who just joined the DAP WG and this Task Force (welcome!), is
> >>> an expert with depth cameras. He is also a Chromium committer and will
> >>> be experimenting with this feature in code. I'll let Ningxin
> >>> introduce the work he's doing.
> >>
> >> I'm definitely looking forward to seeing working demos and hearing
> >> more about Ningxin's work 8)
> >>
> > I experimentally implemented a "depth" video constraint for gUM in Chromium;
> > it works with the Creative Senz3D camera.
> > Basically, a web app is able to use:
> > getUserMedia({video: {mandatory: {depth: "grayscale"}}},
> > successCallback, errorCallback) to obtain a grayscale depth video stream. The
> > web app could use another normal gUM call to get the RGB video stream.
> > I also investigated how to expose the D-to-RGB coordinate map and how to
> > use this info, say with WebGL.
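> >
> > e.g. a rough sketch of obtaining both streams with this experimental
> > constraint (the "depth" constraint is my experiment and its syntax may
> > change; the callback names are placeholders):
> >
> > var gUM = navigator.getUserMedia || navigator.webkitGetUserMedia;
> > // Depth stream via the experimental "depth" constraint:
> > gUM.call(navigator, {video: {mandatory: {depth: "grayscale"}}},
> >          function (depthStream) { /* attach to a <video> element */ },
> >          function (error) { console.error(error); });
> > // RGB stream via a normal gUM call:
> > gUM.call(navigator, {video: true},
> >          function (rgbStream) { /* attach to another <video> element */ },
> >          function (error) { console.error(error); });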
> >
> >> Happy to take this off list to work on the extension proposal unless
> >> people want this thread to continue here?
> >>
> >> I also think we should provide Use Case updates for the Scenarios document
> >> too.
> >> http://www.w3.org/TR/capture-scenarios/
> >>
> >>
> >>> I'm happy to see more interest in support of this feature. Let's
> >>> work together to flesh out a proposal.
> >>
> >> +1
> >>
> > +1 :)
> >
> > Thanks,
> > -ningxin
> >
> >> roBman
> >
> >
> >

Received on Monday, 16 December 2013 01:37:24 UTC