Re: [w3ctag/design-reviews] WebXR Raw Camera Access API (#652)

Is there any distinction to be made between headsets and handheld devices here?

Currently with handheld "WebAR" experiences based on getUserMedia, the camera frames are composited with virtual content into a WebGL canvas. That canvas can be captured via captureStream() and MediaRecorder, and then shared via Web Share. This is all done with client-side APIs, and with user consent for camera access. It seems the right balance to me between privacy and capability: it effectively makes many of the fun "AR filter" effects from social media apps available on the web.
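For concreteness, here's roughly what that pipeline looks like in code. This is just a minimal sketch: `drawARFrame` is a placeholder for an app-specific render loop, and the fixed clip length is only for illustration.

```typescript
// getUserMedia -> composite into a WebGL canvas -> captureStream()/MediaRecorder -> Web Share.

async function recordARCanvas(canvas: HTMLCanvasElement): Promise<File> {
  // Camera access with explicit user consent.
  const cameraStream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },
  });

  // App-specific: composite camera frames plus virtual content into `canvas`.
  // drawARFrame(canvas, cameraStream);

  // Capture the composited canvas and record it entirely client-side.
  const recorder = new MediaRecorder(canvas.captureStream(30), {
    mimeType: "video/webm",
  });
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  const stopped = new Promise((resolve) => (recorder.onstop = resolve));

  recorder.start();
  await new Promise((resolve) => setTimeout(resolve, 5000)); // ~5 s clip
  recorder.stop();
  await stopped;

  return new File(chunks, "ar-clip.webm", { type: "video/webm" });
}

// Web Share needs a user gesture, so call this from e.g. a "Share" button.
async function shareClip(file: File) {
  if (navigator.canShare?.({ files: [file] })) {
    await navigator.share({ files: [file], title: "AR clip" });
  }
}
```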

It would be a shame if WebXR sessions (which have the potential to offer better-quality tracking with lower power usage) could not be used in the same way; I don't see any major privacy difference on handheld devices compared with the getUserMedia approach.
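Something like the following is what I'd hope to be able to do with the proposed API. It's a sketch based on my reading of the explainer, so the exact names (`camera-access`, `view.camera`, `XRWebGLBinding.getCameraImage`) reflect my understanding of the current shape and may change; the typings assume @types/webxr, with casts where the camera-access additions aren't typed yet. The recording and sharing steps would be the same as in the getUserMedia sketch above.

```typescript
async function startARWithCameraAccess(gl: WebGL2RenderingContext) {
  const session = await navigator.xr!.requestSession("immersive-ar", {
    requiredFeatures: ["camera-access"],
  });
  await gl.makeXRCompatible();
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  const binding = new XRWebGLBinding(session, gl);
  const refSpace = await session.requestReferenceSpace("local");

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const pose = frame.getViewerPose(refSpace);
    for (const view of pose?.views ?? []) {
      // On handhelds the view exposes the camera image when the feature was
      // granted; that texture can be composited with virtual content into a
      // canvas and then recorded/shared exactly as in the pipeline above.
      const camera = (view as any).camera;
      if (camera) {
        const cameraTexture: WebGLTexture = (binding as any).getCameraImage(camera);
        // ...draw cameraTexture + virtual content...
      }
    }
    frame.session.requestAnimationFrame(onFrame);
  });
}
```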

It seems a non-starter to have an audible "RECORDING IN PROGRESS" announcement every 10 seconds for any use of the camera on the web - that would get annoying very quickly in Google Meet calls...

At least in the EFF article, the main concerns seem to be around the always-on nature of headsets. One option would be to strongly recommend some form of visual indicator that is visible to bystanders whenever camera access is in use? Not all hardware (e.g. Oculus Quest) has that capability, but it's likely more of a concern for future always-on AR eyewear anyway.

-- 
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/issues/652#issuecomment-1082895868

Received on Wednesday, 30 March 2022 10:14:10 UTC