- From: Piotr Bialecki <notifications@github.com>
- Date: Mon, 19 Apr 2021 12:44:02 -0700
- To: w3ctag/design-reviews <design-reviews@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <w3ctag/design-reviews/issues/620/822736127@github.com>
> Can you give a few examples of where plane detection becomes important in the context of a WebXR application?

Sure! Access to information about flat surfaces detected in the user's environment could be used by applications, for example, for object placement (hit test is also useful for that, but it is much more limited; with plane detection, the app can be fairly confident that the placed object actually fits in the user's space). The application could also display something like a game board / arena that could even be generated based on the plane's polygon. Another use case is physics - a site could use the detected planes to compute how virtual objects should behave in order to make them look like they're interacting with the real world. I can also think of some more far-fetched scenarios - maybe the site could use the planes when computing how audio propagates in the scene and use that for some kind of echo effect?

In general, the use cases could be considered similar to those of hit test, except that the hit test API requires pre-subscribing to a hit test and describing the ray relative to an XRSpace (which, roughly speaking, is something that the XR system needs to know how to track) - this limits the apps' options a bit. When an app retrieves a collection of planes from the XR system, it can use that information to hit test against those planes in a more ad-hoc manner, without being limited to XRSpaces (on the other hand, the hit test API can potentially yield better results, since the XR system can use more information than just the detected planes when computing hit test results). One more difference is that hit test was considered the lowest common denominator that all XR systems should be able to support, so it can serve as a fallback for apps that could offer better experiences with more information about the environment if capabilities like plane detection are available.
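To make the "ad-hoc hit test" point concrete, here is a minimal sketch of intersecting an arbitrary ray with a plane the app has already pulled out of the XR system. It assumes the app has extracted each detected plane's pose into its own data (a point on the plane plus a unit normal); the function names and vector layout are illustrative, not part of the WebXR API surface.

```javascript
// Dot product and subtraction for 3-element vectors.
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];

// Intersect a ray (origin o, direction d) with an infinite plane given by
// a point p on the plane and a unit normal n. Returns the hit point, or
// null when the ray is parallel to the plane or the plane is behind it.
// Unlike XRHitTestSource, the ray here is just three numbers - no
// pre-subscription and no XRSpace needed.
function rayPlaneHit(o, d, p, n) {
  const denom = dot(d, n);
  if (Math.abs(denom) < 1e-6) return null; // ray parallel to plane
  const t = dot(sub(p, o), n) / denom;
  if (t < 0) return null; // plane is behind the ray origin
  return [o[0] + t * d[0], o[1] + t * d[1], o[2] + t * d[2]];
}
```

A real app would additionally clip the hit point against the plane's polygon; this sketch only shows why holding the plane data directly makes the query cheap and flexible.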
> For example, the quantization of planes is mentioned as a mitigation strategy against privacy attacks in the questionnaire response but this is not mentioned in the spec itself. I think this would be a lot stronger (and less prone to fingerprinting) if the quantization and other mitigation strategies were spelled out in the spec.

Thanks for the feedback! I'll add quantization to the spec draft - I had already mentioned other strategies in the [Privacy & Security Considerations](https://immersive-web.github.io/real-world-geometry/plane-detection.html#privacy-security) section, but missed that one. :frowning_face: I'll also add a mention of those strategies into the algorithms explicitly, as one of the optional steps that user agents could take; hopefully that will also serve as a forcing function to make sure that implementers consider the privacy implications.

--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/issues/620#issuecomment-822736127
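The quantization mitigation discussed above could, for instance, snap each plane polygon vertex to a coarse grid before exposing it to a page. A minimal sketch, assuming a polygon is held as an array of [x, y, z] vertices in meters; the function name and the 5 cm grid step are illustrative choices, not values from the spec:

```javascript
// Snap each vertex of a plane's polygon to a grid of `step` meters, so the
// geometry a user agent exposes carries less high-precision detail about
// the user's environment. step = 0.05 (5 cm) is an illustrative default.
function quantizePolygon(polygon, step = 0.05) {
  const snap = (v) => Math.round(v / step) * step;
  return polygon.map(([x, y, z]) => [snap(x), snap(y), snap(z)]);
}
```

Coarser steps leak less about the room's exact geometry but make placement less precise, which is why the spec would leave the exact parameters to the user agent.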
Received on Monday, 19 April 2021 19:44:15 UTC