- From: Exatex via GitHub <sysbot+gh@w3.org>
- Date: Mon, 28 Nov 2022 13:56:23 +0000
- To: public-pointer-events@w3.org
@patrickhlauke I would like to weigh in on the "demand" side and vote strongly for reopening. As @stam already said, this is currently an unsolvable problem for web-based 3D applications, and it severely limits the use of web technologies for any 3D application whose controls even remotely resemble non-web industry standards and that needs all available degrees of freedom for spatial navigation. There is no real, acceptable workaround.

The underlying assumption of the current high-level solution (that particular gestures correspond to particular intentions) no longer holds, I fear. Right now we have to use heuristics to guess what the original gesture was, and as a result we get many complaints from users whose input devices are misinterpreted, rendering the web app unusable.

I understand there are concerns about some of the technical implementations, but it is hard to imagine that any proposed solution would be worse than us training an ML model, running in the frontend, just to guess the information lost from mouse events. That is in fact our current plan if there is no fix.

--
GitHub Notification of comment by Exatex
Please view or discuss this issue at https://github.com/w3c/pointerevents/issues/206#issuecomment-1329150896 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
Received on Monday, 28 November 2022 13:56:25 UTC