- From: Dave Raggett <dsr@w3.org>
- Date: Fri, 21 Jun 2024 09:20:44 +0100
- To: public-cogai <public-cogai@w3.org>
- Message-Id: <DEFAC87E-D42A-4CB6-A80A-C672073DBB47@w3.org>
The immersive web is the emerging suite of standards for extended reality experiences. I summarised these in a recent article for ERCIM News: https://ercim-news.ercim.eu/en137/special/open-standards-for-the-immersive-web

Since then I have worked on updating my vision for web-based AR/VR from 1994, see: https://www.w3.org/People/Raggett/vrml/vrml.html

In particular, I want to show how to make extended reality experiences accessible to people with disabilities. The general idea is for applications to expose intents which bind to behaviours. An intent is like a goal in that it specifies what you want to do, rather than how to do it. Users can then choose the best means for invoking these intents (see the sketch at the end of this message). The approach relies on taxonomies of resources and behaviours. I propose chunks for the taxonomies, and chunks & rules as a basis for low-code control: https://www.w3.org/2024/06-Raggett-immersive-web.pdf

Note that this is coupled to the IoT, given the notion of digital twins as virtual embodiments of physical systems, processes and people, e.g. sensors and actuators in the real world. Digital twins are part of an extended reality that allows you to interact with things whether they are real or imagined. Digital twins are associated with digital footprints that are visible in extended reality, e.g. using smart glasses.

Microsoft and Meta have already demonstrated the feasibility of VR for immersive meetings using proprietary solutions, but I think we can improve on that with higher quality rendering and open web standards. I now hope to find people to collaborate with on making this happen.

In the longer term, we can expect generative AI to improve to the point where you can use simple prompts to create artistically themed VR environments. In the meantime, AI will be used to transcribe speech to text, synthesise text to speech, and to automate gestures such as hand movements whilst talking.

Best regards,

Dave Raggett <dsr@w3.org>
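P.S. To make the intent/behaviour binding a little more concrete, here is a minimal sketch in TypeScript. The names (IntentRegistry, register, invoke, the "navigate/teleport" intent) are illustrative assumptions rather than part of any existing specification; the point is only that the application declares what can be done, while the user agent decides how it is invoked (voice, gaze, gesture, keyboard, switch access, and so on).

    // Hypothetical sketch: applications expose intents (what to do);
    // the user agent maps them onto whichever input modality suits the user (how to do it).

    type IntentHandler = (params?: Record<string, unknown>) => void;

    interface Intent {
      name: string;           // taxonomy term, e.g. "navigate/teleport"
      description: string;    // human readable, usable by assistive technology
      handler: IntentHandler; // the behaviour this intent binds to
    }

    class IntentRegistry {
      private intents = new Map<string, Intent>();

      // the application declares an intent and the behaviour it binds to
      register(intent: Intent): void {
        this.intents.set(intent.name, intent);
      }

      // enumerate intents so assistive technology can present them to the user
      list(): Intent[] {
        return [...this.intents.values()];
      }

      // the user agent invokes an intent via whatever means the user prefers
      invoke(name: string, params?: Record<string, unknown>): void {
        this.intents.get(name)?.handler(params);
      }
    }

    // Example: an XR scene exposes a "teleport" intent; the user might trigger it
    // by voice ("take me to the stage"), a gesture, or a single switch.
    const registry = new IntentRegistry();
    registry.register({
      name: "navigate/teleport",
      description: "Move the viewer to a named location in the scene",
      handler: (params) => console.log("teleporting to", params?.target),
    });

    registry.invoke("navigate/teleport", { target: "stage" });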
Received on Friday, 21 June 2024 08:20:56 UTC