- From: Adam Sobieski <adamsobieski@hotmail.com>
- Date: Sun, 24 Sep 2023 01:54:41 +0000
- To: "public-webrtc@w3.org" <public-webrtc@w3.org>
- Message-ID: <SJ0P223MB0687C1BB6967C1DCD7E069CFC5FDA@SJ0P223MB0687.NAMP223.PROD.OUTLOOK.COM>
WebRTC Working Group,

Hello. With respect to WebRTC's extended use cases [2], I would like to share some artificial-intelligence topics for consideration.

WebRTC can enable use cases in which human and AI operators control physical robots and virtual, digital avatars. Specific use cases include, but are not limited to: telerobotics, virtual presence, robotics (cloud and fog robotics), videogame AI, simulation, and evaluation.

As considered, sensor data could be streamed from physical robots and virtual, digital avatars to human and AI operators, and there would exist types of tracks beyond audio and video [3]. Examples of sensor data include, but are not limited to: 1D/2D/3D range finders, 3D/RGB-D sensors, point-cloud sensors, environmental sensors, olfactometers, motion-capture sensors, pose-estimation sensors, position/velocity/acceleration sensors, force/torque/touch sensors, power-supply sensors, and RFID sensors [3].

At a lower level of abstraction, control signals could be transmitted from human and AI operators to the physical and simulated actuators of physical robots and virtual, digital avatars. ROS 2 provides examples of state-of-the-art control APIs, e.g., [4]. (A sketch of how such sensor and control flows could be carried over WebRTC today appears after the references below.)

At a higher level of abstraction, human and AI operators might receive, from on-board AI systems, representations of environments, sets of recognized objects, their affordances, and other available actions that a physical robot or virtual, digital avatar could perform. These data could be described as involving dynamic planning domains. Human or AI operators could transmit to physical robots or virtual, digital avatars their selected actions, sequences of actions, or simple or complex plans. (A sketch of possible message shapes for such an exchange also appears below.)

AI software aboard physical robots and virtual, digital avatars could make use of cloud and fog computing resources with respect to remote procedure calls and more complex workflows and orchestrations. WebRTC could also enable robot-to-robot communication scenarios.

There are also next-generation videogame AI scenarios to consider. AI creatures and non-player characters could be run in the cloud and utilize WebRTC to communicate with virtual, digital environments, including those shared with human players.

There are also cloud-to-cloud computing scenarios to consider with respect to the training and evaluation of AI operators. AI operators and the simulated environments used for their training and evaluation could run on separate clouds and communicate with one another via WebRTC.

Thank you.

Best regards,
Adam Sobieski
http://www.phoster.com

[1] https://www.rfc-editor.org/rfc/rfc7478
[2] https://w3c.github.io/webrtc-nv-use-cases/
[3] https://wiki.ros.org/Sensors
[4] https://wiki.ros.org/ros_control
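
As a concrete, non-normative illustration of the lower-level scenario above: absent dedicated track types for sensor data, sensor streaming and actuator commands could be approximated today with RTCDataChannels. The following TypeScript sketch is only an assumption-laden example; the message shapes (RangeScan, JointCommand) and the channel labels are hypothetical and not part of any standard.

// Minimal sketch: streaming range-finder readings and receiving actuator
// commands over RTCDataChannels, as an approximation of the dedicated
// sensor/control track types discussed above. The RangeScan and
// JointCommand shapes are hypothetical.

interface RangeScan {
  timestamp: number;      // milliseconds since epoch
  angleMin: number;       // radians
  angleIncrement: number; // radians between consecutive ranges
  ranges: number[];       // meters
}

interface JointCommand {
  timestamp: number;
  jointName: string;
  effort: number;         // commanded torque, newton-meters
}

// Signaling (offer/answer and ICE candidate exchange) is omitted; it would
// be carried over any ordinary signaling channel, e.g. a WebSocket.
const pc = new RTCPeerConnection();

// Unordered, loss-tolerant channel for high-rate sensor data.
const sensorChannel = pc.createDataChannel("range-finder", {
  ordered: false,
  maxRetransmits: 0,
});

sensorChannel.onopen = () => {
  const scan: RangeScan = {
    timestamp: Date.now(),
    angleMin: -Math.PI / 2,
    angleIncrement: Math.PI / 180,
    ranges: [1.42, 1.40, 1.39],
  };
  sensorChannel.send(JSON.stringify(scan));
};

// Ordered, reliable channel for low-rate control commands from the operator.
pc.ondatachannel = (event) => {
  if (event.channel.label === "joint-commands") {
    event.channel.onmessage = (msg) => {
      const command: JointCommand = JSON.parse(msg.data);
      // Hand the command to the robot's local control loop here.
      console.log(`command for ${command.jointName}: ${command.effort} N·m`);
    };
  }
};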
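
Similarly, for the higher-level scenario, a dynamic planning domain advertised by an on-board AI system and a plan selected by a human or AI operator could be serialized over a data channel today. The types below (RecognizedObject, AvailableAction, PlanningDomain, SelectedPlan) are hypothetical illustrations only, not a proposed format.

// Hypothetical message shapes for the higher-level exchange described above:
// an on-board AI system advertises a dynamic planning domain, and an operator
// replies with a selected plan. None of these types are standard.

interface RecognizedObject {
  id: string;
  category: string;             // e.g. "door", "mug"
  affordances: string[];        // e.g. ["open", "close"], ["grasp", "pour"]
}

interface AvailableAction {
  name: string;                 // e.g. "open"
  targetObjectId: string;
  parameters?: Record<string, number | string>;
}

interface PlanningDomain {
  timestamp: number;
  objects: RecognizedObject[];
  actions: AvailableAction[];
}

interface SelectedPlan {
  timestamp: number;
  steps: AvailableAction[];     // an ordered sequence of selected actions
}

// Operator side: receive planning domains, send back a selected plan.
function attachPlanningChannel(channel: RTCDataChannel): void {
  channel.onmessage = (msg) => {
    const domain: PlanningDomain = JSON.parse(msg.data);
    // A trivial policy for illustration: select the first available action.
    const plan: SelectedPlan = {
      timestamp: Date.now(),
      steps: domain.actions.slice(0, 1),
    };
    channel.send(JSON.stringify(plan));
  };
}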
Received on Sunday, 24 September 2023 01:54:50 UTC