- From: Elad Alon via GitHub <sysbot+gh@w3.org>
- Date: Fri, 25 Oct 2024 10:27:46 +0000
- To: public-webrtc-logs@w3.org
I think the right mental model is that we are controlling the captured surface, and CaptureController is the proxy for that concept (in all APIs we introduce), whereas MediaStreamTrack is just a handle for receiving frames. Those frames might not even come directly from the captured surface; they might first pass through some transformation, such as being annotated, cropped, or adjusted for better contrast.

Is there genuine Web developer interest in displaying the video element somewhere other than in the document that first called `getDisplayMedia()`? I am not aware of such a need, so I'd rather not design for it. (Unless the current design actively prevented such later extensions, of course. I don't think that is the case, though.)

--
GitHub Notification of comment by eladalon1983
Please view or discuss this issue at https://github.com/w3c/mediacapture-screen-share-extensions/issues/20#issuecomment-2437435538 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
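[Editor's illustration] The mental model described above can be sketched in code. This is not the author's code: the names `getDisplayMedia`, `CaptureController`, and `getVideoTracks` mirror the real Screen Capture API, but `startCapture` is a hypothetical helper, and the dependency-injected parameters exist only so the sketch is self-contained outside a browser.

```javascript
// Hedged sketch of the mental model: the CaptureController is the
// proxy for the captured surface, while the MediaStreamTrack is just
// a handle for receiving (possibly transformed) frames.
async function startCapture(mediaDevices, ControllerCtor) {
  // The controller represents the captured surface itself; any
  // surface-level control API would hang off this object.
  const controller = new ControllerCtor();

  // Hand the controller to getDisplayMedia() so the capture session
  // is bound to it from the start.
  const stream = await mediaDevices.getDisplayMedia({ video: true, controller });

  // The track is merely one consumer of frames; those frames may have
  // been annotated, cropped, or contrast-adjusted along the way.
  const [track] = stream.getVideoTracks();
  return { controller, track };
}
```

In a real page one would call `startCapture(navigator.mediaDevices, CaptureController)` and attach the returned track to a `<video>` element in the same document that invoked `getDisplayMedia()`.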
Received on Friday, 25 October 2024 10:27:47 UTC