- From: youennf via GitHub <sysbot+gh@w3.org>
- Date: Tue, 14 Dec 2021 13:55:56 +0000
- To: public-webrtc-logs@w3.org
I am restricting the discussion to getDisplayMedia tracks; the getViewportMedia model is potentially different enough that I am unclear whether such a mechanism would be useful for getViewportMedia. I am also assuming there will be some sort of CaptureHandle API to allow some form of communication between the getDisplayMedia capturer and capturee.

The main drawback I see with the above cropping approach is that it requires tight coupling between capturee and capturer: they need to create a communication channel and define a custom protocol/API to exchange the region information. We should aim at decoupling capturer and capturee as much as possible through a standard API: the capturee can declare regions of interest through an API a la CaptureHandle, with the same origin protection mechanism, and the capturer would get and use that information from the MediaStreamTrack object itself.

Some additional thoughts:
- On the capturer side, having a single object to handle (MediaStreamTrack) is nice, for instance in case the track is transferred to other contexts, say workers.
- I wonder whether it makes sense to allow User Agents to use that mechanism to expose subregions of window surfaces (for instance, a browser window MediaStreamTrack could expose a region that only includes the web page content, not the User Agent UI). This might be somewhat redundant with constraints, though.

--
GitHub Notification of comment by youennf
Please view or discuss this issue at https://github.com/w3c/mediacapture-screen-share/issues/195#issuecomment-993566799 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
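To make the decoupling idea above a bit more concrete, here is a rough TypeScript sketch. None of these names are specified anywhere: RegionOfInterest, RegionAwareTrack, getRegionsOfInterest and the commented-out setRegionsOfInterest are hypothetical placeholders that simply mirror the CaptureHandle pattern (setCaptureHandleConfig / getCaptureHandle), while cropTo and CropTarget follow the cropping approach discussed in this issue.

```ts
// Hypothetical shapes for the decoupled API sketched above; none of this is
// specified today, it simply mirrors the CaptureHandle pattern.
interface RegionOfInterest {
  name: string;
  target: unknown; // e.g. a CropTarget-like handle usable with cropTo()
}
interface RegionAwareTrack extends MediaStreamTrack {
  getRegionsOfInterest(): RegionOfInterest[];
  cropTo(target: unknown): Promise<void>; // Region Capture style cropping
}

// Capturee: declares its regions once, with the same origin-based gating as
// setCaptureHandleConfig (permittedOrigins). Hypothetical call, shown only
// as a comment:
// (navigator.mediaDevices as any).setRegionsOfInterest({
//   regions: [{ name: 'map', target: await CropTarget.fromElement(mapElement) }],
//   permittedOrigins: ['https://capturer.example'],
// });

// Capturer: everything it needs travels with the MediaStreamTrack itself,
// so no side channel with the capturee is required and the track could be
// transferred to a worker as a single object.
async function cropToDeclaredRegion(name: string): Promise<void> {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const track = stream.getVideoTracks()[0] as RegionAwareTrack;
  const region = track.getRegionsOfInterest().find((r) => r.name === name);
  if (region) {
    await track.cropTo(region.target);
  }
}
```

The only point of the sketch is that the capturer never opens a communication channel or defines a custom protocol: the declared regions ride on the track itself, which is also what makes the worker-transfer case straightforward.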
Received on Tuesday, 14 December 2021 13:55:57 UTC