RE: [Bug 23220] Add 'zoom' constraint to VideoStreamTrack

No, not from my perspective.  'zoom' would be a camera-specific constraint, invoked as part of the constructor of a VideoStreamTrack (see http://www.w3.org/TR/mediacapture-streams/#idl-def-VideoStreamTrack).  It is defined separately from WebRTC.
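
For illustration only, a request for such a constraint might look
something like the sketch below (the 'zoom' name, its value, and the
optional-constraint syntax are all assumptions; none of this is in the
draft today):

    // Hypothetical: ask for a camera track with a 2x zoom preference
    // at creation time, expressed as an optional constraint.
    navigator.getUserMedia(
      { video: { optional: [{ zoom: 2.0 }] } },
      function (stream) {
        var track = stream.getVideoTracks()[0];  // the VideoStreamTrack
      },
      function (err) { console.error('getUserMedia failed:', err); }
    );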

-----Original Message-----
From: cowwoc [mailto:cowwoc@bbs.darktech.org] 
Sent: Tuesday, October 01, 2013 9:15 AM
To: public-media-capture@w3.org
Subject: Re: [Bug 23220] Add 'zoom' constraint to VideoStreamTrack


     Sorry to come in on the tail of this conversation... Out of
curiosity, when you talk about "zoom", are you referring to accessing
camera-specific functionality such as zoom/pan/etc. from within WebRTC?
Or are we talking about something else?

Thanks,
Gili

On 01/10/2013 5:51 AM, Stefan Håkansson LK wrote:
> On 2013-09-30 02:37, Rob Manson wrote:
>> I drew this diagram recently in an attempt to put together a clear 
>> picture of how all the flows work/could work together.  It seems to 
>> resonate with quite a few people I've discussed it with.
>>
>> NOTE: This was clearly from the perspective of the post-processing 
>> pipeline scenarios we were exploring.
>> Code:
>> https://github.com/buildar/getting_started_with_webrtc#image_processing_pipelinehtml
>> Slides:
>> http://www.slideshare.net/robman/mediastream-processing-pipelines-on-the-augmented-web-platform
>>
>> But the question marks in this diagram really relate directly to this 
>> discussion (at least in the write direction).
>>
>> It would be very useful in many scenarios to be able to
>> programmatically generate a Stream (just like you can with the Web
>> Audio API), not to mention all the other scenarios this type of
>> overall integration would unlock (zoom, etc.).
>>
>> And then we would be able to use this Stream as the basis of a MediaStream.
>>
>> And then we would be able to pass this MediaStream to a 
>> PeerConnection like normal.
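>>
>> Just to make the shape of that concrete, here is a rough sketch
>> (the createProgrammaticVideoTrack() factory is entirely made up;
>> only the MediaStream constructor and addStream() exist today):
>>
>>     // Hypothetical factory: an app-driven video source, analogous
>>     // to a ScriptProcessorNode generating samples in Web Audio.
>>     var track = createProgrammaticVideoTrack(function (sink) {
>>       sink.write(renderNextFrame());  // app-supplied pixel data
>>     });
>>
>>     // Wrap the generated track in a MediaStream...
>>     var stream = new MediaStream([track]);
>>
>>     // ...and hand that MediaStream to a PeerConnection as usual.
>>     var pc = new RTCPeerConnection(config);
>>     pc.addStream(stream);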
>>
>> There's a lot of discussion in many different threads/working groups
>> and amongst many of the different browser developers about exactly
>> this type of overall Stream-based architecture...and about its
>> broader implications.
>>
>> I think this overall integration is a key issue that could gain good
>> support and really needs to be looked into.  I'd be happy to put
>> together a more detailed overview of this model and gather
>> recommendations if there are no strong objections.
>>
>> It seems like this would most sensibly fit under the Media Capture
>> TF...is this the correct home for it?
> If you look at the charter [1], the fit is not that good. The WebRTC 
> WG charter [2] mentions "API functions for encoding and other 
> processing of those media streams" - which could be interpreted to fit.
>
> On the other hand the WebRTC WG has a lot on its plate already and 
> needs to focus.
>
> A third option could be to form some new group - community or working
> group - for this work.
>
> Perhaps the best thing to do is to use this list to probe the interest 
> (e.g. by putting together the detailed overview you mention), and if 
> there is interest we can discuss where to take the work.
>
> Stefan
>
> [1] http://lists.w3.org/Archives/Public/public-media-capture/2013Feb/0012.html
> [2] http://www.w3.org/2011/04/webrtc-charter.html
>
>> PS: I'm about to present this pipeline work as part of a workshop at 
>> the IEEE ISMAR in Adelaide so I'll share the slides once they're up.
>>
>> roBman
>>
>>
>> On 30/09/13 09:59, Silvia Pfeiffer wrote:
>>> On Mon, Sep 30, 2013 at 8:17 AM, Harald Alvestrand <harald@alvestrand.no> wrote:
>>>> On 09/29/2013 06:39 AM, Silvia Pfeiffer wrote:
>>>>> FWIW, I'd like to have the capability to zoom into a specific part 
>>>>> of a screen share and only transport that part of the screen over 
>>>>> to the peer.
>>>>>
>>>>> I'm not fussed if it's provided in version 1.0 or later, but I'd
>>>>> definitely like to see such functionality supported eventually.
>>>> I think you can build that (out of strings and duct tape, kind of)
>>>> by screencasting onto a canvas, and then re-screencasting a part of
>>>> that canvas. I'm sure there are missing pieces in that pipeline,
>>>> though - I don't think we have defined a way to create a
>>>> MediaStreamTrack from a canvas yet.
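>>>>
>>>> Roughly, assuming screen capture arrives via getUserMedia, and
>>>> waving hands over that undefined last step (canvas.captureStream()
>>>> below is a made-up name for whatever that ends up being):
>>>>
>>>>     // 1. Play the screen-capture stream in a (hidden) <video>.
>>>>     video.src = URL.createObjectURL(screenStream);
>>>>     video.play();
>>>>
>>>>     // 2. Repeatedly draw the region of interest, scaled up to
>>>>     //    fill the whole canvas -- the scaling is the "zoom".
>>>>     function draw() {
>>>>       ctx.drawImage(video, sx, sy, sw, sh,  // source viewport
>>>>                     0, 0, canvas.width, canvas.height);
>>>>       requestAnimationFrame(draw);
>>>>     }
>>>>     draw();
>>>>
>>>>     // 3. The missing piece: a track sourced from the canvas.
>>>>     var cropped = canvas.captureStream(30);  // hypothetical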
>>> Interesting idea.
>>>
>>> It would be possible to pipe the canvas back into a video element, 
>>> but I'm not sure if we can as yet take video element content and add 
>>> it to a MediaStream.
>>>
>>>> It takes more than a zoom constraint to do it, however; at minimum 
>>>> you need 2 coordinate pairs (for a rectangular viewport).
>>> True. I also think we haven't standardised screensharing yet, even 
>>> if there is an implementation in Chrome.
>>>
>>> Silvia.
>>>
>>>
>

Received on Tuesday, 1 October 2013 22:29:03 UTC