[whatwg] Peer-to-peer communication, video conferencing, <device>, and related topics

There are also tablet devices with stereo cameras on the back and a
single camera on the front. Stereo will become increasingly common.


roBman


On Wed, 2011-07-06 at 10:55 +0530, Shwetank Dixit wrote:
> On Fri, 18 Mar 2011 19:32:49 +0530, Lachlan Hunt  
> <lachlan.hunt at lachy.id.au> wrote:
> 
> > On 2011-03-18 05:45, Ian Hickson wrote:
> >> On Thu, 16 Sep 2010, Jonathan Dixon wrote:
> >>> Further, it could be useful to provide a way to query the video
> >>> source as to how the camera is oriented relative to the screen (if
> >>> the underlying system knows; consider a phone device with both a
> >>> main camera and a self-view camera). This is needed to drive the
> >>> decision on whether to do this horizontal flip or not. In fact, such
> >>> an application may want to somehow indicate a preference for the
> >>> self-view camera when multiple cameras are present in the selection
> >>> list; c.f. a movie-making app, which would prefer the outward-facing
> >>> camera.
> >>
> >> Interesting.
> >>
> >> In getUserMedia() the input is extensible; we could definitely add
> >> "prefer-user-view" or "prefer-environment-view" flags to the method
> >> (with better names, hopefully, but consider that 'rear' and 'front'
> >> are misleading terms -- the front camera on a DSLR faces outward from
> >> the user, while the front camera on a mobile phone faces toward the
> >> user). The user still has to OK the use of the device, though, so
> >> maybe it should just be left up to the user to pick the camera?
> >> They'll need to be able to switch it on the fly, too, which again
> >> argues for making this a UA feature.
> >
> > We could just add flags to the options string like this:
> >
> > "video;view=user, audio" or "video;view=environment, audio"
> >
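> > For illustration, a rough sketch of how a page might use such a flag
> > (the "view" token is the proposal above; the error callback shape is
> > an assumption, not settled API):
> >
> > navigator.getUserMedia("video;view=user, audio", function (stream) {
> >    // Got the user-facing camera plus microphone, subject to user
> >    // consent and the device actually having such a camera.
> >    var v = document.querySelector("video");
> >    v.src = stream;
> >    v.play();
> > }, function (error) {
> >    // No matching device, or permission denied.
> > });
> >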
> > It's worth pointing out that the HTML Media Capture draft from the DAP
> > WG uses the terms "camera" and "camcorder" for this purpose, but I find
> > these terms to be very ambiguous and inappropriate, and so we should not
> > use them here.
> Just wanted to know whether there is any consensus on this or not? Mobile
> phones are coming out with dual cameras (front and back facing), and
> depending on the use case, the developer might want access to either the
> front or the back one. (For example, for a simple camera app, a
> back-facing camera will do, but for a web conferencing app, the
> front-facing one will be required.) At the least, the developer should be
> able to specify which one to enable by default, which can then be changed
> by the user if needed.
> 
> Another question is flash. As far as I have seen, there seems to be no
> option to specify whether the camera needs to use flash or not. Is this
> decision left up to the device? (If someone is making an app which just
> takes a picture of the person, then it would be nice to have the camera
> use flash in low-light conditions.)
> >
> > http://dev.w3.org/2009/dap/camera/
> >
> >> Similarly for exposing the kind of stream: we could add to
> >> GeneratedStream an attribute that reports this kind of thing. What is
> >> the most useful way of exposing this information?
> >
> > I'm not entirely clear on what the use cases are for knowing whether
> > the camera is user-view or environment-view.  It seems the more useful
> > information to know is the orientation of the camera.  If the user
> > switches cameras, that could also be handled by firing orientation
> > events.
> >
> >> Lachlan Hunt wrote:
> >>> There are some use cases for which it would be useful to know the
> >>> precise orientation of the camera, such as augmented reality
> >>> applications.  The camera orientation may be independent of the
> >>> device's orientation, and so the existing device orientation API may
> >>> not be sufficient.
> >>
> >> It seems like the best way to extend this would be to have the Device
> >> Orientation API apply to GeneratedStream objects, either by just having
> >> the events also fire on GeneratedStream objects, or by having the API be
> >> based on a pull model rather than a push model and exposing an object on
> >> GeneratedStream objects as well as Window objects.
> >
> > This could work.  But it would make more sense if there were an object  
> > representing the device itself, as in Rich's proposal, and for the  
> > events to be fired on that object instead of the stream.
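> >
> > As a sketch of that device-object variant (the "device" attribute and
> > the event wiring here are hypothetical):
> >
> > navigator.getUserMedia("video", function (stream) {
> >    // Hypothetical: the stream exposes the device producing it, and
> >    // orientation events fire on that object rather than on window.
> >    stream.device.addEventListener("deviceorientation",
> >      function (event) {
> >        // alpha/beta/gamma would describe the camera's orientation,
> >        // independently of the screen's orientation.
> >        console.log(event.alpha, event.beta, event.gamma);
> >      }, false);
> > });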
> >
> >> On Mon, 24 Jan 2011, Anne van Kesteren wrote:
> >>>
> >>> There is a plan to allow assigning directly to IDL attributes,
> >>> besides creating URLs.
> >>>
> >>> I.e. being able to do:
> >>>
> >>>   audio.src = blob
> >>>
> >>> (The src content attribute would then be something like  
> >>> "about:objecturl".)
> >>>
> >>> I am not sure if that API should work differently from creating
> >>> URLs and assigning those, but we could consider it.
> >>
> >> Could you elaborate on this plan?
> >
> > This is basically what Philip and I were discussing in the other thread
> > yesterday, where we avoid the unnecessary overhead of creating a magic
> > URL, and instead just assign the object directly to the src property.
> > This lets the implementation handle all the magic transparently in the
> > background, without bothering to expose a URL string to the author.
> >
> > This is what we had implemented in our experimental implementation of
> > the <device> element, and now in getUserMedia.
> >
> > i.e.
> >
> > <video></video>
> > <script>
> > var v = document.querySelector("video");
> > navigator.getUserMedia("video", function(stream) {
> >    v.src = stream;
> >    v.play();
> > });
> > </script>
> >
> > The getter for v.src then returns "about:streamurl".
> >
> > My understanding is that we don't really want to have to implement the  
> > create/revokeObjectURL() methods for this.
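> >
> > (For comparison, the URL-based approach being avoided would look
> > roughly like this, assuming object URLs were minted for streams:
> >
> > navigator.getUserMedia("video", function (stream) {
> >    v.src = window.URL.createObjectURL(stream);
> >    v.play();
> > });
> >
> > Direct assignment sidesteps both the URL creation and the need to
> > later call window.URL.revokeObjectURL().)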
> >
> >> On Wed, 16 Feb 2011, Anne van Kesteren wrote:
> >>> This is just a thought. Instead of acquiring a Stream object
> >>> asynchronously, there is always one available, showing transparent
> >>> black or some such, e.g. navigator.cameraStream. It also inherits
> >>> from EventTarget. Then on the Stream object you have methods to
> >>> request camera access, which trigger some asynchronous UI. Once
> >>> granted, an appropriately named event is dispatched on Stream,
> >>> indicating you now have access to an actual stream. When the user
> >>> decides it is enough and turns off the camera (or something else
> >>> happens), some other appropriately named event is dispatched on
> >>> Stream, turning it transparent black again.
> >>
> >> This is a very interesting idea.
> >
> > This suggests that there would be a separate property available for the
> > microphone and for any other input device.  This differs from the
> > existing spec, which allowed a single stream to represent both audio
> > and video.
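> >
> > A sketch of that model for the camera (the event names and the
> > requestAccess() method are hypothetical; navigator.cameraStream is
> > from the proposal above):
> >
> > var stream = navigator.cameraStream;  // always present
> > stream.addEventListener("active", function () {
> >    // Access granted: real frames replace the transparent black.
> > }, false);
> > stream.addEventListener("inactive", function () {
> >    // The user turned the camera off: transparent black again.
> > }, false);
> > stream.requestAccess();  // triggers the asynchronous permission UI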
> >
> >> On Mon, 14 Mar 2011, Lachlan Hunt wrote:
> >>> The API includes both a readystatechange event and independent
> >>> events for play, pause, and ended.  This redundancy is unnecessary.
> >>> It is also inconsistent with the design of the HTMLMediaElement API,
> >>> which does not include a readystatechange event, in favour of
> >>> separate events only.
> >>
> >> I've dropped readystatechange.
> >>
> >> I expect to drop play and pause events if we move to the model described
> >> above that pauses and resumes audio and video separately.
> >
> > It may still be useful to have events for this, if the event object had  
> > a property that indicated which type of stream it applied to, or if  
> > there were separate objects for both the audio and video streams.
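> >
> > For instance (the event name and "kind" property are hypothetical):
> >
> > stream.addEventListener("pause", function (event) {
> >    if (event.kind == "video") {
> >      // Only the video component was paused; audio continues.
> >    }
> > }, false);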
> >

Received on Tuesday, 5 July 2011 22:56:49 UTC