- From: Harald Alvestrand <harald@alvestrand.no>
- Date: Fri, 20 Jan 2012 14:54:52 +0100
- To: public-media-capture@w3.org
I have gone through the scenarios doc (version of Jan 19, sorry for not
updating), and have some comments. I am transcribing from my marked-up
copy, so forgive me for commenting in serial order rather than in order
of importance.
Overall, I'm happy with the document as it stands!
- Section 1, scope: I think that you should not claim that networking
scenarios are out of scope for this task force, rather they should be
included by reference; what comes out of the task force must satisfy
both the scenarios described here and the scenarios described in the
WebRTC scenarios doc.
- Section 2.1: Permissions - you are assuming, both here and in the
other scenarios, that ask-before-use is the only possible permission
mechanism. This is not aligned with draft-ietf-rtcweb-security-01
section A.3.2, where a means of granting permanent access to a
particular service is a MUST. This is to avoid training the user to
"always click through", which is a perennial problem with security dialogs.
- Section 2.2 Election podcast - you don't specify whether the recording
is happening locally or at a server. These alternatives might be
mentioned under "Variations" - the Hangouts On Air service is very close
to the proposed service, and does its recording at the server.
- Section 2.3 Find the ball - if you really want low resolution, 640x480
might not be a good example.
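If the point is a constrained, low-resolution capture, something like
QVGA would illustrate it better. A sketch, using the constraint syntax
of today's getUserMedia (the exact form is my assumption, not taken
from the doc):

    // Sketch: ask the camera for genuinely low resolution (QVGA, not VGA).
    navigator.mediaDevices.getUserMedia({
      video: { width: { ideal: 320 }, height: { ideal: 240 } }
    }).then((stream) => {
      // the "find the ball" processing would run on this stream
    });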
- Section 2.4 Video diary - one thing I was thinking of while reading
this was a Picture-in-Picture approach, which would feed directly into
manipulation functions on the stream while it is being recorded.
Perhaps mention this under Variations?
- Section 2.4 Conference call product debate - this seems like a
videoconference scenario with a recorder. You might want to point to the
RTCWEB videoconference scenarios (with or without server) for discussion
about non-recording concepts.
- Section 4 Concepts:
Stream: I suggest you don't define "Stream". That word is used entirely
too much, with too many meanings. Just don't use it. The text can be
moved under MediaStream.
Virtualized device: I think this section conflates two concepts:
devices that can be shared and devices whose state can be manipulated.
We can have shareable cameras and non-shareable microphones. I suggest
finding two separate terms here.
- Section 5.5: Applying pre-processing doesn't require the UA to do it
itself: if the UA provides a means of presenting a media stream in a
known format on a known interface, and of consuming the media stream
again after transformation, the transformation can be done outside the
UA. Implementations that do this have been demonstrated (face
recognizers in JS, for instance).
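For concreteness, this is the kind of arrangement I mean: frames
presented in a known format (canvas pixels), transformed in script, and
consumed again as a media stream. The captureStream() call and the
trivial grayscale filter standing in for a face recognizer are my
assumptions, not something the document specifies.

    // Sketch: transformation done outside the UA's media pipeline, in JS.
    function transformOutsideUA(inputStream) {
      const video = document.createElement('video');
      video.srcObject = inputStream;
      video.play();

      const canvas = document.createElement('canvas');
      const ctx = canvas.getContext('2d');

      function step() {
        if (video.videoWidth) {
          canvas.width = video.videoWidth;
          canvas.height = video.videoHeight;
          ctx.drawImage(video, 0, 0);
          const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
          // Stand-in transformation (grayscale); a face recognizer would
          // analyze or annotate the same pixel data here.
          for (let i = 0; i < frame.data.length; i += 4) {
            const y = (frame.data[i] + frame.data[i + 1] + frame.data[i + 2]) / 3;
            frame.data[i] = frame.data[i + 1] = frame.data[i + 2] = y;
          }
          ctx.putImageData(frame, 0, 0);
        }
        requestAnimationFrame(step);
      }
      requestAnimationFrame(step);

      // Consume the transformed frames as a MediaStream again.
      return canvas.captureStream();
    }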
- Section 5.6.3 Examples. If the canvas functions defined here work
today, that should be made clear. It's not easy to see now whether these
are real examples or suggestions for future extensions.
- Section 5.7.1 Privacy. The *developer* is probably not the term
you're looking for when describing who information is exposed to; the
developer is not involved when an application runs. You are probably
thinking of the application.
That's all my commentary so far!
Harald