[![W3C][1]][2]

# Media Capture Task Force Teleconference

## 07 Jun 2012

[Agenda][3]

See also: [IRC log][4]

## Attendees

**Present:** Adam, Dan_Burnett, Frederick_Hirsch, Harald_Alvestrand, Jim_Barnett, Josh_Soref, Randell_Jesup, Stefan_Hakansson, Yang, anant, derf, dom

**Regrets:**

**Chair:** Stefan_Hakansson, Harald_Alvestrand

**Scribe:** Josh_Soref

## Contents

* [Topics][5]
  1. [Approve minutes][6]
  2. [Requirements][7]
  3. [Media Stream][8]
  4. [Resource reservation][9]
  5. [Constraints][10]
  6. [Feedback to WebRTC/RTCWeb][11]
  7. [Open Items][12]
* [Summary of Action Items][13]

* * *

Date: 07 June 2012

proposed agenda at [http://lists.w3.org/Archives/Public/public-media-capture/2012Jun/0020.html][3]

### Approve minutes

**RESOLUTION: May 9 minutes approved**

### Requirements

hta: the big item for today is requirements
... is travis here?

Yang: I've posted something on the list

[Help with capturing requirements from scenarios, from Harald][14]

Yang: the requirements need more work

hta: we had an original set of requirements from the WebRTC working group
... we had a task to extract the requirements that would apply to this TF

ScribeNick: Josh_Soref

hta: We have a set of UCs that we agreed upon
... and we're in the process of generating a set of requirements from those UCs
... we kind of implicitly understand what those requirements are
... but the process of actually writing the actual text of the requirements
... has taken much more time than expected

now we get the requirements one by one per UC, right?

hta: so what we need to discuss on this call
... is how we need to specify requirements
... and what the requirements are

Yang: this conference call will specify detailed requirements

hta: we've finished the work of generating scenarios

Yang: ok, i see
... i sent some requirements to the list

stefanh: i think there were at least 3 people who sent proposed requirements to the list
... i guess we should discuss if that's the right level of requirements
... with the right level of detail in them

hta: i think that once we have gathered requirements from all the scenarios
... we need to put them together and de-duplicate them
... and link them back to the scenarios
... and see what are the higher level abstractions from the requirements
... and what parts of the system aren't in our scope

Yang: i agree with that

[is Travis still offering to act as an editor for requirements?]

Yang: can we go through scenario 2.5?

[2.5 Conference call product debate (multiple conversations and capture review)][15]

hta: we can look at that, yes

Yang: i posted a message to the list

[Requirements extracted from scenario "conference call"][16]

Yang: requirement to directly access the video of a user
... without opening a new window

hta: we need to figure out what we're doing in this group
... and what is expected to be done elsewhere in the ecosystem
... you mentioned that a user could request recording from the secretary
... the process of requesting is outside the scope of this group
... but the process of recording a stream and sharing it with someone
... may be in scope for our WG

Yang: ok

hta: we should mention which things we expect to be handled
... by others
... [specifically which others]
... it's good to do this
... because sometimes people say "oh, we don't have plans to do that"

Something bad happened to my microphone. I'm back now.

burn: we will never have these problems with WebRTC, right?

[ laughter ]

hta: as I was saying
... a lot of these requirements revolve around Storing and Retrieving
... Audio and Video
... which probably means saving to File or equivalent

Yang: saving media to file or equivalent
... do we need to get permission from the source of the media?
... [ if it's being streamed from them to the side that wants to save it ]

jesup: I don't think that's something we should specify here
... certain countries/localities have restrictions about that
... but it's way too complex to insert in the protocol

anant: even if we specify that
... there's no way to enforce it
... we should leave it to the web app

[ General Agreement ]

hta: when you're talking about sending over the network
... it's also in the realm of the RTCWeb group
... not this WG

stefanh: how do we do this now?
... we can't go through the requirements in this meeting
... it will take too much time
... we should do this offline and ask Travis if he can integrate them in the document
... and if he doesn't have time, find someone else to do it

hta: we might want to ask if anyone knows they have time/could do it

Jim: i can
... but if travis is going to do it, i should communicate with him

hta: to us as chairs, it's not so important who does it, so much as that it gets done

stefanh: i assume we have to check with travis
... and if he has limited time, we come back to Jim

**ACTION:** hta to ask travis if he can integrate collated requirements into his document, otherwise to Jim [recorded in [http://www.w3.org/2012/06/07-mediacap-minutes.html#action01][17]]

Created ACTION-4 - Ask travis if he can integrate collated requirements into his document, otherwise to Jim [on Harald Alvestrand - due 2012-06-14].

hta: we should also figure out if there are some requirements
... that would be too onerous to do in version one

stefanh: I added requirements about being able to pan audio in 3d
... but maybe that isn't required in the first phase
... as far as we know now, that would need something from the Audio WG

jesup: anything involving something like that is something that comes after Capture
... and I don't think it needs to be specified in this TF

Yang: for 3d, do you mean visualizing sound in 3D?

[sound spatialisation]

Yang: i also agree spatialised audio would relate to the Audio WG

hta: we had one volunteer to work with this
... i suggest we move on

### Media Stream

Jim: I wrote up the idea of Media Streams assigned to certain media elements

[Jim's proposal][18]

Jim: there are certain restrictions
... and I thought we could produce a table
... relating to their values
... there are several questions that came up
... Media Elements are referenced by URL
... and a question that came up related to direct assignment
... in the current version, you must create a URI and pass in the URI
... another thing, when you create a URL, you can Revoke it later
... I presume that Revoking the URL
... doesn't change the field
... because changing the source element triggers a long process
... we need to figure out what happens in that case
... there was an original proposal from Opera that was linked
... in the Seekable attribute, there are problems for non-seekable streams
... and I had things return 0 to indicate that the stream couldn't be seeked
... another problem is that Media Streams don't have text tracks
... but they're optional

hta: I suspect they might have them in a year or two

Jim: if you had a real-time speech recognition system, you could produce text

Yang: if a UA doesn't have a certain feature, then you don't use it
... seek time/seek rate

Jim: are you agreeing that Seekable start, end, time should be 0?

hta: what is the definition of current time?

Jim: it's supposed to be the current position in the stream

hta: if that has to increment, then seekable start+end should return current time
... if not, then 0

Jim: i think current time increases in real time linearly
... of course, you can't seek forward in this
... but, could you want to buffer?

[HTML5 currentTime attribute on MediaElement][19]

derf: i don't think we want that at all
... the thing on the media element
... should be what is playing in real time
... right now

stefanh: i agree with that

Jim: i agree, that would be separate

hta: i'd suggest we say explicitly that there is no buffer

Jim: i think that's one thing i have to add to this
... when you pause the stream, and then resume
... it doesn't buffer, and i need to add a statement on that

jesup: i agree
... on seekable, i think you might be less confused
... if you have start + end always return current time

Jim: you're saying seekable length should return 0

jesup: that's less likely to confuse implementations
... that use it to generate UI elements
... either that or you return an error
... you're talking about things that are effectively buggy in the first place
... the argument is equally valid

Josh_Soref, you wanted to say that throwing is more likely to break UI elements

hta: this table is a great table to have, the question now is
... where should we insert it in the spec?
... is it a new section?

stefanh: I think it's a new section

Jim: is it an appendix or something?

hta: I think it deserves a section
... a section that talks about interaction between MediaStream and media elements

Jim: ok

(the partial interface url {} could go under there)

hta: i suggest we charge one of our editors to work with Jim to insert this into the spec
... do we have a volunteer editor?

**ACTION:** burn to work with Jim to integrate Jim's table into the spec [recorded in [http://www.w3.org/2012/06/07-mediacap-minutes.html#action02][20]]

Created ACTION-5 - Work with Jim to integrate Jim's table into the spec [on Daniel Burnett - due 2012-06-14].

stefanh: Jim and burn will do this

### Resource reservation

stefanh: anant, you made a new proposal and integrated it into the specification

anant: there are two points i added to the document
... they're non-normative
... first, we suggested

[http://dev.w3.org/2011/webrtc/editor/getusermedia.html#implementation-suggestions][21]

anant: when a resource has been used to provide a stream to the given page
... that it should be marked as busy
... and subsequent requests within the page or elsewhere
... to assign the resource to an element
... should result in a busy
... and i followed up w/ a suggestion that the UA indicate to the User that
... the resource is busy and allow the user to reassign the resource to the new requester
... the second suggestion is for non-hardware resources
... such as using a file picker to assign a stream
... we had a discussion at the last telco
... a media stream can have multiple tracks
... which thus have multiple hardware resources
... an app could prompt repeatedly for getUserMedia
... and then merge them
... after I sent that out
... I think we should define something
... around letting a web page determine how many audio/video sources it can have
... i wouldn't be comfortable revealing resolutions
... hta had a proposal that i liked
... specifying Max XXX to it

[Grabbing exactly one camera, from Harald][22]

adambe: how is this compatible with the rest of the constraint structure?
... if you ask for 2 cameras
... and have a constraint
... and one camera is high res, and one isn't

anant: I think that the constraints would apply to both
... if you as a web developer want to accept different constraints
... you should make two calls
... say a web developer has 2
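[Scribe note: the constraint model anant describes above (one constraint set applied uniformly to every requested device, a separate getUserMedia call per distinct constraint set, and hta's "Max XXX"-style cap) can be sketched as a small stand-alone mock. Everything below, including the names `pickDevices`, `maxWidth`, and `maxDevices`, is an illustrative assumption for discussion, not anything the draft spec defines.]

```javascript
// Hypothetical sketch only: one constraint set is applied uniformly to
// every candidate device (anant: "the constraints would apply to both"),
// and a "Max XXX" style constraint caps how many devices are granted.

function pickDevices(devices, constraints) {
  // The same constraint set filters every device.
  const matching = devices.filter(
    (d) => constraints.maxWidth === undefined || d.width <= constraints.maxWidth
  );
  // hta's proposed cap on the number of granted devices.
  const limit =
    constraints.maxDevices === undefined ? matching.length : constraints.maxDevices;
  return matching.slice(0, limit);
}

// Two mock cameras, one high-resolution and one not (adambe's example).
const cameras = [
  { label: "hd-cam", width: 1280 },
  { label: "vga-cam", width: 640 },
];

// One call, one constraint set, applied to both cameras: only the VGA
// camera satisfies maxWidth, and at most one device is granted.
console.log(pickDevices(cameras, { maxWidth: 800, maxDevices: 1 }));
// A developer wanting different constraints per camera would instead
// make two separate requests, one per constraint set.
```

[This also illustrates why two calls are needed for per-camera constraints: a single call has no way to say "camera A high-res, camera B low-res" when one constraint set governs all devices.]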