Review: Use Case & Requirements Draft

Dear Media Fragmenters,

This is my review for the document: "Use Cases & Requirements Draft" 
[1]. I have read the revision as of "15:02, 24 October 2008".
[Thanks Silvia for having made this document. It contains great stuff!]

* Section 1.1:
   - (intro): I don't really like the term "secondary resource". I 
understand what you mean, but the terms 'primary' and 'secondary' are 
sometimes ambiguous, and even in the broadcast world they are used with 
different semantics (secondary resources meaning the textual documents 
that go with the primary video resource). I would suggest using the 
term 'part of' instead, so: "A media fragment URI allows this part of 
the resource to be addressed directly and thus enables the User Agent 
to provide AND PLAY just the relevant fragment".
   - (scenario 2): I'm not sure we want to say that only the region of 
an image should be displayed. What about saying: "Tim wants the region 
of the photo highlighted and the rest grey-scaled"?
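For what it is worth, the 'part of' reading could be made concrete with a 
tiny parsing sketch. The fragment names "t" and "xywh" below are pure 
assumptions for illustration, not syntax the group has agreed on:

```python
# Illustrative only: "#t=" (temporal part) and "#xywh=" (rectangular
# region) are hypothetical fragment names, not settled syntax.
from urllib.parse import urlparse, parse_qsl

def parse_media_fragment(uri):
    """Return a dict describing the addressed part of the resource."""
    frag = urlparse(uri).fragment
    result = {}
    for name, value in parse_qsl(frag):
        if name == "t":  # temporal part: "start,end" in seconds
            start, _, end = value.partition(",")
            result["t"] = (float(start), float(end) if end else None)
        elif name == "xywh":  # spatial part: "x,y,width,height" in pixels
            result["xywh"] = tuple(int(v) for v in value.split(","))
    return result

print(parse_media_fragment("http://example.com/video.ogv#t=12,21"))
print(parse_media_fragment("http://example.com/photo.jpg#xywh=160,120,320,240"))
```

A User Agent receiving such a URI could then fetch and play (or highlight) 
just the addressed part, rather than the whole resource.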

* Section 1.2:
   - This use case seems to be an application of 1.1, that is, linking 
for bookmarking. What about making it a specialization of 1.1? I will 
make the same remark for 1.4, see below.

* Section 1.3:
   - (scenario 2): It is an interesting accessibility scenario, but I 
think the description should be extended a bit. What do you mean by 
"audio representations"? Audio tracks? Additional audio annotations? 
Both? When you say "... by using the tab control ...", do you have a 
screen reader in mind? Since this is not obvious to the casual reader, 
I would suggest describing exactly what you mean.

* Section 1.4:
   - This use case is again, for me, an application of 1.1, that is, 
this time, linking for recomposing (making playlists).
   - (scenario 2): I let Jack answer regarding the possibility of using 
SMIL as a background image ;-)
   - (scenario 4): Should we mention some formats for composing 
playlists of mp3 songs, formats that would allow making use of media 
fragment URIs?

* Section 1.5:
   - (scenario 1): I think we should describe this use case further. 
For example, specify that Raphael does not create RDF descriptions of 
the objects in his photos, but rather systematically annotates some 
highlighted regions in his photos that depict his friends, family, or 
the monuments he finds impressive. This could then be further linked 
to the search use case (Tim).

* Section 1.6:
   - (scenario 1): I find it out-of-scope. I think it is worth keeping 
it in the document, but saying explicitly that this is where we think 
it is out of scope ... if everybody agrees :-)
   - (scenario 2): I find this scenario also really borderline / 
out-of-scope. As was pointed out during the face-to-face meeting in 
Cannes, interactivity seems to be the most important aspect of the map 
use cases (reflected by zooming in/out, panning, etc.), and I guess we 
don't want that in our URI scheme. Do we?
   - (scenarios 3 and 4): I love them since they introduce the need 
for localizing tracks within media, but I would suggest merging these 
two scenarios. Are they supposed to express different needs? I cannot 
see that.
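To illustrate what "localizing tracks" could look like on the URI side, 
here is a minimal sketch. The "track" fragment name and the track label 
"audiodesc" are hypothetical, chosen only for this example:

```python
# Sketch only: "#track=" is an assumed fragment name for addressing a
# single track (e.g. an audio description track) inside a media resource.
from urllib.parse import urlparse

def addressed_track(uri):
    """Return the track label named in the URI fragment, or None."""
    frag = urlparse(uri).fragment
    if frag.startswith("track="):
        return frag[len("track="):]
    return None

print(addressed_track("http://example.com/movie.ogv#track=audiodesc"))
print(addressed_track("http://example.com/movie.ogv"))
```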

* Section 2:

I have a hard time understanding what you mean by these technology 
requirements. I understand the need for enabling other Web technologies 
to satisfy their use cases, but I'm not sure this is strong enough to 
warrant a top-level heading. Actually, I can easily see all the 
subsections merged with the existing use cases, see below. Therefore, I 
would suggest removing section 2.

* Section 2.1:
   - (scenario 1): this scenario introduces the need for having 
fragment names (or labels) in the URIs, in addition to their boundary 
specifications. I think we could add this scenario to section 1.5, 
which relates to the annotation of media fragments.
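A named fragment could then be dereferenced by its label rather than by 
explicit boundaries. The "id" fragment name below is again only an 
assumption for illustration:

```python
# Hypothetical "#id=" fragment name: address a fragment by the label it
# was given at annotation time instead of by explicit boundaries.
from urllib.parse import urlparse, parse_qsl

def fragment_label(uri):
    """Return the fragment label named in the URI, or None."""
    params = dict(parse_qsl(urlparse(uri).fragment))
    return params.get("id")

print(fragment_label("http://example.com/interview.ogv#id=question3"))
```

The server (or the annotation document) would then be responsible for 
resolving the label "question3" to concrete boundaries.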

* Section 2.2:
   - this scenario seems to me connected to scenario 2 of section 1.2 
(bookmarking media fragments)
   - CMML needs a reference

* Section 2.3:
   - this scenario seems to me connected to scenario 3 of section 1.4 
(media recomposition)
   - typo: "... while travelling to make the most of his time." => 
"... while traveLing to make the most of his time."

* Section 3.1:
   - I suggest adding a diagram that corresponds to Silvia's drawing [2]

* Section 3.3:
   - "secondary" => same remark than previously, what about using the 
term "part of a resource" to designate the fragment?
   - Do we also want to cover the case where the user wants to 
explicitly create a new resource?

* Section 3.5:
   - "secondary" => same  remark than previously.

* Section 3.7:
   - Do we have access to referenceable figures and/or estimates of 
how video traffic is spread over the Internet nowadays in terms of 
protocols? I mean, how much traffic goes through HTTP, RTSP, P2P? 
Silvia, does your company provide such stats?

* Section 3.8:
   - I find this requirement very strong, and I feel we are still 
discussing the issue. Perhaps we can phrase it as: "we should avoid 
decoding and recompressing media resources"?

Hope that helps!
Best regards.

   Raphaël

[1] 
http://www.w3.org/2008/WebVideo/Fragments/wiki/Use_Cases_%26_Requirements_Draft
[2] 
https://wiki.mozilla.org/images/thumb/4/40/Model_Video_Resource.jpg/800px-Model_Video_Resource.jpg

-- 
Raphaël Troncy
CWI (Centre for Mathematics and Computer Science),
Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
e-mail: raphael.troncy@cwi.nl & raphael.troncy@gmail.com
Tel: +31 (0)20 - 592 4093
Fax: +31 (0)20 - 592 4312
Web: http://www.cwi.nl/~troncy/

Received on Friday, 7 November 2008 17:35:43 UTC