Re: Use Cases and Requirements Wiki page updated

Hi Guillaume,

Thanks for clarifying - I will make further changes to the WD to
make this clearer.

I am not going to bother with the wiki any longer - I don't think we
need to keep the two in sync and the wiki is just a snapshot of where
our thinking was about a month ago.

Also, I think our thinking has changed slightly in recent weeks. We
have started considering using not just the fragment specification to
get a subpart of the original resource, but also the query
specification to get a converted resource. I will have to incorporate
this thinking into the use cases and specify, for the cases that are
immediately out of scope, whether they could be realised with a query
syntax, noting that this may be work for the future.
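
To illustrate the distinction (purely illustrative syntax - nothing is
settled yet, and the resource name is made up):

  http://example.com/video.ogv#t=10,20  - a fragment of the original
                                          resource
  http://example.com/video.ogv?t=10,20  - a request for a new, converted
                                          resource covering the same interval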

Thanks for the quick reply!

Cheers,
Silvia.

On Thu, Mar 12, 2009 at 2:46 AM, Guillaume Olivrin
<golivrin@meraka.org.za> wrote:
> Hello Silvia,
>
>>
>> Use Case 2.4 (really: 3) Spatial Video Pagination - Guillaume, can you
>> explain how that is spatial pagination? I don't quite understand what
>> Elaine is actually doing with the video.
> Ok, let me try again:
> The idea of "spatial pagination" is to cut a video image into areas,
> then play each area as a full-screen video, one after another.
>
> For example, Elaine has a video mosaic which shows 4 channels. In her
> original video, all 4 channels are synchronised and playing at the same
> time, each occupying a quarter of the screen. Now Elaine wants to see all 4
> channels "full screen", one channel at a time.
> With the help of Media Fragments she selects each of the 4 video regions
> in her original video separately, and then queues each new video fragment
> into a playlist. Now she can watch each video region separately and
> sequentially.
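>
> For illustration (the exact spatial syntax is still being discussed, and
> the resource name and 640x480 dimensions here are made up), the four
> regions could be addressed roughly like this:
>
>   http://example.com/mosaic.ogv#xywh=0,0,320,240      (top-left channel)
>   http://example.com/mosaic.ogv#xywh=320,0,320,240    (top-right channel)
>   http://example.com/mosaic.ogv#xywh=0,240,320,240    (bottom-left channel)
>   http://example.com/mosaic.ogv#xywh=320,240,320,240  (bottom-right channel)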
>
>> Use Case 4.5 Search Engine - I don't understand how that fits within
>> the section. The section is about annotating, but this one essentially
>> just re-describes what was already described in Use Case 1.1 Search
>> Engine. Can you explain where you see the annotations coming in?
>
> I guess it's the same as UC 1.1 and we can remove UC 4.5.
> I can't really justify why it would make a different use case.
> A typical Web search engine will return a Media Fragment URI if this
> URI has been used in the context of the keyword, e.g. in a Web context: <a
> href="mymediafragment.ogv">This is a nice Bike</a>
> UC 4.5 is just a case of an RDF or MPEG-7 annotation schema, which search
> engines will eventually learn to retrieve URIs from, just like they do
> from HTML.
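>
> As a rough sketch (the property names and the fragment URI are just
> examples), such an annotation in RDF might look like:
>
>   <rdf:Description rdf:about="http://example.com/mymediafragment.ogv#t=12,21">
>     <dc:description>This is a nice Bike</dc:description>
>   </rdf:Description>
>
> A search engine that indexes such annotations could then return the
> fragment URI directly for a query on "bike".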
>
>> I wonder if Use Cases 5.3 and 5.4 are out of scope.
> These two UCs, it is true, are not really covered in the 'Model of a
> Video Resource' diagram as it is now.
> It is not a case of Time, Space, or Track selection, but rather a
> case of Frame selection.
>
> Another hybrid case is that of animated GIFs: we can use time= to select
> a specific "time portion" of a GIF. But do we want to be able to select a
> specific image/frame out of a GIF? The semantics are different.
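>
> For example (the frame= syntax is purely hypothetical and not part of any
> proposal), animation.gif#t=2,4 would select a two-second time portion,
> whereas something like animation.gif#frame=12 would select a single image
> out of the sequence.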
>
> Would it be technically difficult to cover these cases?
> These UCs do have very practical applications in mind, but they may fall
> more under initiatives such as WebCGM ...
>
> As it is, we haven't worked on addressing these cases in the syntax
> either. So maybe we should mark them 'out of scope' a posteriori. This
> could be work for a later stage and another release of Media Fragments.
>
>
> Thank you, Silvia, for including all the UCs in the draft.
> Should we now reflect the changes you've suggested in both the Wiki and
> the Draft?
>
> Regards,
> Guillaume
>

Received on Wednesday, 11 March 2009 22:27:45 UTC