Re: SMIL section of state-of-the-art document done

Hi Davy, all,

On Tue, Oct 28, 2008 at 1:32 AM, Davy Van Deursen
<davy.vandeursen@ugent.be> wrote:
> On 27 Oct 2008, at 12:11, Silvia Pfeiffer wrote:
>> BTW: Davy - I'd be curious if your meta-specification format of the
>> structure of audio & video could be mapped into ROE somehow...
>
> The model for audio and video resources that I have developed is not
> designed from a fragment addressing point of view. It is designed from
> an adaptation point of view, is closely related to the structure of the
> media resources, and addresses a resource in terms of bytes.

I suppose what I meant was that your structure will be much deeper and
more detailed, down to the byte level. However, I assumed you would
also need to cover the higher-level structure, such as different
tracks.

I agree that the aim of ROE is very different. I just thought that a
comparison might be instructive.

> A trivial
> version of such a model could be that a video resource consists of a list
> of frames, and that for each video frame the start offset and length in
> terms of bytes are included. The CMML/ROE solution does not provide direct
> links to the bytes (I guess this is left to the application)

ROE is indeed not meant to operate on that level. The byte mapping
depends on the encoding format and is therefore left to the application.
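
To make this concrete, here is a toy sketch in Python (purely
hypothetical, not tied to ROE or to any particular container format) of
the kind of frame-level index described above, together with a helper
that maps a time range onto a byte range:

    # Toy sketch (hypothetical, not any real format): a frame-level index
    # as described above: one entry per frame, with byte offset and size.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FrameEntry:
        time: float    # presentation time in seconds
        offset: int    # byte offset of the frame within the resource
        length: int    # frame size in bytes

    @dataclass
    class TrackIndex:
        track_id: str           # e.g. "video" or "audio"
        frames: List[FrameEntry]

        def byte_range(self, start, end):
            """Map the time range [start, end) onto a byte range.

            Assumes frames are stored contiguously and in presentation
            order; returns None if no frame falls into the range.
            """
            hit = [f for f in self.frames if start <= f.time < end]
            if not hit:
                return None
            return hit[0].offset, hit[-1].offset + hit[-1].length - 1

An application, or a proxy as discussed below, holding such an index
per track could then resolve a time-based fragment without having to
parse the bitstream itself.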

What is interesting about your format (btw: does it have a name?) is
that it could be handed to Web proxies in parallel with the media byte
stream, giving them the information they need to translate between
time ranges and byte ranges.
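
Roughly, I imagine a proxy could do something like the following
(again just a sketch with made-up names; TrackIndex is the toy index
from above): look up the requested time range in the index and turn it
into an ordinary HTTP Range request against the original resource.

    # Sketch only: a proxy translating a time range into an HTTP Range
    # request. TrackIndex is the toy index from the previous sketch.

    import urllib.request

    def fetch_time_range(url, index, start, end):
        """Fetch the bytes covering [start, end) seconds of a track."""
        rng = index.byte_range(start, end)
        if rng is None:
            raise ValueError("no frames in the requested time range")
        first, last = rng
        req = urllib.request.Request(url)
        req.add_header("Range", "bytes=%d-%d" % (first, last))
        # A range-capable server answers with 206 Partial Content.
        with urllib.request.urlopen(req) as resp:
            return resp.read()

The nice property would be that the proxy never needs to understand
the encoding format itself; all the format knowledge sits in the index
handed to it.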

> implying that it
> will not be trivial to map my model to ROE ;-). My model could help with
> the implementation of adaptation software interpreting the URI scheme we
> are defining, while ROE could be a way to address tracks in a media
> resource using this URI scheme. I think that ROE could be compared to the
> logical model described in MPEG-21 Part 17 (see also the wiki [1]): it
> allows you to describe the structure of a media resource (i.e., the
> container format) and provides a means to address parts of this structure
> in a URI scheme.

This describes the MPEG-21 fragment identification in detail:
http://www.chiariglione.org/mpeg/working_documents/mpeg-21/fid/fid-is.zip

I agree. ROE probably compares to the underlying model that the MPEG-21
documents describe, which, however, is not specified in Part 17 but
only implied. The MPEG-21 model is also more detailed than ROE, which
really only specifies the tracks and how they are mapped into the media
resource.

Cheers,
Silvia.

>
> [1] http://www.w3.org/2008/WebVideo/Fragments/wiki/State_of_the_Art#MPEG-21_Part_17:_Fragment_Identification_of_MPEG_Resources_.28Davy_.2F_Silvia.29
>
>
> Best regards,
>
> Davy
>
> --
> Davy Van Deursen
>
> Ghent University - IBBT
> Department of Electronics and Information Systems, Multimedia Lab
> URL: http://multimedialab.elis.ugent.be
>
>
