RE: SMIL section of state-of-the-art document done

Hi Silvia, all,

>-----Original Message-----
>From: Silvia Pfeiffer [mailto:silviapfeiffer1@gmail.com]
>Sent: Monday, October 27, 2008 10:55 PM
>To: Davy Van Deursen
>Cc: Media Fragment
>Subject: Re: SMIL section of state-of-the-art document done
>
>Hi Davy, all,
>
>On Tue, Oct 28, 2008 at 1:32 AM, Davy Van Deursen
><davy.vandeursen@ugent.be> wrote:
>>>> On 27 Oct 2008, at 12:11, Silvia Pfeiffer wrote:
>>>BTW: Davy - I'd be curious if your meta-specification format of the
>>>structure of audio & video could be mapped into ROE somehow...
>>
>> The model for audio and video resources that I have developed is not
>> designed from a fragment addressing point of view. It is made from an
>> adaptation point of view, is closely related to the structure of the
>> media resources, and addresses a resource in terms of bytes.
>
>I suppose what I meant was that your structure will be much deeper and
>more detailed down to the byte level. However, I assumed you would
>also need to cover the more high-level structure, such as different
>tracks.
Correct; for the moment we only support elementary bitstreams. An extended
version of the model could cover different tracks in a container format. We
are working on that ;-).
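
To give a rough idea of what I mean, here is a trivial sketch (purely
illustrative, not our actual schema; all class and field names are made up)
of a video resource modelled as a list of frames with byte offsets, extended
hypothetically to multiple tracks in a container:

# Illustrative sketch only: a video resource as a list of frames, each with
# its byte offset, length and timestamp, plus a hypothetical extension to
# multiple tracks inside a container format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    start_offset: int   # byte offset of the frame within the bitstream
    length: int         # frame size in bytes
    timestamp: float    # presentation time in seconds

@dataclass
class Track:
    track_id: int
    media_type: str     # e.g. "video" or "audio"
    frames: List[Frame] = field(default_factory=list)

@dataclass
class MultimediaBitstream:
    uri: str
    tracks: List[Track] = field(default_factory=list)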

>
>I agree that the aim of ROE is very different. I just thought that a
>comparison may be instructional.
>
>> A trivial version of such a model could be that a video resource
>> consists of a list of frames, and that for each video frame the start
>> offset and length in terms of bytes are included. The CMML/ROE solution
>> does not provide direct links to the bytes (I guess this is left to the
>> application)
>
>ROE is indeed not meant to operate on that level. The byte mapping is
>encoding format dependent and therefore left to the application.
>
>What is interesting about your format (btw: does it have a name?) is
>that it could be handed off to Web proxies in parallel with the media
>byte stream, providing them with the information needed to handle byte
>ranges and time ranges.
Our format does not have a real name; I call it the "model for multimedia
bitstreams" :-). I fully agree that this could be meaningful information for
Web proxies to perform the necessary adaptations. Note that some work has
already been done on generic network adaptation nodes in [1] and [2].
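
To illustrate the kind of mapping a proxy could perform with such
frame-level metadata, here is a rough sketch (again purely illustrative; it
is not taken from our model or from the adaptation nodes in [1] and [2], and
it ignores random access points, container formats, etc.):

# Illustrative sketch only: translate a requested time range into a byte
# range using a per-frame index (byte offset, length, timestamp).
from typing import List, NamedTuple, Tuple

class FrameIndexEntry(NamedTuple):
    start_offset: int   # byte offset of the frame in the bitstream
    length: int         # frame size in bytes
    timestamp: float    # presentation time in seconds

def time_range_to_byte_range(index: List[FrameIndexEntry],
                             start_s: float, end_s: float) -> Tuple[int, int]:
    """Return (first_byte, last_byte) covering all frames whose timestamps
    fall within [start_s, end_s]."""
    selected = [f for f in index if start_s <= f.timestamp <= end_s]
    if not selected:
        raise ValueError("no frames in the requested time range")
    first_byte = min(f.start_offset for f in selected)
    last_byte = max(f.start_offset + f.length - 1 for f in selected)
    return first_byte, last_byte

A proxy holding such an index next to the byte stream could then answer a
time-range request with an ordinary HTTP Range request (e.g.
"Range: bytes=first_byte-last_byte").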

[1] M. Ransburg, C. Timmerer, H. Hellwagner, and S. Devillers. Design and
evaluation of a metadata-driven adaptation node. In Proceedings of the 8th
International Workshop on Image Analysis for Multimedia Interactive Services
(WIAMIS), pages 83-86, Santorini, Greece, June 2007.

[2] R. Kuschnig, I. Kofler, M. Ransburg, and H. Hellwagner. Design options
and comparison of in-network H.264/SVC adaptation. Journal of Visual
Communication and Image Representation, in press, corrected proof, available
online 5 August 2008.

Best regards,

Davy

-- 
Davy Van Deursen

Ghent University - IBBT
Department of Electronics and Information Systems, Multimedia Lab
URL: http://multimedialab.elis.ugent.be

Received on Tuesday, 28 October 2008 10:46:15 UTC