Re: temporal fragments

Hi Al, all,

Al Gilman wrote:
> At 12:37 PM 2003-03-21, Larry Masinter wrote:
> 
>> I think I finally understood 'the issue' (or at least 'an issue').
>> Currently, the interpretation of fragment identifiers is defined
>> by the media type "retrieved" (at least with HTTP and FTP). Fragment
>> identifiers are not defined for use with URI schemes that don't
>> "retrieve" a representation in a particular media type.
>>
>> I think your proposal is that, at least for 'rtsp', that it
>> might make sense to define a general mechanism for fragment
>> identifiers that are associated with the _scheme_ instead of
>> the media type. I think the idea (as I understand it) is that
>> the rtsp fragments might be somewhat independent of the
>> 'media type', and instead be uniformly applied to any resource
>> accessed via rtsp.
>>
>> Is that a fair characterization of the issue?
>>
>> I'm trying to focus on "how the URI spec might change" issues
>> rather than the specific details of the temporal fragments
>> themselves.
> 
> 
> I understand the separation of concerns as you are attempting it,
> Larry, but it is not yet the best way to separate the concerns.
> 
> These temporal media all belong to one class in the sense that
> no matter what the sense of the data or its representation, they
> all have a relationship between the data and some sort of a
> time scale.
> 
> The "some sort of a time scale" is measured in one of many *interoperable*
> ways as is demonstrated by the integration of multiple media objects with
> different internal time representations into a single time-coherent
> presentation within a SMIL presentation [composite document].
> 
> So all of these 'resources' share a common 'supertype' or class
> with important within-class interoperability properties.  So this
> class of 'temporal media objects' is a valid category to define
> 'time offset notation(s)' for _as a class_ and not severally.
> 
> The common class is constructed by first creating the timescale class
> from a collection of interoperable time representations, and then
> creating the temporal media class from the abstraction of a relation
> between one datum which is of some timescale type and another datum
> which is of any type.
> 
> The time-indexing or time-relatedness functionality is not peculiar to the
> scheme or protocol such as RTSP.
> 
> It is generic across the class of time-indexed information representations.
> That is the highest and best way to view the separation of concerns or
> domain of applicability of this device.
> 
> There are scheme-specific specializations of the sense of the notation,
> but there is more in common than peculiar across the uses for different
> schemes.
> 
> Al

Thanks. That summarises some of the reasons why we came up with the 
requirement for temporal fragment addressing.

As I like to think of it, there are two common types of 
time-continuous resources on the Internet/Web:

- files that contain all the time-continuous data required for playback 
(such as MP3, MPEG-1/2 video, QuickTime, AVI, etc.) and

- "streams" that get composed through the user agent at playback time 
(such as SMIL or rtp/rtsp streams).

Both share the property that "time" has a deep semantic meaning for 
them, and that people need to hyperlink deeply into such data to 
access the actual content of such files. As an example, Web search 
engines could display hyperlinks to clips of video or audio files in 
search results if it were possible to write a URI pointing to the time 
offset at which the queried string is relevant. There is currently no 
generic way of addressing temporal offsets - hence our proposal.
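
To make this concrete, here is a purely hypothetical notation - it is 
not part of any existing specification, and the syntax is only an 
illustration of the idea:

    http://example.com/lecture.mpg#t=00:05:30-00:06:10
    rtsp://example.com/lecture.mpg#t=00:05:30-00:06:10

A media-aware user agent would resolve such a fragment by seeking to 
5:30 and playing through 6:10, independently of whether the resource 
is a self-contained file or a composed stream.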

Best Regards,

Silvia.


And now for a bit of a rant on the off-topic SMIL discussion:

> PS:  By the way, Silvia, your reading of the SMIL timing model as
> only applying to authoring and not controlling play is erroneous.
>
> There are markup provisions for different tolerance levels of
> asynchrony among the several media objects during play, whether
> having one stream arriving slightly out of kilter breaks the whole
> display and the composite is invalid or whether some slop may be
> allowed and the ensemble should press on regardless. So please
> talk with someone who really does SMIL, unlike me who only
> critiques it, to get the fine points in the model down.
> 
> The SMIL file should be thought of as a composition directive to the
> player, not an authoring device.  The several media objects integrated may
> not be 'authored' at all but be real-time data streamed from afar and
> integrated at the stream-receiving node through the synchronization rules
> set out in the SMIL.

Sorry for that misunderstanding. That's not what I meant. I meant that 
SMIL does indeed control the timing during playback, but that a SMIL 
file, being an XML file, by its very nature does not contain the 
time-continuous data itself; a SMIL file is therefore a way of storing 
authored directives for a presentation rather than an actual recording 
of a presentation. Maybe you prefer the term "composition" to 
"authoring" for this situation. All I was trying to point out is that 
while I am aware that SMIL has a timing model, SMIL in itself does not 
provide addressing (through URIs) of temporal offsets of 
time-continuous resources, and is therefore strictly speaking not 
"prior art".
