W3C

Use cases and requirements for Media Fragments

W3C Working Draft 30 March 2009

This version:
http://www.w3.org/TR/2009/WD-media-frags-reqs-20090330
Latest version:
http://www.w3.org/TR/media-frags-reqs
Editor:
@@@, @@@

Abstract

This document specifies use cases and requirements as an input for the development of the Media Fragments 1.0 specification.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This is the First Public Working Draft of the Use cases and requirements for Media Fragments specification. It has been produced by the Media Fragments Working Group, which is part of the W3C Video on the Web Activity.

Please send comments about this document to the public-media-fragment@w3.org mailing list (public archive).

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Table of Contents

1 Introduction
2 Terminology
3 Use Cases
    3.1 Linking to and Display of Media Fragments
4 Requirements
    4.1 Introduction for Requirements
    4.2 Functional requirements (Application Use Cases)
        4.2.1 Display of Media Fragments
            4.2.1.1 Scenario 1: Search Engine
            4.2.1.2 Scenario 2: Region of an Image
            4.2.1.3 Scenario 3: Portion of Music
            4.2.1.4 Scenario 4: Moving Windows of Interest
        4.2.2 Browsing and Bookmarking Media Fragments
            4.2.2.1 Scenario 1: Segmenting a Video
            4.2.2.2 Scenario 2: Temporal Audio Pagination
            4.2.2.3 Scenario 3: Spatial Video Pagination
            4.2.2.4 Scenario 4: Audio Passage Bookmark
            4.2.2.5 Scenario 5: Captions Help Browsing Video
        4.2.3 Recompositing Media Fragments
            4.2.3.1 Scenario 1: Reframing a photo in a slideshow
            4.2.3.2 Scenario 2: Mosaic
            4.2.3.3 Scenario 3: Video Mashup
            4.2.3.4 Scenario 4: Selective previews
            4.2.3.5 Scenario 5: Music Samples
            4.2.3.6 Scenario 6: Highlighting regions (Out-Of-Scope)
        4.2.4 Annotating Media Fragments
            4.2.4.1 Scenario 1: Spatial Tagging of Images
            4.2.4.2 Scenario 2: Temporal Tagging of Audio and Video
            4.2.4.3 Scenario 3: Named Anchors
            4.2.4.4 Scenario 4: Spatial and Temporal Tagging
            4.2.4.5 Scenario 5: Search Engine
        4.2.5 Adapting Media Resources
            4.2.5.1 Scenario 1: Changing Video quality (Out-Of-Scope)
            4.2.5.2 Scenario 2: Selecting Regions in Images
            4.2.5.3 Scenario 3: Selecting an Image from a multi-part document
            4.2.5.4 Scenario 4: Retrieving an Image embedded thumbnail
            4.2.5.5 Scenario 5: Switching of Video Transmission
            4.2.5.6 Scenario 6: Toggle All Audio OFF
            4.2.5.7 Scenario 7: Toggle specific Audio tracks
    4.3 Non-functional requirements
        4.3.1 Model of a Video Resource
        4.3.2 Single Media Resource Definition
        4.3.3 Existing Standards
        4.3.4 Unique Resource
        4.3.5 Valid Resource
        4.3.6 Parent Resource
        4.3.7 Single Fragment
        4.3.8 Relevant Protocols
        4.3.9 No Recompression
        4.3.10 Minimize Impact on Existing Infrastructure
        4.3.11 Focus for Changes
        4.3.12 Browser Impact
        4.3.13 Fallback Action
    4.4 Introduction for Track fragments
        4.4.1 Track fragments
        4.4.2 Temporal fragments
        4.4.3 Spatial fragments
        4.4.4 Named fragments
        4.4.5 Evaluation of Fitness
        4.4.6 Conditions
        4.4.7 Evaluation of Fitness
        4.4.8 Table
5 Technologies Survey
6 Naming Fragments
7 Retrieving Fragments

Appendices

A References
B References (Non-Normative)
C Acknowledgements (Non-Normative)


1 Introduction

TODO: @@@

2 Terminology

The keywords MUST, MUST NOT, SHOULD and SHOULD NOT are to be interpreted as defined in [RFC 2119].

3 Use Cases

TODO: @Silvia

See: Use Cases

3.1 Linking to and Display of Media Fragments

...

4 Requirements

Editorial note 
TODO: @Silvia + Michael + Thierry

Editorial note 
See: Requirements and Types of Fragment Addressing

4.1 Introduction for Requirements

The need for media fragment addressing in URIs originates from multiple sources. There are applications that can be enabled or enriched by the availability of media fragment URIs. There are also requirements for media fragment URIs for enabling other Web technologies to satisfy their use cases. Thus, this section describes application use cases and technology requirements in separate subsections. Further, we have added a subsection which describes side conditions that we consider relevant during the development of the specification. One such condition is backward compatibility: if the server and/or User Agent does not support fragments, the full resource will be downloaded (the "ignore what you don't know" principle).

4.2 Functional requirements (Application Use Cases)

Note:

... we should still discuss if some of these use cases are out of scope ...

4.2.1 Display of Media Fragments

A user is only interested in consuming a fragment of a media resource rather than the complete resource. A media fragment URI, as per http://www.ietf.org/rfc/rfc3986.txt, allows this part of the resource to be addressed directly and thus enables the User Agent to receive just the relevant fragment.
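
For illustration only - the concrete fragment syntax has not yet been defined by this Working Group - such a URI might take a form like:

    http://www.example.com/video.ogv#t=60,100

where the fragment part would select the section of the video between 60 and 100 seconds.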

4.2.1.1 Scenario 1: Search Engine

Tim does a keyword search on a video search service. The keyword is found in several videos in the search service's collection, relating to clips inside the videos that appear at particular time offsets. Tim would like the search result to point him to just these media fragments so he can watch the relevant clips rather than having to watch the full videos and manually search for the relevant clips.

4.2.1.2 Scenario 2: Region of an Image

Tim has discovered on an image hosting service a photo of his third school year class. He is keen to put a link to his own face inside this photo onto his private Web site where he is collecting old photos of himself. He does not want the full photo to be displayed and he does not want to have to download and crop the original image since he wants to reference the original resource.

4.2.1.3 Scenario 3: Portion of Music

Tim is a Last.fm user. He wants his friend Sue to listen to a cool song, Gypsy Davy. However, Tim thinks that only part of the song is really worth it. He wants Sue to listen to the last 10 seconds only and sends her an email with a link to just that subpart of the media resource.

4.2.1.4 Scenario 4: Moving Windows of Interest

Tim is now creating an analysis of the movements of muscles of horses during trotting and finds a few relevant videos online. His analysis is collected on a Web page and he'd like to reference the relevant video sections, cropped both in time and space to focus his viewers' attention on specific areas of interest that he'd like to point out.

4.2.2 Browsing and Bookmarking Media Fragments

Media resources - audio, video and even images - are often very large resources that users want to explore progressively. Progressive exploration of text is well-known in the Web space under the term "pagination". Pagination in the text space is realized by creating a series of Web pages and enabling paging through them by scripts on a server, each page having its own URI. For large media resources, such pagination can be provided by media fragment URIs, which enable direct access to media fragments.

4.2.2.1 Scenario 1: Segmenting a Video

Michael has a Website that collects recordings of the sittings of his government's parliament. These recordings tend to be very long - generally on the order of 7 hours in duration. Instead of splitting up the recordings into short files by manual inspection of the change of topics or some other segmentation approach, he prefers to provide many handles into a single video resource. As he publishes the files, however, he provides pagination on the videos such that people can watch them 20 minutes at a time.
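
As a sketch of how such pagination could be generated, the following Python fragment produces media fragment URIs for 20-minute pages of a 7-hour recording. The URI and the "#t=start,end" temporal syntax are illustrative assumptions only, since the actual addressing scheme has not yet been defined:

    # Illustrative sketch: the "#t=start,end" fragment syntax is an assumption,
    # not the addressing scheme defined by this Working Group.
    def paginate(uri, duration_seconds, page_seconds=20 * 60):
        """Return media fragment URIs covering the resource in fixed-length pages."""
        pages = []
        start = 0
        while start < duration_seconds:
            end = min(start + page_seconds, duration_seconds)
            pages.append("%s#t=%d,%d" % (uri, start, end))
            start = end
        return pages

    # A 7-hour parliamentary recording split into 20-minute pages:
    for page in paginate("http://www.example.org/parliament-sitting.ogv", 7 * 3600):
        print(page)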

4.2.2.2 Scenario 2: Temporal Audio Pagination

Lena would like to browse the descriptive audio tracks of a video like she does with Daisy audio books, by following the logical structure of the media. Audio descriptions and captions generally come in blocks either timed or separated by silences. Chapter by chapter and then section by section she eventually jumps to a specific paragraph and down to the sentence level by using the "tab" control like she would normally do in audio books. The descriptive audio track is an extra spoken track that provides a description of scenes happening in a video. When the descriptive audio track is not present, Lena can similarly browse through captions and descriptive text tracks which are either rendered through her braille reading device or through her text-to-speech engine.

4.2.2.3 Scenario 3: Spatial Video Pagination

Elaine has recorded a video mosaic of all her TV channels on an international election day. She wants to keep the original file showing what all TV broadcasts were synchronously displaying, but she now wants to make a long presentation in which each channel is shown one at a time, one after another. She creates a playlist of media fragment URIs that each select a specific channel in the mosaic, playing the channels one after another.

4.2.2.4 Scenario 4: Audio Passage Bookmark

Sue likes the song segment that Tim has sent her and decides to add this specific segment to her bookmarks.

Regarded as monolithic blocks, media resources (in particular audio and video) are very inaccessible. For example, it is difficult to find out what they are about, where the highlights are, or what the logical structure of the resources is. The lack of these features, in particular the lack of captions and audio annotations, further makes the resources inaccessible to disabled people. Introducing the ability to directly access highlights, fragments, or the logical structure of a media resource will be a big contribution towards making media resources more accessible.

4.2.2.5 Scenario 5: Captions Help Browsing Video

Silvia has a deaf friend, Elaine, who would like to watch the holiday videos that Silvia is publishing on her website. Silvia has created subtitle tracks for her videos and also a CMML annotation with unique identifiers on the clips that she describes. The clips were formed based on locations that Silvia has visited. In this way, Elaine is able to watch the videos by going through the clips and reading the subtitles for those clips that she is interested in. She watches the sections on Korea, Australia, and France, but jumps over the ones of Great Britain and Holland.

4.2.3 Recompositing Media Fragments

As we enable direct linking to media fragments in a URI, we can also enable simple recompositing of such media fragments. Note that because the media fragments in a composition may originate from different codecs and very different files, we cannot realistically expect smooth playback between the fragments.

4.2.3.1 Scenario 1: Reframing a photo in a slideshow

Erik has a collection of photos and wants to create a slide show of some of them, highlighting specific areas in each photo. He uses XSPF to define the slide show (playlist), using spatial fragment URIs to address the photo fragments.

4.2.3.2 Scenario 2: Mosaic

Jack wants to create a mosaic for his website with all the image fragments that Erik defined collated together. He uses the SMIL 3.0 Tiny Profile and the spatial fragment URIs to lay out the image fragments and stitch them together as a new "image".

4.2.3.3 Scenario 3: Video Mashup

Jack has a collection of videos and wants to create a mashup from segments out of these videos without having to manually edit them together. He uses SMIL 3.0 Tiny Profile and temporal fragment URIs to address the clips out of the videos and sequence them together.
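
A minimal sketch, assuming illustrative URIs and a hypothetical "#t=" temporal syntax, of how such a mashup could be represented as an ordered playlist of fragment URIs (in practice this would be expressed in a SMIL composition as described above):

    # Illustrative sketch: a mashup as an ordered sequence of temporal fragment URIs.
    mashup = [
        "http://www.example.org/holiday.ogv#t=120,150",
        "http://www.example.org/concert.ogv#t=30,90",
        "http://www.example.org/interview.ogv#t=0,45",
    ]

    for clip in mashup:
        # A conforming player would fetch and play each fragment in turn;
        # smooth transitions between clips from different files are not expected.
        print("play", clip)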

Given an ability to link to media fragments through URIs, people will want to determine whether they receive the full resource or just the data that relates to the media fragment. This is particularly the case where the resource is large, where the bandwidth is scarce or expensive, and/or where people have limited time/patience to wait until the full resource is loaded.

4.2.3.4 Scenario 4: Selective previews

Yves is a busy person. He doesn't have time to attend all meetings that he is supposed to attend. He also uses his mobile device for accessing Web resources while traveling, to make the most of his time. Some of the recent meetings that Yves was supposed to attend have been recorded and published on the Web. A colleague points out to Yves in an email which sections of the meetings he should watch. While on his next trip, Yves goes back to this email and watches the highlighted sections by simply clicking on them. The media server of his company dynamically composes a valid media resource from the URIs that Yves is sending it such that Yves' video player can play just the right fragments.

4.2.3.5 Scenario 5: Music Samples

Erik also has a music collection. He creates an "audio podcast" in the form of an RSS feed with URIs that link to samples from his music files. His friends can play back the samples in their Web-attached music players.

4.2.3.6 Scenario 6: Highlighting regions (Out-Of-Scope)

Tim has discovered yet another alumni photo of his third school year class. This time he doesn't want to crop his face but he wants to keep the photo in the context of his classmates. He wants his region of the photo highlighted and the rest grey scaled.

4.2.4 Annotating Media Fragments

Media resources typically don't just consist of the binary data. There is often a lot of textual information available that relates to the media resource. Enabling the addressing of media fragments ultimately creates a means to attach annotations to media fragments.

4.2.4.1 Scenario 1: Spatial Tagging of Images

Raphael systematically annotates highlighted regions in his photos that depict his friends, family, or the monuments he finds impressive. This knowledge is represented by RDF descriptions that use spatial fragment URIs to relate to the image fragments in his annotated collection. This makes it possible to later search for and retrieve all the media fragment URIs that relate to one particular friend or monument.
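
As a sketch of what such a description could look like, the following Python fragment uses the rdflib library to attach an annotation to a spatial fragment URI. The "#xywh=" spatial syntax, the example URIs, and the choice of the FOAF vocabulary are illustrative assumptions only:

    # Illustrative sketch, assuming a hypothetical "#xywh=x,y,w,h" spatial
    # fragment syntax; the actual addressing scheme is not yet defined.
    from rdflib import Graph, Literal, Namespace, URIRef

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")

    g = Graph()
    region = URIRef("http://www.example.org/photos/paris.jpg#xywh=160,120,320,240")
    friend = URIRef("http://www.example.org/people#friend1")

    g.add((region, FOAF.depicts, friend))         # this image region shows the friend
    g.add((friend, FOAF.name, Literal("Alice")))

    # Later, all fragment URIs depicting that friend can be retrieved again.
    for fragment in g.subjects(FOAF.depicts, friend):
        print(fragment)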

4.2.4.2 Scenario 2: Temporal Tagging of Audio and Video

Raphael also has a collection of audio and video files of all the presentations he ever made. His RDF description collection extends to describing all the segments where he gave a demo of a software system with structured details on the demo.

NB: Time-aligned text such as captions, subtitles in multiple languages, and audio descriptions for audio and video does not have to be created as separate documents that link to each segment through a temporal URI. Such text can be made part of the media resource by the media author or delivered as a separate but synchronised data stream to the media player. In either case, it should be made accessible in a Web page through a JavaScript API or through access to a DOM nested browsing context of the video/audio/image element. This needs to be addressed in the HTML5 Working Group.

Annotating media resources on the level of a complete resource is in certain circumstances not enough. Support for annotating multimedia on the level of fragments is often desired. The definition of "anchors" (or id tags) for fragments of media resources will allow us to identify fragments by name. It allows the creation of an author-defined segmentation of the resource - an author-provided structure.

4.2.4.3 Scenario 3: Named Anchors

Raphael would like to attach an RDF-based annotation to a video fragment that is specified through an "anchor". Identifying the media fragment by name instead of through a temporal video fragment URI allows him to create a more memorable URI than one that requires him to remember the time offsets.

4.2.4.4 Scenario 4: Spatial and Temporal Tagging

Guillaume uses video fragment URIs in an [MPEG-7] sign language profile to describe a moving point of interest: he wants the focus region to be the dominant hand in a Sign Language video. Not only does the series of video fragment URIs give the coordinates and timing of the trajectory followed by the hand, it can also describe the areas of changing handshapes.

4.2.4.5 Scenario 5: Search Engine

Guillaume wants to retrieve the images of each bike present at a recent cycling event. Group photos and general shots of the event have been published online and thanks to a query in a search engine, Guillaume can now retrieve multiple individual shots of each bike in the collection.

4.2.5 Adapting Media Resources

When addressing a media resource as a user, one often has the desire not to retrieve the full resource, but only a subpart of interest. This may be a temporally or spatially consecutive subpart, but could also be, for example, a smaller-bandwidth version of the same resource, a lower frame rate video, an image with less colour depth, or an audio file with a lower sampling rate. Media adaptation is the general term used for such server-side created versions of media resources.

4.2.5.1 Scenario 1: Changing Video quality (Out-Of-Scope)

Davy is looking for videos about allergies and would like to get previews at a lower frame rate to decide whether to download and save them in his collection. He would like to be able to specify in the URI a means of telling the media server the adaptation that he is after. For video he would like to adapt width, height, frame rate, colour depth, and temporal subpart selection. Alternatively, he may want to get just a thumbnail of the video.

Note: This scenario is out of scope for this Working Group because it requires changes to be made to the actual encoded data to retrieve a "fragment". URI-based media fragments should basically be achieved through cropping of one or more byte sections.

4.2.5.2 Scenario 2: Selecting Regions in Images

Davy is interested in having precise coordinates in his browser address bar to view and pan over large-size image maps. Through the same URI scheme he can now generically address and locate different image subparts on his client side for all image types.

4.2.5.3 Scenario 3: Selecting an Image from a multi-part document

Davy is now interested in multi-resolution, multi-page medical images. He wants to select the detailed image of the toe X-rays which appear on page 7 of the TIFF document.

4.2.5.4 Scenario 4: Retrieving an Image embedded thumbnail

Davy is also interested in having this kind of preview functionality for pictures, in particular those large 10-megapixel JPEG files that have embedded thumbnails in them. He can now provide a fast preview by selecting the embedded thumbnail in the original image without even having to resize it or create a new separate file!

4.2.5.5 Scenario 5: Switching of Video Transmission

Davy has a blind friend called Katrina. Katrina would also like to watch the videos that Davy has found, and is lucky that the videos have additional alternative audio tracks, which describe to blind users what is happening in the videos. Her Internet connection is of lower bandwidth and she would like to switch off the video track, but receive the two audio tracks (original audio plus audio annotations). She would like to do this track selection through simple changes to the URI.

4.2.5.6 Scenario 6: Toggle All Audio OFF

Sebo is Deaf and enjoys watching videos on the Web. Her friend sent her a link to a new music video URI, but she doesn't want to waste time and bandwidth receiving any sound. So when she enters the URI in her browser's address bar, she also adds an extra parameter that ignores all audio tracks without naming them, by selecting the video track only.

4.2.5.7 Scenario 7: Toggle specific Audio tracks

Davy's girlfriend is a fan of Karaoke. She would love to be able to play back videos from the Web that have a karaoke text, and two audio tracks, one each for the music and for the singer. Then she could practice the songs by playing back the complete video with all tracks, but use the video in Karaoke parties with friends where she turns off the singer's track through a simple selection of tracks in the User Agent.

4.3 Non-functional requirements

4.3.1 Model of a Video Resource

Model of a Video Resource

4.3.2 Single Media Resource Definition

We have one consistent view of what a media resource is and are only concerned with single-timeline media.

4.3.3 Existing Standards

We want to work within the boundaries of existing standards where possible, in particular within the URI specification.

4.3.4 Unique Resource

We want to specify media fragments as usable parts of a resource. One media fragment therefore

  • is not seen as a separate resource BUT it is uniquely addressable

  • is not a "secondary resource" but a selective view of an entire resource.

4.3.5 Valid Resource

We need to make sure that delivered media fragments are valid media resources by themselves and can thus be played back by existing media players / image viewers.

4.3.6 Parent Resource

We want to make it possible to access the entire resource as the "context" of a fragment via a simple change of the URI. This URI as a selective view of the resource provides a mechanism to focus on a fragment whilst hinting at the wider media context in which the fragment is included.

4.3.7 Single Fragment

A media fragments URI should create only a single "mask" onto a media resource and not a collection of potentially overlapping fragments.

4.3.8 Relevant Protocols

The main protocols we are concerned with are HTTP and RTSP, since they are open protocols for media delivery.

4.3.9 No Recompression

Media fragments need to be delivered as byte-range subparts of the media resource so as to make the fragments actual subresources of the media resource; this implies that we should avoid decoding and recompressing the media resource to create a fragment.
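
As an illustration of the intended delivery model, the following Python sketch retrieves a byte-range subpart of a media resource over HTTP using the requests library. The URI and the byte offsets are assumptions for the example; how a server maps a fragment to byte ranges is not specified here:

    # Illustrative sketch: fetching a fragment as a byte-range subpart over HTTP.
    import requests

    url = "http://www.example.org/video.ogv"
    # Request the bytes that (hypothetically) correspond to the wanted fragment.
    response = requests.get(url, headers={"Range": "bytes=100000-200000"})

    if response.status_code == 206:              # 206 Partial Content
        with open("fragment.ogv", "wb") as out:
            out.write(response.content)          # stored as-is, no decoding or recompression
    else:
        # The server ignored the Range header and returned the full resource.
        print("Received the full resource of", len(response.content), "bytes")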

4.3.10 Minimize Impact on Existing Infrastructure

We want to minimize the necessary changes to all software in the media delivery chain: User Agents, Proxies, Media Servers.

4.3.11 Focus for Changes

We want to focus the necessary changes as much as possible on the media servers because they have to implement fragmentation support for the media formats as the most fundamental requirement for providing media fragment addressing.

4.3.12 Browser Impact

Changes to the user agent should be a one-off and not need adaptation per media encapsulation/encoding format.

4.3.13 Fallback Action

If a User Agent connects with a media fragment URI to a Media Server that does not support media fragments, the Media Server should reply with the full resource. The User Agent will then have to take action to either cancel this connection (if e.g. the media resource is too long) or do a fragment offset locally.

A User Agent that does not understand media fragment URIs will simply hand on the URI (potentially stripped of the fragment part) to the server and receive the full resource in lieu of the fragment. This may lead to unexpected behaviour with media fragment URIs in non-conformant User Agents, e.g. where a mash-up of media fragments is requested but a sequence of the full files is played. This is acceptable during a transition phase.
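
A minimal sketch of the fallback behaviour on the User Agent side, assuming a hypothetical "#t=start,end" fragment syntax and a hypothetical player object with load(), seek() and stop_at() operations:

    # Illustrative sketch of User Agent fallback; the "#t=" syntax and the
    # player object are assumptions made for this example only.
    from urllib.parse import urldefrag

    def open_media_fragment(uri, player):
        resource, fragment = urldefrag(uri)    # e.g. ("...video.ogv", "t=60,100")
        player.load(resource)                  # the server may return the full resource
        if fragment.startswith("t="):
            start, _, end = fragment[2:].partition(",")
            # Fall back to offsetting locally when only the full resource arrived.
            player.seek(float(start))
            if end:
                player.stop_at(float(end))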

4.4 Introduction for Track fragments

With media fragment addressing, we have to assume that we are dealing with compressed content delivered inside a container format - as described in the general model of a media resource.

This section describes the list of desirable media fragment addressing types that have resulted from the use cases and requirements analysis.

It further analyses what format requirements the media resource has to adhere to in order to allow the extraction of the data that relates to each kind of addressing.

4.4.1 Track fragments

A typical media resource consists of multiple tracks of data somehow multiplexed together into the media resource. A media resource could for example consist of several audio, several video, and several textual annotation or metadata tracks. Their individual extraction / addressing is desirable in particular from a media adaptation point of view.

Whether the extraction of tracks from a media resource is supported or not depends on the container format of the media resource. Since a container format only defines a syntax and does not introduce any compression, it is always possible to describe the structures of a container format. Hence, if a container format allows the encapsulation of multiple tracks, then it is possible to describe the tracks in terms of byte ranges. Examples of such container formats are Ogg and MP4. Note that it is possible that the tracks are multiplexed, implying that a description of one track consists of a list of byte ranges. Also note that the extraction of tracks (and fragments in general) from container formats often introduces the necessity of syntax element modifications in the headers.

4.4.2 Temporal fragments

A temporal fragment of a media resource is a clipping along the time axis from a start to an end time that are within the duration of the media resource.

Whether a media resource supports temporal fragment extraction depends in the first place on the coding format and, more specifically, on how the encoding parameters were set. For video coding formats, temporal fragments can be extracted if the video stream provides random access points (i.e., points that do not depend on previously encoded video data, typically corresponding to intra-coded frames) on a regular basis. The same holds true for audio coding formats, i.e., the audio stream needs to be accessed at a point where the decoder can start decoding without the need for previously coded data.
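
To illustrate why random access points matter, the following Python sketch snaps a requested start time back to the closest preceding random access point, which is where a decoder can actually begin; the list of access point times is a made-up example:

    # Illustrative sketch: a decoder can only start at a random access point
    # (e.g. an intra-coded frame), so the requested start time is snapped back.
    import bisect

    random_access_points = [0.0, 2.0, 4.0, 6.0, 8.0]   # seconds, example values only

    def snap_start(requested_start):
        """Return the latest random access point at or before the requested time."""
        index = bisect.bisect_right(random_access_points, requested_start) - 1
        return random_access_points[max(index, 0)]

    print(snap_start(5.3))   # -> 4.0: decoding has to begin here to display 5.3 s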

4.4.3 Spatial fragments

A spatial fragment of a media resource is a clipping of an image region. For media fragment addressing we only regard square regions.

Support for extraction of spatial fragments from a media resource in the compressed domain depends on the coding format. The coding format must allow spatial regions to be encoded independently of each other in order to support the extraction of these regions in the compressed domain. Note that there are currently two variants: region extraction and interactive region extraction. In the first case, the regions (i.e., Regions Of Interest, ROI) are known at encoding time and coded independently of each other. In the second case, ROIs are not known at encoding time and can be chosen by a user agent. In this case, the media resource is divided into a number of tiles, each encoded independently of the others. Subsequently, the tiles covering the desired region are extracted from the media resource.

4.4.4 Named fragments

A named fragment of a media resource is a media fragment - either a track, a time section, or a spatial region - that has been given a name through some sort of annotation mechanism. Through this name, the media fragment can be addressed in a more human-readable form.

No coding format provides support for named fragments, since naming is not part of the encoding/decoding process. Hence, we have to consider container formats for this feature. In general, a container format supports named fragments if it allows the insertion of metadata describing the named fragments and if the corresponding fragment class is also supported. For example, you can include a CMML or TimedText description in an MP4 or Ogg container and interpret this description to extract temporal fragments based on a name given to them in the description.
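
A minimal sketch of how a name could be resolved to a temporal fragment, assuming a made-up mapping such as one derived from a CMML or TimedText track:

    # Illustrative sketch: resolving a named fragment into a (start, end) time range.
    named_fragments = {
        "introduction": (0.0, 95.0),     # seconds, example values only
        "demo":         (95.0, 410.0),
        "questions":    (410.0, 600.0),
    }

    def resolve_named_fragment(name):
        """Map a human-readable fragment name onto its time range."""
        return named_fragments[name]

    print(resolve_named_fragment("demo"))   # -> (95.0, 410.0)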

4.4.5 Evaluation of Fitness

There is a large number of media codecs and encapsulation formats that we need to take into account as potential media resources on the Web. This section analyses a list of typical formats and determines which we see fit, which we see conditionally fit, and which we see unfit for supporting media fragment URIs.

4.4.6 Conditions

Media resources should fulfill the following conditions to allow extraction of fragments:

  • The media fragments can be extracted in the compressed domain.

  • No syntax element modifications in the bitstream are needed to perform the extraction.

Not all media formats will be compliant with these two conditions. Hence, we distinguish the following categories:

  1. Fit: The media resource meets the two conditions (i.e., fragments can be extracted in the compressed domain and no syntax element modifications are necessary). In this case, caching media fragments of such media resources on the byte level is possible.

  2. Conditionally fit: Media fragments can be extracted in the compressed domain, but syntax element modifications are required. These media fragments provide cacheable byte ranges for the data, but syntax element modifications are needed in headers applying to the whole media resource/fragment. In this case, these headers could be sent to the client in the first response of the server.

  3. Unfit: Media fragments cannot be extracted in the compressed domain as byte ranges. In this case, transcoding operations are necessary to extract media fragments. Since these media fragments do not produce reproducible bytes, it is not possible to cache them. Note that media formats which enable extracting fragments in the compressed domain but are not compliant with category 2 (i.e., syntax element modifications are not only applicable to the whole media resource) also belong to this category.

4.4.7 Evaluation of Fitness

In order to get a view on which media formats belong to which fitness category, an overview is provided for the media formats currently described in State_of_the_Art/#Codecs and State_of_the_Art/#Containers. In the following table, the numbers 1, 2, and 3 correspond to the three categories described in 4.4.6 Conditions. The 'X' symbol indicates that the media format does not support a particular fragment axis.

4.4.8 Table

Media format       | Track     | Temporal | Spatial | Name | Remark
-------------------|-----------|----------|---------|------|------------------------------------------
H.261              | n/a       | 1        | 3       | n/a  |
MPEG-1 Video       | n/a       | 1        | 3       | n/a  |
H.262/MPEG-2 Video | n/a       | 1        | 3       | n/a  |
H.263              | n/a       | 1        | 3       | n/a  |
MPEG-4 Visual      | n/a       | 1        | 3       | n/a  |
H.264/MPEG-4 AVC   | n/a       | 1        | 2       | n/a  | Spatial fragment extraction is possible with Flexible Macroblock Ordering (FMO)
AVS                | n/a       | 1        | 3       | n/a  |
Dirac              | n/a       | 1        | 3       | n/a  |
Motion JPEG        | n/a       | 1        | 3       | n/a  |
Motion JPEG2000    | n/a       | 1        | 3       | n/a  | Spatial fragment extraction is possible in the compressed domain, but syntax element modifications are needed for every frame.
VC-1               | n/a       | 1        | 3       | n/a  |
Theora             | n/a       | 1        | 3       | n/a  |
RealVideo          | n/a       | 1(?)     | 3(?)    | n/a  |
DV                 | n/a       | 1        | 3       | n/a  |
Betacam            | n/a       | 1        | 3       | n/a  |
OMS                | n/a       | 1        | 3       | n/a  |
SNOW               | n/a       | 1        | 3       | n/a  |
MPEG-1 Audio       | n/a       | 1        | n/a     | n/a  |
AAC                | n/a       | 1        | n/a     | n/a  |
Ogg Vorbis         | n/a       | 1        | n/a     | n/a  |
FLAC               | n/a       | 1        | n/a     | n/a  |
Speex              | n/a       | 1        | n/a     | n/a  |
AC-3/Dolby Digital | n/a       | 1        | n/a     | n/a  |
TTA                | n/a       | 1        | n/a     | n/a  |
WMA                | n/a       | 1        | n/a     | n/a  |
MLP                | n/a       | 1        | n/a     | n/a  |
JPEG               | n/a       | n/a      | 3       | n/a  |
JPEG2000           | n/a       | n/a      | 2       | n/a  |
JPEG LS            | n/a       | n/a      | 3       | n/a  |
HD Photo           | n/a       | n/a      | 2       | n/a  |
GIF                | n/a       | n/a      | 3       | n/a  |
PNG                | n/a       | n/a      | 3       | n/a  |
MOV                | 2         | n/a      | n/a     | 2    | QTText provides named chapters
MP4                | 2         | n/a      | n/a     | 2    | MPEG-4 TimedText provides named sections
3GP                | 2         | n/a      | n/a     | 2    | 3GPP TimedText provides named sections
MPEG-21 FF         | 2         | n/a      | n/a     | 2    | MPEG-21 Digital Item Declaration provides named sections
OGG                | 2         | n/a      | n/a     | 2    | CMML provides named anchor points
Matroska           | 2         | n/a      | n/a     | 2    |
MXF                | 2         | n/a      | n/a     | 2    |
ASF                | 2         | n/a      | n/a     | 2    | Marker objects provide named anchor points
AVI                | 2         | n/a      | n/a     | X    |
FLV                | 2         | n/a      | n/a     | 2    | Cue points provide named anchor points
RMFF               | 1 or 2(?) | n/a      | n/a     | ?    |
WAV                | X         | n/a      | n/a     | X    |
AIFF               | X         | n/a      | n/a     | X    |
XMF                | ?         | n/a      | n/a     | ?    |
AU                 | X         | n/a      | n/a     | X    |
TIFF               | 2         | n/a      | n/a     | 2    | Can store multiple images (i.e., tracks) in one file, with the possibility to insert "private tags" (i.e., proprietary information)

We have to deal with the complexities of codecs and media resources. Not all media types are currently capable of doing what server-side media fragments would require. Those that are capable are of interest to us. For those that aren't, the fall-back case applies (i.e. full download and then offsetting).

5 Technologies Survey

TODO: @Erik + Davy

See: Technologies Survey

6 Naming Fragments

TODO: @Michael (interim of Jack)

See: Query vs Fragment, general syntax, formal grammar and semantics ?, extreme cases and interaction with other standards such as SVG, SMIL, etc.

7 Retrieving Fragments

TODO: @Yves + Conrad + Raphael

See: 2-way / 4-way handshake and cache/proxies, client-side requirements, etc.

A References

RFC 2119
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. IETF RFC 2119, March 1997. Available at http://www.ietf.org/rfc/rfc2119.txt.

B References (Non-Normative)

MPEG-7
Information Technology - Multimedia Content Description Interface (MPEG-7). Standard No. ISO/IEC 15938:2001, International Organization for Standardization (ISO), 2001.

C Acknowledgements (Non-Normative)

This document is the work of the W3C Media Fragments Working Group.

Members of the Working Group are (at the time of writing, and by alphabetical order): Eric Carlson (Apple, Inc.), Michael Hausenblas (DERI Galway at the National University of Ireland, Galway, Ireland), Jack Jansen (CWI), Yves Lafon (W3C/ERCIM), Erik Mannens (IBBT), Thierry Michel (W3C/ERCIM), Guillaume (Jean-Louis) Olivrin (Meraka Institute), Soohong Daniel Park (Samsung Electronics Co., Ltd.), Conrad Parker (W3C Invited Experts), Silvia Pfeiffer (W3C Invited Experts), David Singer (Apple, Inc.), Raphaël Troncy (CWI), Vassilis Tzouvaras (K-Space), Davy Van Deursen (IBBT)

The people who have contributed to discussions on public-media-fragment@w3.org are also gratefully acknowledged. In particular: Pierre-Antoine Champin, Ken Harrenstien, Henrik Nordstrom, Geoffrey Sneddon and Felix Sasaki.