- From: Philipp Hoschka <hoschka@w3.org>
- Date: Fri, 07 Nov 1997 12:07:33 +0100
- To: www-multimedia@w3.org
The first draft of a language for describing synchronized multimedia
presentations is available at
http://www.w3.org/TR/WD-smil
This draft was produced by the W3C working group on Synchronized Multimedia.
Comments and feedback from people on this list are *very* welcome. Please send them
to www-multimedia@w3.org.
From the introduction:
"SMIL allows integrating a set of independent multimedia objects into a synchronized
multimedia presentation. Using SMIL, presentations such as a slide show synchronized
with audio comments or a video synchronized with a text stream can be described.
A typical SMIL presentation has the following characteristics:
- The presentation is composed of several components that are accessible via a URL,
e.g. files stored on an http or rtsp server.
- The components have different media types, such as audio, video, image or text.
- The begin and end times of different components have to be synchronized with
events in other components. For example, in a slide show, a particular slide
is displayed when the narrator in the audio starts talking about it.
- The user can control the presentation by using control buttons known from
video-recorders, such as stop, fast-forward and rewind. Additional functions are
"random access", i.e. the presentation can be started anywhere, and "slow motion",
i.e. the presentation is played slower than at its original speed.
- The user can follow hyper-links embedded in the presentation.
SMIL has been designed so that it is easy to author simple presentations with a text
editor. The key to success for HTML was that attractive hypertext content could be
created without requiring a sophisticated authoring tool. SMIL achieves the same for
synchronized hypermedia."
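To give a flavour of what authoring with a text editor looks like, here is a minimal
sketch (not taken from the draft itself) of a presentation that plays an audio
narration in parallel with a sequence of two slides. The element and attribute names
used here (<smil>, <body>, <par>, <seq>, <audio>, <img>, src, dur) reflect the current
draft and may still change, so please treat the draft at the URL above as the
authoritative reference for the syntax.

  <smil>
    <body>
      <!-- par: its children are played at the same time -->
      <par>
        <audio src="narration.au"/>
        <!-- seq: its children are played one after another -->
        <seq>
          <img src="slide1.gif" dur="30s"/>
          <img src="slide2.gif" dur="30s"/>
        </seq>
      </par>
    </body>
  </smil>

A file like this can be written with any text editor, which is exactly the point made
in the last paragraph of the introduction quoted above.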
Received on Friday, 7 November 1997 06:07:51 UTC