- From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
- Date: Fri, 7 Nov 2008 21:58:49 +1100
- To: olivier.aubert@liris.cnrs.fr
- Cc: public-media-fragment@w3.org
On Fri, Nov 7, 2008 at 8:53 PM, Olivier Aubert <olivier.aubert@liris.cnrs.fr> wrote:
>
> Hello all
>
>> I think we are theorizing a lot and are not actually looking at
>> concrete codecs. We should start getting our hands dirty. ;-) By which
>> I mean: start classifying the different codecs according to the
>> criteria that you have listed above and find out for which we are
>> actually able to do fragments and what types of fragments.
> It is not only a matter of codec, but also a problem of container
> format. Ogg was conceived to be streamed, and each Ogg page contains the
> time offset of the contained data. MPEG TS/PS is also conceived to be
> streamed. But AVI causes more trouble, because its index (the definition
> of the location of data, i.e. basically the source of time to byte
> mapping) is located at the end of the file.
>
> IMO, the simplest approach wrt. caches is not to try to put too much
> intelligence in them, and consider that they simply store chunks of
> data. Players on the client side are perfectly able to do byte-based
> HTTP Range requests, just like they would do a lseek() when accessing a
> local movie file. The http access module of VLC optimizes this, for
> instance.
>
> Cheers,
> Olivier
>

I couldn't agree more - on all counts.

What I meant by codecs is both codecs and encapsulation formats (I think Davy understood that).

As for caches - dealing only in byte ranges requires a 4-way handshake (i.e. two roundtrips), which is not regarded as optimal (though I still believe it's the only realistic way forward). We are now discussing the possibility of introducing more intelligent proxies and a new time-range parameter to get that down to a 2-way handshake (i.e. one roundtrip).

Cheers,
Silvia.
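
A minimal sketch (in Python, with a hypothetical host, path and byte offsets) of the byte-range retrieval discussed above: the player is assumed to have already resolved a time offset to a byte range in an earlier step (e.g. by reading an Ogg page header or fetching the AVI index), which is the first of the two roundtrips Silvia refers to.

```python
# Sketch of a player-style byte-range request; host, path and offsets are made up.
import http.client

HOST = "media.example.org"           # hypothetical server
PATH = "/movies/demo.ogv"            # hypothetical resource
FIRST_BYTE, LAST_BYTE = 1_000_000, 1_999_999  # assumed result of a prior time-to-byte lookup

conn = http.client.HTTPConnection(HOST)
conn.request("GET", PATH, headers={"Range": f"bytes={FIRST_BYTE}-{LAST_BYTE}"})
resp = conn.getresponse()

# 206 Partial Content means the server honoured the Range header;
# a plain 200 would mean the whole resource was returned instead.
print(resp.status, resp.getheader("Content-Range"))
chunk = resp.read()
conn.close()
```

A time-range parameter understood by the server or an intelligent proxy, as proposed at the end of the message, would let the client skip the preliminary time-to-byte lookup and fetch the fragment in a single roundtrip.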
Received on Friday, 7 November 2008 10:59:30 UTC