Re: MF implementation with python-gst

Latest release is here: http://www.pitivi.org/wiki/0.13.1

On Wed, Aug 12, 2009 at 7:57 AM, Silvia
Pfeiffer<silviapfeiffer1@gmail.com> wrote:
> Hi Guillaume,
>
> You should really talk to Edward Hervey, the main developer of PiTiVi. Why?
>
> GStreamer is an excellent media framework to decode, encode,
> manipulate, and play back video. However, it doesn't easily support the
> kind of video (and audio) manipulation that media fragments require
> directly in the compressed domain.
>
> Edward has, however, implemented a python library that extends
> GStreamer to do these kinds of things. It means that you can do, for
> example, time offsets in Ogg without needing to decode it.
>
> I can introduce you to Ed - but you might first want to read a bit of
> the available documentation. It may be a bit outdated, though, I fear. I
> spoke with Ed at OVC and he attended FOMS in January where he
> explained what he has done / is still planning to do with PiTiVi.
>
> Cheers,
> Silvia.
>
> On Wed, Aug 12, 2009 at 1:15 AM, Guillaume
> Olivrin<golivrin@meraka.org.za> wrote:
>> Dear Fragmenters,
>>
>> I have looked into the python-gst library to see whether it would be
>> feasible to easily create Media Fragments with GStreamer.
>>
>> I considered 3 levels of implementation:
>>
>> 1. High level - using direct GST pipeline elements
>> 2. Middle level - using GST programmatically (python-gst)
>> 3. Low level - implementing new plugins for GST
>>
>> The principal features that make GStreamer attractive are:
>> * Handles a great many media formats
>> * Can handle URIs
>> * Possibility of integration with Jack's MF syntax parser implemented
>> with python-url
>> * Programmable interface with Python
>> * Most importantly, modularity: a healthy separation between
>> media/protocol access, demuxing, decoding, etc.
>>
>> --
>> 1. Using available GST elements
>>
>> There are existing GST plugins to Crop a video or to Seek to a
>> specific Start and End position in an audio or video stream:
>>
>> * videocrop: the aspectratiocrop and videocrop (Crop) elements
>> * debug: the navseek element (seek based on left/right arrow keys)
>>
>> The problem is that, as far as I know, these two plugins are only usable
>> behind a decoder, i.e. using raw YUV or RGB video and PCM audio.
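>>
>> For reference, this is roughly what the decode-based route looks like.
>> This is only a rough sketch with python-gst 0.10; the file path, crop
>> values and sink elements are illustrative, not a definitive pipeline:
>>
>>        import pygst
>>        pygst.require("0.10")
>>        import gobject
>>        import gst
>>
>>        # videocrop only accepts raw video, so the Ogg/Theora stream has
>>        # to be fully decoded before it can be cropped.
>>        pipeline = gst.parse_launch(
>>            "filesrc location=/tmp/example.ogv ! oggdemux ! theoradec "
>>            "! videocrop left=16 right=16 top=0 bottom=0 "
>>            "! ffmpegcolorspace ! autovideosink")
>>        pipeline.set_state(gst.STATE_PLAYING)
>>        gobject.MainLoop().run()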
>>
>> We want to be able to do these operations directly on the media stream
>> without decoding and re-encoding it. To do that, we need to place
>> ourselves directly behind demuxer elements. Demuxers know about
>> specific audio or video container formats and can parse the structure
>> of the internal compressed media stream, providing information about
>> TIME-BYTE offsets. There are two other things we can do: send events to
>> the pipeline programmatically (2) or create new GST plugins that fit
>> behind demuxers (3).
>>
>>
>> --
>> 2. Programmatically with Python
>>
>> Media Fragments along the Time Axis.
>>
>> Depending on the plugin involved in the GST pipeline, it is possible to
>> perform SEEK operations on the stream using the following unit formats:
>>
>>        'undefined' / 'GST_FORMAT_UNDEFINED'
>>        'default'   / 'GST_FORMAT_DEFAULT'
>>        'bytes'     / 'GST_FORMAT_BYTES'
>>        'time'      / 'GST_FORMAT_TIME'
>>        'buffers'   / 'GST_FORMAT_BUFFERS'
>>        'percent'   / 'GST_FORMAT_PERCENT'
>>
>> Also, there are different SeekTypes and SeekFlags to change the seeking
>> technique, mode and accuracy. More info at
>> http://gtk2-perl.sourceforge.net/doc/pod/GStreamer/Event/Seek.html
>> Seeking is performed through the following calls:
>>
>>        # Rate, Units, Flags, ClipBegin and ClipEnd are placeholders to
>>        # be filled in by the application.
>>        event = gst.event_new_seek(Rate, Units, Flags,
>>                                   gst.SEEK_TYPE_SET, ClipBegin,
>>                                   gst.SEEK_TYPE_SET, ClipEnd)
>>        res = self.player.send_event(event)
>>        self.player.set_state(gst.STATE_PLAYING)
>>
>> OR
>>        gst_element_seek (pipeline,
>>                          Rate,
>>                          GST_FORMAT_TIME,
>>                          Flags,
>>                          GST_SEEK_TYPE_SET, pos,
>>                          GST_SEEK_TYPE_SET, dur);
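>>
>> Put together, a minimal self-contained version of the Python variant
>> might look like the sketch below (assuming python-gst 0.10, the playbin
>> element and an illustrative file URI; the 10s-20s range is arbitrary):
>>
>>        import pygst
>>        pygst.require("0.10")
>>        import gst
>>
>>        # playbin builds the demuxer/decoder chain internally.
>>        player = gst.element_factory_make("playbin", "player")
>>        player.set_property("uri", "file:///tmp/example.ogv")
>>        player.set_state(gst.STATE_PAUSED)
>>        player.get_state()  # wait for preroll so the seek can be handled
>>
>>        # Seek to the 10s-20s fragment, flushing and snapping to keyframes.
>>        event = gst.event_new_seek(1.0, gst.FORMAT_TIME,
>>                                   gst.SEEK_FLAG_FLUSH | gst.SEEK_FLAG_KEY_UNIT,
>>                                   gst.SEEK_TYPE_SET, 10 * gst.SECOND,
>>                                   gst.SEEK_TYPE_SET, 20 * gst.SECOND)
>>        res = player.send_event(event)
>>        player.set_state(gst.STATE_PLAYING)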
>>
>>
>> Both commands will send the SEEK event to the whole pipeline and some
>> GST elements will be able to handle it. But we might want to be more
>> precise and know exactly which elements can handle seeks and what
>> their capabilities are.
>>
>> For example, can SEEK events be used at the level of DEMUXERs?
>> source | DEMUXER | sink
>>            ^
>>           SEEK event
>>
>> E.g. consider the following GST chain for Ogg:
>>
>> filesrc | oggdemux |
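>>
>> As a concrete probe of that question, here is a sketch (assuming
>> python-gst 0.10, an illustrative Ogg file path, and fakesink just to
>> terminate the chain) that builds this chain and sends the seek event to
>> oggdemux itself:
>>
>>        import pygst
>>        pygst.require("0.10")
>>        import gst
>>
>>        pipeline = gst.Pipeline()
>>        src = gst.element_factory_make("filesrc")
>>        src.set_property("location", "/tmp/example.ogv")
>>        demux = gst.element_factory_make("oggdemux")
>>        sink = gst.element_factory_make("fakesink")
>>        pipeline.add(src, demux, sink)
>>        src.link(demux)
>>
>>        # oggdemux creates its source pads dynamically, one per stream;
>>        # only the first pad gets linked in this simple sketch.
>>        def on_pad_added(demuxer, pad):
>>            sinkpad = sink.get_pad("sink")
>>            if not sinkpad.is_linked():
>>                pad.link(sinkpad)
>>        demux.connect("pad-added", on_pad_added)
>>
>>        pipeline.set_state(gst.STATE_PAUSED)
>>        pipeline.get_state()  # wait for preroll and pad creation
>>
>>        # Send the time seek to the demuxer element rather than the
>>        # whole pipeline, to see whether it handles it.
>>        event = gst.event_new_seek(1.0, gst.FORMAT_TIME,
>>                                   gst.SEEK_FLAG_FLUSH,
>>                                   gst.SEEK_TYPE_SET, 10 * gst.SECOND,
>>                                   gst.SEEK_TYPE_SET, 20 * gst.SECOND)
>>        print "oggdemux handled seek:", demux.send_event(event)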
>>
>>
>> The questions that must be further investigated are:
>>
>> * Which GST elements can handle seek events?
>> * What unit formats (time in ns (nanoseconds), frames, bytes, percent,
>> buffers) are supported by each GST element?
>> * Can all encapsulation specific demuxers handle time and bytes?
>> * Can SEEK events be translated higher up the chain into BYTES on the
>> filesrc SOURCE? Then we could still decode the media to find the actual
>> part of the stream required, make sure a filesrc or uridecodebin in
>> random-access mode can point to the fragment of the media we need, and
>> SINK that MF into a filesink (see the sketch after this list).
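>>
>> One way to probe that last question (a sketch assuming python-gst 0.10;
>> the file locations and byte offsets are placeholders for values a
>> demuxer would have to supply) is to try a BYTES seek on a plain
>> filesrc-to-filesink copy chain, with no decoder involved:
>>
>>        import pygst
>>        pygst.require("0.10")
>>        import gst
>>
>>        # filesrc ! filesink copies raw bytes; a BYTES seek with a start
>>        # and stop offset selects which region gets copied.
>>        pipeline = gst.parse_launch(
>>            "filesrc location=/tmp/example.ogv "
>>            "! filesink location=/tmp/fragment.bin")
>>        pipeline.set_state(gst.STATE_PAUSED)
>>        pipeline.get_state()
>>
>>        event = gst.event_new_seek(1.0, gst.FORMAT_BYTES,
>>                                   gst.SEEK_FLAG_FLUSH,
>>                                   gst.SEEK_TYPE_SET, 4096,      # start byte
>>                                   gst.SEEK_TYPE_SET, 1048576)   # stop byte
>>        print "BYTES seek handled:", pipeline.send_event(event)
>>        pipeline.set_state(gst.STATE_PLAYING)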
>>
>>
>> So far I haven't been successful in implementing GST SEEK events on a
>> variety of media types, neither directly in C nor in Python, with
>> gst.event_new_seek(..) or gst_element_seek(..).
>>
>> --
>> 3. Writing and Compiling new GST plugins
>>
>> For Video Cropping: filters at the BYTE/STREAM level behind demuxers?
>>
>> It is likely that, to perform crop operations on a video stream without
>> decoding it, we will need specific plugins to put behind the demuxers
>> for each type of video stream. This certainly represents quite a bit of
>> work.
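>>
>> Although such plugins would presumably be written and compiled in C,
>> the general shape of one (a pass-through filter sitting behind a
>> demuxer, working on compressed buffers) can be sketched with python-gst
>> 0.10; the class name and the filtering logic here are hypothetical:
>>
>>        import pygst
>>        pygst.require("0.10")
>>        import gobject
>>        import gst
>>
>>        class FragmentFilter(gst.Element):
>>            """Hypothetical pass-through skeleton for a compressed-domain
>>            fragment/crop filter placed behind a demuxer."""
>>            _sinktemplate = gst.PadTemplate("sink", gst.PAD_SINK,
>>                                            gst.PAD_ALWAYS,
>>                                            gst.caps_new_any())
>>            _srctemplate = gst.PadTemplate("src", gst.PAD_SRC,
>>                                           gst.PAD_ALWAYS,
>>                                           gst.caps_new_any())
>>
>>            def __init__(self):
>>                gst.Element.__init__(self)
>>                self.sinkpad = gst.Pad(self._sinktemplate, "sink")
>>                self.srcpad = gst.Pad(self._srctemplate, "src")
>>                self.sinkpad.set_chain_function(self._chain)
>>                self.add_pad(self.sinkpad)
>>                self.add_pad(self.srcpad)
>>
>>            def _chain(self, pad, buf):
>>                # A real filter would inspect the compressed buffer here
>>                # and drop or rewrite it according to the fragment spec.
>>                return self.srcpad.push(buf)
>>
>>        gobject.type_register(FragmentFilter)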
>>
>> A possibility to investigate: could there again be a pipeline PULL
>> action that requests only those bits required for the cropped video, to
>> be pulled and sunk back into a file / pipe?
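>>
>> On that pull idea, one rough sketch (assuming python-gst 0.10 and the
>> appsink element; the file paths are illustrative, and the output is
>> just a dump of demuxed packets, not a valid Ogg file) would be to pull
>> compressed buffers out of the demuxer ourselves:
>>
>>        import pygst
>>        pygst.require("0.10")
>>        import gst
>>
>>        # filesrc ! oggdemux ! appsink keeps the stream compressed and
>>        # hands each demuxed buffer to application code, which can keep
>>        # only the bits belonging to the requested fragment.
>>        pipeline = gst.parse_launch(
>>            "filesrc location=/tmp/example.ogv ! oggdemux "
>>            "! appsink name=sink sync=false")
>>        sink = pipeline.get_by_name("sink")
>>        pipeline.set_state(gst.STATE_PLAYING)
>>
>>        out = open("/tmp/fragment.dump", "wb")
>>        while True:
>>            buf = sink.emit("pull-buffer")  # None at end of stream
>>            if buf is None:
>>                break
>>            # buf.timestamp / buf.offset could be checked here to keep
>>            # only buffers inside the wanted fragment.
>>            out.write(buf.data)
>>        out.close()
>>        pipeline.set_state(gst.STATE_NULL)
>>
>> This still lets the pipeline push buffers towards appsink; whether the
>> same selection can be driven as a true PULL from the source is exactly
>> the open question above.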
>>
>> Best Regards,
>> Guillaume
>>
>>
>>
>>
>
