--------------------------------------------------
Call for Participation
MediaEval 2013 Multimedia Benchmark Evaluation
http://www.multimediaeval.org
Regular registration deadline: 1 May 2013
--------------------------------------------------

MediaEval is a multimedia benchmark evaluation that offers tasks 
promoting research and innovation in areas related to human and social 
aspects of multimedia. MediaEval 2013 focuses on aspects of multimedia 
including and going beyond visual content, such as language, speech, 
music, and social factors. Participants carry out one or more of the 
tasks offered and submit runs to be evaluated. They then write up their 
results and present them at the MediaEval 2013 workshop.

For each task, participants receive a task definition, task data and 
accompanying resources (dependent on task) such as shot boundaries, 
keyframes, visual features, speech transcripts and social metadata. In 
order to encourage participants to develop techniques that push forward 
the state-of-the-art, a "required reading" list of papers will be 
provided for each task. Participation is open to all interested research 
groups. Please sign up via http://www.multimediaeval.org (Regular 
registration will remain open until 1 May.)

The following tasks are available to participants at MediaEval 2013:

*Social Event Detection for Social Multimedia*
This task requires participants to discover social events and organize 
the related media items into event-specific clusters, within a collection 
of Web multimedia. Social events are events that are planned by people, 
attended by people, and for which social multimedia are also captured 
by people.

*Search and Hyperlinking of Television Content*
This task requires participants to find video segments relevant to an 
information need and to provide a list of useful hyperlinks for each of 
these segments. This year we focus on television data provided by the 
BBC and real information needs from home users.

*Placing: Geo-coordinate Prediction for Social Multimedia*
This task requires participants to estimate the geographical coordinates 
(latitude and longitude) of media items (images and videos), as well as 
to predict how “placeable” a media item actually is. The Placing Task 
integrates all aspects of multimedia: textual meta-data, audio, image, 
video, location, time, users and context.

*Violent Scenes Detection in Film (Affect Task)*
This task requires participants to automatically detect portions of 
movies depicting violence. Participants are encouraged to deploy 
multimodal approaches (audio, visual, text) to solve the task.

*Visual Privacy: Preserving Privacy in Surveillance Videos*
For this task, participants will need to propose methods for obscuring 
identifying features of people in videos, rendering individuals 
unrecognizable in a manner that human viewers of the footage perceive 
as appropriate.

*Spoken Web Search: Spoken Term Detection for Low Resource Languages*
The task involves searching FOR audio content WITHIN audio content USING 
an audio content query. This task is particularly interesting for speech 
researchers in the area of spoken term detection or low-resource speech 
processing.

*(New!) Question Answering for the Spoken Web*
The problem that we wish to explore in this task is how best to build an 
information retrieval system in which both the queries and the content 
are spoken. The task challenges the research community’s ability to 
build ranked retrieval systems for matching spoken questions with spoken 
answers on the basis of topical relevance.

*(New!) Soundtrack Selection for Commercials (MusiClef Task)*
Given a TV commercial, participants are required to predict the most 
suitable music soundtrack from a list of candidate songs. A multimodal 
dataset will be provided involving both context- and content-based 
information, such as audio features, visual features, web pages, social 
tags and microblog information, related to brands, products, artists and 
songs.

*(New!) Similar Segments of Social Speech*
This task involves searching in social multimedia, specifically 
conversations between students in one academic department. This task is 
the first exploration of social search in multimedia, and the first 
social spoken dialog retrieval task that does not assume term-based search.

*(New!) Retrieving Diverse Social Images*
This task requires participants to automatically refine a ranked list of 
Internet photos using provided visual and textual information. The 
objective is to select a compact subset of photos that are both 
representative matches for and diverse representations of the query.

*(New!) Emotion in Music (also an Affect Task)*
This task concerns the emotional characterization of music. Given a 
set of songs, participants are asked to automatically generate emotional 
representations.

*(New!) Crowdsourcing for Social Multimedia*
This task requires participants to create ground truth from raw labels 
that have been generated by workers on a commercial crowdsourcing platform.

Tasks marked "New!" are the 2013 Brave New Tasks. If you sign up for 
these tasks, please be aware that you will be asked to keep in close 
touch with the task organizers concerning the details of the task over 
the course of the benchmarking cycle. We ask for especially close 
communication in order to ensure that these tasks have the flexibility 
they need to reach their goals.

MediaEval 2013 Timeline
(dates vary slightly from task to task; see the individual task pages 
for exact deadlines: http://www.multimediaeval.org/mediaeval2013)

March-May: Registration and return of usage agreements.
May-June: Release of development/training data.
June-July: Release of test data.
Mid-Sept.: Participants submit their completed runs.
Mid-Sept.-End-Sept.: Evaluation of submitted runs. Participants write 
their 2-page working notes papers.
18-19 October: MediaEval 2013 Workshop, Barcelona, Spain.
Please note: The workshop is timed so that it is possible to attend both 
ACM Multimedia 2013 (http://acmmm13.org/) and the MediaEval 2013 
workshop in the same trip.

Contact
For questions or additional information, please contact Martha Larson 
(m.a.larson@tudelft.nl) or visit http://www.multimediaeval.org

MediaEval 2013 Organization Committee:

Martha Larson at Delft University of Technology and Gareth Jones at 
Dublin City University act as the overall coordinators of MediaEval. 
Individual tasks are coordinated by a group of task organizers, who form 
the MediaEval Organizing Committee. It is the collective effort of this 
group of people that makes MediaEval possible. The complete list of 
MediaEval organizers is at:

http://www.multimediaeval.org/who/

A large number of projects contribute to the organization of MediaEval, 
including (alphabetically): AXES 
(http://www.axes-project.eu), Chorus+ (http://www.ist-chorus.org), 
CUbRIK (http://www.cubrikproject.eu/), CNGL (http://www.cngl.ie), Glocal 
(http://www.glocal-project.eu), LinkedTV (http://www.linkedtv.eu), Media 
Mixer (http://mediamixer.eu), Mucke 
(http://www.chistera.eu/projects/mucke), Promise 
(http://www.promise-noe.eu), Quaero (http://www.quaero.org), Sealinc 
Media (http://www.commit-nl.nl), SocialSensor 
(http://www.socialsensor.org), and VideoSense (http://www.videosense.eu).

-- 
Prof. Dr. Philipp Cimiano
Semantic Computing Group
Excellence Cluster - Cognitive Interaction Technology (CITEC)
University of Bielefeld

Phone: +49 521 106 12249
Fax: +49 521 106 12412
Mail: cimiano@cit-ec.uni-bielefeld.de

Room H-127
Morgenbreede 39
33615 Bielefeld
