Deadline Approaching: Special Issue on Social Image and Video Content Analysis - EURASIP Journal on Image and Video Processing

[Apologies for cross-postings. Please forward this mail to anyone
interested.]
 
--------------------------------------------------------
CALL FOR PAPERS
EURASIP Journal on Image and Video Processing
Special Issue on Social Image and Video Content Analysis
http://www.hindawi.com/journals/ivp/si/sivca.html
--------------------------------------------------------
 
The performance of image and video analysis algorithms for content
understanding has improved considerably over the last decade, and practical
applications are already appearing in large-scale professional multimedia
databases. However, the emergence and growing popularity of social networks
and Web 2.0 applications, coupled with the ubiquity of affordable media
capture, have recently stimulated huge growth in the amount of personal
content available. This content poses very different challenges compared to
professionally authored content: it is unstructured (i.e., it need not
conform to a generally accepted high-level syntax), complementary sources of
information are typically available when it is captured or published, and it
features the "user-in-the-loop" at all stages of the content life cycle
(capture, editing, publishing, and sharing). To date, user-provided
metadata, tagging, ratings, and so on are typically used to index content in
such environments. Automated analysis has not yet been widely deployed, as
research is needed to adapt existing approaches to these new challenges.
 
Research directions such as multimodal fusion, collaborative computing, and
the use of location or acquisition metadata, personal and social context,
tags, and other contextual information are currently being explored in such
environments. As the Web has become a massive source of multimedia content,
the research community has responded by developing automated methods that
collect and organize ground-truth collections of content, vocabularies, and
so on; similar initiatives are now required for social content. The
challenge will be to demonstrate that such methods can provide a more
powerful experience for the user, generate awareness, and pave the way for
innovative future applications.
 
This issue calls for high-quality, original contributions focusing on image
and video analysis in large-scale, distributed, social networking, and web
environments. We particularly welcome papers that explore information
fusion, collaborative techniques, or context analysis.
 
TOPICS OF INTEREST
------------------
 
Topics of interest include, but are not limited to:
 
    * Image and video analysis using acquisition, location, and contextual
metadata
    * Using collection-level contextual cues to constrain segmentation and
classification
    * Fusion of textual, audio, and numeric data in visual content analysis
    * Knowledge-driven analysis and reasoning in social network environments
    * Classification, structuring, and abstraction of large-scale,
heterogeneous visual content
    * Multimodal person detection and behavior analysis for individuals and
groups
    * Collaborative visual content annotation and ground truth generation
using analysis tools
    * User profile modeling in social network environments and personalized
visual search
    * Visual content analysis employing social interaction and community
behavior models
    * Using folksonomies, tagging, and social navigation for visual analysis
 
SUBMISSION
----------
 
Authors should follow the EURASIP Journal on Image and Video Processing
manuscript format described at http://www.hindawi.com/journals/ivp/.
Prospective authors should submit an electronic copy of their complete
manuscript through the journal's Manuscript Tracking System at
http://mts.hindawi.com/, according to the following timetable:
 
IMPORTANT DATES
---------------
 
    * Manuscript Due:  June 1, 2008
    * First Round of Reviews: September 1, 2008
    * Publication Date:  December 1, 2008
 
GUEST EDITORS
-------------
 
    * Yannis Avrithis, National Technical University of Athens, Athens,
Greece
    * Yiannis Kompatsiaris, Informatics and Telematics Institute,
Thermi-Thessaloniki, Greece
    * Noel O'Connor, Centre for Digital Video Processing, Dublin City
University, Dublin, Ireland
    * Jan Nesvadba, Philips, Eindhoven, The Netherlands 
