
CFP: 1st International Workshop on AI for Smart TV Content Production, Access and Delivery (AI4TV 2019)

From: Raphaël Troncy <raphael.troncy@eurecom.fr>
Date: Sat, 27 Apr 2019 15:44:13 +0200
To: Semantic Web <semantic-web@w3.org>
Message-ID: <33cc6efd-1442-b510-0798-5e27fa9ad724@eurecom.fr>
Apologies for cross-posting

====================================================================

CFP: 1st International Workshop on AI for Smart TV Content Production, 
Access and Delivery (AI4TV 2019)
co-located with the 27th ACM International Conference on Multimedia
Nice, France, October 21-25, 2019

https://memad.eu/ai4tv2019/

Deadlines:
- Submission Due:            Monday 08 July 2019
- Acceptance Notification:   Monday 05 August 2019

Objective/goals of the workshop:
Technological developments in comprehensive video understanding – 
detecting and identifying the visual elements of a scene, combined with 
audio understanding (music, speech) and aligned with textual 
information such as captions and subtitles – have undergone a 
significant revolution in recent years. New scientific breakthroughs 
in video understanding through the application of AI techniques, along 
with the growing volume of multimedia content and increased 
computational power, have led to significant improvements in automated 
video description and have opened fresh avenues for the seamless 
combined analysis of multiple modalities.

The workshop aims to bring together experts from academia and industry 
in order to discuss the latest research progress in topics related to 
multimodal information analysis, and in particular, semantic analysis of 
video, audio, and textual information for smart digital TV content 
production, access and delivery. Such topics include, but are not 
limited to, the following multimedia analysis techniques for streamed TV 
and radio programmes as well as TV archives (recorded content):
  * Multimodal content analysis: scene segmentation, people and concept 
recognition, topic identification using video, audio and/or (textual) 
metadata
  * Embeddings for multimedia knowledge graphs
  * Use or adaptation of multimedia description models or vocabularies 
for machine learning / neural networks
  * Combination of AI and external knowledge (graphs) for improved 
multimedia analysis
  * Automatic multimedia summarization and remixing
  * Automatic deep captioning
  * Interactive multimodal search and browsing in archives
  * Hyperlinking and enrichment of TV content
  * Breaking the language barrier of TV content using multimodal translation
  * Comparative evaluations of AI techniques for multimodal analysis tasks
  * Creation of multimedia benchmarks for AI evaluations
  * Gender studies on TV and radio programmes

The main goal of the workshop is to promote AI techniques for multimedia 
analysis to enable smarter content production, access and delivery with 
the emphasis on large TV and radio programme archives. We thus welcome 
submissions from both industry and academia, including interdisciplinary 
work and contributions from other relevant mainstream areas.

Submission:
Submissions are made on the AI4TV2019 EasyChair page, 
https://easychair.org/conferences?conf=ai4tv

Paper format:
Submitted papers (.pdf format) must use the ACM Article Template. Please 
remember to add Concepts and Keywords: 
https://www.acm.org/publications/proceedings-template

Length:
The AI4TV workshop will welcome two kinds of submissions:
  * Research papers which can be 6 to 8 pages. Up to two additional 
pages may be added for references. The reference pages must only contain 
references. Optionally, you may upload supplementary material that 
complements your submission (100Mb limit).
  * Demo papers which can be 2 pages.

Blinding:
Paper submissions must conform with the “double-blind” review policy. 
This means that the authors should not know the names of the reviewers 
of their papers, and reviewers should not know the names of the authors. 
Please prepare your paper in a way that preserves anonymity of the authors.
  * Do not put the authors’ names under the title.
  * Avoid using phrases such as “our previous work” when referring to 
earlier publications by the authors.
  * Remove information that may identify the authors in the 
acknowledgments (e.g., co-workers and grant IDs).
  * Check supplemental material (e.g., titles in video clips or 
supplementary documents) for information that may reveal the authors’ 
identities.
  * Avoid providing links to websites that identify the authors.
Papers without appropriate blinding will be rejected without review.

Originality:
Papers submitted to ACM Multimedia must be the original work of the 
authors. They may not be simultaneously under review elsewhere. 
Publications that have been peer-reviewed and have appeared at other 
conferences or workshops may not be submitted to ACM Multimedia (see 
also the arXiv/archive policy below). Authors should be aware that ACM 
has a strict policy with regard to plagiarism and self-plagiarism 
(https://www.acm.org/publications/policies/plagiarism). The authors’ 
prior work must be cited appropriately.

Author list:
Please ensure that you submit your papers with the full and final list 
of authors in the correct order. The author list registered for each 
submission is not allowed to change in any way after the paper 
submission deadline. (Note that this rule concerns the identity of the 
authors; typos in names may still be corrected.)

Proofreading:
Please proofread your submission carefully. It is essential that the 
language used in the paper is clear and correct so that it is easily 
understandable. (Either US English or UK English spelling conventions 
are acceptable.)

ArXiv/archive policy:
In accordance with ACM guidelines, all SIGMM-sponsored conferences 
adhere to the following policy regarding arXiv papers:

We define a publication as a written piece documenting scientific work 
that was submitted for review by peers for either acceptance or 
rejection, and, after review, has been accepted. Documentation of 
scientific work that is published in a not-for-profit archive without 
any form of peer review (departmental Technical Report, arXiv.org, 
etc.) is not considered a publication. However, this definition of 
publication does include peer-reviewed workshop papers, even if they do 
not appear in formal proceedings. Any submission to ACM Multimedia must 
not have substantial overlap with prior publications or other work 
currently undergoing peer review.

Note that documents published on website archives are subject to change. 
Citing such documents is discouraged. Furthermore, ACM Multimedia will 
review the documents formally submitted and any additional information 
in a web archive version will not affect the review.

Programme Committee (to be confirmed):
* Lora Aroyo, Google, USA
* Olivier Aubert, University of Nantes, France
* Werner Bailer, Joanneum Research, Austria
* Louay Bassbouss, Fraunhofer FOKUS, Germany
* Marco Bertini, Università degli Studi di Firenze, Italy
* Jean Carrive, INA, France
* Mikko Kurimo, Aalto University, Finland
* Tiina Lindh-Knuutila, Lingsoft, Finland
* Erik Mannens, University of Ghent, Belgium
* Johan Oomen, Netherlands Institute for Sound and Vision, The Netherlands
* Symeon Papadopoulos, CERTH, Greece
* Basil Philipp, Genistat, Switzerland
* Harald Sack, KIT, Germany
* Thomas Steiner, Google, Germany
* Jörg Tiedemann, University of Helsinki, Finland
* Dieter Van Rijsselbergen, Limecraft, Belgium

Organizers:
* Raphaël Troncy, EURECOM, France
* Jorma Laaksonen, Aalto University, Finland
* Hamed R. Tavakoli, Aalto University, Finland
* Lyndon Nixon, MODUL Technology GmbH, Austria
* Vasileios Mezaris, CERTH-ITI, Greece

-- 
Raphaël Troncy
EURECOM, Campus SophiaTech
Data Science Department
450 route des Chappes, 06410 Biot, France.
e-mail: raphael.troncy@eurecom.fr & raphael.troncy@gmail.com
Tel: +33 (0)4 - 9300 8242
Fax: +33 (0)4 - 9000 8200
Web: http://www.eurecom.fr/~troncy/
Received on Saturday, 27 April 2019 13:44:39 UTC
