[Icmi2014:24] 4th Call for papers for Workshops at ICMI 2014

== Please accept our apologies if you receive multiple copies of this message ==


The ICMI workshop program aims to provide researchers with a more informal, discussion-oriented forum in which to explore emerging topics
in multimodal interaction or to revisit established research areas from a new angle. This year we received a record number of
high-quality workshop submissions and selected six workshops to be held on the last day of the conference. In addition, the
results of the EmotiW Challenge will be presented on the first day of the conference in an extended workshop-style session. This
will allow participants to hold in-depth discussions on selected topics.

The Second Emotion Recognition In The Wild Workshop 2014
Organizers: Abhinav Dhall, Roland Goecke, Jyoti Joshi, Karan Sikka and Tom Gedeon 

Traditionally, emotion recognition has been performed on laboratory controlled data. While undoubtedly worthwhile at the time, such
lab controlled data poorly represents the environment and conditions faced in real-world situations. With the increase in the number
of video clips online, it is worthwhile to explore and discuss the methods and challenges in emotion recognition “in the wild”. The
goal of this workshop is to invite researchers to submit their original, unpublished work on the theme of emotion
recognition in the wild. Scope of the workshop: Multimodal emotion recognition in the wild; Analysis papers based on EmotiW challenge
data; Vision based temporal emotion analysis in the wild; Vision based static facial expression analysis in the wild; Audio based
emotion recognition; New emotional data corpus representing real-world conditions; Facial feature tracking in the wild; Emotion
recognition applications.

Workshop papers submission:  August 1st, 2014 

Multimodal, Multi-Party, Real-World Human-Robot Interaction
Organizers: Mary Ellen Foster, Manuel Giuliani, and Ron Petrick 

The development of robots capable of interacting with humans has made tremendous progress in the last decade, leading to an
expectation that in the near future, robots will be increasingly deployed in public spaces, for example as receptionists, shop
assistants, waiters, or bartenders. In these scenarios, robots must handle situations requiring human-robot
interactions that are short and dynamic, and in which the robot must deal with multiple people at once. To support this
form of interaction, robots typically require specific skills, including robust video and audio processing, fast reasoning and
decision-making mechanisms, and natural and safe output path planning algorithms. This physically embodied, dynamic, real-world
context is the most challenging possible domain for multimodal interaction: for example, the state of the physical environment may
change at any time; the input sensors must deal with noisy and uncertain input; while the robot platform must combine interactive
social behaviour with physical task-based action such as moving and grasping. This workshop aims to bring together researchers from
a range of relevant disciplines in order to explore the challenges and solutions for multimodal interaction in this area from
different perspectives.

Workshop papers submission:  July 22nd, 2014 (extended) 

Understanding and modeling multiparty, multimodal interactions
Organizers: Samer Al Moubayed, Dan Bohus, Anna Esposito, Dirk Heylen, Maria Koutsombogera, Harris Papageorgiou, Gabriel Skantze 

Research on the analysis and understanding of human-human conversations has stressed the importance of modeling all available verbal
and non-verbal signals occurring in conversations, with the goal of developing human-machine interfaces that can interpret the
multimodal signals of human conversation and generate natural, synchronized responses. While research has focused primarily on
dyadic interactions, multiparty interactions, i.e. communicative setups involving more than two participants, are a complex yet
challenging setting that merits attention. Understanding and modelling the multiparty configuration and the underlying affective
and social behavior of the participants informs the design of interfaces that can (a) follow and participate in the conversation,
(b) exercise interactional skills to control the interaction flow, (c) respond with appropriate timing and as naturally as
possible, (d) keep track of the multimodal conversation of the participants, and (e) guarantee a high and balanced level of
involvement between them. This workshop aims to explore this growing area of multiparty multimodal interaction by bridging this
multidisciplinary area and bringing together researchers from domains of dialog systems, multimodal conversation analysis,
multimodal user interfaces and multimodal signal processing.

Workshop papers submission:  July 15th, 2014 

Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges
Organizers: Dirk Heylen, Alessandro Vinciarelli 

The workshop "Roadmapping the Future of Multimodal Interaction Research, and Business Opportunities and Challenges" invites papers
from researchers, industry, policy makers and other visionaries to identify the state-of-the-art and the future of
research on multimodal interaction and related fields such as affective computing and social signal processing. Besides
attention to research questions and challenges (including reflection on the shortcomings of current methods and proposals for
innovation on this topic), the workshop focuses on societal challenges and business opportunities: what can be the impact of the
research, and what needs to be done to achieve it? The workshop is also a place to discuss current best practices.

Workshop papers submission:  July 31st, 2014 (extended) 

2nd International Workshop on “Emotion representations and modelling in Human-Computer Interaction systems” (ERM4HCI 2014)
Organizers: Kim Hartmann, Björn Schuller, Ronald Böck, Klaus R. Scherer

In the development of user-adaptable Human-Computer Interaction (HCI), the role of emotions occurring during interaction has gained
attention in recent years. Emotions, widely accepted as essential to human-human interaction, have become increasingly interesting
to designers of affective interfaces seeking to provide natural, user-centred interaction. However, to adequately incorporate
emotions in modern HCI systems, results from varying research disciplines must be combined. The 2nd ERM4HCI concentrates on emotion
representations, the characteristics used to describe and identify emotions and their relation to personality and user state models
(such as age, gender, physical/cognitive load, etc.). Researchers are encouraged to discuss possible interdependencies of
characteristics at the intra- and inter-modality level. Interdependencies of characteristics may occur if two characteristics are
influenced by the same physiological change in the observed user, but other factors (technical, constructive, etc.) can cause
interdependencies as well. The workshop aims at identifying a minimal set of characteristics to represent and recognise emotions in
multi-modal affective HCI. The workshop addresses some of the typical issues arising in multi-modal data processing for affective
systems, such as timing aspects, confidence metrics, discretisation issues and issues related to the translation between different
emotion models.

Workshop papers submission:  July 15th, 2014 

3rd workshop on Smart Material Interfaces: materials for smarter interfaces
Organizers: Andrea Minuto, Anton Nijholt, Fabio Pittarello, Anne Roudaut, Kasper Anders Søren Hornbæk, Takuya Nojima

Imagine a world where objects physically react to emotions, presence or user actions, and the environment shapes itself according to
our needs. Today we are seeing the beginning of such an interactive sphere (organic interfaces, reality-based interactions, etc.),
interfacing humans with smart environments. With this workshop we want to draw attention to the emerging field of smart material
interfaces, which spans the areas of design, engineering and architecture. These novel composites, in some cases already
celebrated as the answer to the 21st century's technological needs, are generally described as materials capable of
sensing the environment and actively responding to it by changing their physical properties, such as shape, size and color. We aim at
stimulating research and development in interfaces that make novel use of smart materials, and will provide a platform for
state-of-the-art design of smart material interfaces. We invite original contributions in a variety of areas related to interaction
design and development of interfaces that make use of smart materials. The main topic of interest is the application of smart
materials in designing and building interfaces that communicate information to the user - or allow the user to manipulate
information - using different modalities provided by the material's properties. In order to establish a rich live demo session
throughout the conference, we particularly encourage the submission of research that includes physical live demonstrators and
experimental prototypes.

Workshop papers submission:  July 15th, 2014 

7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze and Multimodality
Organizers: Hung-Hsuan Huang, Roman Bednarik, Kristiina Jokinen, Yukiko Nakano 

This is the seventh meeting in the series of workshops on Eye Gaze in Intelligent Human Machine Interaction. Previous workshops
have discussed a wide range of eye-gaze issues: technologies for sensing human attentional behaviors; the roles of attentional
behaviors as social gaze in human-human and human-humanoid interaction; attentional behaviors in problem-solving and
task-performing; gaze-based intelligent user interfaces; evaluation of gaze-based UIs; and eye gaze in multimodal interpretation
and generation. In addition to these topics, this year's workshop especially welcomes contributions on real-world applications,
the integration of knowledge from the humanities, and technologies on mobile platforms, where remarkable progress has been
achieved in recent years. The workshop aims to continue exploring this growing area of research by bringing together researchers
from fields including human sensing, multimodal processing, humanoid interfaces, intelligent user interfaces, and
communication science.

Workshop papers submission:  July 22nd, 2014 (extended) 

ICMI 2014 Web Site http://icmi.acm.org/2014/

Received on Tuesday, 15 July 2014 21:16:31 UTC