LMCE 2015: Second International Workshop on Learning over Multiple Contexts @ ECML

[Apologies for cross-postings]


##################################################################
LMCE 2015: Second International Workshop on Learning over Multiple 
Contexts @ ECML 2015

The value of model reuse

A workshop held in conjunction with ECML PKDD 2015, Porto, Portugal, 
7-11 September 2015

http://www.dsic.upv.es/~flip/LMCE2015/
##################################################################


It is our pleasure to present this 2nd LMCE workshop, following the 
success of the first edition (http://users.dsic.upv.es/~flip/LMCE2014/), 
which attracted around 40 participants and 18 submissions, leading to 
13 regular papers and 3 work-in-progress contributions.

New this year:
- This second edition will focus on reusing models. Indeed, when a 
change of context is observed during deployment, it is often hard to 
train a new model from scratch, whereas models trained under different 
conditions may already exist.
- A challenge competition on real data (bike sharing) will be organised 
jointly with the workshop to further emphasise model reuse over 
multiple contexts. We expect this framework to foster understanding, 
comparisons and discussions (see http://reframe-d2k.org/Challenge for 
further details on this competition).


=== Call for Papers ===

Adaptive reuse of learnt knowledge is of critical importance in the 
majority of knowledge-intensive application areas, particularly when the 
context in which the learnt model operates can be expected to vary from 
training to deployment. In machine learning this has been studied, for 
example, in relation to variations in class and cost skew in (binary) 
classification, leading to the development of tools such as ROC analysis 
to adjust decision thresholds to operating conditions concerning class 
and cost skew. More recently, considerable effort has been devoted to 
research on transfer learning, domain adaptation, and related approaches.
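To make the threshold-adjustment idea concrete, here is a minimal 
sketch (not part of the original call; function names and numbers are 
ours). For a well-calibrated probability estimator, the 
expected-cost-minimising decision threshold under false-positive cost 
c_FP and false-negative cost c_FN is c_FP / (c_FP + c_FN), so the same 
trained scorer can be redeployed under a new cost skew by moving only 
the threshold:

```python
def skew_threshold(cost_fp, cost_fn):
    """Expected-cost-minimising threshold for a calibrated probability
    estimator: predict positive iff
    P(positive | x) >= cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

def decide(probs, cost_fp, cost_fn):
    """Reuse the same scores under new operating conditions by
    shifting only the decision threshold, not the model."""
    t = skew_threshold(cost_fp, cost_fn)
    return [int(p >= t) for p in probs]

probs = [0.15, 0.55, 0.85]          # calibrated positive-class scores
balanced = decide(probs, 1.0, 1.0)  # symmetric costs: threshold 0.5
cautious = decide(probs, 4.0, 1.0)  # false positives 4x costlier: threshold 0.8
```

With symmetric costs this reduces to the familiar 0.5 threshold; as 
false positives become costlier the threshold rises, all without 
retraining the underlying model.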

Given that the main business of predictive machine learning is to 
generalise from training to deployment, there is clearly scope for 
developing a general notion of operating context. Without such a notion, 
a model predicting sales in Prague for this week may perform poorly in 
Nancy for next Wednesday. The operating context has changed in terms of 
location as well as resolution. While a given predictive model may be 
sufficient and highly specialised for one particular operating context, 
it may not perform well in other contexts. If sufficient training data 
for the new context is available, it might be feasible to retrain a new 
model; however, this is generally not a good use of resources, and one 
would expect it to be more cost-effective to learn one general, 
versatile model that generalises effectively over multiple, possibly 
previously unseen contexts.
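As a hypothetical illustration of the cost argument above (the setting 
and names are ours, not the call's): rather than retraining from 
scratch, a source-context predictor can sometimes be reused by fitting 
only a cheap correction, e.g. a least-squares affine map from its 
outputs to a handful of observations from the new context:

```python
def fit_affine_correction(source_preds, target_obs):
    """Least-squares fit of y ~ a * f_source(x) + b, reusing the
    source model's predictions instead of retraining from scratch."""
    n = len(source_preds)
    mean_p = sum(source_preds) / n
    mean_y = sum(target_obs) / n
    cov = sum((p - mean_p) * (y - mean_y)
              for p, y in zip(source_preds, target_obs))
    var = sum((p - mean_p) ** 2 for p in source_preds)
    a = cov / var
    b = mean_y - a * mean_p
    return a, b

# Source model's predictions on a few new-context inputs, alongside
# the values actually observed in the new context (toy numbers):
a, b = fit_affine_correction([1.0, 2.0, 3.0], [3.0, 5.0, 7.0])
corrected = lambda p: a * p + b  # reused model, adapted to the new context
```

Two parameters are fitted instead of a whole model, so a few 
target-context observations suffice where full retraining would not.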

The aim of this workshop is to bring together people working in areas 
related to versatile models and model reuse over multiple contexts. 
Given the advances made in recent years on specific approaches such as 
transfer learning, an attempt to start developing an overarching theory 
is now feasible and timely, and can be expected to generate considerable 
interest from the machine learning community. Papers are solicited in 
all areas relating to model reuse and model generalisation, including 
the following:

* Context-aware applications and recommender systems.
* Cost-sensitive learning.
* Dataset shift, including concept drift and covariate shift.
* Domain adaptation.
* Employing background knowledge.
* Formats and tools for model exchange, transformation and reuse, such 
as PMML.
* Learning with different feature granularities or dimensions, 
quantification.
* Logical approaches to model reuse: abduction, theory revision, ILP.
* Meta-learning.
* Multi-task learning, co-learning.
* ROC analysis.
* Soft classifiers and probability estimators.
* Transductive learning.
* Transfer learning.
* Volatile information sources, adversarial learning.


=== Submission of Papers ===

We welcome submissions describing work in progress as well as more 
mature work related to learning over multiple contexts and model reuse. 
Submissions should be between 6 and 16 pages in the same format as the 
ECML-PKDD conference (LNAI). Authors of accepted papers will be asked to 
prepare a poster, and selected authors will be offered a plenary 
presentation slot during the workshop.

Submission website: https://www.easychair.org/conferences/?conf=lmce2015

Papers will be selected by the program committee according to the 
quality of the submission and its relevance to the workshop topics. All 
accepted papers will be published on the workshop web site. The 
publication of a selected set of papers in a special volume or journal 
issue is under consideration, depending on the success and overall 
results of the workshop.


=== Important Dates ===

Submission (workshop papers): June 8, 2015
Notification of acceptance: July 6, 2015
Camera Ready copies: August 3, 2015
Workshop/Challenge dates: September 11, 2015


=== Program Committee ===

Chowdhury Farhan Ahmed, University of Strasbourg, France
Wouter Duivesteijn, Technische Universität Dortmund, Germany
Cesar Ferri, Technical University of Valencia, Spain
Amaury Habrard, University of Saint-Etienne, France
José Hernández-Orallo, Technical University of Valencia, Spain
Francisco Herrera, University of Granada, Spain
Sinno Jialin Pan, Nanyang Technological University, Singapore
Antonio M. Lopez, Universitat Autònoma de Barcelona, Spain
Dragos Margineantu, Boeing Research and Technology, U.S.A.
Weike Pan, Shenzhen University, China
Huimin Zhao, University of Wisconsin-Milwaukee, U.S.A.


=== Organising Committee ===

Nicolas Lachiche, University of Strasbourg, France
Meelis Kull, University of Bristol, UK
Adolfo Martínez-Usó, Universitat Politècnica de València, Spain


For more information visit http://www.dsic.upv.es/~flip/LMCE2015/

Received on Wednesday, 8 April 2015 12:23:59 UTC