<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
Apologies for multiple postings<br>
=====================================================================================================================<br>
<div dir="ltr">
<div dir="ltr">
<div dir="ltr" style="text-align:center"><br>
<br>
Call for papers: <b>2nd Multimodal Learning and Applications
Workshop (MULA 2019)</b><br>
</div>
<div dir="ltr" style="text-align:center">June 16th, 2019
(Morning). In conjunction with <a
href="http://cvpr2019.thecvf.com/" moz-do-not-send="true">CVPR
2019</a><b><br>
</b></div>
<div style="text-align:center">Website: <a
href="https://mula-workshop.github.io/"
moz-do-not-send="true">https://mula-workshop.github.io/</a></div>
<div dir="ltr">
<div><br>
</div>
<div>This is an open call for papers, soliciting original
contributions considering recent findings in theory,
methodologies, and applications in the field of multimodal
machine learning.<br>
</div>
<div><br>
</div>
<div><b>Scope</b><br>
The exploitation of the power of big data in the last few
years led to a big step forward in many applications of
Computer Vision. However, most of the tasks tackled so far
are involving mainly visual modality due to the unbalanced
number of labelled samples available among modalities (e.g.,
there are many huge labelled datasets for images while not
as many for audio or IMU based classification), resulting in
a huge gap in performance when algorithms are trained
separately. <br>
</div>
<div><br>
</div>
<div><b>Topics</b></div>
<div> Potential topics include, but are not limited to:<br>
<ul>
<li>Multimodal learning</li>
<li>Cross-modal learning</li>
<li>Self-supervised learning for multimodal data</li>
<li>Multimodal data generation and sensors</li>
<li>Unsupervised learning on multimodal data</li>
<li>Cross-modal adaptation</li>
<li>Multimodal data fusion</li>
<li>Multimodal transfer learning</li>
<li>Multimodal applications (e.g. drone vision, autonomous
driving, industrial inspection, etc.)</li>
<li>Machine Learning studies of unusual modalities</li>
</ul>
<b>Submission</b><br>
Papers will be limited up to 8 pages according to the <a
href="http://cvpr2019.thecvf.com/submission/main_conference/author_guidelines"
moz-do-not-send="true">CVPR format</a> (c.f. main
conference authors guidelines). All papers will be reviewed
by at least two reviewers with double blind policy. Papers
will be selected based on relevance, significance and
novelty of results, technical merit, and clarity of
presentation. Papers will be published in CVPR 2019
proceedings.</div>
<div>All the papers should be submitted using <a
href="https://cmt3.research.microsoft.com/MULA2019"
moz-do-not-send="true">CMT website</a>.<br>
<br>
<b>Important Dates</b><br>
<u>Deadline for submission</u>: March 10th, 2019 - 23:59
Pacific Standard Time<br>
Notification of acceptance April 3rd, 2019<br>
Camera Ready submission deadline: April 10th, 2019<br>
Workshop date: June 16th, 2019 (Morning)</div>
<div><br>
</div>
<div><b>Organizers</b></div>
<div>Pietro Morerio, Istituto Italiano di Tecnologia, Italy<br>
Paolo Rota, Università di Trento, Italy<br>
Michael Ying Yang, University of Twente, Netherlands<br>
Bodo Rosenhahn, Institut für Informationsverarbeitung,
Leibniz-Universität Hannover, Germany<br>
Vittorio Murino, Istituto Italiano di Tecnologia, Italy<br>
==========================================================================================<br>
</div>
</div>
</div>
</div>
</body>
</html>