<div dir="ltr">
<p style="text-align:center"><span style="font-size:20px"><strong>ChaLearn Satellite Workshop on Image and Video Inpainting @ECCV18</strong></span></p>
<p>-------------------------------------------</p>
<p dir="ltr"><span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">Call for Participation: ChaLearn Looking at People Inpainting and Denoising in the Deep Learning Age events:</span></p>
<p dir="ltr"><strong>Challenge and ECCV 2018 Satellite Event - Registration FREE</strong></p>
<p dir="ltr"><strong>Associated Springer book chapter publication and IEEE TPAMI Special Issue</strong></p>
<p dir="ltr" style="text-align:justify"><span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">Sponsoring: prizes from Google, Disney Research, Amazon, and ChaLearn</span><br>
<span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">Sep. 9th 2018, Munich, </span><a href="https://www.hi-hotel-muenchen.de/en/munich-conference-hotel/" style="text-decoration:none" target="_blank"><u>https://www.hi-hotel-muenchen.de/en/munich-conference-hotel/</u></a><span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">, 130 m from the main ECCV venue.</span><br>
<span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">Competition webpage: </span><a href="http://chalearnlap.cvc.uab.es/challenge/26/description/" style="text-decoration:none" target="_blank"><u>http://chalearnlap.cvc.uab.es/<wbr>challenge/26/description/</u></a></p>
<p dir="ltr" style="text-align:justify"><span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">ECCV Satellite event webpage: </span><a href="http://chalearnlap.cvc.uab.es/workshop/29/description/" style="text-decoration:none" target="_blank"><u>http://chalearnlap.cvc.uab.es/<wbr>workshop/29/description/</u></a></p>
<p dir="ltr" style="text-align:justify"><span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">IEEE TPAMI Special Issue webpage: </span><a href="http://chalearnlap.cvc.uab.es/special-issue/30/description/" style="text-decoration:none" target="_blank"><u>http://chalearnlap.cvc.uab.es/<wbr>special-issue/30/description/</u></a><span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt"> </span></p>
<p dir="ltr" style="text-align:justify"><span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">Contact: <a href="mailto:sergio.escalera.guerrero@gmail.com" target="_blank">sergio.escalera.guerrero@<wbr>gmail.com</a> </span></p>
<p><span style="background-color:transparent;color:rgb(0,0,0);font-family:arial;font-size:11pt">******************************<wbr>******************************<wbr>************</span></p>
<p><strong><u>Aims and scope</u>: </strong>The problem of dealing with
missing or incomplete data in machine learning arises in many
applications. Recent strategies use generative models to impute
missing or corrupted data. Advances in computer vision using deep
generative models have found applications in image/video processing,
such as denoising [1], restoration [2], super-resolution [3], and
inpainting [4,5]. We focus on image and video inpainting tasks, which
may benefit from novel methods such as Generative Adversarial Networks
(GANs) [6,7] or residual connections [8,9]. Solutions to the inpainting
problem may be useful in a wide variety of computer vision tasks. We
chose three examples: <strong>human pose estimation</strong>, <strong>video de-captioning</strong>, and <strong>fingerprint denoising</strong>.</p>
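<p>To make the inpainting task concrete: in its simplest classical form, filling a masked region can be posed as solving a Laplace equation over the hole, using the surrounding known pixels as boundary conditions. The following minimal NumPy sketch (a harmonic-inpainting baseline, not any of the deep methods cited above and not a competitive solution for this challenge) illustrates the problem setup that the GAN-based approaches aim to improve on:</p>

```python
import numpy as np

def harmonic_inpaint(image, mask, iters=500):
    """Fill masked pixels by iteratively averaging their 4-neighbors.

    image: 2D float array of pixel intensities.
    mask: boolean array, True where pixels are missing.
    Repeated neighbor averaging solves a discrete Laplace equation on the
    hole, with the known pixels acting as fixed boundary conditions.
    """
    out = image.copy()
    out[mask] = out[~mask].mean()  # neutral initial guess for the hole
    for _ in range(iters):
        # average of up/down/left/right neighbors (image edges replicated)
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]  # update only the missing pixels
    return out

# Demo: a horizontal gradient (which is harmonic, so it is recovered
# almost exactly) with a square hole punched in the middle.
img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
mask = np.zeros((16, 16), dtype=bool)
mask[5:11, 5:11] = True
rec = harmonic_inpaint(img, mask)
```

<p>This baseline only propagates smooth structure inward; it cannot hallucinate texture, text-free frames, or ridge patterns, which is exactly where learned generative models are expected to help.</p>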
<p><strong>1- Human pose estimation</strong>: performing human pose
recognition in images containing occlusions is challenging. Since
human pose recognition is a prerequisite for human behaviour
analysis in many applications, recovering occluded parts may help the
whole processing chain.</p>
<p><strong>2- Video de-captioning</strong>: in news media
and video entertainment, broadcast programs in various languages,
such as news, series, or documentaries, frequently contain text
captions, embedded commercials, or subtitles. These reduce visual
attention, occlude parts of the frames, and may decrease the performance
of automatic understanding systems. Despite recent advances in machine
learning, fast (real-time) and accurate automatic text removal in video
sequences remains challenging.</p>
<p><strong>3- Fingerprint denoising</strong>: biometrics play an
increasingly important role in security to ensure privacy and identity
verification, as evidenced by the growing prevalence of fingerprint
sensors on mobile devices. Fingerprint retrieval also remains an
important law enforcement tool used in forensics. However, much remains
to be done to improve the accuracy of verification, both in terms of
false negatives (in part due to poor image quality when fingers are wet
or dirty) and in terms of false positives due to the ease of forgery.</p>
<p>As one of the important branches of image and video analysis of
humans (known as Looking at People), understanding and inpainting occluded
parts has become a research area of great interest, with many
potential application domains including human behavior analysis,
augmented reality, and biometric recognition. We propose a <strong>satellite </strong>workshop
on image and video inpainting. This session aims to compile the
latest efforts and research advances from the scientific community in
enhancing traditional computer vision and pattern recognition algorithms
with human image inpainting, video de-captioning, and fingerprint
denoising at both the learning and prediction stages.</p>
<p dir="ltr"><strong><u>Workshop topics and guidelines</u>: </strong>The
scope of the workshop comprises all aspects of image and video
inpainting and denoising, including but not limited to the following
topics:</p>
<ul>
<li>2D/3D human pose recovery under occlusion,</li>
<li>human inpainting,</li>
<li>human retexturing,</li>
<li>video decaptioning,</li>
<li>temporal occlusion recovery,</li>
<li>object recognition under occlusion,</li>
<li>fingerprint recognition,</li>
<li>fingerprint denoising,</li>
<li>future frame video prediction,</li>
<li>unsupervised learning for missing data recovery and/or denoising,</li>
<li>new data and applications of inpainting and/or denoising.</li>
</ul>
<p> </p>
<p>Abstract submissions for presentation at the workshop can be made through the CMT web page: <a href="https://cmt3.research.microsoft.com/INPAINTING2018/" target="_blank">https://cmt3.research.microsoft.com/INPAINTING2018/</a>. Abstracts must be at most 4 pages long, plus references. Authors must use <a href="https://www.springer.com/gp/authors-editors/book-authors-editors/manuscript-preparation/5636" target="_blank">this template</a>. Contributions will be published within a volume in this series: <a href="http://www.springer.com/series/15602" target="_blank">http://www.springer.com/series/15602</a>. Authors of accepted
papers will present their results at the satellite workshop, and
extended versions will be published within the CIML volume. We are organizing a <a href="http://chalearnlap.cvc.uab.es/special-issue/30/description/" target="_blank">TPAMI Special Issue</a> on the topic, and authors of the best satellite event papers will be invited to contribute extended versions.</p>
<p>The workshop is a <strong>FREE-REGISTRATION EVENT</strong>, open to everyone, and takes place at the <strong>Holiday Inn Munich – City Centre, </strong>Hochstrasse 3, 81669 München, Germany, just 130 m from the main ECCV venue. You can check the location on Google Maps <a href="https://goo.gl/maps/QC89aCyNiQT2" target="_blank">here</a>.</p>
<p dir="ltr"><strong>References:</strong></p>
<p>[1] V. Jain and S. Seung, “Natural image denoising with
convolutional networks,” in Advances in Neural Information Processing
Systems, 2009, pp. 769–776.<br>
[2] L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural
network for image deconvolution,” in Advances in Neural Information
Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N.
D. Lawrence, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2014,
pp. 1790–1798.<br>
[3] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution
using deep convolutional networks,” IEEE transactions on pattern
analysis and machine intelligence, vol. 38, no. 2, pp. 295–307, 2016.<br>
[4] J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with
deep neural networks,” in Advances in Neural Information Processing
Systems, 2012, pp. 341–349.</p>
<p>[5] A. Newson, A. Almansa, M. Fradet, Y. Gousseau, and P. Pérez,
“Video inpainting of complex scenes,” SIAM Journal on Imaging Sciences,
vol. 7, no. 4, pp. 1993–2019, 2014.<br>
[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley,
S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in
Advances in neural information processing systems, 2014, pp. 2672–2680.<br>
[7] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. Efros,
“Context encoders: Feature learning by inpainting,” in Computer Vision
and Pattern Recognition (CVPR), 2016.<br>
[8] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for
image recognition,” in The IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), June 2016.<br>
[9] X.-J. Mao, C. Shen, and Y.-B. Yang, “Image Restoration Using
Convolutional Auto-encoders with Symmetric Skip Connections,” ArXiv
e-prints, Jun. 2016.</p>
<br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div><font size="1"><span style="color:rgb(68,68,68)"><b><span style="color:rgb(102,102,102)">Dr. Sergio Escalera Guerrero</span><br></b></span><span style="color:rgb(153,153,153)">Head of Human Pose Recovery and Behavior Analysis Lab<br></span></font></div><span style="color:rgb(153,153,153)"><font size="1">Project Manager at the Computer Vision Center<br></font></span></div><span style="color:rgb(153,153,153)"><font size="1">Director of ChaLearn Challenges in Machine Learning<br></font></span><span style="color:rgb(153,153,153)"><font size="1"><span><span><span style="color:rgb(153,153,153)"><font size="1">Associate professor</font></span></span></span> at University of Barcelona / Universitat Oberta de Catalunya / Aalborg Univ. / </font></span><br><span style="color:rgb(153,153,153)"><font size="1">Dalhousie University<br></font></span><font size="1"><span style="color:rgb(153,153,153)">Email: <a href="mailto:sergio.escalera.guerrero@gmail.com" target="_blank">sergio.escalera.guerrero@gmail.com</a> / Webpage: <a href="http://www.maia.ub.es/~sergio/" target="_blank">http://www.sergioescalera.com/</a></span></font><br><div><span></span></div></div></div></div></div></div></div></div></div></div></div>
</div>