<div dir="ltr">
<div><span class="gmail-im"><div style="text-align:center"><b><font color="#000000">Call for participation & papers</font></b></div><div style="text-align:center"><b><font color="#000000">Chalearn Looking at People series</font></b></div><div style="text-align:center"><b><font color="#000000">Face Spoofing Attack Workshop and Challenge </font></b></div><div style="text-align:center"><font color="#000000"><b>CVPR Workshop 2019 </b> </font></div><div><br></div><div><font color="#000000">Face
anti-spoofing detection is an crucial procedure in biometric face
recognition systems. Previous competitions (i.e. ICB 2013 facial
spoofing attacks, IJCB 2011 facial spoofing attacks, IJCB 2017
competition ) focused on 2D face spoofing attacks, and most published
works focus on one single modality, such as rgb or depth face spoofing
detection. However, one single modality may not capture rich enough face
and environment information. Fortunately, according to the new
development of camera sensors, face image with multi modalities are
captured conveniently in a low cost. Therefore, We are organizing an
academic competition and workshop focus on multi-modal (RGB+depth+IR)
face anti-spoofing detection in videos. A new large-scale multimodal
face spoofing attack datasets is released and used in the competition
containing more than 2000 recorded subjects with multiple modalities. </font></div><div><font color="#000000"><br></font></div><div><font color="#000000">We
propose a challenge that aims at compiling the latest efforts and
research advances from the computational intelligence community in
creating fast and accurate face spoofing detection algorithms. The
methods will be evaluated on a large, newly collected and annotated
dataset.</font></div><div><font color="#000000"><br></font></div></span><div><font color="#000000">The <b>challenge</b> is running on the CodaLab platform (<b><a href="https://competitions.codalab.org/competitions/20853" target="_blank">https://competitions.codalab.org/competitions/20853</a></b>), and results will be presented at the <b>CVPR 2019 ChaLearn LAP associated workshop</b>.
Participants obtaining the best results will be invited to submit a paper to the associated workshop, and extended versions to a dedicated Special Issue in a top-tier journal (TBA). There will be prizes and travel grants sponsored by Baidu: cash plus a travel grant for each of the top 3 winners (1st: 1000$+500$, 2nd: 600$+500$, 3rd: 300$+500$). The best workshop paper will also receive a cash award (500$+500$). </font></div><div class="gmail-adL"><div class="gmail-im"><div><font color="#000000"><br></font></div><div><font color="#000000">We
also solicit submissions on all aspects of facial biometric systems and attacks. The main topics of interest are:</font></div><div><ul><li style="margin-left:15px"><font color="#000000">Novel methodologies for anti-spoofing detection in visual information systems<br></font></li><li style="margin-left:15px"><font color="#000000">Studies on novel attacks on biometric systems, and solutions<br></font></li><li style="margin-left:15px"><font color="#000000">Deep learning methods for biometric authentication systems using visual information<br></font></li><li style="margin-left:15px"><font color="#000000">Novel datasets and evaluation protocols for spoofing prevention in visual and multi-modal biometric systems<br></font></li><li style="margin-left:15px"><font color="#000000">Methods for deception detection from visual and multi-modal information<br></font></li><li style="margin-left:15px"><font color="#000000">Face anti-spoofing attack datasets (3D face masks, multi-modal)<br></font></li><li style="margin-left:15px"><font color="#000000">In-depth reviews of face anti-spoofing attacks<br></font></li><li style="margin-left:15px"><font color="#000000">Generative models (e.g. GANs) for spoofing attacks</font></li></ul></div><div><font color="#000000">We are accepting submissions of up to 8 pages (same formatting instructions as CVPR). Submissions, made through CMT, will be reviewed by 3 members of the program committee. </font></div><div><font color="#000000"><br></font></div><div><font color="#000000">******************************</font></div><div><font color="#000000">Important dates (tentative)</font></div><div><font color="#000000">******************************</font></div><div><font color="#000000"><br></font></div><div><font color="#000000">Important dates competition:</font></div><div><ul><li style="margin-left:15px"><font color="#000000">20th November, 2018: Beginning of the quantitative competition, release of development and validation data.</font></li><li style="margin-left:15px"><font color="#000000">10th
March, 2019: Release of encrypted final evaluation data. Participants
can start training their methods with the whole data set.</font></li><li style="margin-left:15px"><font color="#000000">15th March, 2019: Deadline for code submission.</font></li><li style="margin-left:15px"><font color="#000000">16th
March, 2019: Release of final evaluation data decryption key.
Participants start predicting the results on the final evaluation data.</font></li><li style="margin-left:15px"><font color="#000000">20th
March, 2019: End of the quantitative competition. Deadline for
submitting the predictions over the final evaluation data. The
organizers start the code verification by running it on the final
evaluation data.</font></li><li style="margin-left:15px"><font color="#000000">22nd March, 2019: Deadline for submitting the fact sheets.</font></li><li style="margin-left:15px"><font color="#000000">25th
March, 2019: Release of the verification results to the participants
for review. Participants are invited to follow the paper submission
guide for submitting contest papers.</font></li><li style="margin-left:15px"><font color="#000000">30th March, 2019: Deadline for submitting CVPRW contest papers (following workshop dates).</font></li></ul></div><div><font color="#000000">Important dates workshop:</font></div><div><ul><li style="margin-left:15px"><font color="#000000">30th March, 2019: Paper submission deadline</font></li><li style="margin-left:15px"><font color="#000000">5th April, 2019: Paper notification</font></li><li style="margin-left:15px"><font color="#000000">9th April, 2019: Camera ready</font></li></ul></div><div><font color="#000000">***********************</font></div><div><font color="#000000">Organizing team</font></div><div><font color="#000000">Jun Wan, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences</font></div><div><font color="#000000">Sergio Escalera, Computer Vision Center (UAB) and University of Barcelona</font></div><div><font color="#000000">Hugo Jair Escalante, INAOE, ChaLearn, Mexico</font></div><div><font color="#000000">Isabelle Guyon, Université Paris-Saclay, France, ChaLearn, Berkeley, California, USA</font></div><div><font color="#000000">Guodong Guo, IDL, Baidu Research</font></div><div><font color="#000000">Hailin Shi, JD AI Research</font></div><div><font color="#000000">Meysam Madadi, Universitat Autonoma de Barcelona & Computer Vision Center</font></div><div><font color="#000000">Shaopeng Tang, Beijing SurfingTech Co., Ltd</font></div><div><font color="#000000">***********************</font></div></div></div></div>
<br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div><font size="1"><span style="color:rgb(68,68,68)"><b><span style="color:rgb(102,102,102)">Dr. Sergio Escalera Guerrero</span><br></b></span><span style="color:rgb(153,153,153)">Head of Human Pose Recovery and Behavior Analysis group /</span></font><span style="color:rgb(153,153,153)"><font size="1"> Project Manager at the Computer Vision Center</font></span><br><span style="color:rgb(153,153,153)"></span></div></div><span style="color:rgb(153,153,153)"><font size="1">Vice-president of ChaLearn Challenges in Machine Learning, Berkeley<br></font></span><span style="color:rgb(153,153,153)"><font size="1"><span><span><span style="color:rgb(153,153,153)"><font size="1">Associate professor</font></span></span></span> at Universitat de Barcelona / Universitat Oberta de Catalunya / Aalborg University / </font></span><br><span style="color:rgb(153,153,153)"><font size="1">Dalhousie University<br></font></span><font size="1"><span style="color:rgb(153,153,153)">Email: <a href="mailto:sergio.escalera.guerrero@gmail.com" target="_blank">sergio.escalera.guerrero@gmail.com</a> / Webpage: <a href="http://www.maia.ub.es/~sergio/" target="_blank">http://www.sergioescalera.com/</a></span></font><span><span style="color:rgb(153,153,153)"><font size="1"> / Phone:+34</font></span><font size="1"><span style="color:rgb(153,153,153)"><font size="1"><span dir="ltr"><span dir="ltr"><span><span>934020853<br></span></span></span></span></font></span></font></span><div><span></span></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>