<div dir="ltr">
<span>Apologies for cross-posting<br></span><div><span>*******************************</span></div><div><span><br></span></div><div><div>CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 11 CHALLENGES</div><div><br></div><div>
NTIRE: 6th New Trends in Image Restoration and Enhancement workshop and challenges on<br></div><div>defocus, deblurring, super-resolution, learning SR space, nonhomogeneous dehazing, image quality assessment, relighting, aerial image classification, enhancement of compressed videos, HDR<br>In conjunction with CVPR 2021, June 15, Nashville, USA (VIRTUAL).</div><div><br></div><div>
<div><div><div>Website: <a href="https://data.vision.ee.ethz.ch/cvl/ntire21/">https://data.vision.ee.ethz.ch/cvl/ntire21/</a><br>Contact: <a href="mailto:radu.timofte@vision.ee.ethz.ch" target="_blank">radu.timofte@vision.ee.ethz.ch</a></div><div><br></div><div>TOPICS</div><div><br></div><div style="margin-left:40px">
● Image/video inpainting
<br>● Image/video deblurring
<br>● Image/video denoising
<br>● Image/video upsampling and super-resolution
<br>● Image/video filtering
<br>● Image/video de-hazing, de-raining, de-snowing, etc.
<br>● Demosaicing
<br>● Image/video compression
<br>● Removal of artifacts, shadows, glare and reflections, etc.
<br>● Image/video enhancement: brightening, color adjustment, sharpening, etc.
<br>● Style transfer
<br>● Hyperspectral imaging
<br>● Underwater imaging
<br>● Methods robust to changing weather conditions / adverse outdoor conditions
<br>● Image/video restoration, enhancement, manipulation on constrained settings
<br>● Image/video processing on mobile devices
<br>● Visual domain translation
<br>● Multimodal translation
<br>● Perceptual enhancement
<br>● Perceptual manipulation <br></div><div style="margin-left:40px">
● Depth estimation
</div><div style="margin-left:40px">● Image/video generation and hallucination
<br>● Image/video quality assessment
<br>● Image/video semantic segmentation
<br>● Studies and applications of the above.
<br>
</div><div><br></div></div></div>
</div><div>SUBMISSION</div><div><br></div><div>
<div>A paper submission must be in English, in PDF format, and at most 8 pages (excluding references) in CVPR style. <br></div>
<a href="http://cvpr2021.thecvf.com/node/33" target="_blank">http://cvpr2021.thecvf.com/node/33</a>
<div>The review process is double blind. <br>
</div><div>Accepted and presented papers will be published after the conference
in the CVPR 2021 Workshops Proceedings.
<br>
<br>Author Kit:
<a href="http://cvpr2021.thecvf.com/sites/default/files/2020-09/cvpr2021AuthorKit_2.zip" target="_blank">http://cvpr2021.thecvf.com/sites/default/files/2020-09/cvpr2021AuthorKit_2.zip</a>
</div><div>Submission site: <a href="https://cmt3.research.microsoft.com/NTIRE2021" target="_blank">https://cmt3.research.microsoft.com/NTIRE2021</a>
</div></div></div><div><br></div><div>WORKSHOP DATES</div><div><br></div><div>
<div><div style="margin-left:40px">
● <b>Regular Papers Submission Deadline: March 05, 2021<br></b></div><div style="margin-left:40px">● Challenge Papers Submission Deadline: April 02, 2021</div><div style="margin-left:40px"><br></div>
<div><div>IMAGE CHALLENGES<br></div><ol><li><b>Defocus Deblurring using Dual-Pixel Images</b><br></li><li><b>Depth Guided Relighting</b> (one-to-one and any-to-any)<br></li><li><b>Perceptual Image Quality Assessment</b><br></li><li><b>Deblurring</b> (low resolution and JPEG artifacts)<br></li><li><b>Multi-modal Aerial View Classification</b> (SAR and EO)<br></li><li>
<b>Learning the Super-Resolution Space<br></b></li><li><b>Nonhomogeneous Dehazing</b></li></ol><br><div>VIDEO / MULTI-FRAME CHALLENGES<br></div><ol><li><b>Enhancement of Compressed Videos</b> (fixed bit-rate and fixed QP)<br></li><li><b>Super-Resolution</b> (Spatial and Spatio-Temporal)</li><li><b>Burst Super-Resolution</b> (Real and Synthetic)<br></li><li>
<b>High Dynamic Range (HDR)
</b></li></ol><div><div>To learn more about the challenges, to participate, and to access the data, everyone is invited to check the NTIRE 2021 web page:</div><div>
<a href="https://data.vision.ee.ethz.ch/cvl/ntire21/">https://data.vision.ee.ethz.ch/cvl/ntire21/</a>
</div><div><br></div><div>For those interested in constrained and efficient solutions validated on mobile devices, we refer to the CVPR 2021 <b>Mobile AI Workshop and Challenges:</b></div><div><a href="https://ai-benchmark.com/workshops/mai/2021/" target="_blank">https://ai-benchmark.com/workshops/mai/2021/</a></div><div><br></div><div>
CHALLENGES DATES<br><div>
<br><div style="margin-left:40px">● Release of train data: January 10, 2021<br>● <b>Competitions end: March 20, 2021</b><br></div><div style="margin-left:40px"><br></div>ORGANIZERS<br><br><div style="margin-left:40px">● Radu Timofte, ETH Zurich <br>● Shuhang Gu, OPPO & University of Sydney<br>
● Kyoung Mu Lee, Seoul National University<br>● Michael S. Brown, York University
<br>● Andreas Lugmayr, ETH Zurich <br></div><div style="margin-left:40px">
● Goutam Bhat, ETH Zurich <br></div><div style="margin-left:40px">● Martin Danelljan, ETH Zurich <br>● Cosmin Ancuti, Université catholique de Louvain (UCL)<br>● Codruta O. Ancuti, University Politehnica Timisoara<br></div><div style="margin-left:40px">
● Lei Zhang, Alibaba & The Hong Kong Polytechnic University <br>● Ming-Hsuan Yang, University of California at Merced & Google <br></div><div style="margin-left:40px">● Eli Shechtman, Creative Intelligence Lab at Adobe Research<br>● Seungjun Nah, Seoul National University, Korea<br>● Abdullah Abuolaim, York University, Canada<br>● Eduardo Perez-Pellitero, Huawei Noah's Ark Lab, UK<br>● Ales Leonardis, Huawei Noah's Ark Lab & University of Birmingham<br>● Sanghyun Son, Seoul National University<br>● Suyoung Lee, Seoul National University<br>● Ren Yang, ETH Zurich<br>● Ruofan Zhou, EPFL<br>● Majed El Helou, EPFL<br>● Sabine Süsstrunk, EPFL<br>● Chao Dong, SIAT<br>● Jimmy Ren, SenseTime<br>● Oliver Nina, AF Research Lab<br>● Bob Lee, Wright Brothers Institute<br>● Jinjin Gu, University of Sydney<br>● Luc Van Gool, KU Leuven and ETH Zurich<br></div><div style="margin-left:40px"><br></div>
</div><div>
SPEAKERS (TBA) <br></div><div><br></div><div>SPONSORS (TBA)</div></div><div><br></div><div><br><div>Website: <a href="https://data.vision.ee.ethz.ch/cvl/ntire21/">https://data.vision.ee.ethz.ch/cvl/ntire21/</a><br>Contact: <a href="mailto:radu.timofte@vision.ee.ethz.ch" target="_blank">radu.timofte@vision.ee.ethz.ch</a></div></div></div></div></div></div>
</div>