<div dir="ltr">
Apologies for multiple postings
***********************************

CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 16 CHALLENGES

NTIRE: 9th New Trends in Image Restoration and Enhancement workshop and challenges.
In conjunction with CVPR 2024, June 17, Seattle, US.

Website: https://cvlai.net/ntire/2024/
Contact: radu.timofte@uni-wuerzburg.de

TOPICS

●    Image/video inpainting
●    Image/video deblurring
●    Image/video denoising
●    Image/video upsampling and super-resolution
●    Image/video filtering
●    Image/video de-hazing, de-raining, de-snowing, etc.
●    Demosaicing
●    Image/video compression
●    Removal of artifacts, shadows, glare and reflections, etc.
●    Image/video enhancement: brightening, color adjustment, sharpening, etc.
●    Style transfer
●    Hyperspectral image restoration, enhancement, manipulation
●    Underwater image restoration, enhancement, manipulation
●    Light field image restoration, enhancement, manipulation
●    Methods robust to changing weather conditions / adverse outdoor conditions
●    Image/video restoration, enhancement, manipulation in constrained settings / on mobile devices
●    Visual domain translation
●    Multimodal translation
●    Perceptual enhancement
●    Perceptual manipulation
●    Depth estimation
●    Image/video generation and hallucination
●    Image/video quality assessment
●    Image/video semantic segmentation
●    Saliency and gaze estimation
●    Studies and applications of the above

SUBMISSION

A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in CVPR style:
https://cvpr.thecvf.com/Conferences/2024/AuthorGuidelines

The review process is double blind.
Accepted and presented papers will be published after the conference in the 2024 CVPR Workshops Proceedings.

Author Kit: https://github.com/cvpr-org/author-kit/archive/refs/tags/CVPR2024-v2.zip
Submission site: https://cmt3.research.microsoft.com/NTIRE2024

WORKSHOP DATES

●    Regular papers submission deadline: March 10, 2024
●    Challenge papers submission deadline: April 1, 2024
●    Papers reviewed elsewhere submission deadline: April 1, 2024

CHALLENGES

●    Dense and Non-Homogeneous Dehazing: https://codalab.lisn.upsaclay.fr/competitions/17529
●    Night Photography Rendering: https://codalab.lisn.upsaclay.fr/competitions/17529
●    Blind Compressed Image Enhancement: https://codalab.lisn.upsaclay.fr/competitions/17548
●    Shadow Removal - Track 1 Fidelity: https://codalab.lisn.upsaclay.fr/competitions/17539
●    Shadow Removal - Track 2 Perceptual: https://codalab.lisn.upsaclay.fr/competitions/17546
●    Efficient Super Resolution: https://codalab.lisn.upsaclay.fr/competitions/17547
●    Image Super Resolution (x4): https://codalab.lisn.upsaclay.fr/competitions/17553
●    Light Field Image Super-Resolution - Track 1 Fidelity: https://codalab.lisn.upsaclay.fr/competitions/17265
●    Light Field Image Super-Resolution - Track 2 Efficiency: https://codalab.lisn.upsaclay.fr/competitions/17266
●    Stereo Image Super-Resolution - Track 1 Bicubic: https://codalab.lisn.upsaclay.fr/competitions/17245
●    Stereo Image Super-Resolution - Track 2 Realistic: https://codalab.lisn.upsaclay.fr/competitions/17246
●    HR Depth from Images of Specular and Transparent Surfaces - Track 1 Stereo: https://codalab.lisn.upsaclay.fr/competitions/17515
●    HR Depth from Images of Specular and Transparent Surfaces - Track 2 Mono: https://codalab.lisn.upsaclay.fr/competitions/17516
●    Bracketing Image Restoration and Enhancement - Track 1: https://codalab.lisn.upsaclay.fr/competitions/17573
●    Bracketing Image Restoration and Enhancement - Track 2: https://codalab.lisn.upsaclay.fr/competitions/17574
●    Portrait Quality Assessment: https://codalab.lisn.upsaclay.fr/competitions/17311
●    Quality Assessment for AI-Generated Content - Track 1 Image: https://codalab.lisn.upsaclay.fr/competitions/17627
●    Quality Assessment for AI-Generated Content - Track 2 Video: https://codalab.lisn.upsaclay.fr/competitions/17621
●    Restore Any Image Model (RAIM) in the Wild: https://codalab.lisn.upsaclay.fr/competitions/17632
●    RAW Image Super-Resolution: https://codalab.lisn.upsaclay.fr/competitions/17631
●    Short-form UGC Video Quality Assessment: https://codalab.lisn.upsaclay.fr/competitions/17638
●    Low Light Enhancement: https://codalab.lisn.upsaclay.fr/competitions/17640

To learn more about the challenges, to participate, and to access the data, everyone is invited to check the NTIRE 2024 page:
https://cvlai.net/ntire/2024/

CHALLENGES DATES

●    Release of train data: February 1, 2024
●    Competitions end: March 14, 2024

SPEAKERS (TBA)

SPONSORS (TBA)

Website: https://cvlai.net/ntire/2024/
Contact: radu.timofte@uni-wuerzburg.de
