<div dir="ltr">
<div><div><span>Apologies for cross-posting<br></span><div><span>*******************************</span></div><div><span><br></span></div><div><span></span></div><div><div><span><span>CALL</span></span> FOR <span><span>PAPERS</span></span>  & <span><span>CALL</span></span> FOR PARTICIPANTS IN 8 <span class="gmail-il">CHALLENGES</span></div></div>


</div><div><br></div><div><span class="gmail-il">AIM</span>: 2nd <span class="gmail-il">Advances</span> in <span class="gmail-il">Image</span> <span class="gmail-il">Manipulation</span> 
<span class="gmail-il">workshop</span> and <span class="gmail-il">challenges</span> on real <span class="gmail-il">image</span> super-resolution, efficient SR, 
extreme SR, relighting, extreme inpainting, learned ISP, Bokeh effect, 
video temporal SR<br></div>In conjunction with <span class="gmail-il">ECCV</span> <span class="gmail-il">2020</span>, Glasgow, UK<br>
<br>Website: <a href="https://data.vision.ee.ethz.ch/cvl/aim20/" target="_blank">https://data.vision.ee.ethz.ch/cvl/aim20/</a> <br>Contact: <a href="mailto:radu.timofte@vision.ee.ethz.ch" target="_blank">radu.timofte@vision.ee.ethz.ch</a>
<br>
<br>
<br>SCOPE
<br>
<br>
<br>Image manipulation is a key computer vision task, aiming at the 
restoration of degraded image content, the filling in of missing 
information, or the transformation and/or manipulation needed to achieve
 a desired target (with respect to perceptual quality, contents, or 
performance of applications working on such images). Recent years have witnessed
 an increased interest from the vision and graphics communities in these
 fundamental topics of research. Not only has there been a constantly 
growing flow of related papers, but substantial progress has also been 
achieved.
<br>
<br>Each step forward eases the use of images by people or computers for
 the fulfillment of further tasks, as image manipulation serves as an 
important front end. Not surprisingly, there is an ever-growing 
range of applications in fields such as surveillance, the automotive 
industry, electronics, remote sensing, and medical image analysis. 
The emergence and ubiquitous use of mobile and wearable devices offer 
another fertile ground for additional applications and faster methods.
<br>
<br>This workshop aims to provide an overview of the new trends and 
advances in these areas. Moreover, it will offer an opportunity for 
academic and industrial attendees to interact and explore 
collaborations.
<br>
<br>This workshop builds upon the success of the Advances in Image 
Manipulation (AIM) workshop at ICCV 2019, the Perceptual Image 
Restoration and Manipulation (PIRM) workshop at ECCV 2018, the Workshop 
and Challenge on Learned Image Compression (CLIC) editions at CVPR 2018,
 2019 and 2020, and the New Trends in Image Restoration and Enhancement 
(NTIRE) editions at CVPR 2017, 2018, 2019 and 2020 and at ACCV 2016. 
Moreover, it relies on the people associated with the AIM, PIRM, CLIC, and 
NTIRE events, such as organizers, PC members, distinguished speakers, 
authors of published papers, challenge participants and winning teams.
<br>
<br>
<br>TOPICS<br>
<br>Papers addressing topics related to <span class="gmail-il">image</span>/video <span class="gmail-il">manipulation</span>, 
restoration and enhancement are invited. The topics include, but are not
 limited to:
<br>
<br><div style="margin-left:40px">●    <span class="gmail-il">Image</span>-to-<span class="gmail-il">image</span> translation
<br>●    Video-to-video translation
<br>●    <span class="gmail-il">Image</span>/video <span class="gmail-il">manipulation</span>
<br>●    Perceptual <span class="gmail-il">manipulation</span>
<br>●    <span class="gmail-il">Image</span>/video generation and hallucination
<br>●    <span class="gmail-il">Image</span>/video quality assessment
<br>●    <span class="gmail-il">Image</span>/video semantic segmentation
<br>●    Perceptual enhancement
<br>●    Multimodal translation
<br>●    Depth estimation
<br>●    <span class="gmail-il">Image</span>/video inpainting
<br>●    <span class="gmail-il">Image</span>/video deblurring
<br>●    <span class="gmail-il">Image</span>/video denoising
<br>●    <span class="gmail-il">Image</span>/video upsampling and super-resolution
<br>●    <span class="gmail-il">Image</span>/video filtering
<br>●    <span class="gmail-il">Image</span>/video de-hazing, de-raining, de-snowing, etc.
<br>●    Demosaicing
<br>●    <span class="gmail-il">Image</span>/video compression
<br>●    Removal of artifacts, shadows, glare and reflections, etc.
<br>●    <span class="gmail-il">Image</span>/video enhancement: brightening, color adjustment, sharpening, etc.
<br>●    Style transfer
<br>●    Hyperspectral imaging
<br>●    Underwater imaging
<br>●    Aerial and satellite imaging
<br>●    Methods robust to changing weather conditions / adverse outdoor conditions
<br>●    <span class="gmail-il">Image</span>/video <span class="gmail-il">manipulation</span> on mobile devices
<br>●    <span class="gmail-il">Image</span>/video restoration and enhancement on mobile devices
<br>●    Studies and applications of the above.
<br></div>
<br>
<br>SUBMISSION<br>
<br>Paper submissions must be in English, in PDF format, and at most 
14 pages (excluding references) in ECCV style. The paper format must 
follow the same guidelines as for all ECCV submissions.
<br><a href="https://eccv2020.eu/author-instructions/" target="_blank">https://eccv2020.eu/author-instructions/</a>
<br>The review process is double blind: authors do not know the names of
 the chairs/reviewers of their papers, and reviewers do not know the names of
 the authors.
<br>Dual submission is allowed with the ECCV main conference only. If a 
paper is also submitted to ECCV and accepted there, it cannot be 
published at both ECCV and the workshop.
<br>
<br>For the paper submissions, please go to the online submission site 
<br><a href="https://cmt3.research.microsoft.com/AIMWC2020" target="_blank">https://cmt3.research.microsoft.com/AIMWC2020</a>
<br>
<br>Accepted and presented papers will be published after the conference in the <span class="gmail-il">ECCV</span> Workshops Proceedings.
<br>
<br>The author kit provides a LaTeX2e template for paper submissions. 
Please refer to the example for detailed formatting instructions. If you
 use a different document processing system, please see the ECCV author 
instructions page.
<br>
<br>Author Kit:  <a href="https://eccv2020.eu/wp-content/uploads/2020/01/eccv2020kit-1.zip" target="_blank">https://eccv2020.eu/wp-content/uploads/<span class="gmail-il">2020</span>/01/eccv2020kit-1.zip</a>
<br>
<div><br></div><div><br></div><span class="gmail-il">WORKSHOP</span> DATES
<br>
<br><div style="margin-left:40px">● Submission Deadline: July 10, <span class="gmail-il">2020</span>
<br>● Decisions: July 20, <span class="gmail-il">2020</span>
<br>● Camera Ready Deadline: July 30, <span class="gmail-il">2020</span>
<br></div><div><br>
</div><div><br></div><div>IMAGE CHALLENGES (<i>ongoing!</i>):</div><ol style="margin-left:40px"><li>Bokeh effect simulation (tracks: on smartphone GPU, on CPU)</li><li>Learned ISP (RAW to RGB mapping) (tracks: fidelity, perceptual)</li><li>Real super-resolution (tracks: x2, x3, x4)</li><li>Relighting (tracks: any to one, any to any relighting, illumination estimation)</li><li>Efficient super-resolution</li><li>Extreme inpainting (tracks: classic, semantic guidance)</li></ol>
VIDEO CHALLENGES (<i>ongoing!</i>):<br></div><div><div><ol style="margin-left:40px"><li>Video temporal super-resolution (frame interpolation)</li><li>Video extreme super-resolution (tracks: fidelity, perceptual)</li></ol>PARTICIPATION
<br>
<br>To learn more about the <span class="gmail-il">challenges</span> and to participate:
<br><div style="margin-left:40px"><a href="https://data.vision.ee.ethz.ch/cvl/aim20/" target="_blank">https://data.vision.ee.ethz.ch/cvl/aim20/</a> <br></div></div><div><br></div><div>
<span class="gmail-il">CHALLENGES</span> DATES<br>
<br><div style="margin-left:40px">● Release of train data: May 05, <span class="gmail-il">2020</span>
<br>● Validation server online: May 15, <span class="gmail-il">2020</span>
<br>● Competitions end: July 10, <span class="gmail-il">2020</span>
<br></div>
<br>
<br>ORGANIZERS<br>
</div><ul><li>Radu Timofte, Andrey Ignatov, Kai Zhang, Dario Fuoli, Martin Danelljan, Zhiwu Huang, Andres Romero (ETH Zurich, Switzerland)</li><li>Luc Van Gool (KU Leuven, Belgium and ETH Zurich, Switzerland)</li><li>Wangmeng Zuo, Hannan Lu (Harbin Institute of Technology, China)</li><li>Shuhang Gu (University of Sydney, Australia)</li><li>Ming-Hsuan Yang (University of California at Merced and Google, US)</li><li>Majed El Helou, Ruofan Zhou (EPFL, Switzerland)</li><li>Kyoung Mu Lee, Seungjun Nah, Sanghyun Son, Jaerin Lee (Seoul National University, Korea)</li><li>Eli Shechtman (Adobe Research, US)</li><li>Evangelos Ntavelis, Siavash Bigdeli (CSEM, Switzerland)</li><li>Liang Lin, Weipeng Xu (Sun Yat-Sen University, China)</li><li>Ming-Yu Liu (NVIDIA, US)</li><li>Roey Mechrez (BeyondMinds and Technion, Israel)</li></ul><div>
<br>SPEAKERS (TBA)
<br>
<br>
SPONSORS <br></div><div><br></div><div>(We are looking for sponsors! Please let us know if you are interested.)<br></div><div><br></div><div>CONTACT<br>
<br>Email: <a href="mailto:radu.timofte@vision.ee.ethz.ch" target="_blank">radu.timofte@vision.ee.ethz.ch</a>
<br>Website: <a href="https://data.vision.ee.ethz.ch/cvl/aim20/" target="_blank">https://data.vision.ee.ethz.ch/cvl/aim20/</a></div></div>

</div>