<div dir="ltr"><div>Apologies for cross-posting<br>*******************************<br><br>CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 3 CHALLENGES<br></div><div><br>NTIRE: 3rd New Trends in Image Restoration and Enhancement workshop and challenges on image super-resolution, dehazing, and spectral reconstruction<br>In conjunction with CVPR 2018, June 18, Salt Lake City, USA.<br>
<br>Website: <a href="http://www.vision.ee.ethz.ch/ntire18/">http://www.vision.ee.ethz.ch/ntire18/</a>
<br>Contact: <a href="mailto:radu.timofte@vision.ee.ethz.ch">radu.timofte@vision.ee.ethz.ch</a>
<br><br><br>SCOPE<br>
<br>Image restoration and image enhancement are key computer vision tasks,
aiming at the restoration of degraded image content or the filling in of
missing information. Recent years have witnessed an increased interest
from the vision and graphics communities in these fundamental topics of
research. Not only has there been a constantly growing flow of related
papers, but substantial progress has also been achieved.
<br>
<br>Each step forward eases the use of images by people or computers for
the fulfillment of further tasks, with image restoration or enhancement
serving as an important front end. Not surprisingly, then, there is an
ever-growing range of applications in fields such as surveillance, the
automotive industry, electronics, remote sensing, or medical image
analysis. The emergence and ubiquitous use of mobile and wearable
devices offer another fertile ground for additional applications and
faster methods.
<br>
<br>This workshop aims to provide an overview of the new trends and
advances in those areas. Moreover, it will offer an opportunity for
academic and industrial attendees to interact and explore
collaborations.
<br><br><br>TOPICS<br>
<br>Papers addressing topics related to image/video restoration and
enhancement are invited. The topics include, but are not limited to:
<br>
<br>● Image/video inpainting
<br>● Image/video deblurring
<br>● Image/video denoising
<br>● Image/video upsampling and super-resolution
<br>● Image/video filtering
<br>● Image/video dehazing
<br>● Demosaicing
<br>● Image/video compression
<br>● Artifact removal
<br>● Image/video enhancement: brightening, color adjustment, sharpening, etc.
<br>● Style transfer
<br>● Image/video generation and image hallucination
<br>● Image/video quality assessment
<br>● Hyperspectral imaging
<br>● Underwater imaging
<br>● Aerial and satellite imaging
<br>● Methods robust to changing weather conditions / adverse outdoor conditions
<br>● Studies and applications of the above.
<br>
<br><br>SUBMISSION<br>
<br>Paper submissions must be in English, in PDF format, and at most 8
pages (excluding references) in CVPR style. The paper format must
follow the same guidelines as for all CVPR submissions:
<br><a href="http://cvpr2018.thecvf.com/submission/main_conference/author_guidelines">http://cvpr2018.thecvf.com/submission/main_conference/author_guidelines</a>
<br>The review process is double blind: authors do not know the names of
the chairs/reviewers of their papers, and reviewers do not know the
names of the authors.
<br>Dual submission is allowed only with the CVPR main conference. If a
paper is also submitted to CVPR and accepted there, it cannot be
published at both the main conference and the workshop.
<br>
<br>For the paper submissions, please go to the online submission site (opens February 1, 2018).
<br>
<br>Accepted and presented papers will be published after the conference
in the CVPR Workshops Proceedings by IEEE (<a href="http://www.ieee.org">http://www.ieee.org</a>) and the
Computer Vision Foundation (<a href="http://www.cv-foundation.org">www.cv-foundation.org</a>).
<br>
<br>The author kit provides a LaTeX2e template for paper submissions.
Please refer to the example for detailed formatting instructions. If you
use a different document processing system, please see the CVPR author
instructions page.
<br>
<br>Author Kit: <a href="http://cvpr2018.thecvf.com/files/cvpr2018AuthorKit.zip">http://cvpr2018.thecvf.com/files/cvpr2018AuthorKit.zip</a>
<br><br><br>WORKSHOP DATES<br><br>
● Submission Deadline: March 01, 2018
<br>● Decisions: March 29, 2018
<br>● Camera Ready Deadline: April 05, 2018
<br>
<br>
<br><br>CHALLENGE on SUPER-RESOLUTION (started!)<br>
<p>
The challenge has 4 tracks as follows:
</p><ol><li> <strong>Track 1: classic bicubic</strong>
uses bicubic downscaling (Matlab imresize), the most common setting
in the recent single-image super-resolution literature.</li><li> <strong>Track 2: realistic mild adverse conditions</strong>
assumes that the degradation operators (emulating the image acquisition
process of a digital camera) are the same within an image space and for
all the images.</li><li> <strong>Track 3: realistic difficult adverse conditions</strong>
uses the same setup as Track 2, but with more severe
degradations.</li><li> <strong>Track 4: realistic wild conditions</strong>
assumes that the degradation operators (emulating the image acquisition
process of a digital camera) are the same within an
image space but DIFFERENT from one image to another. This setting is
the closest to real "wild" conditions.</li></ol></div><div><br>CHALLENGE on IMAGE DEHAZING (started!)<br><br><em><strong>Novel datasets of real hazy images, captured
in outdoor and indoor environments with ground truth, are introduced with the challenge. It
is the first online image dehazing challenge.</strong></em></div><div><ol><li> <strong>Track 1: Indoor</strong> - the goal is to restore the visibility of images with haze generated
in a controlled indoor environment.</li><li> <strong>Track 2: Outdoor</strong> - the goal is to restore the visibility of outdoor images with haze
generated using a professional haze/fog generator.</li></ol>For data and more details:<br><a href="http://www.vision.ee.ethz.ch/ntire18/">http://www.vision.ee.ethz.ch/ntire18/</a></div><div><br></div><div><br>CHALLENGE on SPECTRAL RECONSTRUCTION (started!)<br>
<em><strong><br>The largest such dataset to date will be introduced with the challenge.
It is the first online challenge on spectral reconstruction from RGB images.
</strong></em>
<ol><li> <strong>Track 1: Clean</strong> - recovering hyperspectral data
from uncompressed 8-bit RGB images created by applying a known response
function to ground-truth hyperspectral information.
</li><li> <strong>Track 2: Real World</strong> - recovering hyperspectral
data from JPEG-compressed 8-bit RGB images created by applying an unknown
response function to ground-truth hyperspectral information.
</li></ol></div>To learn more about the challenges, to participate in them,
and to access the data, everybody is invited to check the NTIRE webpage:<br><a href="http://www.vision.ee.ethz.ch/ntire18/">http://www.vision.ee.ethz.ch/ntire18/</a><br><br><br>CHALLENGES DATES<br><div>
<br>● Release of train data: January 10, 2018<br>● <b>Competition ends: March 01, 2018</b><br>
<br><br>ORGANIZERS<br>
<br>● Radu Timofte, ETH Zurich, Switzerland (radu.timofte [at] vision.ee.ethz.ch)<br>● Ming-Hsuan Yang, University of California at Merced, US (mhyang [at] ucmerced.edu)<br>● Jiqing Wu, ETH Zurich, Switzerland (Jiqing.wu [at] vision.ee.ethz.ch)<br>● Lei Zhang, The Hong Kong Polytechnic University (cslzhang [at] polyu.edu.hk)<br>● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland (vangool [at] vision.ee.ethz.ch)<br>● Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium<br>● Codruta O. Ancuti, University Politehnica Timisoara, Romania<br>● Boaz Arad, Ben-Gurion University, Israel<br>● Ohad Ben-Shahar, Ben-Gurion University, Israel<br><br>
<br></div><div>PROGRAM COMMITTEE (to be updated)<br><br></div><div> Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium<br> Nick Barnes, Data61, Australia <br> Michael S. Brown, York University, Canada<br> Subhasis Chaudhuri, IIT Bombay, India<br> Sunghyun Cho, Samsung<br> Oliver Cossairt, Northwestern University, US<br> Chao Dong, SenseTime<br> Weisheng Dong, Xidian University, China<br> Alexey Dosovitskiy, Intel Labs<br> Touradj Ebrahimi, EPFL, Switzerland<br> Michael Elad, Technion, Israel<br> Corneliu Florea, University Politehnica of Bucharest, Romania<br> Alessandro Foi, Tampere University of Technology, Finland<br> Bastian Goldluecke, University of Konstanz, Germany<br> Luc Van Gool, ETH Zürich and KU Leuven, Belgium<br> Peter Gehler, University of Tübingen and MPI Intelligent Systems, Germany<br> Hiroto Honda, DeNA Co., Japan<br> Michal Irani, Weizmann Institute, Israel<br> Phillip Isola, UC Berkeley, US<br> Zhe Hu, Light.co<br> Sing Bing Kang, Microsoft Research, US<br> Vivek Kwatra, Google<br> Christian Ledig, Twitter, UK<br> Kyoung Mu Lee, Seoul National University, South Korea<br> Seungyong Lee, POSTECH, South Korea<br> Stephen Lin, Microsoft Research Asia<br> Chen Change Loy, Chinese University of Hong Kong<br> Vladimir Lukin, National Aerospace University, Ukraine<br> Kai-Kuang Ma, Nanyang Technological University, Singapore<br> Vasile Manta, Technical University of Iasi, Romania<br> Yasuyuki Matsushita, Osaka University, Japan<br> Peyman Milanfar, Google and UCSC, US<br> Rafael Molina Soriano, University of Granada, Spain<br> Yusuke Monno, Tokyo Institute of Technology, Japan<br> Hajime Nagahara, Kyushu University, Japan<br> Vinay P. 
Namboodiri, IIT Kanpur, India<br> Sebastian Nowozin, Microsoft Research Cambridge, UK<br> Federico Perazzi, Disney Research<br> Aleksandra Pizurica, Ghent University, Belgium<br> Fatih Porikli, Australian National University, NICTA, Australia<br> Hayder Radha, Michigan State University, US<br> Antonio Robles-Kelly, CSIRO, Australia<br> Stefan Roth, TU Darmstadt, Germany<br> Aline Roumy, INRIA, France<br> Jordi Salvador, Amazon, US<br> Yoichi Sato, University of Tokyo, Japan<br> Eli Shechtman, Adobe Research, US <br> Samuel Schulter, NEC Labs America<br> Nicu Sebe, University of Trento, Italy<br> Boxin Shi, National Institute of Advanced Industrial Science and Technology (AIST), Japan<br> Wenzhe Shi, Twitter Inc.<br> Alexander Sorkine-Hornung, Disney Research<br> Sabine Süsstrunk, EPFL, Switzerland<br> Yu-Wing Tai, Tencent Youtu<br> Hugues Talbot, Université Paris Est, France<br> Robby T. Tan, Yale-NUS College, Singapore<br> Masayuki Tanaka, Tokyo Institute of Technology, Japan<br> Jean-Philippe Tarel, IFSTTAR, France<br> Radu Timofte, ETH Zürich, Switzerland<br> George Toderici, Google, US<br> Ashok Veeraraghavan, Rice University, US<br> Jue Wang, Megvii Research, US<br> Chih-Yuan Yang, UC Merced, US<br> Ming-Hsuan Yang, University of California at Merced, US<br> Qingxiong Yang, Didi Chuxing, China<br> Jason Yosinski, Uber AI Labs, US<br> Lei Zhang, The Hong Kong Polytechnic University<br> Wangmeng Zuo, Harbin Institute of Technology, China<br><br></div>
<p>SPEAKERS (to be announced)<br></p><div><br>SPONSORS (to be updated)<br></div><div><br><div style="margin-left:40px">NVIDIA<br></div></div><div style="margin-left:40px">SenseTime<br></div><div style="margin-left:40px">Google<br></div><div style="margin-left:40px">Disney Research<br></div><div style="margin-left:40px">OpenOcean<br></div><br>
Contact: <a href="mailto:radu.timofte@vision.ee.ethz.ch">radu.timofte@vision.ee.ethz.ch</a>
<br>Website: <a href="http://www.vision.ee.ethz.ch/ntire18/">http://www.vision.ee.ethz.ch/ntire18/</a></div>