[visionlist] NTIRE workshop & challenges on super-resolution, dehazing, spectral reconstruction @ CVPR 2018

Radu Timofte timofte.radu at gmail.com
Tue Feb 13 12:08:49 -05 2018


Apologies for cross-posting
*******************************

CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 3 CHALLENGES

NTIRE: 3rd New Trends in Image Restoration and Enhancement workshop and
challenges on image super-resolution, dehazing, and spectral reconstruction
in conjunction with CVPR 2018, June 18, Salt Lake City, USA.

Website: http://www.vision.ee.ethz.ch/ntire18/
Contact: radu.timofte at vision.ee.ethz.ch


SCOPE

Image restoration and image enhancement are key computer vision tasks that
aim to restore degraded image content or to fill in missing information.
Recent years have witnessed increased interest from the vision and graphics
communities in these fundamental research topics. Not only has there been a
constantly growing flow of related papers, but substantial progress has
also been achieved.

Each step forward makes images easier for people or computers to use in
further tasks, with image restoration or enhancement serving as an
important front end. Not surprisingly, there is an ever-growing range of
applications in fields such as surveillance, the automotive industry,
electronics, remote sensing, and medical image analysis. The emergence and
ubiquitous use of mobile and wearable devices offer further fertile ground
for new applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in
those areas. Moreover, it will offer an opportunity for academic and
industrial attendees to interact and explore collaborations.


TOPICS

Papers addressing topics related to image/video restoration and enhancement
are invited. The topics include, but are not limited to:

● Image/video inpainting
● Image/video deblurring
● Image/video denoising
● Image/video upsampling and super-resolution
● Image/video filtering
● Image/video dehazing
● Demosaicing
● Image/video compression
● Artifact removal
● Image/video enhancement: brightening, color adjustment, sharpening, etc.
● Style transfer
● Image/video generation and image hallucination
● Image/video quality assessment
● Hyperspectral imaging
● Underwater imaging
● Aerial and satellite imaging
● Methods robust to changing weather conditions / adverse outdoor
conditions
● Studies and applications of the above.


SUBMISSION

A paper submission must be in English, in PDF format, and at most 8 pages
(excluding references) in CVPR style. The paper format must follow the same
guidelines as for all CVPR submissions:
http://cvpr2018.thecvf.com/submission/main_conference/author_guidelines
The review process is double blind: authors do not know the names of the
chairs/reviewers of their papers, and reviewers do not know the names of
the authors.
Dual submission is allowed with the CVPR main conference only. If a paper
is also submitted to CVPR and accepted there, it cannot be published at
both CVPR and the workshop.

For the paper submissions, please go to the online submission site
https://cmt3.research.microsoft.com/NTIRE2018

Accepted and presented papers will be published after the conference in the
CVPR Workshops Proceedings by IEEE (http://www.ieee.org) and the Computer
Vision Foundation (www.cv-foundation.org).

The author kit provides a LaTeX2e template for paper submissions. Please
refer to the example for detailed formatting instructions. If you use a
different document processing system, please see the CVPR author
instruction page.

Author Kit: http://cvpr2018.thecvf.com/files/cvpr2018AuthorKit.zip


WORKSHOP DATES

● Submission Deadline: March 01, 2018
● Decisions: March 29, 2018
● Camera Ready Deadline: April 05, 2018



CHALLENGE on SUPER-RESOLUTION (ongoing!)

The challenge has 4 tracks, as follows (a downscaling sketch for Track 1
is given after the list):

   1. *Track 1: classic bicubic* uses bicubic downscaling (Matlab
   imresize), the most common setting in the recent single-image
   super-resolution literature.
   2. *Track 2: realistic mild adverse conditions* assumes that the
   degradation operators (emulating the image acquisition process of a
   digital camera) are spatially uniform within each image and the same
   for all images.
   3. *Track 3: realistic difficult adverse conditions* assumes the same
   setup as Track 2, but with more severe degradations.
   4. *Track 4: realistic wild conditions* assumes that the degradation
   operators (emulating the image acquisition process of a digital camera)
   are spatially uniform within each image but DIFFERENT from one image to
   another. This setting is the closest to real "wild" conditions.
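
For Track 1, the low-resolution inputs are produced by bicubic
downscaling. Below is a minimal Python sketch of this operation; note
that PIL's bicubic filter only approximates Matlab's imresize (the kernel
and antialiasing details differ slightly), and the scale factor and
filenames are illustrative assumptions, not challenge specifications.

    # Bicubic downscaling sketch (approximation of Matlab imresize).
    from PIL import Image

    def bicubic_downscale(path_in, path_out, scale=4):
        img = Image.open(path_in).convert("RGB")
        w, h = img.size
        # Downscale both dimensions by the given factor with a bicubic
        # filter.
        lr = img.resize((w // scale, h // scale), resample=Image.BICUBIC)
        lr.save(path_out)

    # Hypothetical filenames and scale factor, for illustration only.
    bicubic_downscale("hr_0001.png", "lr_0001.png", scale=4)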


CHALLENGE on IMAGE DEHAZING (ongoing!)

*Novel datasets of real hazy images, obtained in outdoor and indoor
environments together with ground truth, are introduced with the
challenge. It is the first online image dehazing challenge (a common haze
formation model is recalled after the track list).*

   1. *Track 1: Indoor* - the goal is to restore the visibility of images
   with haze generated in a controlled indoor environment.
   2. *Track 2: Outdoor* - the goal is to restore the visibility of
   outdoor images with haze generated using a professional haze/fog
   generator.
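
For context, most dehazing methods build on the standard atmospheric
scattering model from the literature; the challenge itself does not
prescribe it, and the notation below is the conventional one rather than
taken from the challenge description:

    I(x) = J(x) \, t(x) + A \, (1 - t(x)), \qquad t(x) = e^{-\beta d(x)}

where I is the observed hazy image, J the haze-free scene radiance to be
recovered, t the transmission determined by the scene depth d and the
scattering coefficient beta, and A the global atmospheric light.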


CHALLENGE on SPECTRAL RECONSTRUCTION (ongoing!)

*The largest dataset of its kind to date will be introduced with the
challenge. It is the first online challenge on spectral reconstruction
from RGB images.*

   1. *Track 1: Clean* - recovering hyperspectral data from uncompressed
   8-bit RGB images created by applying a known response function to
   ground-truth hyperspectral information (see the sketch after the list).
   2. *Track 2: Real World* - recovering hyperspectral data from
   JPEG-compressed 8-bit RGB images created by applying an unknown
   response function to ground-truth hyperspectral information.
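
To make the "known response function" of Track 1 concrete, here is a
minimal NumPy sketch of how RGB images can be simulated from a
hyperspectral cube; the array shapes, variable names, and 31-band count
are illustrative assumptions, not the challenge's actual data format.

    import numpy as np

    # Simulate an 8-bit RGB image from a hyperspectral cube by applying
    # a camera spectral response function.
    # cube:     (H, W, B) hyperspectral cube with B spectral bands
    # response: (B, 3) per-band weights for the R, G, B channels
    def hyperspectral_to_rgb(cube, response):
        rgb = cube @ response                  # integrate over the bands
        rgb = rgb / rgb.max()                  # normalize to [0, 1]
        return (255.0 * rgb).astype(np.uint8)  # quantize to 8 bits

    # Toy usage with random data (31 bands are common in such datasets).
    cube = np.random.rand(64, 64, 31)
    response = np.random.rand(31, 3)
    rgb = hyperspectral_to_rgb(cube, response)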

To learn more about the challenges, to participate, and to access the
data, everybody is invited to check the NTIRE webpage:
http://www.vision.ee.ethz.ch/ntire18/


CHALLENGES DATES

● Release of train data: January 10, 2018
● *Competition ends: March 08, 2018 (extended!)*


ORGANIZERS

● Radu Timofte, ETH Zurich, Switzerland
● Ming-Hsuan Yang, University of California at Merced, US
● Jiqing Wu, ETH Zurich, Switzerland
● Lei Zhang, The Hong Kong Polytechnic University
● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland
● Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
● Codruta O. Ancuti, University Politehnica Timisoara, Romania
● Boaz Arad, Ben-Gurion University, Israel
● Ohad Ben-Shahar, Ben-Gurion University, Israel


PROGRAM COMMITTEE (to be updated)

    Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
    Nick Barnes, Data61, Australia
    Michael S. Brown, York University, Canada
    Subhasis Chaudhuri, IIT Bombay, India
    Sunghyun Cho, Samsung
    Oliver Cossairt, Northwestern University, US
    Chao Dong, SenseTime
    Weisheng Dong, Xidian University, China
    Alexey Dosovitskiy, Intel Labs
    Touradj Ebrahimi, EPFL, Switzerland
    Michael Elad, Technion, Israel
    Corneliu Florea, University Politehnica of Bucharest, Romania
    Alessandro Foi, Tampere University of Technology, Finland
    Peter Gehler, University of Tübingen, MPI Intelligent Systems, Amazon,
Germany
    Bastian Goldluecke, University of Konstanz, Germany
    Luc Van Gool, ETH Zürich and KU Leuven, Belgium
    Shuhang Gu, ETH Zürich, Switzerland
    Michael Hirsch, Amazon
    Hiroto Honda, DeNA Co., Japan
    Jia-Bin Huang, Virginia Tech, US
    Michal Irani, Weizmann Institute, Israel
    Phillip Isola, UC Berkeley, US
    Zhe Hu, Light.co
    Sing Bing Kang, Microsoft Research, US
    Jan Kautz, NVIDIA Research, US
    Seon Joo Kim, Yonsei University, Korea
    Vivek Kwatra, Google
    Christian Ledig, Twitter Inc.
    Kyoung Mu Lee, Seoul National University, South Korea
    Seungyong Lee, POSTECH, South Korea
    Stephen Lin, Microsoft Research Asia
    Chen Change Loy, Chinese University of Hong Kong
    Vladimir Lukin, National Aerospace University, Ukraine
    Kai-Kuang Ma, Nanyang Technological University, Singapore
    Vasile Manta, Technical University of Iasi, Romania
    Yasuyuki Matsushita, Osaka University, Japan
    Peyman Milanfar, Google and UCSC, US
    Rafael Molina Soriano, University of Granada, Spain
    Yusuke Monno, Tokyo Institute of Technology, Japan
    Hajime Nagahara, Osaka University, Japan
    Vinay P. Namboodiri, IIT Kanpur, India
    Sebastian Nowozin, Microsoft Research Cambridge, UK
    Federico Perazzi, Disney Research
    Aleksandra Pizurica, Ghent University, Belgium
    Sylvain Paris, Adobe
    Fatih Porikli, Australian National University, NICTA, Australia
    Hayder Radha, Michigan State University, US
    Tobias Ritschel, University College London, UK
    Antonio Robles-Kelly, CSIRO, Australia
    Stefan Roth, TU Darmstadt, Germany
    Aline Roumy, INRIA, France
    Jordi Salvador, Amazon, US
    Yoichi Sato, University of Tokyo, Japan
    Konrad Schindler, ETH Zurich, Switzerland
    Samuel Schulter, NEC Labs America
    Nicu Sebe, University of Trento, Italy
    Eli Shechtman, Adobe Research, US
    Boxin Shi, National Institute of Advanced Industrial Science and
Technology (AIST), Japan
    Wenzhe Shi, Twitter Inc.
    Alexander Sorkine-Hornung, Disney Research
    Sabine Süsstrunk, EPFL, Switzerland
    Yu-Wing Tai, Tencent Youtu
    Hugues Talbot, Université Paris Est, France
    Robby T. Tan, Yale-NUS College, Singapore
    Masayuki Tanaka, Tokyo Institute of Technology, Japan
    Jean-Philippe Tarel, IFSTTAR, France
    Radu Timofte, ETH Zürich, Switzerland
    George Toderici, Google, US
    Ashok Veeraraghavan, Rice University, US
    Jue Wang, Megvii Research, US
    Chih-Yuan Yang, UC Merced, US
    Jianchao Yang, Snapchat
    Ming-Hsuan Yang, University of California at Merced, US
    Qingxiong Yang, Didi Chuxing, China
    Jong Chul Ye, KAIST, Korea
    Jason Yosinski, Uber AI Labs, US
    Wenjun Zeng, Microsoft Research
    Lei Zhang, The Hong Kong Polytechnic University
    Wangmeng Zuo, Harbin Institute of Technology, China


SPEAKERS (to be announced)

SPONSORS (to be updated)

    Alibaba
    NVIDIA
    SenseTime
    OpenOcean
    Google
    Disney Research

Contact: radu.timofte at vision.ee.ethz.ch
Website: http://www.vision.ee.ethz.ch/ntire18/