[visionlist] CFP: Workshop on New Trends in Image Restoration and Enhancement @ CVPR 2017

Radu Timofte timofte.radu at gmail.com
Mon Mar 6 11:10:23 -05 2017


Apologies for multiple copies
**********************************

CALL FOR PAPERS:

NTIRE: New Trends in Image Restoration and Enhancement workshop and
challenge on image super-resolution 2017
In conjunction with CVPR 2017, July 21

Website: http://www.vision.ee.ethz.ch/ntire17/
Contact: radu.timofte at vision.ee.ethz.ch

SCOPE

Image restoration and image enhancement are key computer vision tasks,
aiming at the restoration of degraded image content or the filling in of
missing information. Recent years have witnessed an increased interest from
the vision and graphics communities in these fundamental topics of
research. Not only has there been a constantly growing flow of related
papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the
fulfillment of further tasks, with image restoration or enhancement serving
as an important front end. Not surprisingly, then, there is an ever-growing
range of applications in fields such as surveillance, the automotive
industry, electronics, remote sensing, and medical image analysis. The
emergence and ubiquitous use of mobile and wearable devices offer another
fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in
those areas. Moreover, it will offer an opportunity for academic and
industrial attendees to interact and explore collaborations.

TOPICS

Papers addressing topics related to image restoration and enhancement are
invited. The topics include, but are not limited to:

● Image inpainting
● Image deblurring
● Image denoising
● Image upsampling and super-resolution
● Image filtering
● Image dehazing
● Demosaicing
● Image enhancement: brightening, color adjustment, sharpening, etc.
● Style transfer
● Image generation and image hallucination
● Image-quality assessment
● Video restoration and enhancement
● Hyperspectral imaging
● Methods robust to changing weather conditions
● Studies and applications of the above.

SUBMISSION

Paper submissions must be in English, in PDF format, and at most 8 pages
(excluding references) in CVPR style. The paper format must follow the same
guidelines as for all CVPR submissions.
http://cvpr2017.thecvf.com/submission/main_conference/author_guidelines
The review process is double blind: authors do not know the names of the
chairs/reviewers of their papers, and reviewers do not know the names of
the authors.
Dual submission is allowed with the CVPR main conference only. If a paper
is also submitted to CVPR and accepted there, it cannot be published at
both CVPR and the workshop.

For paper submission, please use the online submission site:
https://cmt3.research.microsoft.com/NTIRE2017

Accepted and presented papers will be published after the conference in the
CVPR Workshops Proceedings by IEEE (http://www.ieee.org) and the Computer
Vision Foundation (www.cv-foundation.org).

The author kit provides a LaTeX2e template for paper submissions. Please
refer to the example for detailed formatting instructions. If you use a
different document processing system, please see the CVPR author
instruction page.

Author Kit: http://cvpr2017.thecvf.com/files/cvpr2017AuthorKit.zip

WORKSHOP DATES

● Submission Deadline: April 17, 2017
● Decisions: May 08, 2017
● Camera Ready Deadline: May 18, 2017


CHALLENGE on Example-based Single-Image Super-Resolution

In order to gauge the current state of the art in example-based
single-image super-resolution, and to compare and promote different
solutions, we are organizing an NTIRE challenge in conjunction with the
CVPR 2017 conference. For this challenge we propose the large DIV2K
dataset of DIVerse 2K resolution images.

The challenge has 2 tracks:
● Track 1: bicubic uses bicubic downscaling (Matlab imresize), one of the
most common settings in the recent single-image super-resolution
literature (a rough Python sketch of this setting is given below the list).
● Track 2: unknown assumes that the explicit forms of the degradation
operators are unknown; only training pairs of low- and high-resolution
images are available.
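
For concreteness, here is a minimal Python sketch of the Track 1
degradation, i.e. producing a low-resolution input by bicubic downscaling
of a high-resolution image. It assumes the Pillow library and uses its
BICUBIC filter as a stand-in; the official challenge data are generated
with Matlab's imresize, whose bicubic kernel and antialiasing differ
slightly, so this is only an approximation, and the file names below are
hypothetical.

    # Bicubic downscaling of a high-resolution (HR) image to a
    # low-resolution (LR) input, approximating the Track 1 setting.
    # NOTE: Pillow's BICUBIC filter is a stand-in for Matlab imresize;
    # the resulting pixels will not match the official LR data exactly.
    from PIL import Image

    def downscale_bicubic(hr_path, lr_path, scale=4):
        hr = Image.open(hr_path)
        w, h = hr.size
        # integer downscaling by the chosen factor
        lr = hr.resize((w // scale, h // scale), resample=Image.BICUBIC)
        lr.save(lr_path)

    # hypothetical file names, for illustration only
    downscale_bicubic("hr_0001.png", "lr_0001_x4.png", scale=4)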

To learn more about the challenge, to participate, and to access the newly
collected DIV2K dataset, everybody is invited to register via the links at:
http://www.vision.ee.ethz.ch/ntire17/

CHALLENGE DATES

● Release of train data: February 14, 2017
● Validation server online: February 25, 2017
● Competition ends: March 31, 2017


ORGANIZERS

● Radu Timofte, ETH Zurich, Switzerland (radu.timofte at vision.ee.ethz.ch)
● Ming-Hsuan Yang, University of California at Merced, US (mhyang at ucmerced.edu)
● Eirikur Agustsson, ETH Zurich, Switzerland (eirikur.agustsson at vision.ee.ethz.ch)
● Lei Zhang, The Hong Kong Polytechnic University (cslzhang at polyu.edu.hk)
● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland (vangool at vision.ee.ethz.ch)


PROGRAM COMMITTEE

Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
Michael S. Brown, York University, Canada
Subhasis Chaudhuri, IIT Bombay, India
Sunghyun Cho, Samsung
Oliver Cossairt, Northwestern University, US
Chao Dong, SenseTime
Weisheng Dong, Xidian University, China
Alessandro Foi, Tampere University of Technology, Finland
Luc Van Gool, ETH Zürich and KU Leuven, Belgium
Peter Gehler, University of Tübingen and MPI Intelligent Systems, Germany
Hiroto Honda, Toshiba Co.
Michal Irani, Weizmann Institute, Israel
Zhe Hu, Light.co
Sing Bing Kang, Microsoft Research, US
Kyoung Mu Lee, Seoul National University, South Korea
Chen Change Loy, Chinese University of Hong Kong
Vladimir Lukin, National Aerospace University, Ukraine
Kai-Kuang Ma, Nanyang Technological University, Singapore
Vasile Manta, Technical University of Iasi, Romania
Yasuyuki Matsushita, Osaka University, Japan
Peyman Milanfar, Google and UCSC, US
Yusuke Monno, Tokyo Institute of Technology, Japan
Hajime Nagahara, Kyushu University, Japan
Vinay P. Namboodiri, IIT Kanpur, India
Sebastian Nowozin, Microsoft Research Cambridge, UK
Aleksandra Pizurica, Ghent University, Belgium
Fatih Porikli, Australian National University, NICTA, Australia
Stefan Roth, TU Darmstadt, Germany
Aline Roumy, INRIA, France
Jordi Salvador, Technicolor, Germany
Nicu Sebe, University of Trento, Italy
Boxin Shi, National Institute of Advanced Industrial Science and Technology (AIST), Japan
Sabine Süsstrunk, EPFL, Switzerland
Hugues Talbot, Université Paris Est, France
Yu-Wing Tai, SenseTime
Robby T. Tan, Yale-NUS College, Singapore
Masayuki Tanaka, Tokyo Institute of Technology, Japan
Radu Timofte, ETH Zürich, Switzerland
Chih-Yuan Yang, UC Merced, US
Ming-Hsuan Yang, University of California at Merced, US
Qingxiong Yang, Didi Chuxing, China
Lei Zhang, The Hong Kong Polytechnic University
Wangmeng Zuo, Harbin Institute of Technology, China


*** We are looking for sponsors for incentive prizes. ***


Email: radu.timofte at vision.ee.ethz.ch
Website: http://www.vision.ee.ethz.ch/ntire17/