<div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div>Apologies for cross-posting<br>*******************************<br><br>CALL FOR PARTICIPANTS & PAPERS<br><br>CLIC: Workshop and Challenge on Learned Image Compression 2018
<br>in conjunction with CVPR 2018, June 18, Salt Lake City, USA.<br>
<br>Website: <a href="http://www.compression.cc/">http://www.compression.cc/</a>
<br></div>
<br><br>Motivation
<br>
<br>The domain of image compression has traditionally used approaches
discussed in forums such as ICASSP, ICIP and other very specialized
venues like PCS, DCC, and ITU/MPEG expert groups. This workshop and
challenge will be the first computer-vision event to explicitly focus on
these fields. Many techniques discussed at computer-vision meetings
have relevance for lossy compression. For example, super-resolution and
artifact removal can be viewed as special cases of the lossy compression
problem where the encoder is fixed and only the decoder is trained. But
also inpainting, colorization, optical flow, generative adversarial
networks and other probabilistic models have been used as part of lossy
compression pipelines. Lossy compression is therefore a potential topic
that can benefit a lot from a large portion of the <span class="gmail-m_-1343456875688542021gmail-il">CVPR</span> community.
<br>
<br>Recent advances in machine learning have led to increased interest in applying neural networks to the problem of compression. At CVPR 2017, for example, one of the oral presentations discussed compression using recurrent convolutional networks. To foster growth in this area, this workshop will not only encourage further development but also establish baselines, educate, and propose a common benchmark and evaluation protocol. This is crucial: without a benchmark and a common way to compare methods, it is very difficult to measure progress.
<br>
<br>We propose hosting an image-compression challenge that specifically targets methods which have traditionally been overlooked, with a focus on neural networks (though traditional approaches are also welcome). Such methods typically consist of an encoder subsystem that takes images and produces representations more easily compressed than the pixel representation (e.g., a stack of convolutions producing an integer feature map), followed by an arithmetic coder. The arithmetic coder uses a probabilistic model of the integer codes to generate a compressed bit stream, which makes up the file to be stored or transmitted. Decompressing this bit stream requires two additional steps: first, an arithmetic decoder, which shares a probability model with the encoder and losslessly reconstructs the integers produced by the encoder; second, another decoder, which produces a reconstruction of the original image.
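The pipeline above can be sketched end to end. This is a minimal toy illustration under stated assumptions, not the challenge's reference implementation: the learned encoder is replaced by uniform scalar quantization, and zlib stands in for the arithmetic coder (any lossless entropy coder fits that slot).

```python
import zlib

import numpy as np


def encode(image, levels=16):
    """Toy encoder: map pixels to a small integer code map.

    In a learned codec, a stack of convolutions would produce the
    integer feature map instead of this fixed quantizer.
    """
    codes = np.round(image.astype(np.float32) / 255.0 * (levels - 1)).astype(np.uint8)
    # Stand-in for the arithmetic coder: any lossless entropy coder
    # can turn the integer codes into a compressed bit stream.
    bitstream = zlib.compress(codes.tobytes())
    return bitstream, codes.shape


def decode(bitstream, shape, levels=16):
    """Toy decoder: losslessly recover the integer codes, then map
    them back to pixel space (the lossy reconstruction)."""
    codes = np.frombuffer(zlib.decompress(bitstream), dtype=np.uint8).reshape(shape)
    return np.round(codes.astype(np.float32) / (levels - 1) * 255.0).astype(np.uint8)


rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bitstream, shape = encode(image)
recon = decode(bitstream, shape)
# Lossy overall: the integer codes round-trip exactly, the pixels only approximately.
max_err = int(np.abs(recon.astype(int) - image.astype(int)).max())
```

Note that only the entropy-coding stage is lossless; all of the rate/distortion trade-off lives in the encoder and decoder mappings, which is what challenge participants are expected to learn.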
<br>
<br>Many authors in the computer-vision community will be familiar with a multitude of configurations that can act as either the encoder or the decoder, but probably few are familiar with the implementation of an arithmetic coder/decoder. As part of our challenge, we will therefore release a reference arithmetic coder/decoder, allowing researchers to focus on the parts of the system in which they are experts.
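The role of the shared probability model can be made concrete: an ideal arithmetic coder spends about -log2 p(c) bits on each code c, so the achievable file size is the model's cross-entropy on the code map. A small sketch (the 16-symbol code alphabet and the static empirical model are arbitrary choices for illustration):

```python
import numpy as np


def ideal_code_length_bits(codes, probs):
    """Bits an ideal arithmetic coder spends encoding `codes`
    under the probability model `probs`: the sum of -log2 p(c)."""
    return float(-np.log2(probs[codes]).sum())


rng = np.random.default_rng(0)
codes = rng.integers(0, 16, size=10_000)   # an integer code map, flattened
counts = np.bincount(codes, minlength=16)  # empirical (static) model of the codes
probs = counts / counts.sum()
bits_per_symbol = ideal_code_length_bits(codes, probs) / codes.size
# Near-uniform 16-symbol codes cost close to log2(16) = 4 bits each;
# a learned encoder aims to produce codes with much lower entropy.
```

This is why the encoder matters even though the entropy coder is lossless: the better the produced codes match a low-entropy, well-modeled distribution, the shorter the bit stream.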
<br>While having a compression algorithm is an interesting feat in itself, it does not mean much unless its results compare well against similar algorithms and established baselines on realistic benchmarks. To ensure realism, we have collected a set of images that is far more representative of the types of images widely available today (unlike the well-established benchmarks built on the Kodak PhotoCD images, which have a resolution of 768x512, or Tecnick, whose images are around 1.44 megapixels). We will also provide performance results from current state-of-the-art compression systems, such as WebP and BPG, as baselines.
<br>
<br><br>Challenge Tasks
<br>
<br>We provide two datasets: Dataset P (“professional”) and Dataset M (“mobile”). Each contains thousands of images and was collected to be representative of images commonly used in the wild.
<br>
<br>The challenge will allow participants to train neural networks or
other methods on any amount of data (it should be possible to train on
the data we provide, but we expect participants to have access to
additional data, such as ImageNet).
<br>Participants will need to submit a decoder executable that runs in the provided Docker environment and is capable of decompressing the submission files. We will impose reasonable limits on the compute and memory available to the decoder executable.
<br>
<br>We will rank participants (and baseline image compression methods: WebP, JPEG 2000, and BPG) on multiple criteria: (a) decoding speed; (b) a proxy perceptual metric (e.g., MS-SSIM on the Y channel); and (c) scores provided by human raters. The overall winner will be decided by a panel, whose goal is to determine the best compromise between runtime performance and bitrate savings.
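For reference, the bitrate side of that compromise is conventionally reported as bits per pixel (bpp), computed from the compressed file size. A quick sketch (the example numbers are illustrative only):

```python
def bits_per_pixel(compressed_bytes: int, width: int, height: int) -> float:
    """Rate of a compressed image in bits per pixel (bpp)."""
    return compressed_bytes * 8 / (width * height)


# A 1.44-megapixel image (e.g., 1200x1200) compressed to 27,000 bytes:
rate = bits_per_pixel(27_000, 1200, 1200)  # 0.15 bpp
```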
<br>
<br>
<br>
<br>Regular Paper Track
<br>
<br>We will have a regular paper track for short papers (4 pages), allowing participants to share research ideas related to image compression. In addition, we will host a poster session during which authors can discuss their work in more detail.
<br>We encourage exploratory research that shows promising results in:
<br>● Lossy image compression
<br>● Quantization (learning to quantize; dealing with quantization in optimization)
<br>● Entropy minimization
<br>● Image super-resolution for compression
<br>● Deblurring
<br>● Compression artifact removal
<br>● Inpainting (and compression by inpainting)
<br>● Generative adversarial networks
<br>● Perceptual metrics optimization and their applications to compression
<br>And in particular, how these topics can improve image compression.
<br>
<br>
<br>Challenge Paper Track
<br>
<br>Participants in the challenge are asked to submit materials detailing the algorithms they entered in the challenge, and are furthermore invited to submit a paper describing their approach.
<br>
<br>
<br>
Important Dates
<br><ul><li>December 22nd, 2017
Challenge announcement and the training part of the dataset released
</li><li>January 15th, 2018
The validation part of the dataset released; the online validation server made available
</li><li>April 15th, 2018
The test set is released
</li><li>April 22nd, 2018
The competition closes; participants are expected to have submitted their decoder and compressed images
</li><li>April 26th, 2018
Deadline for paper submission
</li><li>May 29th, 2018
Release of paper reviews and challenge results
</li></ul><br>Forum<br><br>Please check out the discussion forum of the challenge for announcements and discussions related to the challenge:<br><a href="https://groups.google.com/forum/#!forum/clic-2018">https://groups.google.com/forum/#!forum/clic-2018</a><br><br><br>Speakers<br>
<br></div><div style="margin-left:40px">Ramin Zabih (Google)
<br>Oren Rippel (WaveOne)
<br>Jim Bankoski (Google)
<br>Jens Ohm (RWTH Aachen)
<br></div><br>
<br>
Organizers<br>
<br><div style="margin-left:40px">William T. Freeman (MIT / Google)
<br>George Toderici (Google)
<br>Michele Covell (Google)
<br>Wenzhe Shi (Twitter)
<br>Radu Timofte (ETH Zurich)
<br>Lucas Theis (Twitter)
<br>Johannes Ballé (Google)
<br>Eirikur Agustsson (ETH Zurich)
<br>Nick Johnston (Google)<br></div><br><br><div>Sponsors<br><br></div><div style="margin-left:40px">Google<br></div><div style="margin-left:40px">Twitter<br></div><div style="margin-left:40px">Netflix<br></div><div style="margin-left:40px">Disney<br></div><div style="margin-left:40px">Amazon<br></div><br></div></div></div>