<div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div><span class="gmail-m_-1750993150304217790gmail-m_6389319195800074415gmail-m_-8133923162122058344gmail-m_-1763716027949905101gmail-m_6936151042752791574gmail-m_5552105694281904806gmail-m_408419128918831100gmail-m_1948246666769576665gmail-m_2358868591110292323gmail-m_1915704743853826484gmail-m_-2402332150115530072gmail-im">Apologies for cross-posting<br>*******************************<br></span><br><span class="gmail-m_-1750993150304217790gmail-m_6389319195800074415gmail-m_-8133923162122058344gmail-m_-1763716027949905101gmail-m_6936151042752791574gmail-m_5552105694281904806gmail-m_408419128918831100gmail-m_1948246666769576665gmail-m_2358868591110292323gmail-m_1915704743853826484gmail-m_-2402332150115530072gmail-im">CALL FOR PARTICIPANTS & PAPERS<br><br></span>CLIC: Workshop and Challenge on Learned Image Compression 2019
<br>in conjunction with <span class="gmail-m_-1750993150304217790gmail-il">CVPR</span> 2019, June 16, Long Beach, USA.<br>
<br>Website: <a href="http://www.compression.cc/" target="_blank">http://www.compression.cc/</a>
<br></div>
<br>
<br>Motivation
<br>
<br>The domain of image compression has traditionally used approaches
discussed in forums such as ICASSP, ICIP and other very specialized
venues like PCS, DCC, and ITU/MPEG expert groups. This workshop and
challenge will be the first computer-vision event to explicitly focus on
these fields. Many techniques discussed at computer-vision meetings
have relevance for lossy compression. For example, super-resolution and
artifact removal can be viewed as special cases of the lossy compression
problem where the encoder is fixed and only the decoder is trained. But
also inpainting, colorization, optical flow, generative adversarial
networks and other probabilistic models have been used as part of lossy
compression pipelines. Lossy compression is therefore a potential topic
that can benefit a lot from a large portion of the CVPR community.
<br>
<br>Recent advances in machine learning have led to an increased
interest in applying neural networks to the problem of compression. At
CVPR 2017, for example, one of the oral presentations was discussing
compression using recurrent convolutional networks. In order to foster
more growth in this area, this workshop will not only try to encourage
more development but also establish baselines, educate, and propose a
common benchmark and protocol for evaluation. This is crucial, because
without a benchmark, a common way to compare methods, it will be very
difficult to measure progress.
<br>
<br>We propose hosting an image-compression challenge which specifically
targets methods which have been traditionally overlooked, with a focus
on neural networks (but also welcomes traditional approaches). Such
methods typically consist of an encoder subsystem, taking images and
producing representations which are more easily compressed than the
pixel representation (e.g., it could be a stack of convolutions,
producing an integer feature map), which is then followed by an
arithmetic coder. The arithmetic coder uses a probabilistic model of
integer codes in order to generate a compressed bit stream. The
compressed bit stream makes up the file to be stored or transmitted. In
order to decompress this bit stream, two additional steps are needed:
first, an arithmetic decoder, which has a shared probability model with
the encoder. This reconstructs (losslessly) the integers produced by the
encoder. The last step consists of another decoder producing a
reconstruction of the original image.
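
As an illustration of this structure, the sketch below wires the four stages together (encoder, entropy coder, entropy decoder, decoder). The `analysis` and `synthesis` transforms are trivial placeholders standing in for learned networks, and zlib stands in for an arithmetic coder driven by a learned probability model; none of these names or choices are part of the challenge.

```python
# Minimal sketch of the pipeline described above; not a reference implementation.
import zlib
import numpy as np

def analysis(image):
    # Placeholder "encoder": downsample and quantize to an integer feature map.
    return np.round(image[::2, ::2] / 8.0).astype(np.int16)

def synthesis(codes):
    # Placeholder "decoder": dequantize and upsample back to image space.
    return np.repeat(np.repeat(codes * 8.0, 2, axis=0), 2, axis=1)

def compress(image):
    codes = analysis(image)
    # Stand-in for arithmetic coding of the integer codes.
    return zlib.compress(codes.tobytes()), codes.shape

def decompress(bitstream, shape):
    # Losslessly recover the integer codes, then run the (placeholder) decoder.
    codes = np.frombuffer(zlib.decompress(bitstream), dtype=np.int16).reshape(shape)
    return synthesis(codes)

image = np.random.randint(0, 256, (512, 768)).astype(np.float32)
bitstream, shape = compress(image)
reconstruction = decompress(bitstream, shape)
print(len(bitstream) * 8 / image.size, "bpp")
```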
<br>
<br>In the computer vision community many authors will be familiar with a
multitude of configurations which can act as either the encoder and the
decoder, but probably few are familiar with the implementation of an
arithmetic coder/decoder. As part of our challenge, we therefore will
release a reference arithmetic coder/decoder in order to allow the
researchers to focus on the parts of the system for which they are
experts.
<br>
<br>While having a compression algorithm is an interesting feat by
itself, it does not mean much unless the results it produces compare
well against other similar algorithms and established baselines on
realistic benchmarks. In order to ensure realism, we have collected a
set of images which represent a much more realistic view of the types of
images which are widely available (unlike the well established
benchmarks which rely on the images from the Kodak PhotoCD, having a
resolution of 768x512, or Tecnick, which has images of around 1.44
megapixels). We will also provide the performance results from current
state-of-the-art compression systems as baselines, like WebP and BPG.
<br>
<br>Challenge Tasks
<br>
<br>We will be running two tracks on the the challenge: low-rate
compression, to judged on the quality, and “transparent” compression, to
be judged by the bit rate. For the low-rate compression track, there
will be a bitrate threshold that must be met. For the transparent track,
there will be several quality thresholds that must be met. In all
cases, the submissions will be judged based on the aggregate results
across the test set: the test set will be treated as if it were a single
‘target’, instead of (for example) evaluating bpp or PSNR on each image
separately.
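
To make the aggregate judging concrete, the sketch below shows one plausible way to pool results over the whole test set: total bits over total pixels for bpp, and a single PSNR computed from the squared error pooled over every pixel of every image. This is only an illustration under that assumption; the organizers' evaluation code is authoritative and may aggregate differently.

```python
# Illustrative aggregation over a whole test set (assumed pooling; not the official code).
import numpy as np

def aggregate_bpp(file_sizes_bytes, originals):
    # Total bits of all submitted files divided by total pixels of all images,
    # e.g. compared against the 0.15 bpp threshold of the low-rate track.
    total_bits = 8 * sum(file_sizes_bytes)
    total_pixels = sum(img.shape[0] * img.shape[1] for img in originals)
    return total_bits / total_pixels

def aggregate_psnr(originals, reconstructions):
    # Pool the squared error over all images before taking a single PSNR,
    # rather than averaging per-image PSNR values,
    # e.g. compared against the 40 dB threshold of the transparent track.
    total_sq_err = sum(np.sum((o.astype(np.float64) - r.astype(np.float64)) ** 2)
                       for o, r in zip(originals, reconstructions))
    total_values = sum(o.size for o in originals)
    return 10.0 * np.log10(255.0 ** 2 / (total_sq_err / total_values))
```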
<br>
<br>For the low-rate compression track, the requirement will be that the
compression is to less than 0.15 bpp across the full test set. The
maximum size of the sum of all files will be released with the test set.
In addition, a decoder executable has to be submitted that can run in
the provided Docker environment and is capable of decompressing the
submitted files. We will impose reasonable limitations for compute and
memory of the decoder executable. The submissions in this track that are
at or below that bitrate threshold will then be evaluated for best
PSNR, best MS-SSIM, and best MOS from human raters.
<br>
<br>For the transparent compression track, the requirement will be that
the compression quality is at least 40 dB (aggregated) PSNR; at least
0.993 (aggregated) MS-SSIM; and a reasonable quality level using the
Butteraugli measure (final values will be announced later). The
submissions in this track that are at or better than these quality
thresholds will then be evaluated for lowest total bitrate.
<br>
<br>We provide the same two training datasets as we did last year:
Dataset P (“professional”) and Dataset M (“mobile”). The datasets are
collected to be representative for images commonly used in the wild,
containing around two thousand images. The challenge will allow
participants to train neural networks or other methods on any amount of
data (it should be possible to train on the data we provide, but we
expect participants to have access to additional data, such as
ImageNet).
<br>
<br>Participants will need to submit a file for each test image.
<br>
<br>Prizes will given to the winners of the challenges. This is possible thanks to the sponsors.
<br>
<br>To ensure that the decoder is not optimized for the test set, we
will require the teams to use one of the decoders submitted in the
validation phase of the challenge.
<br>
<br>
<br>
<br>Regular Paper Track
<br>
<br>We will have a short (4 pages) regular paper track, which allows
participants to share research ideas related to image compression. In
addition to the paper, we will host a poster session during which
authors will be able to discuss their work in more detail.
<br>We encourage exploratory research which shows promising results in:
<br>● Lossy image compression
<br>● Quantization (learning to quantize; dealing with quantization in optimization)
<br>● Entropy minimization
<br>● Image super-resolution for compression
<br>● Deblurring
<br>● Compression artifact removal
<br>● Inpainting (and compression by inpainting)
<br>● Generative adversarial networks
<br>● Perceptual metrics optimization and their applications to compression
<br>And in particular, how these topics can improve image compression.
<br>
<br>
<br>Challenge Paper Track
<br>
<br>The challenge task participants are asked to submit a short paper
(up to 4 pages) detailing the algorithms which they submitted as part of
the challenge.
<br>
<br>
<br>Important Dates
<br>
<br>All deadlines are 23:59:59 PST.
<br>
<br><ul><li>December 17th, 2018 Challenge announcement and the training part of the dataset released
</li><li>January 8th, 2019 The validation part of the dataset released, online validation server is made available.
</li><li>March 15th, 2019 The test set is released.
</li><li>March 22th, 2019 The competition closes and participants are
expected to have submitted their solutions along with the compressed
versions of the test set.
</li><li>April 8th, 2019 Deadline for paper submission and factsheets.
</li><li>April 15th, 2019 Results are released to the participants.
</li><li>April 22rd, 2019 Paper decision notification
</li><li>April 30th, 2019 Camera ready deadline
</li></ul>
<br>
<br>Speakers (TBD):
<br>
<br><div style="margin-left:40px">Anne Aaron (Netflix)
<br>Aaron Van Den Oord (Deepmind)
<br>Jyrki Alakuijala (Google)
<br></div>
<br>
<br>Organizers:
<br>
<br><div style="margin-left:40px">George Toderici (Google)
<br>Michele Covell (Google)
<br>Wenzhe Shi (Twitter)
<br>Radu Timofte (ETH Zurich)
<br>Lucas Theis (Twitter)
<br>Johannes Ballé (Google)
<br>Eirikur Agustsson (ETH Zurich)
<br>Nick Johnston (Google)
<br>Fabian Mentzer (ETH Zurich)
<br></div>
<br>Sponsors:
<br>
<br><div style="margin-left:40px"> Google
<br> Twitter
<br> Nvidia
<br> Huawei
<br> Amazon
<br> Netflix
<br></div>
</div></div><div dir="ltr"><br>Website: <a href="http://www.compression.cc/" target="_blank">http://www.compression.cc/</a></div></div></div>