[visionlist] [CFP] CLIC: Workshop and Challenge on Learned Image Compression @ CVPR 2019
timofte.radu at gmail.com
Thu Feb 28 10:58:33 -04 2019
Apologies for cross-posting
CALL FOR PARTICIPANTS & PAPERS
CLIC: Workshop and Challenge on Learned Image Compression 2019
in conjunction with CVPR 2019, June 17, Long Beach, USA.
The domain of image compression has traditionally used approaches discussed
in forums such as ICASSP, ICIP and other very specialized venues like PCS,
DCC, and ITU/MPEG expert groups. This workshop and challenge will be the
first computer-vision event to explicitly focus on these fields. Many
techniques discussed at computer-vision meetings have relevance for lossy
compression. For example, super-resolution and artifact removal can be
viewed as special cases of the lossy compression problem where the encoder
is fixed and only the decoder is trained. But also inpainting,
colorization, optical flow, generative adversarial networks and other
probabilistic models have been used as part of lossy compression pipelines.
Lossy compression is therefore a topic that stands to benefit greatly from
the expertise of a large portion of the CVPR community.
We will be running two tracks in the challenge: low-rate compression, to be
judged on quality, and “transparent” compression, to be judged on bit rate.
For the low-rate compression track, there will be a bitrate
threshold that must be met. For the transparent track, there will be
several quality thresholds that must be met. In all cases, the submissions
will be judged based on the aggregate results across the test set: the test
set will be treated as if it were a single ‘target’, instead of (for
example) evaluating bpp or PSNR on each image separately.
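To make the aggregate-versus-per-image distinction concrete, the sketch below pools bits and pixels over the whole set before dividing, rather than computing a rate per image. All names and numbers here are illustrative; this is not the challenge's evaluation tooling.

```python
# Sketch of aggregate vs. per-image bpp. Inputs are per-image
# compressed sizes in bytes and per-image pixel counts (w * h).

def aggregate_bpp(file_sizes_bytes, pixel_counts):
    """Total bits over total pixels: the test set is one 'target'."""
    total_bits = 8 * sum(file_sizes_bytes)
    total_pixels = sum(pixel_counts)
    return total_bits / total_pixels

def per_image_bpp(file_sizes_bytes, pixel_counts):
    """Per-image rates, shown only for contrast with the aggregate."""
    return [8 * b / p for b, p in zip(file_sizes_bytes, pixel_counts)]

# Illustrative two-image "test set":
sizes = [3_000, 12_000]           # bytes per compressed file
pixels = [512 * 512, 1024 * 768]  # pixels per image

print(aggregate_bpp(sizes, pixels))  # one number for the whole set
print(per_image_bpp(sizes, pixels))  # two numbers, not what is judged
```

Under aggregate judging, a submission may spend more bits on hard images and fewer on easy ones, as long as the pooled rate stays under the threshold.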
For the low-rate compression track, the requirement will be that the
compression rate is less than 0.15 bpp across the full test set. The maximum
size of the sum of all files will be released with the test set. In
addition, a decoder executable has to be submitted that can run in the
provided Docker environment and is capable of decompressing the submitted
files. We will impose reasonable limitations for compute and memory of the
decoder executable. The submissions in this track that are at or below that
bitrate threshold will then be evaluated for best PSNR, best MS-SSIM, and
best MOS from human raters.
For the transparent compression track, the requirement will be that the
compression quality is at least 40 dB (aggregated) PSNR; at least 0.993
(aggregated) MS-SSIM; and a reasonable quality level using the Butteraugli
measure (final values will be announced later). The submissions in this
track that are at or better than these quality thresholds will then be
evaluated for lowest total bitrate.
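One common reading of “aggregated” PSNR is to pool the squared error over every pixel in the test set before converting to decibels, rather than averaging per-image PSNR values. The sketch below follows that reading; the function name, the toy data, and the pooling choice itself are assumptions for illustration, not the challenge's official definition.

```python
import math

def aggregated_psnr(image_pairs, max_val=255.0):
    """PSNR with MSE pooled over all pixels of all images.

    image_pairs: iterable of (original, reconstruction), where each
    image is a flat sequence of pixel values in [0, max_val].
    """
    sq_err = 0.0
    n = 0
    for orig, recon in image_pairs:
        sq_err += sum((a - b) ** 2 for a, b in zip(orig, recon))
        n += len(orig)
    mse = sq_err / n
    return 10.0 * math.log10(max_val ** 2 / mse)

# Two tiny illustrative "images" and their reconstructions:
pairs = [([0.0, 255.0, 128.0], [1.0, 254.0, 128.0]),
         ([10.0, 20.0], [10.0, 22.0])]

print(aggregated_psnr(pairs) >= 40.0)  # threshold check for this track
```

Pooled aggregation weights every pixel equally, so large images influence the score more than small ones; averaging per-image PSNRs would instead weight every image equally.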
We provide the same two training datasets as we did last year: Dataset P
(“professional”) and Dataset M (“mobile”). The datasets were collected to be
representative of images commonly used in the wild, and contain around two
thousand images. The challenge will allow participants to train neural
networks or other methods on any amount of data (it should be possible to
train on the data we provide, but we expect participants to have access to
additional data, such as ImageNet).
Participants will need to submit a file for each test image.
*Substantial prizes* will be given to the winners of the challenges. This
is possible thanks to the sponsors.
To ensure that the decoder is not optimized for the test set, we will
require the teams to use one of the decoders submitted in the validation
phase of the challenge.
Regular Paper Track
We will have a short (4 pages) regular paper track, which allows
participants to share research ideas related to image compression. In
addition to the paper, we will host a poster session during which authors
will be able to discuss their work in more detail.
We encourage exploratory research which shows promising results in:
● Lossy image compression
● Quantization (learning to quantize; dealing with quantization in
optimization)
● Entropy minimization
● Image super-resolution for compression
● Compression artifact removal
● Inpainting (and compression by inpainting)
● Generative adversarial networks
● Perceptual metrics optimization and their applications to compression
And in particular, how these topics can improve image compression.
Challenge Paper Track
The challenge task participants are asked to submit a short paper (up to 4
pages) detailing the algorithms which they submitted as part of the
challenge.
Important Dates
All deadlines are 23:59:59 PST.
- December 17th, 2018 Challenge announcement and the training part of
the dataset released
- January 8th, 2019 The validation part of the dataset released, online
validation server is made available.
- April 8th, 2019 Deadline for regular paper submission.
- April 17th, 2019 The test set is released.
- April 17th, 2019 Regular paper decision notification.
- April 24th, 2019 The competition closes and participants are expected
to have submitted their solutions along with the compressed versions of the
test set.
- May 8th, 2019 Deadline for challenge paper submission and factsheets.
- May 15th, 2019 Results are released to the participants.
- May 22nd, 2019 Challenge paper decision notification
- May 30th, 2019 Camera ready deadline (all papers)
Anne Aaron (Netflix)
Aaron Van Den Oord (Deepmind)
Jyrki Alakuijala (Google)
George Toderici (Google)
Michele Covell (Google)
Wenzhe Shi (Twitter)
Radu Timofte (ETH Zurich)
Lucas Theis (Twitter)
Johannes Ballé (Google)
Eirikur Agustsson (Google / ETH Zurich)
Nick Johnston (Google)
Fabian Mentzer (ETH Zurich)