[visionlist] ECCV2020 Workshop On Real-world Recognition (RLQ-TOD20)

yuqian zhou zhouyuqian133 at gmail.com
Mon May 11 21:42:18 -04 2020


Workshop Website: https://rlq-tod.github.io/

How robust are current state-of-the-art recognition and detection
algorithms in non-ideal visual environments? While visual recognition
research has made tremendous progress in recent years, most models are
trained, applied, and evaluated on high-quality (HQ) visual data. However,
in many emerging applications such as robotics and autonomous driving, the
performance of visual sensing and analytics is largely jeopardized by
low-quality (LQ) visual data acquired from unconstrained environments,
which suffer from various types of degradation such as low resolution,
noise, occlusion, motion blur, poor contrast, brightness, and sharpness,
and out-of-focus blur. We are organizing the 2nd RLQ workshop in
conjunction with ECCV 2020 to provide an integrated forum for both
low-level and high-level vision researchers to review recent progress on
robust recognition models for LQ visual data and on novel image
restoration algorithms. You can contribute to our workshop in three ways:

[1. Paper Submission] <https://rlq-tod.github.io/callforpapers.html> Researchers
are encouraged to submit either a full paper or a work-in-progress
abstract to our workshop.

[2. TOD Challenge] <https://rlq-tod.github.io/challenge1.html> In
conjunction with the workshop, we will hold the 1st Tiny Object Detection
(TOD) Challenge. The challenge aims to establish a baseline for tiny
person detection by presenting a new benchmark and a variety of
approaches, opening up a promising direction for tiny object detection in
the wild. The new benchmark, named TinyPerson, spans challenges including
extremely low resolution, background diversity, multiple objects, partial
invisibility, and complex scenes far beyond those in existing datasets.

[3. UDC Challenge] <https://rlq-tod.github.io/challenge2.html> We will also
hold the first image restoration challenge on Under-Display Camera (UDC).
The trend toward full-screen devices encourages placing the camera behind
the screen. Removing the bezel and centering the camera under the screen
yields a larger display-to-body ratio and improves eye contact in video
chat, but it also degrades image quality. We therefore treat UDC imaging
as a novel real-world single-image restoration problem. We will release
the UDC dataset for training and testing and rank the algorithms by their
image recovery performance.

Challenge participants will be eligible for prizes and possible internship
opportunities. Top-ranked authors will be invited to co-author the
challenge report and to contribute an additional workshop paper describing
their algorithms.

For inquiry, please send emails to one of the following addresses:

   - *official email:* rlqtodeccvw2020 at 163.com
   - *Mr. Yuqian Zhou:* zhouyuqian133 at gmail.com
   - *Dr. Zhenjun Han:* hanzhj at ucas.ac.cn


--------------------------------------------------------------------------------------------------
Ph.D. Candidate, ECE Department, Beckman Institute
University of Illinois at Urbana-Champaign (UIUC)
Phone: +1 (217) 607 3041
Email: zhouyuqian133 at gmail.com, yuqian2 at illinois.edu, yzhouas at ust.hk