<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><p class="">The 4th UG2+ Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition.</p><p class="">In conjunction with CVPR 2021, June 19</p><p class="">Website: <a href="http://www.ug2challenge.org/" class="">http://www.ug2challenge.org/</a></p><p class="">Contact: <a href="mailto:cvpr2021.ug2challenge@gmail.com" class="">cvpr2021.ug2challenge@gmail.com</a></p><p class=""><strong class="">Track 1: Object Detection in Poor Visibility Environments</strong> [Register: <a href="https://forms.gle/Yf853QgPL5xCUy5k6" class="">https://forms.gle/Yf853QgPL5xCUy5k6</a>]</p><p class="">A dependable vision system must cope with the full spectrum of complex, unconstrained, and dynamically degraded outdoor environments. It is therefore important to study to what extent, and in what sense, such challenging visual conditions can be handled, with the goal of achieving robust visual sensing.</p><p class=""><strong class="">Track 2: Action Recognition from Dark Videos</strong> [Register: <a href="https://forms.gle/qJZ7rdt44iMmBgci6" class="">https://forms.gle/qJZ7rdt44iMmBgci6</a>]</p><p class="">Videos shot under adverse illumination are unavoidable in applications such as night-time surveillance and self-driving at night. It is therefore highly desirable to explore robust methods for coping with dark scenes.
It would be even better if such methods could leverage web videos, which are widely available and often shot under poor illumination.</p><p class=""><strong class="">Paper Track:</strong></p><ul class=""><li class="">Novel algorithms for robust object detection, segmentation, or recognition on outdoor mobility platforms such as UAVs, gliders, autonomous cars, and outdoor robots.</li><li class="">Novel algorithms for robust object detection and/or recognition in the presence of one or more real-world adverse conditions, such as haze, rain, snow, hail, dust, underwater scenes, low illumination, and low resolution.</li><li class="">Models and theories for explaining, quantifying, and optimizing the mutual influence between low-level computational photography tasks (image reconstruction, restoration, or enhancement) and various high-level computer vision tasks.</li><li class="">Novel physically grounded and/or explanatory models of the degradation and recovery processes underlying real-world images captured in complicated adverse visual conditions.</li><li class="">Novel evaluation methods and metrics for image restoration and enhancement algorithms, with a particular emphasis on no-reference metrics, since clean “ground truth” is rarely available for real outdoor images captured under adverse visual conditions.</li></ul><p class="">Submission: <a href="https://cmt3.research.microsoft.com/UG2CHALLENGE2021" class="">https://cmt3.research.microsoft.com/UG2CHALLENGE2021</a></p><p class=""><strong class="">Important Dates:</strong></p><ul class=""><li class="">Paper submission: April 5, 2021 (11:59 PM PST)</li><li class="">Camera-ready deadline: April 16, 2021 (11:59 PM PST)</li><li class="">Challenge result submission: May 1, 2021 (11:59 PM PST)</li><li class="">Winner announcement: May 20, 2021 (11:59 PM PST)</li><li class="">CVPR workshop: June 19, 2021 (full day)</li></ul><p
class=""><strong class="">Speakers:</strong></p><ul class=""><li class="">Raquel Urtasun (University of Toronto, Uber ATG)</li><li class="">Peyman Milanfar (Google Research)</li><li class="">Chelsea Finn (Stanford University, Google)</li><li class="">Stanley H. Chan (Purdue University)</li><li class="">Yunchao Wei (University of Technology Sydney)</li><li class="">Bihan Wen (Nanyang Technological University (NTU), Singapore)</li><li class="">Sifei Liu (NVIDIA)</li><li class="">Shanghang Zhang (University of California, Berkeley)</li></ul><p class=""><strong class="">Organizers:</strong></p><ul class=""><li class="">Wuyang Chen (UT Austin)</li><li class="">Zhangyang Wang (UT Austin)</li><li class="">Vishal M. Patel (Johns Hopkins University)</li><li class="">Jiaying Liu (Peking University)</li><li class="">Walter J. Scheirer (University of Notre Dame)</li><li class="">Danna Gurari (UT Austin)</li><li class="">Wenqi Ren (Chinese Academy of Sciences)</li><li class="">Shalini De Mello (NVIDIA)</li><li class="">Wenhan Yang (City University of Hong Kong)</li><li class="">Yuecong Xu (Nanyang Technological University, Singapore)</li><li class="">Jianxiong Yin (NVIDIA AI Tech Center)</li></ul></body></html>