Call for Challenge Participants & Call for Papers

The 3rd UG2+ Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition
In conjunction with CVPR 2020, June 19, Seattle, USA

Website: http://cvpr2020.ug2challenge.org/index.html
Contact: cvpr2020.ug2challenge@gmail.com

Track 1: Object Detection in Poor Visibility Environments [Register: https://forms.gle/dceUY9hyEsBADzuM6]
A dependable vision system must reckon with the entire spectrum of complex, unconstrained, dynamic, and degraded outdoor environments. It is highly desirable to study to what extent, and in what sense, such challenging visual conditions can be coped with, toward the goal of robust visual sensing.
1) Object Detection in the Hazy & Rainy Condition
2) Face Detection in the Low-Light Condition
3) Sea Life Detection in the Underwater Condition

Track 2: Face Verification on FlatCam Images [Register: https://forms.gle/qmgESBvqA2pPEq28A]
Although FlatCam lensless cameras integrate easily into numerous computer vision applications, their images contain noise and artifacts unseen in standard lens-based cameras, which degrade recognition performance.
This track explores new algorithms that better integrate lensless cameras into the face verification task.
1) Image Enhancement for FlatCam Face Verification
2) Image Reconstruction for FlatCam Face Verification
3) End-to-End Face Verification on FlatCam Measurements

Paper Track:
• Novel algorithms for robust object detection, segmentation, or recognition on outdoor mobility platforms, such as UAVs, gliders, autonomous cars, and outdoor robots.
• Novel algorithms for robust object detection and/or recognition in the presence of one or more real-world adverse conditions, such as haze, rain, snow, hail, dust, underwater scenes, low illumination, and low resolution.
• Models and theories for explaining, quantifying, and optimizing the mutual influence between low-level computational photography tasks (image reconstruction, restoration, or enhancement) and various high-level computer vision tasks.
• Novel physically grounded and/or explanatory models of the degradation and recovery processes underlying real-world images captured in complicated adverse visual conditions.
• Novel evaluation methods and metrics for image restoration and enhancement algorithms, with a particular emphasis on no-reference metrics, since clean “ground truth” is rarely available for real outdoor images captured in adverse visual conditions.

Submission: https://cmt3.research.microsoft.com/UG2CHALLENGE2020

Important Dates:
• Paper submission: March 20, 2020 (11:59 PM PST)
• Challenge result submission: April 8, 2020 (11:59 PM PST)
• Winner & paper announcement: April 10, 2020 (11:59 PM PST)
• Camera-ready deadline: April 16, 2020 (11:59 PM PST)
• CVPR workshop: June 19, 2020 (full day)

Speakers:
• Judy Hoffman (Georgia Institute of Technology)
• Xiaoming Liu (Michigan State University)
• Vishal M. Patel (Johns Hopkins University)
• Zhiding Yu (NVIDIA)
• Dengxin Dai (ETH Zurich)
• Bihan Wen (Nanyang Technological University (NTU), Singapore)
• Honghui Shi (University of Oregon)
• Xi Yin (Microsoft Cloud and AI)

Organizers:
• Zhangyang Wang (Texas A&M University)
• Walter J. Scheirer (University of Notre Dame)
• Ashok Veeraraghavan (Rice University)
• Jiaying Liu (Peking University)
• Risheng Liu (Dalian University of Technology)
• Wenqi Ren (Chinese Academy of Sciences)
• Wenhan Yang (City University of Hong Kong, Hong Kong)
• Yingyan Lin (Rice University)
• Ye Yuan (Texas A&M University)
• Jasper Tan (Rice University)
• Wuyang Chen (Texas A&M University)