<div dir="ltr"><b>Learning from Limited and Imperfect Data (L2ID) Workshop & Challenges</b><br><br>In conjunction with the Computer Vision and Pattern Recognition Conference (CVPR) 2021<br>June 19-25 2021, Virtual Online<br><br><a href="https://l2id.github.io/" target="_blank">https://l2id.github.io/</a><br><br>******************************<br><b>CALL FOR PAPERS & CHALLENGE PARTICIPATION</b><br><br>Learning
from limited or imperfect data (L^2ID) refers to a variety of studies
that attempt to address challenging pattern recognition tasks by
learning from limited, weak, or noisy supervision. Supervised learning methods, including Deep Convolutional Neural Networks, have significantly improved performance on many computer vision problems. However, these approaches are notoriously "data hungry", which often makes them impractical in real-world industrial applications. The scarcity of labeled data becomes even more severe for visual classes that require expert annotation (e.g., medical imaging), for classes that rarely occur, and for tasks such as object detection and instance segmentation, where labeling requires more effort. To address this problem, many approaches have been developed to improve robustness to limited or imperfect supervision. The goal of this workshop is to bring together researchers to discuss emerging technologies for visual learning with limited or imperfectly labeled data.<br><br>This year we will host two groups of challenges: localization and few-shot
classification. Check the website for details on all L2ID challenges:<br><br>Localization:<br>Track 1 - Weakly-Supervised Semantic Segmentation<br>Track 2 - Weakly-Supervised Product Detection and Retrieval<br>Track 3 - Weakly-Supervised Object Localization<br>Track 4 - High-Resolution Human Parsing<br><br>Few-Shot Classification:<br>Track 1 - Cross-domain, small scale<br>Track 2 - Cross-domain, large scale<br>Track 3 - Cross-domain, larger number of classes<br><br>******************************<br><b>TOPICS</b><br><br>• Few-shot learning for image classification, object detection, etc.<br>• Cross-domain few-shot learning<br>• Weakly-/semi-supervised learning algorithms<br>• Zero-shot learning; learning in the “long-tail” scenario<br>• Self-supervised learning and unsupervised representation learning<br>• Learning with noisy data<br>• Any-shot learning – transitioning between few-shot, mid-shot, and many-shot training<br>• Optimal data and source selection for effective meta-training with a known or unknown set of target categories<br>• Data augmentation<br>• New datasets and metrics to evaluate the benefit of such methods<br>• Real-world applications such as object semantic segmentation/detection/localization, scene parsing, and video processing (e.g., action recognition, event detection, and object tracking)<br><br>This is not a closed list; we welcome other interesting and relevant research related to L^2ID.<br><br>******************************<br><b>IMPORTANT DATES</b><br><br>Paper submission deadline: <b>March 25th, 2021</b><br>Notification to authors: April 8th, 2021<br>Camera-ready deadline: April 20th, 2021<br><br>Contributions can take one of two formats:<br>- Extended abstracts of at most 4 pages (excluding references)<br>- Papers of the same length as CVPR submissions<br><br>We encourage authors who want to present and discuss their ongoing work to choose the Extended Abstract format. In accordance with the CVPR rules, extended abstracts will not count as archival publications.<br><br>Submissions should be uploaded through CMT: <a href="https://cmt3.research.microsoft.com/LLID2021" target="_blank">https://cmt3.research.microsoft.com/LLID2021</a><br><br>******************************<br><b>WORKSHOP ORGANIZERS:</b><br>Zsolt Kira (Georgia Tech, USA)<br>Shuai (Kyle) Zheng (Dawnlight Technologies Inc, USA)<br>Noel C. F. Codella (Microsoft, USA)<br>Yunchao Wei (University of Technology Sydney, AU)<br>Tatiana Tommasi (Politecnico di Torino, IT)<br>Ming-Ming Cheng (Nankai University, CN)<br>Judy Hoffman (Georgia Tech, USA)<br>Antonio Torralba (MIT, USA)<br>Xiaojuan Qi (University of Hong Kong, HK)<br>Sadeep Jayasumana (Google, USA)<br>Hang Zhao (MIT, USA)<br>Liwei Wang (Chinese University of Hong Kong, HK)<br>Yunhui Guo (UC Berkeley/ICSI, USA)<br>Lin-Zhuo Chen (Nankai University, CN)</div>