[visionlist] CFP - CVPR 2021 "Learning from Limited and Imperfect Data" (L2ID) Workshop & Challenges

Tatiana Tommasi tommasi.t at gmail.com
Fri Feb 5 12:48:06 -04 2021


*Learning from Limited and Imperfect Data (L2ID) Workshop & Challenges*

In conjunction with the Computer Vision and Pattern Recognition Conference
(CVPR) 2021
June 19-25 2021, Virtual Online

https://l2id.github.io/

******************************
*CALL FOR PAPERS & CHALLENGE PARTICIPATION*

Learning from limited or imperfect data (L^2ID) refers to a variety of
studies that address challenging pattern recognition tasks by learning
from limited, weak, or noisy supervision. Supervised learning methods,
including deep convolutional neural networks, have significantly improved
performance on many computer vision problems. However, these approaches
are notoriously "data hungry", which often makes them impractical in
real-world industrial applications. The scarcity of labeled data becomes
even more severe for visual classes whose annotation requires expert
knowledge (e.g., medical imaging), for classes that rarely occur, and for
object detection and instance segmentation tasks, where labeling demands
greater effort. Many efforts have been made to improve robustness in
these settings. The goal of this workshop is to bring together
researchers to discuss emerging technologies for visual learning with
limited or imperfectly labeled data.

This year we will host two groups of challenges: localization and
few-shot classification. See the website for the full list of L2ID
challenges:

Localization:
Track 1 - Weakly-Supervised Semantic Segmentation
Track 2 - Weakly-Supervised Product Detection and Retrieval
Track 3 - Weakly-Supervised Object Localization
Track 4 - High-Resolution Human Parsing

Few Shot Classification:
Track 1 - Cross Domain, small scale
Track 2 - Cross Domain, large scale
Track 3 - Cross Domain, larger number of classes

******************************
*TOPICS*

• Few-shot learning for image classification, object detection, etc.
• Cross-domain few-shot learning
• Weakly-/semi-supervised learning algorithms
• Zero-shot learning and learning in the “long-tail” scenario
• Self-supervised learning and unsupervised representation learning
• Learning with noisy data
• Any-shot learning – transitioning between few-shot, mid-shot, and
many-shot training
• Optimal data and source selection for effective meta-training with a
known or unknown set of target categories
• Data augmentation
• New datasets and metrics to evaluate the benefit of such methods
• Real world applications such as object semantic
segmentation/detection/localization, scene parsing, video processing (e.g.
action recognition, event detection, and object tracking)

This is not a closed list; we welcome other interesting and relevant
research on L^2ID.

******************************
*IMPORTANT DATES*

Paper submission deadline: March 25th, 2021
Notification to authors: April 8th, 2021
Camera-ready deadline: April 20th, 2021

Contributions can take two formats:
- Extended Abstracts of max 4 pages (excluding references)
- Papers of the same length as CVPR submissions

We encourage authors who want to present and discuss their ongoing work
to choose the Extended Abstract format.
According to the CVPR rules, extended abstracts will not count as archival.

Submissions should be uploaded through CMT:
https://cmt3.research.microsoft.com/LLID2021

******************************
*WORKSHOP ORGANIZERS:*
Zsolt Kira (Georgia Tech, USA)
Shuai (Kyle) Zheng (Dawnlight Technologies Inc, USA)
Noel C. F. Codella (Microsoft, USA)
Yunchao Wei (University of Technology Sydney, AU)
Tatiana Tommasi (Politecnico di Torino, IT)
Ming-Ming Cheng (Nankai University, CN)
Judy Hoffman (Georgia Tech, USA)
Antonio Torralba (MIT, USA)
Xiaojuan Qi (University of Hong Kong, HK)
Sadeep Jayasumana (Google, USA)
Hang Zhao (MIT, USA)
Liwei Wang (Chinese University of Hong Kong, HK)
Yunhui Guo (UC Berkeley/ICSI, USA)
Lin-Zhuo Chen (Nankai University, CN)