<div dir="ltr"><div><b>3rd Workshop on</b> <b>Learning with Limited Labelled Data for Image and Video Understanding, (L3D-IVU) </b>in conjunction with IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.</div><div><br></div><div><br></div><div>*** FULL PAPER SUBMISSION DEADLINE: March 6th 2024 23:59 PST ***</div><div><div><br></div><div>Workshop Website: <a href="https://sites.google.com/view/l3divu2024/call-for-papers" target="_blank">https://sites.google.com/view/l3divu2024/call-for-papers</a></div><div>CMT3 Submission: <a href="https://cmt3.research.microsoft.com/L3DIVUCVPR2024/" target="_blank"><span style="box-sizing:border-box;color:rgb(34,34,34);text-decoration:none;font-family:"Open Sans";text-align:justify"><u>https://cmt3.research.microsoft.com/L3DIVUCVPR2024/</u></span><span style="box-sizing:border-box;font-family:"Open Sans";font-size:17.333334px;text-align:justify;color:rgb(0,0,0)"> </span></a></div><div><br></div><div><b>CALL FOR PAPERS:</b></div><div><p dir="ltr" style="text-align:justify"><span style="font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal">We encourage submissions that are under one of the topics of interest, but also we welcome other interesting and relevant research for learning with limited labelled data.</span></p><ul style="list-style-type:square;margin-left:0px;margin-right:0px;padding:0px"><li dir="ltr" style="margin-left:15pt"><ul style="list-style-type:square;box-sizing:border-box;padding:0px;margin:6px 0px 0px;color:rgb(0,0,0);font-family:sans-serif"><li dir="ltr" style="margin:0px 0px 0px 15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0;padding-top:0px"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span style="box-sizing:border-box">Few-Shot classification, detection and segmentation in still images and video, including objects, actions, scenes and object tracking.</span></p></li><li dir="ltr" style="margin:6px 0px 0px 15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span style="box-sizing:border-box">Zero-shot learning in video understanding.</span></p></li><li dir="ltr" style="margin:6px 0px 0px 15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span style="box-sizing:border-box">Video and language modelling.</span></p></li><li dir="ltr" style="margin:6px 0px 0px 
15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span style="box-sizing:border-box">Self supervised Learning in video related tasks.</span></p></li><li dir="ltr" style="margin:6px 0px 0px 15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span style="box-sizing:border-box">Weakly/Semi supervised learning in video understanding.</span></p></li><li dir="ltr" style="margin:6px 0px 0px 15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span style="box-sizing:border-box">Transfer Learning.</span></p></li><li dir="ltr" style="margin:6px 0px 0px 15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span style="box-sizing:border-box">Open-set learning.</span></p></li><li dir="ltr" style="margin:6px 0px 0px 15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span style="box-sizing:border-box">New benchmarks and metrics.</span></p></li><li dir="ltr" style="margin:6px 0px 0px 15pt;box-sizing:border-box;font-variant-ligatures:none;outline:currentcolor;text-decoration:inherit;color:rgb(33,33,33);font-style:inherit;font-family:"Open Sans";line-height:0;padding-bottom:0px"><p dir="ltr" role="presentation" style="box-sizing:border-box;font-variant-ligatures:none;margin:0px 0px 0px 0pt;outline:currentcolor;text-decoration:inherit;font-style:inherit;line-height:1.6;padding-top:0px;padding-bottom:0px;padding-left:0pt;text-align:justify;text-indent:0pt"><span 
style="box-sizing:border-box;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal">Real-world applications discussing the societal impact of few-shot learning.</span></p></li></ul></li></ul><p dir="ltr"><span style="font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal">Accepted papers will be presented at the poster session, some as orals and there will be paper/s awarded best paper award.</span></p><p dir="ltr"><span style="font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal"><br></span></p><p dir="ltr"><span style="font-family:"Open Sans";vertical-align:baseline"><strong>Submission Guidelines:</strong></span></p><ul style="list-style-type:square;margin-left:0px;margin-right:0px;padding:0px"><li dir="ltr" style="margin-left:15pt"><p dir="ltr" style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">We accept submissions of </span><span style="color:rgb(0,0,0);font-family:"Open Sans""><strong>max 8 pages</strong></span><span style="color:rgb(0,0,0)"> (excluding references). We encourage authors to submit 4 page work as well.</span></p></li><li dir="ltr" style="margin-left:15pt"><p dir="ltr" style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">We accept dual submissions to </span><span style="color:rgb(0,0,0);font-family:"Open Sans""><strong>CVPR 2024</strong></span><span style="color:rgb(0,0,0)"> and </span><span style="color:rgb(0,0,0);font-family:"Open Sans""><strong>L3D-IVU 2024</strong></span><span style="color:rgb(0,0,0)">.</span></p></li><li dir="ltr" style="margin-left:15pt"><p dir="ltr" style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">Submitted manuscripts should follow the </span><span style="text-decoration:underline"><a href="https://github.com/cvpr-org/author-kit/archive/refs/tags/CVPR2024-v2.zip" target="_blank">CVPR 2024 paper template</a></span><span style="color:rgb(0,0,0)">.</span></p></li></ul><ul style="list-style-type:square;margin-left:0px;margin-right:0px;padding:0px"><li dir="ltr" style="margin-left:15pt"><p dir="ltr" style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">Submissions will be rejected without review if they:</span></p><ol style="margin-left:0px;margin-right:0px;padding:0px"><li dir="ltr" style="margin-left:15pt"><p dir="ltr" style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">Contain more than 8 pages (excluding references).</span></p></li><li dir="ltr" style="margin-left:15pt"><p dir="ltr" style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">Violate the double-blind policy.</span></p></li><li dir="ltr" style="margin-left:15pt"><p dir="ltr" 
style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">Violate the dual-submission policy for papers with more than 4 pages excluding references.</span></p></li></ol></li></ul><ul style="list-style-type:square;margin-left:0px;margin-right:0px;padding:0px"><li style="margin-left:15pt"><p style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">The accepted papers will be linked at the workshop webpage. It will also be in the main conference proceedings if the authors agree (this option is valid only for </span><span style="color:rgb(0,0,0);font-family:"Open Sans""><strong>full-length papers not published at CVPR 2024</strong></span><span style="color:rgb(0,0,0)">)</span></p></li><li style="margin-left:15pt"><p style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><span style="color:rgb(0,0,0)">Papers will be peer reviewed under double-blind policy, and must be submitted online through the CMT submission system.</span></p><p style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><br></p><p style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px"><br></p><p style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px">Best regards,</p><p style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px">Mennatullah Siam, PhD</p><p style="border-color:currentcolor;border-style:none;border-width:medium;line-height:1.38;margin-bottom:0px;margin-left:0px;margin-top:0px;padding:0px">Ontario Tech University, Canada.</p></li></ul></div></div></div>