<div dir="ltr"><h2 id="gmail-m_-3106810551377782697gmail-m_3196870025715677083gmail-m_-3589439788517615718gmail-m_4997760088122589999gmail-m_-759102982283371852gmail-m_6192047104826197723gmail-m_-5041924545427444610gmail-m_7927850025186882731gmail-:1vb" style="margin:0px;padding:0px 10px 0px 0px;border:0px;font-variant-ligatures:no-contextual;font-variant-numeric:inherit;font-variant-east-asian:inherit;font-weight:400;font-stretch:inherit;line-height:inherit;display:inline;outline:none;color:rgb(32,33,36)"><font face="times new roman, serif" size="4">One week remains until the submission deadline for the upcoming IEEE T-MM special issue.</font></h2><font size="4"><span style="background-color:transparent;font-family:"times new roman",serif">The detailed dates and scope of the special issue are listed below. </span><span style="font-family:"times new roman",serif">Looking forward to your contribution!</span></font><div><font face="times new roman, serif"><br></font></div><div><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><a name="m_-3106810551377782697_m_3196870025715677083_m_-3589439788517615718_OLE_LINK5"><b><span lang="EN-US" style="font-size:14pt;font-family:"Times New Roman",serif">SUMMARY:</span></b></a><b><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif"></span></b></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Weakly supervised learning (WSL) aims to address fine-level image and video understanding tasks by learning from coarse-level human annotations. WSL is of particular importance in the big-data era, as it can dramatically reduce the human labor required to annotate structured visual/multimedia data, thus enabling machines to learn from much larger-scale data at the same annotation cost as conventional fully supervised learning methods. More importantly, when dealing with data from real-world application scenarios, such as medical imaging data, remote sensing data, and audio-visual data, fine-level manual annotations are very limited and difficult to obtain. Under these circumstances, WSL-based learning frameworks, especially WSL-based multi-modality/multi-task learning frameworks, can bring great benefits. Unfortunately, designing effective WSL systems is challenging due to the issues of “semantic unspecificity” and “instance ambiguity”: the former refers to settings where the semantic label is provided at the image level rather than at the level of specific instances, while the latter refers to the ambiguity in distinguishing a complete instance from an instance part or an instance cluster. Principled solutions to these problems remain under-studied. Nowadays, with the rapid development of advanced machine learning techniques, such as Graph Convolutional Networks, Capsule Networks, Transformers, Generative Adversarial Networks, and Deep Reinforcement Learning models, new opportunities have emerged for solving these problems and applying WSL to richer vision and multimedia tasks. </span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">This special issue aims to promote cutting-edge research along this direction and to offer a timely collection of works that benefit researchers and practitioners. 
We welcome high-quality original submissions addressing novel theoretical and practical aspects of WSL, as well as real-world applications based on WSL approaches.</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif"> </span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><b><span lang="EN-US" style="font-size:14pt;font-family:"Times New Roman",serif">SCOPE:</span></b></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Topics of interest include, but are not limited to:</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Multi-modality weakly supervised learning theory and framework;</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Multi-task weakly supervised learning theory and framework;</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 
18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Robust learning theory and framework;</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Audio-visual learning under weak supervision;</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Weakly supervised spatial/temporal feature learning;</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Self-supervised learning frameworks and applications;</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 
18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Graph Convolutional Networks/Graph Neural Networks-based weakly supervised learning frameworks;</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Deep Reinforcement Learning for weakly supervised learning;</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt 18pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt">-<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">          </span></span><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Emerging vision and multimedia tasks with limited supervision;</span></p><p style="margin:0cm 0cm 0.0001pt 18pt;text-indent:0cm;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif"> </span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><b><span lang="EN-US" style="font-size:14pt;font-family:"Times New Roman",serif">IMPORTANT DATES: </span></b><b><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif"> 
</span></b></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-indent:21pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Manuscript submission:           15<sup>th</sup> August 2021</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-indent:21pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Preliminary results:                  15<sup>th</sup> November 2021</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-indent:21pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Revisions due:                         1<sup>st</sup> January 2022</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-indent:21pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Notification:                             15<sup>th</sup> February 2022</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-indent:21pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Final manuscripts due:             15<sup>th</sup> March 2022</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-indent:21pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Anticipated publication:          Midyear 2022</span></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif"> </span></p><p class="MsoNormal" style="margin:0cm 0cm 
0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><b><span lang="EN-US" style="font-size:14pt;font-family:"Times New Roman",serif">SUBMISSION PROCEDURE:</span></b></p><p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;text-align:justify;font-size:10.5pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:12pt;font-family:"Times New Roman",serif">Papers should be formatted according to the IEEE Transactions on Multimedia guidelines for authors (see: <a href="http://www.signalprocessingsociety.org/tmm/tmm-author-info/" target="_blank">http://www.signalprocessingsociety.org/tmm/tmm-author-info/</a>). By submitting/resubmitting your manuscript to these Transactions, you are acknowledging that you accept the rules established for publication of manuscripts, including agreement to pay all over-length page charges, color charges, and any other charges and fees associated with publication of the manuscript. Manuscripts (both 1-column and 2-column versions are required) should be submitted electronically through the online IEEE manuscript submission system at <a href="http://mc.manuscriptcentral.com/tmm-ieee" target="_blank">http://mc.manuscriptcentral.com/tmm-ieee</a>. All submitted papers will go through the same review process as the regular TMM paper submissions. Referees will consider originality, significance, technical soundness, clarity of exposition, and relevance to the special issue topics above.</span></p></div><div><br></div>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">Dingwen Zhang<div><a href="https://zdw-nwpu.github.io/dingwenz.github.com/" target="_blank">https://zdw-nwpu.github.io/dingwenz.github.com/</a><br></div><div>Northwestern Polytechnical University</div></div></div></div></div></div>