<div dir="ltr"><div style="font-size:12.8px">Dear all:</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">  The submission deadline is 31. Aug. 2017.</div><div style="font-size:12.8px"><span style="font-size:12.8px"><br></span></div><div style="font-size:12.8px"><span style="font-size:12.8px">Multimedia Tools and Applications</span><br></div><div style="font-size:12.8px"><div style="font-size:12.8px">Special Issue on Few-shot Learning for Multimedia Content Understanding</div><div><span style="font-size:12.8px"><a href="http://static.springer.com/sgw/documents/1600873/application/pdf/CFP_Few-shot+Learning+for+Multimedia+Content+Understanding.pdf" target="_blank">http://static.springer.com/sgw<wbr>/documents/1600873/application<wbr>/pdf/CFP_Few-shot+Learning+<wbr>for+Multimedia+Content+<wbr>Understanding.pdf</a></span><br></div></div><div style="font-size:12.8px"><span style="font-size:12.8px"><br></span></div><div style="font-size:12.8px"><p class="MsoNormal" style="margin-top:6pt"><b><span lang="EN-US">Overview</span></b></p><p class="MsoNormal" style="margin-top:6pt"><span lang="EN-US" style="font-size:10pt">The multimedia analysis and machine learning communities have long attempted to build models for understanding real-world applications. Driven by the innovations in the architectures of deep convolutional neural network (CNN), tremendous improvements on object recognition and visual understanding have been witnessed in the past few years. However, it should be noticed that the success of current systems relies heavily on a lot of manually labeled noise-free training data, typically several thousand examples for each object class to be learned, like ImageNet. Although it is feasible to build learning systems this way for common categories, recognizing objects “in the wild” is still very challenging. In reality, many objects follow a long-tailed distribution: they do not occur frequently enough to collect and label a large set of representative exemplars in contrast to common objects. For example, in some real-world applications, such as anomalous object detection in a video surveillance scenario, it is difficult to collect sufficient positive samples because they are “anomalous” as defined, and fine-grained object recognition, annotating fine-grained labels requires expertise such that the labeling expense is prohibitively costly.</span></p><p class="MsoNormal" style="margin-top:6pt"><span lang="EN-US" style="font-size:10pt">The expensive labeling cost motivates the researchers to develop learning techniques that utilize only a few noise-free labeled data for model training. Recently, some few-shot learning, including the most challenging task zero-shot learning, approaches have been proposed to reduce the number of necessary labeled samples by transferring knowledge from related data sources. In the view of the promising results reported by these works, it is fully believed that the few-shot learning has strong potential to achieve comparable performance with the sufficient-shot learning techniques and significantly save the labeling efforts. There still remains some important problems. 
For example, a general theoretical framework for few-shot learning is not established, the generalized few-shot learning which recognizes common and uncommon objects simultaneously is not well investigated, and how to perform online few-shot learning is also an open issue.</span></p><p class="MsoNormal" style="margin-top:6pt"><span lang="EN-US" style="font-size:10pt">The primary goal of this special issue is to invite original contributions reporting the latest advances in few-shot learning for multimedia (e.g., text, video and audio) content understanding towards addressing these challenges, and to provide the opportunity for researchers and product developers to discuss the state-of-the-art and trends of few-shot learning for building intelligent systems. The topics of interest include, but are not limited to:<br></span><b><span lang="EN-US">Topics</span></b><span lang="EN-US"></span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Few-shot/zero-shot learning theory;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Novel machine learning techniques for few-shot/zero-shot learning;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Generalized few-shot/zero-shot learning;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Online few-shot/zero-shot learning;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Few-shot/zero-shot learning with deep CNN;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Few-shot/zero-shot learning with transfer learning;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times 
New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Few-shot/zero-shot learning with noisy data;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Few-shot learning with actively data annotation (active learning);</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Few-shot/zero-shot learning for fine-grained object recognition;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Few-shot/zero-shot learning for anomaly detection;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Few-shot/zero-shot learning for visual feature extraction;</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Applications in object recognition and visual understanding with few-shot learning;</span></p><p class="MsoNormal" style="margin-top:6pt"><b><span lang="EN-US">Important Dates</span></b></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Manuscript submission deadline: 31 August 2017</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Notification of acceptance: 30 Nov 2017</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Submission of final revised 
manuscript due: 31 Dec 2017</span></p><p class="gmail-m_2230791301371779072gmail-m_-2242714413022372461gmail-MsoListParagraph" style="margin:6pt 0cm 0pt 21pt"><span lang="EN-US" style="font-family:Symbol;font-size:10pt">·<span style="line-height:normal;font-family:"Times New Roman";font-size:7pt;font-stretch:normal">           </span></span><span lang="EN-US" style="font-size:10pt">Publication of special issue: TBD  </span></p><p class="MsoNormal" style="margin-top:6pt"><b><span lang="EN-US">Submission Procedure</span></b></p><p class="MsoNormal" style="margin-top:6pt"><span lang="EN-US" style="font-size:10pt">All the <span class="gmail-il">papers</span> should be full journal length versions and follow the guidelines set out by Multimedia Tools and Applications (<a href="http://www.springer.com/computer/information+systems/journal/11042" target="_blank">http://www.springer.com/compu<wbr>ter/information+systems/journa<wbr>l/11042</a>). </span></p><p class="MsoNormal" style="margin-top:6pt">Manuscripts should be submitted online at <a href="http://mtap.editorialmanager.com/" target="_blank">http://mtap.<wbr>editorialmanager.com</a> choosing “1079 – Few-Shot Learning for MM Content Understanding” as article type, no later than 31 August, 2017. All the <span class="gmail-il">papers</span> will be peer-reviewed following the MTAP reviewing procedures. <br></p><p class="MsoNormal" style="margin-top:6pt"><b><span lang="EN-US">Guest Editors</span></b></p><p class="MsoNormal" style="margin-top:6pt"><b><span lang="EN-US" style="font-size:10pt">Dr. Guiguang Ding</span></b></p><p class="MsoNormal" style="margin-top:6pt"><span lang="EN-US" style="font-size:10pt">E-mail: <a href="mailto:dinggg@tsinghua.edu.cn" target="_blank">dinggg@tsinghua.edu.cn</a></span></p><p class="MsoNormal" style="margin-top:6pt"><span lang="EN-US" style="font-size:10pt">Affiliation: Tsinghua University, China</span></p><p class="MsoNormal" style="margin-top:6pt"><b><span style="font-size:10pt">Dr. Jungong Han</span></b></p><p class="MsoNormal" style="margin-top:6pt"><span style="font-size:10pt">E-mail: </span><span lang="EN-US"><a href="mailto:jungong.han@Northumbria" target="_blank"><span style="color:windowtext;font-size:10pt">jungong.han@<wbr>Northumbria</span></a></span><span lang="EN-US" style="font-size:10pt">. <a href="http://ac.uk/" target="_blank">ac.uk</a></span><span lang="EN-US" style="font-size:10pt"></span><span style="font-size:10pt"></span></p><p class="MsoNormal" style="margin-top:6pt"><span lang="EN-US" style="font-size:10pt">Affiliation: Northumbria University at Newcastle, UK</span></p><p class="MsoNormal" style="margin-top:6pt"><b><span lang="DE" style="font-size:10pt">Dr. Eric Pauwels</span></b></p><p class="MsoNormal" style="margin-top:6pt"><span lang="DE" style="font-size:10pt">E-mail: <a href="mailto:eric.pauwels@cwi.nl" target="_blank">eric.pauwels@cwi.nl</a></span></p><p class="MsoNormal" style="margin-top:6pt"><span lang="DE" style="font-size:10pt">Affiliation: </span><span lang="EN-US" style="font-size:10pt">Centrum Wiskunde & Informatica (CWI), Netherlands</span></p></div></div>