[visionlist] Call for Papers: Multimedia Tools and Applications SI: Few-shot Learning for Multimedia Content Understanding

jungong han jungonghan77 at gmail.com
Wed Jun 14 11:36:59 -05 2017


Dear all:

  The submission deadline is 31 August 2017.

Multimedia Tools and Applications
Special Issue on Few-shot Learning for Multimedia Content Understanding
http://static.springer.com/sgw/documents/1600873/application/pdf/CFP_Few-shot+Learning+for+Multimedia+Content+Understanding.pdf

*Overview*

The multimedia analysis and machine learning communities have long
attempted to build models for understanding real-world applications.
Driven by innovations in the architectures of deep convolutional neural
networks (CNNs), tremendous improvements in object recognition and visual
understanding have been achieved in the past few years. However, it should
be noted that the success of current systems relies heavily on large
amounts of manually labeled, noise-free training data, typically several
thousand examples per object class to be learned, as in ImageNet. Although
it is feasible to build learning systems this way for common categories,
recognizing objects “in the wild” is still very challenging. In reality,
many objects follow a long-tailed distribution: unlike common objects,
they do not occur frequently enough for a large set of representative
exemplars to be collected and labeled. For example, in anomalous object
detection for video surveillance, it is difficult to collect sufficient
positive samples precisely because such objects are, by definition,
anomalous; in fine-grained object recognition, annotating fine-grained
labels requires expertise, making the labeling expense prohibitive.

The expensive labeling cost motivates researchers to develop learning
techniques that require only a few noise-free labeled examples for model
training. Recently, several few-shot learning approaches, including the
most challenging case of zero-shot learning, have been proposed to reduce
the number of labeled samples needed by transferring knowledge from
related data sources. In view of the promising results reported by these
works, it is widely believed that few-shot learning has strong potential
to achieve performance comparable to sufficient-shot learning techniques
while significantly reducing labeling effort. However, several important
problems remain: a general theoretical framework for few-shot learning has
not been established, generalized few-shot learning, which recognizes
common and uncommon objects simultaneously, is not well investigated, and
how to perform online few-shot learning is also an open issue.

The primary goal of this special issue is to invite original contributions
reporting the latest advances in few-shot learning for multimedia (e.g.,
text, video and audio) content understanding that address these
challenges, and to provide an opportunity for researchers and product
developers to discuss the state of the art and trends in few-shot learning
for building intelligent systems. Topics of interest include, but are not
limited to:
*Topics*

·           Few-shot/zero-shot learning theory;

·           Novel machine learning techniques for few-shot/zero-shot
learning;

·           Generalized few-shot/zero-shot learning;

·           Online few-shot/zero-shot learning;

·           Few-shot/zero-shot learning with deep CNNs;

·           Few-shot/zero-shot learning with transfer learning;

·           Few-shot/zero-shot learning with noisy data;

·           Few-shot learning with active data annotation (active
learning);

·           Few-shot/zero-shot learning for fine-grained object recognition;

·           Few-shot/zero-shot learning for anomaly detection;

·           Few-shot/zero-shot learning for visual feature extraction;

·           Applications in object recognition and visual understanding
with few-shot learning.

*Important Dates*

·           Manuscript submission deadline: 31 August 2017

·           Notification of acceptance: 30 November 2017

·           Submission of final revised manuscript due: 31 December 2017

·           Publication of special issue: TBD

*Submission Procedure*

All papers should be full journal-length versions and follow the
guidelines set out by Multimedia Tools and Applications (
http://www.springer.com/computer/information+systems/journal/11042).

Manuscripts should be submitted online at
http://mtap.editorialmanager.com, choosing
“1079 – Few-Shot Learning for MM Content Understanding” as the article
type, no later than 31 August 2017. All papers will be peer-reviewed
following the MTAP reviewing procedures.

*Guest Editors*

*Dr. Guiguang Ding*

E-mail: dinggg at tsinghua.edu.cn

Affiliation: Tsinghua University, China

*Dr. Jungong Han*

E-mail: jungong.han at northumbria.ac.uk

Affiliation: Northumbria University at Newcastle, UK

*Dr. Eric Pauwels*

E-mail: eric.pauwels at cwi.nl

Affiliation: Centrum Wiskunde & Informatica (CWI), Netherlands