[visionlist] ECCV Satellite event and TPAMI Special Issue on Inpainting and Denoising in Looking at People
Sergio Escalera
sergio.escalera.guerrero at gmail.com
Fri Jun 22 21:31:47 -05 2018
*ChaLearn Satellite Workshop on Image and Video Inpainting @ECCV18*
-------------------------------------------
Call for Participation: ChaLearn Looking at People "Inpainting and Denoising
in the Deep Learning Age" events:
*Challenge and ECCV 2018 Satellite Event - Registration FREE*
*Associated Springer book chapter publication and IEEE TPAMI Special Issue*
Sponsorship: prizes from Google, Disney Research, Amazon, and ChaLearn
Sep. 9, 2018, Munich:
https://www.hi-hotel-muenchen.de/en/munich-conference-hotel/
(130 m from the main ECCV venue).
Competition webpage:
http://chalearnlap.cvc.uab.es/challenge/26/description/
ECCV Satellite event webpage:
http://chalearnlap.cvc.uab.es/workshop/29/description/
IEEE TPAMI Special Issue webpage:
http://chalearnlap.cvc.uab.es/special-issue/30/description/
Contact: sergio.escalera.guerrero at gmail.com
************************************************************************
*Aims and scope:* The problem of dealing with missing or incomplete data in
machine learning arises in many applications. Recent strategies make use of
generative models to impute missing or corrupted data. Advances in computer
vision using deep generative models have found applications in image/video
processing, such as denoising [1], restoration [2], super-resolution [3],
and inpainting [4,5]. We focus on image and video inpainting tasks, which
may benefit from novel methods such as Generative Adversarial Networks
(GANs) [6,7] or residual connections [8,9]. Solutions to the inpainting
problem may be useful in a wide variety of computer vision tasks. We chose
three examples: *human pose estimation*, *video de-captioning*, and
*fingerprint denoising*.
*1- Human pose estimation*: performing human pose recognition in images
containing occlusions is challenging; since human pose recognition is a
prerequisite for human behaviour analysis in many applications,
reconstructing occluded parts may help the whole processing chain.
*2- Video de-captioning*: in news media and video entertainment, broadcast
programs in various languages, such as news, series, or documentaries,
frequently contain text captions, overlaid commercials, or subtitles. These
reduce visual attention and occlude parts of the frames, which may decrease
the performance of automatic understanding systems. Despite recent advances
in machine learning, fast (real-time) and accurate automatic text removal
in video sequences remains challenging.
*3- Fingerprint denoising*: biometrics play an increasingly important role
in security, ensuring privacy and identity verification, as evidenced by
the growing prevalence of fingerprint sensors on mobile devices.
Fingerprint retrieval also remains an important law enforcement tool in
forensics. However, much remains to be done to improve verification
accuracy, both in terms of false negatives (partly due to poor image
quality when fingers are wet or dirty) and in terms of false positives due
to the ease of forgery.
As one of the important branches of image and video analysis of humans
(known as Looking at People), understanding and inpainting occluded parts
has become a research area of great interest, with many potential
application domains including human behavior analysis, augmented reality,
and biometric recognition. We propose a *satellite* workshop on image and
video inpainting. This session aims to compile the latest efforts and
research advances from the scientific community in enhancing traditional
computer vision and pattern recognition algorithms with human image
inpainting, video de-captioning, and fingerprint denoising at both the
learning and prediction stages.
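To make the task concrete, the sketch below shows, in PyTorch-style Python,
a minimal masked-reconstruction inpainting network with a residual-style
combination of known and predicted pixels, in the spirit of [4,8,9]. It is
purely illustrative and not an official challenge baseline; the
architecture, tensor shapes, and hyper-parameters are assumptions chosen
for brevity.

# Illustrative sketch only (not a challenge baseline): a tiny encoder-decoder
# that learns to fill masked pixels; shapes/hyper-parameters are assumptions.
import torch
import torch.nn as nn

class TinyInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 3 RGB channels + 1 mask channel
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, mask):
        # mask is 1 where pixels are missing/corrupted, 0 elsewhere
        x = torch.cat([image * (1 - mask), mask], dim=1)
        out = self.decoder(self.encoder(x))
        # residual-style combination: keep known pixels, predict only the holes
        return image * (1 - mask) + out * mask

# One training step: reconstruction loss restricted to the masked region.
model = TinyInpainter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
image = torch.rand(8, 3, 64, 64)                    # toy batch of frames
mask = (torch.rand(8, 1, 64, 64) > 0.75).float()    # random occlusion mask
pred = model(image, mask)
loss = ((pred - image) * mask).abs().mean()
opt.zero_grad()
loss.backward()
opt.step()

Real submissions would of course replace the toy data with the challenge
tracks' images or video frames and use stronger models (e.g. GAN-based
context encoders [7]); the sketch only illustrates the masked-reconstruction
training setup.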
*Workshop topics and guidelines:* The scope of the workshop comprises all
aspects of image and video inpainting and denoising, including but not
limited to the following topics:
- 2D/3D human pose recovery under occlusion,
- human inpainting,
- human retexturing,
- video de-captioning,
- temporal occlusion recovery,
- object recognition under occlusion,
- fingerprint recognition,
- fingerprint denoising,
- future frame video prediction,
- unsupervised learning for missing data recovery and/or denoising,
- new data and applications of inpainting and/or denoising.
Abstracts for presentation at the workshop can be submitted through the CMT
web page: https://cmt3.research.microsoft.com/INPAINTING2018/. Abstract
papers must be at most 4 pages long, plus references, and authors must use
this template:
https://www.springer.com/gp/authors-editors/book-authors-editors/manuscript-preparation/5636.
Contributions will be published in a volume of this series:
http://www.springer.com/series/15602. Authors of accepted papers will
present their results at the satellite workshop, and extended versions will
be published in the CIML volume. We are also organizing an IEEE TPAMI
Special Issue on the topic
(http://chalearnlap.cvc.uab.es/special-issue/30/description/), and extended
versions of the best satellite event papers will be invited to contribute.
The workshop is a *FREE-REGISTRATION EVENT*, open to everyone, and takes
place at the *Holiday Inn Munich – City Centre*, Hochstrasse 3, 81669
München, Germany, just 130 m from the main ECCV venue. You can find the
location on Google Maps here: https://goo.gl/maps/QC89aCyNiQT2.
*References:*
[1] V. Jain and S. Seung, “Natural image denoising with convolutional
networks,” in Advances in Neural Information Processing Systems, 2009, pp.
769–776.
[2] L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural
network for image deconvolution,” in Advances in Neural Information
Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N.
D. Lawrence, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2014,
pp. 1790–1798.
[3] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using
deep convolutional networks,” IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
[4] J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep
neural networks,” in Advances in Neural Information Processing Systems,
2012, pp. 341–349.
[5] A. Newson, A. Almansa, M. Fradet, Y. Gousseau, and P. Pérez, “Video
inpainting of complex scenes,” SIAM Journal on Imaging Sciences, vol. 7,
no. 4, pp. 1993–2019, 2014.
[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S.
Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in
Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[7] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. Efros,
“Context encoders: Feature learning by inpainting,” in Computer Vision and
Pattern Recognition (CVPR), 2016.
[8] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
recognition,” in The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), June 2016.
[9] X.-J. Mao, C. Shen, and Y.-B. Yang, “Image Restoration Using
Convolutional Auto-encoders with Symmetric Skip Connections,” ArXiv
e-prints, Jun. 2016.
--
*Dr. Sergio Escalera Guerrero*
Head of Human Pose Recovery and Behavior Analysis Lab
Project Manager at the Computer Vision Center
Director of ChaLearn Challenges in Machine Learning
Associate professor at University of Barcelona / Universitat Oberta de
Catalunya / Aalborg Univ. / Dalhousie University
Email: sergio.escalera.guerrero at gmail.com / Webpage:
http://www.sergioescalera.com/