<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Dear colleagues,<br>
Please find below the Call for Papers for EUVIP 2018. Feel free to
share it with your colleagues and to distribute it through the
appropriate channels.<br>
With kind regards,<br>
--Frederic<br>
<br>
<br>
<br>
<span style="font-size:10.0pt" lang="EN-AU">EUVIP 2018<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">7th European Workshop on
Visual Information Processing<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">Tampere, Finland <o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">November 26-28, 2018<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU"><a
class="moz-txt-link-freetext" href="http://www.tut.fi/euvip2018">http://www.tut.fi/euvip2018</a><o:p></o:p></span><span
style="font-size:10.0pt" lang="EN-AU"><o:p></o:p></span><br>
<br>
<span style="font-size:10.0pt" lang="EN-AU">*********************************************************************<o:p></o:p></span><br>
<br>
<span style="font-size:10.0pt">We are delighted to invite you to
submit a paper to the 7th European Workshop on Visual Information
Processing (EUVIP) taking place in <b>Tampere, Finland, 26 - 28
November 2018</b>.<o:p></o:p></span><br>
<span style="font-size:10.0pt">The focus of the EUVIP series of
workshops is on visual information processing, modeling, and
analysis inspired by human and biological visual systems, with
applications to image and video processing and communications. <o:p></o:p></span><span
style="font-size:10.0pt"><o:p></o:p></span><br>
<br>
<span style="font-size:10.0pt" lang="EN-AU">Topics of interest to
EUVIP2018 include, but are not limited to:</span><span
style="font-size:10.0pt"><o:p></o:p></span><br>
<ul type="disc">
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Image and
video compression<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Image
restoration, enhancement and super-resolution<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Video
processing and analytics<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Biometrics,
forensics, and image & video content protection<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Depth map, 3D,
multi-view encoding<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Visual quality
and quality of experience assessment<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Computational
vision models and perceptual-based processing<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Color image
understanding & processing<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Sparse and
redundant visual data representation<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Image and
video data fusion<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Image and
video communication in the cloud<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Visual
substitution for blind and visually impaired<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Display and
quantization of visual signals<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Deep learning
for visual information processing<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Image
processing for autonomous vehicles<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt"> Visual
information processing for AR/VR Systems<o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l0
level1 lfo1"> <span style="font-size:10.0pt">
360/omnidirectional image/video processing<o:p></o:p></span></li>
</ul>
<span style="font-size:10.0pt" lang="EN-AU">Special attention will
be devoted to student contributions in EUVIP2018. Regular student
papers, for which the first author is a student, will be
considered for best student paper award.<o:p></o:p></span><br>
<br>
<span style="font-size:10.0pt" lang="EN-AU">We are pleased to
announce the EUVIP 2018 PLENARY SPEAKERS <o:p></o:p></span><span
style="font-size:10.0pt" lang="EN-AU"><o:p></o:p></span><br>
<br>
<span style="font-size:10.0pt" lang="EN-AU">"Consciousness of
Stream: Perceptually Optimizing Global Video"<o:p></o:p></span><br>
<b><span style="font-size:10.0pt" lang="EN-AU">Alan Bovik<o:p></o:p></span></b><br>
<span style="font-size:10.0pt" lang="EN-AU">Laboratory for Image and
Video Engineering (LIVE), <o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">University of Texas at
Austin, USA<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">November, 26, 2018<o:p></o:p></span><span
style="font-size:10.0pt" lang="EN-AU"><o:p></o:p></span><br>
<br>
<span style="font-size:10.0pt" lang="EN-AU">"Sparse Modeling in
Image Processing and Deep Learning"<o:p></o:p></span><br>
<b><span style="font-size:10.0pt" lang="EN-AU">Michael Elad<o:p></o:p></span></b><br>
<span style="font-size:10.0pt" lang="EN-AU">Israel Institute of
Technology, Israel<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">November, 27, 2018<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU"><o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">“Patch-based
regularization in hyperspectral imaging inverse problems”<o:p></o:p></span><br>
<b><span style="font-size:10.0pt" lang="EN-AU">Jose Bioucas-Dias<o:p></o:p></span></b><br>
<span style="font-size:10.0pt" lang="PT-BR">Instituto de
Telecomunicações, Universidade de Lisboa, Portugal<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="PT-BR">November, 27, 2018<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="PT-BR"><o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">“Cross Roads between
Signal Processing, Pattern Recognition and Machine Learning –
Towards Industrial AI Applications”</span><span
style="font-size:10.0pt" lang="EN-GB"><o:p></o:p></span><br>
<b><span style="font-size:10.0pt" lang="EN-AU">Moncef Gabbouj<o:p></o:p></span></b><br>
<span style="font-size:10.0pt" lang="EN-AU">Signal Processing
Laboratory, Tampere University of Technology, Finland<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="EN-AU">November, 28, 2018<o:p></o:p></span><br>
<span style="font-size:10.0pt" lang="PT-BR"><o:p></o:p></span><br>
<b><u>Paper Submission</u></b><br>
Prospective authors are invited to submit full-length papers, with 4-6 pages of technical
content, figures, and references, through the conference website:
<a href="http://www.tut.fi/euvip2018">www.tut.fi/euvip2018</a><br>
<br>
<span style="font-size:10.0pt" lang="EN-AU"> </span><b><u><span
style="font-size:10.0pt">Important Dates</span></u></b><span
style="font-size:10.0pt" lang="RU"><o:p></o:p></span><br>
<ul type="disc">
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l1
level1 lfo2"> <span style="font-size:10.0pt">Submission
Deadline: <span style="color:black">June, 15, 2018</span> <o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l1
level1 lfo2"> <span style="font-size:10.0pt">Notification of
Acceptance: 15 August 2018 <o:p></o:p></span></li>
<li class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;mso-list:l1
level1 lfo2"> <span style="font-size:10.0pt">Camera Ready
(Final Paper) Submission: 15 September 2018<o:p></o:p></span></li>
</ul>
<span style="font-size:10.0pt">The general sponsor of EUVIP 2018 is
Tampere University of Technology. Technical sponsors are IEEE
Circuits and Systems Society (IEEE CASS) and EURASIP.<o:p></o:p></span><br>
<span style="font-size:10.0pt"></span><br>
<span style="font-size:10.0pt" lang="EN-AU"> </span><b><u><span
style="font-size:10.0pt">Publication</span></u></b><span
style="font-size:10.0pt"><o:p></o:p></span><br>
<span style="font-size:10.0pt">All regular submissions will be
peer-reviewed, and accepted papers will be presented in a
technical session (oral or poster). Regular papers presented at
the conference will be included in the conference proceedings and
published in IEEE Xplore.<br>
<br>
<br>
<br>
</span>
<pre class="moz-signature" cols="72">--
_______________________________________
Frederic Dufaux, Fellow IEEE
Directeur de Recherche CNRS
Laboratoire des Signaux et Systèmes (L2S, UMR 8506)
CNRS - CentraleSupelec - Université Paris-Sud
3, rue Joliot Curie
91192 Gif-sur-Yvette Cedex, FRANCE
email: <a class="moz-txt-link-abbreviated" href="mailto:frederic.dufaux@l2s.centralesupelec.fr">frederic.dufaux@l2s.centralesupelec.fr</a>
tel: +33 1 69 85 17 44
mobile: +33 6 21 33 09 27
<a class="moz-txt-link-freetext" href="http://www.l2s.centralesupelec.fr/perso/Frederic.DUFAUX">http://www.l2s.centralesupelec.fr/perso/Frederic.DUFAUX</a>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Editor-in-Chief
Signal Processing: Image Communication
<a class="moz-txt-link-freetext" href="http://www.journals.elsevier.com/signal-processing-image-communication">http://www.journals.elsevier.com/signal-processing-image-communication</a>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"High Dynamic Range Video<a class="moz-txt-link-rfc2396E" href="http://store.elsevier.com/High-Dynamic-Range-Video/isbn-9780081004128/~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~">"
http://store.elsevier.com/High-Dynamic-Range-Video/isbn-9780081004128/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"</a>Digital Holographic Data Representation and Compression<a class="moz-txt-link-rfc2396E" href="https://www.elsevier.com/books/digital-holographic-data-representation-and-compression/xing/978-0-12-802854-4~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~">"
https://www.elsevier.com/books/digital-holographic-data-representation-and-compression/xing/978-0-12-802854-4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"</a>Emerging Technologies for 3D Video"
<a class="moz-txt-link-freetext" href="http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118355113.html">http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118355113.html</a></pre>
</body>
</html>