The International Conference on Image Processing and Vision Engineering (IMPROVE) 2021 will be held online from 28 to 30 April 2021.

The deadline for paper submission is 26 January 2021.

More information at http://improve.scitevents.org

IMPROVE is a comprehensive conference of an academic and technical nature, focused on practical applications of image processing and computer vision. It brings together researchers, engineers and practitioners working in the fundamentals of image processing, in the development of new methods and techniques (including innovative machine learning approaches), in multimedia communications technology, and in applications of image processing and artificial vision across diverse areas.

After a strict peer-review process, the conference accepts papers describing original work in any of the areas listed below. Acceptance will be based on quality, relevance and originality. Accepted papers, presented at the conference by one of the authors, will be published in the IMPROVE proceedings with an ISBN. There will be both oral and poster sessions.

Special sessions dedicated to case studies, as well as technical tutorials focusing on particular technical or scientific topics, are also planned. Anyone interested in presenting product demonstrations or methodologies is invited to contact the conference secretariat.

CONFERENCE AREAS

Each of these topic areas is expanded below, but the lists of sub-topics are not exhaustive. Papers may address one or more of the listed sub-topics, and authors should not feel limited by them. Unlisted but related sub-topics are also acceptable, provided they fit within one of the following main topic areas:

1. FUNDAMENTALS
2. METHODS AND TECHNIQUES
3. MACHINE LEARNING
4. MULTIMEDIA COMMUNICATIONS
5. IMAGING
6. APPLICATIONS

AREA 1: FUNDAMENTALS
- Image Formation and Sensors
- Perception Engineering
- High-speed Computer Vision
- Image Fusion
- Digital Image Enhancement
- Image Restoration
- Image Analysis and Scene Understanding
- Image Retrieval
- Motion, Tracking and 3D Vision
- Video Understanding

AREA 2: METHODS AND TECHNIQUES
- Digital Filtering
- Computer Vision Algorithms
- Spatial Domain Techniques
- Frequency Domain Techniques
- Algebraic Methods
- Image Segmentation
- Motion Detection
- Quality Enhancement
- Simulation and Software Tools

AREA 3: MACHINE LEARNING
- Deep Learning and Neural Networks
- Adversarial Learning and GANs
- Backdoor Definition and Detection
- Autoencoders and Representation Learning
- Bayesian Deep Learning
-10px;" class="">Data Mining</li><li style="list-style: square; margin-left: -10px;" class="">Pattern Recognition</li><li style="list-style: square; margin-left: -10px;" class="">Classification and Clustering</li><li style="list-style: square; margin-left: -10px;" class="">Texture Analysis and Classification</li><li style="list-style: square; margin-left: -10px;" class="">Face Recognition</li><li style="list-style: square; margin-left: -10px;" class="">Action Recognition</li><li style="list-style: square; margin-left: -10px;" class="">Model Optimization</li></ul><a name="A4" style="color: rgb(75, 91, 113);" class=""></a><h3 style="font-size: 10pt; margin: 0px; padding: 0px;" class=""><span id="ctl00_pageContent_ctl02_rptArea_ctl03_lblArea" class="">AREA 4: MULTIMEDIA COMMUNICATIONS</span></h3><br class=""><ul style="margin-top: 0px;" class=""><li style="list-style: square; margin-left: -10px;" class="">Image Compression and Multimedia Coding</li><li style="list-style: square; margin-left: -10px;" class="">Multimedia over Networks</li><li style="list-style: square; margin-left: -10px;" class="">Multimedia System Design</li><li style="list-style: square; margin-left: -10px;" class="">Social Media Analysis</li><li style="list-style: square; margin-left: -10px;" class="">Wearable Multimedia</li><li style="list-style: square; margin-left: -10px;" class="">Interactive Visual Systems</li><li style="list-style: square; margin-left: -10px;" class="">Video Indexing and Annotation</li><li style="list-style: square; margin-left: -10px;" class="">Video and Image Security</li><li style="list-style: square; margin-left: -10px;" class="">Intrusion Detection</li><li style="list-style: square; margin-left: -10px;" class="">Watermarking and Steganography </li></ul><a name="A5" style="color: rgb(75, 91, 113);" class=""></a><h3 style="font-size: 10pt; margin: 0px; padding: 0px;" class=""><span id="ctl00_pageContent_ctl02_rptArea_ctl04_lblArea" class="">AREA 5: IMAGING</span></h3><br class=""><ul style="margin-top: 0px;" class=""><li style="list-style: square; margin-left: -10px;" class="">Digital Imaging and Rendering</li><li style="list-style: square; margin-left: -10px;" class="">Optical Imaging</li><li style="list-style: square; margin-left: -10px;" class="">Medical Imaging</li><li style="list-style: square; margin-left: -10px;" class="">Magnetic Resonance Imaging</li><li style="list-style: square; margin-left: -10px;" class="">Neuroimaging</li><li style="list-style: square; margin-left: -10px;" class="">Functional Imaging</li><li style="list-style: square; margin-left: -10px;" class="">Radar Imaging</li><li style="list-style: square; margin-left: -10px;" class="">Hyperspectral Imaging</li></ul><a name="A6" style="color: rgb(75, 91, 113);" class=""></a><h3 style="font-size: 10pt; margin: 0px; padding: 0px;" class=""><span id="ctl00_pageContent_ctl02_rptArea_ctl05_lblArea" class="">AREA 6: APPLICATIONS</span></h3><br class=""><ul style="margin-top: 0px;" class=""><li style="list-style: square; margin-left: -10px;" class="">Biometrics</li><li style="list-style: square; margin-left: -10px;" class="">Remote Sensing</li><li style="list-style: square; margin-left: -10px;" class="">Weather and Climate Modeling</li><li style="list-style: square; margin-left: -10px;" class="">Traffic Control and Autonomous Vehicles</li><li style="list-style: square; margin-left: -10px;" class="">Video Surveillance and Event Detection</li><li style="list-style: square; margin-left: -10px;" class="">Crowds Analysis</li><li 
style="list-style: square; margin-left: -10px;" class="">Image Forensics and Security</li><li style="list-style: square; margin-left: -10px;" class="">Robotic Vision Engineering</li><li style="list-style: square; margin-left: -10px;" class="">Virtual and Augmented Reality Applications</li><li style="list-style: square; margin-left: -10px;" class="">Industrial Quality Control</li><li style="list-style: square; margin-left: -10px;" class="">Military Applications</li></ul></div><div id="ctl00_pageContent_ctl03_Keynotemini" style="caret-color: rgb(48, 48, 48); color: rgb(48, 48, 48); font-family: tahoma, verdana, arial, helvetica, sans-serif; letter-spacing: 0.10000000149011612px; text-align: justify;" class=""><a id="ctl00_pageContent_ctl03_lnkKeynotes" name="keynote_speakers" style="color: rgb(75, 91, 113);" class=""></a><h2 id="ctl00_pageContent_ctl03_divKeynotesTitle" class="dotted" style="font-size: 11pt; padding: 0px; border-bottom-width: 1px; border-bottom-style: dotted; border-bottom-color: rgb(48, 48, 48);">KEYNOTE SPEAKERS</h2><div id="ctl00_pageContent_ctl03_divkeynotes" class=""><a href="http://improve.scitevents.org/KeynoteSpeakers.aspx#1" id="ctl00_pageContent_ctl03_lblChairName0" style="color: rgb(75, 91, 113); text-decoration: none;" class=""><strong class="">Matthias Niessner</strong>, </a>Technical University of Munich, Germany<br class=""><a href="http://improve.scitevents.org/KeynoteSpeakers.aspx#2" id="ctl00_pageContent_ctl03_lblChairName1" style="color: rgb(75, 91, 113); text-decoration: none;" class=""><strong class="">Luisa Verdoliva</strong>, </a>University of Naples Federico II, Italy<br class=""></div></div><div id="Submition_GuidelinesCFP" style="caret-color: rgb(48, 48, 48); color: rgb(48, 48, 48); font-family: tahoma, verdana, arial, helvetica, sans-serif; letter-spacing: 0.10000000149011612px; text-align: justify;" class=""><h2 id="ctl00_pageContent_ctl04_divTitle" class="dotted" style="font-size: 11pt; padding: 0px; border-bottom-width: 1px; border-bottom-style: dotted; border-bottom-color: rgb(48, 48, 48);"><a name="paper_submission" style="color: rgb(75, 91, 113);" class=""></a>PAPER SUBMISSION</h2><div class=""><span id="ctl00_pageContent_ctl04_lblSubmitionGuideLines" class=""><p class="">Authors can submit their work in the form of a complete paper or an abstract. Complete papers can be submitted as a Regular Paper, representing completed and validated research, or as a Position Paper, portraying a short report of work in progress or an arguable opinion about an issue discussing ideas, facts, situations, methods, procedures or results of scientific research focused on one of the conference topic areas. <br class=""><br class="">Authors should submit a paper in English, carefully checked for correct grammar and spelling, addressing one or several of the conference areas or topics. Each paper should clearly indicate the nature of its technical/scientific contribution, and the problems, domains or environments to which it is applicable. To facilitate the double-blind paper evaluation method, authors are kindly requested to produce and provide the paper WITHOUT any reference to any of the authors, including the authors’ personal details, the acknowledgments section of the paper and any other reference that may disclose the authors’ identity. <br class=""><br class="">When submitting a complete paper please note that only original papers should be submitted. 
Authors are advised to read INSTICC's ethical norms regarding plagiarism and self-plagiarism (http://improve.scitevents.org/NormsPlagiarism.aspx) thoroughly before submitting, and must make sure that their submission does not substantially overlap with work that has been published elsewhere or is simultaneously submitted to a journal or another conference with proceedings. Papers that contain any form of plagiarism will be rejected without review.

All papers must be submitted through the online submission platform PRIMORIS (http://www.insticc.org/primoris) and should follow the instructions and templates available under Guidelines and Templates (http://improve.scitevents.org/Guidelines.aspx). After the paper submission has been completed successfully, authors will receive an automatic confirmation e-mail.