<div dir="ltr"><span id="m_-1229265157835652269gmail-docs-internal-guid-9443efd2-7fff-1581-a89f-3034dc2dab1f"><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><b>AGAVE 2023 - Call for Papers</b></span></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">----------------------------------------------------------------------------------</span></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">1st International Workshop on Advances in Gaze Analysis,</span></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">Visual attention and Eye-gaze modelling (AGAVE)</span></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><a href="https://sites.google.com/view/agave2023" target="_blank" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://sites.google.com/view/agave2023</span></a></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">----------------------------------------------------------------------------------</span></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">In conjunction with the 22nd International Conference</span></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">on Image Analysis and Processing (ICIAP 2023)</span></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">11-15 September 2023, Udine (Italy)</span></p><p dir="ltr" style="text-align:center;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">----------------------------------------------------------------------------------</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><br></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">Visual attention refers to the human ability to focus on the most relevant parts of a visual scene (e.g. images or videos). Over the past decades, it has been extensively studied, leading to the proposal of numerous computational models aimed at predicting attention allocation on visual stimuli. While these models were initially developed in the field of psychology and neuroscience, the computer vision and pattern recognition community has become increasingly interested in this problem. This interest has been driven by the growing number of potential applications such as image/video compression, autonomous driving, and robotics.</span></p><div><br></div><div><br></div><div><b style="color:rgb(0,0,0);font-family:Arial;font-size:14.6667px;white-space:pre-wrap">TOPICS</b></div><div><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;font-size:11pt;white-space:pre-wrap">The workshop will investigate a range of topics covering (but not limited to) the following:</span></div></span><div><br><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Computational models of visual attention</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Scanpath prediction models on images or videos</span></p></li><li dir="ltr" 
style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Active vision and its applications</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">(Deep) visual saliency models and their applications</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Eye-tracking technology and its applications</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Human visual perception and its implications for computer vision</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Benchmarking and evaluation of visual attention and gaze analysis models</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Applications of visual attention and gaze analysis in image and video processing, robotics, and human-computer interaction</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">Eye-gaze models for biometrics, health, driving, virtual reality, education and social sciences.</span></p></li></ul><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">The workshop aims to bring together researchers and practitioners in the field of computer vision and image processing to exchange ideas and discuss recent advances in visual attention, gaze analysis and modelling, visual saliency, active vision, and deep saliency models. The workshop will provide a platform for researchers to present their work, share experiences, and discuss the challenges and opportunities in the field.</span></p><div><br></div><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap"><b>SUBMISSION & PUBLICATION</b></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">The paper must be prepared in accordance to the guidelines for authors of the ICIAP2023 main conference, provided by Springer that can be found at: </span><a href="https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines" target="_blank" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines</span></a></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">The maximum number of pages is 12 including references. Submissions will be selected through a peer review process. 
Each submission will be reviewed by at least two reviewers, considering originality, significance, clarity, soundness, relevance, and technical content. Authors may optionally submit additional material.

Accepted papers will be published in the ICIAP 2023 Conference Proceedings, published by Springer in the Lecture Notes in Computer Science (LNCS) series.

The paper submission platform is:
https://cmt3.research.microsoft.com/AGAVE2023


IMPORTANT DATES
- Paper submission: 08 July 2023
- Notification to authors: 01 August 2023
- Camera-ready paper submission: 20 August 2023


WORKSHOP ORGANIZERS
style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">Vittorio Cuculo, University of Modena and Reggio Emilia, Italy</span></li><li style="margin-left:15px"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">Alessandro D’Amelio, University of Milan, Italy</span></li><li style="margin-left:15px"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">Marcella Cornia, University of Modena and Reggio Emilia, Italy</span></li><li style="margin-left:15px"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline;white-space:pre-wrap">Dario Zanca, University of Erlangen-Nuremberg, Germany</span></li></ul><font color="#888888"><p></p></font><font color="#888888"><br></font></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><font color="#000000"><b style="font-size:12.8px">Marcella Cornia</b><span style="font-size:12.8px">, PhD</span></font></div><div><font color="#000000"><span style="font-size:12.8px">Tenure-Track Assistant Professor (RTD-B)</span></font></div><div><font color="#000000"><span style="font-size:12.8px">Dipartimento di Educazione e Scienze Umane</span></font></div><div dir="ltr"><font color="#000000"><span style="font-size:12.8px">Università degli Studi di Modena e Reggio Emilia</span><br style="font-size:12.8px"><span style="font-size:12.8px">e-mail: <a href="mailto:marcella.cornia@unimore.it" target="_blank">marcella.cornia@unimore.it</a></span><br style="font-size:12.8px"><span style="font-size:12.8px">phone: +39 059 2058790</span></font><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>