<div dir="ltr"><p style="margin-bottom:0.1in;line-height:15.6px">10th Workshop on Human Behavior Understanding (<span class="gmail-il">HBU</span>)<br>In conjunction with the International Conference on Computer Vision (ICCV) 2019<br>27 October 2019, Seoul, Korea<br><br>Focus Theme: <strong>Generating, Forging and Detecting Fake Human Behavioral Data</strong><br><a href="https://project.inria.fr/whbu/" target="_blank">https://project.inria.fr/whbu/</a></p><p style="margin-bottom:0.1in;line-height:15.6px">******************************<br><br>CALL FOR PAPERS<br><br>As in many other computer vision tasks, deep learning has brought revolutionary advances in human behavior understanding from visual data. Deep models are now extremely effective not only in detecting and analyzing human faces, bodies and collective activities, but also in generating realistic human-like behavioral data. From full-body deepfakes to AI-based translation dubbing, deep networks can now synthesize images and videos of humans such that they are virtually indistinguishable from real ones. The workshop will focus on recent advances and novel methodologies for generating human behavior data, with special emphasis on approaches for forging images and videos depicting real-looking human faces and/or full bodies, and on algorithms for detecting fake human-like visual data.</p><p style="margin-bottom:0.1in;line-height:15.6px">The <span class="gmail-il">HBU</span> workshops, organized since 2010 as satellites to the ICPR’10, AMI’11, IROS’12, ACM Multimedia’13, ECCV’14, UBICOMP’15, ACM Multimedia’16, FG’18, and ECCV’18 conferences, aim to examine developments toward smarter computers that can sense human behavior. 
These events foster a unique cross-pollination of disciplines, bringing together researchers from mobile and ubiquitous computing, computer vision, multimedia, robotics, HCI, artificial intelligence, pattern recognition, interaction design, ambient intelligence, and psychology. The diversity of human behavior, the richness of multi-modal data that arises from its analysis, and the multitude of applications that demand rapid progress in this area ensure that the <span class="gmail-il">HBU</span> workshops provide a timely and relevant discussion and dissemination platform.</p><p style="margin-bottom:0.1in;line-height:15.6px">Each edition of the <span class="gmail-il">HBU</span> workshop has had a different focus theme, dealing with a newly emerging topic or question in the automatic analysis of human behavior. This year’s focus theme is of high interest to computer vision researchers: <strong>Generating, Forging and Detecting Fake Human Behavioral Data</strong>. The automatic generation of visual content is currently a very active topic in the community. With this edition of the <span class="gmail-il">HBU</span> workshops, we aim to foster research on generating visual data (still images and videos) that describe human behavior, from both the applicative and methodological points of view.</p><p style="margin-bottom:0.1in;line-height:15.6px"><br><br></p><p style="margin-bottom:0.1in;line-height:15.6px">******************************<br><br>TOPICS<br><br>The ICCV 2019 <span class="gmail-il">HBU</span> workshop, in addition to covering the main themes of human behavior understanding, deals with generating human behavior data, with special <strong>emphasis on methodologies and approaches for forging images and videos depicting real-looking human faces </strong>and/or full bodies, and on algorithms for detecting fake human-like visual data. 
Contributions based on deep neural architectures are welcome, as well as methods based on other techniques (e.g. parametric models). These contributions could address the following topics:</p><p style="margin-bottom:0.1in;line-height:15.6px"><b>Human Behavior Analysis Systems</b></p><ul><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Action and activity recognition</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Affect analysis</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Face analysis</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Gaze, attention and saliency</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Gestures and haptic interaction</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Social signal processing</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Voice and speech analysis</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Theoretical frameworks of behavior analysis</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Data collection, annotation, and benchmarking</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">User studies and human factors</p></li></ul><p style="margin-bottom:0.1in;line-height:15.6px"><b>Generating Visual Data of Human Behavior</b></p><ul><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Methods for face synthesis and modification of facial attributes (e.g. age, expression).</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Approaches for generating human bodies and altering their properties (e.g. 
3D pose, clothes).</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Techniques for forging human-like behavioral data.</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Methodologies for counteracting adversarial attacks.</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Techniques for synthesizing visual data depicting collective human behavior.</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Novel deep generative models for sequence-like data generation.</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Approaches to synthesize multi-modal human behavioral data.</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Applications (e.g. surveillance, entertainment, autonomous driving, fashion, robotics).</p></li></ul><p style="margin-bottom:0.1in;line-height:15.6px"><br><br>Papers must be submitted online through the EasyChair submission system at:<br><a href="https://easychair.org/conferences/?conf=hbu2019" target="_blank">https://easychair.org/conferences/?conf=hbu2019</a><br>and will be double-blind peer-reviewed by at least two reviewers.<br>Submissions should conform to the ICCV 2019 proceedings style.</p><p style="margin-bottom:0.1in;line-height:15.6px">We expect two kinds of submissions:</p><ul><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Full papers of new contributions (8 pages NOT including references)</p></li><li style="margin-left:15px"><p style="margin-bottom:0.1in;line-height:15.6px">Short papers describing incremental/preliminary work (2 pages NOT including references)</p></li></ul><p style="margin-bottom:0.1in;line-height:15.6px"><br>More info at: <a href="https://project.inria.fr/whbu/" target="_blank">https://project.inria.fr/whbu/</a><br><br>******************************<br>IMPORTANT DATES<br>Regular 
Paper Submission: <strong>July 12th, 2019 (EXTENDED)</strong><br>Extended Abstract Submission: <strong>July 15th, 2019</strong><br>Notification of Acceptance: <strong>July 31st, 2019</strong><br>Camera-Ready: <strong>August 15th, 2019</strong><br><br>******************************<br>INVITED SPEAKERS</p><p style="margin-bottom:0in;line-height:15.6px"><strong>Cristian Sminchisescu</strong>, Google & Lund University, SE<br><b>Hao Li</b>, University of Southern California, USA<br><br><br>******************************<br>ORGANIZERS:<br>Xavier Alameda-Pineda, Inria, FR.<br>Xiaoming Liu, Michigan State University, USA.<br>Elisa Ricci, FBK & University of Trento, IT.<br>Albert Ali Salah, Boğaziçi University, TR & Utrecht University, NL.<br>Nicu Sebe, University of Trento, IT.<br>Sergey Tulyakov, Snap Research, USA.</p></div>