* DEADLINE APPROACHING *

Accepted workshops at FG 2020:

- 1st Workshop on Applied Multimodal Affect Recognition (AMAR)
- Affect Recognition in-the-wild: Uni/Multi-Modal Analysis & VA-AU-Expression Challenges
- Privacy-aware Computer Vision (PaCV)
- Faces and Gestures in E-health and Welfare
- The Third Facial Micro-Expressions Grand Challenge (MEGC): New Learning Methods for Spotting and Recognition
- International Workshop on Automated Assessment for Pain (AAP 2020)

See information below.


** 1st Workshop on Applied Multimodal Affect Recognition (AMAR 2020) **

Organizers: Shaun Canavan, Tempestt Neal, Marvin Andujar, and Lijun Yin

Website: http://www.csee.usf.edu/~tjneal/AMAR2020/index.html

Paper Submission Deadline: February 1, 2020

Abstract: Novel applications of affective computing have emerged in recent years, in domains ranging from health care to 5th-generation mobile networks. Many of these applications achieve improved emotion-classification performance by fusing multiple sources of data (e.g., audio, video, brain, face, thermal, physiological, environmental, positional, and text). Multimodal affect recognition has the potential to revolutionize the way industries and sectors use information about a person's emotional state, particularly given the flexibility in the choice of modalities and measurement tools (e.g., surveillance versus mobile-device cameras). Multimodal classification methods have proven highly effective at minimizing misclassification error in practice and under dynamic conditions. Further, multimodal classification models tend to be more stable over time than models relying on a single modality, increasing their reliability in sensitive applications such as mental-health monitoring and driver-state recognition. To continue the field's movement from lab to practice and to encourage new applications of affective computing, this workshop provides a forum for the exchange of ideas on future directions, including novel fusion methods and databases, innovations through interdisciplinary research, and emerging emotion-sensing devices.
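As a concrete, minimal illustration of the kind of fusion this area studies, the sketch below performs decision-level (late) fusion in Python: it averages per-modality class posteriors over an assumed seven-emotion label set. The probability values and modality names are made-up placeholders, not a workshop baseline.

    import numpy as np

    # Hypothetical per-modality posteriors over 7 emotion classes,
    # e.g. produced by separately trained face/audio/physiology models.
    face_probs   = np.array([0.05, 0.02, 0.03, 0.70, 0.05, 0.10, 0.05])
    audio_probs  = np.array([0.10, 0.05, 0.05, 0.50, 0.10, 0.10, 0.10])
    physio_probs = np.array([0.15, 0.05, 0.10, 0.40, 0.10, 0.10, 0.10])

    def late_fusion(prob_list, weights=None):
        """Decision-level fusion: weighted average of class posteriors."""
        probs = np.stack(prob_list)              # (n_modalities, n_classes)
        if weights is None:
            weights = np.ones(len(prob_list)) / len(prob_list)
        fused = np.average(probs, axis=0, weights=weights)
        return fused / fused.sum()               # renormalize

    labels = ["anger", "disgust", "fear", "happiness",
              "sadness", "surprise", "neutral"]
    fused = late_fusion([face_probs, audio_probs, physio_probs])
    print(labels[int(np.argmax(fused))])         # -> "happiness"

Feature-level (early) fusion, which concatenates modality features before classification, is the usual alternative; the weighted late fusion above is simply the easiest scheme to show in a few lines.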
** Affect Recognition in-the-wild: Uni/Multi-Modal Analysis & VA-AU-Expression Challenges **

Organizers: Stefanos Zafeiriou, Dimitrios Kollias, and Attila Schulc

Website: https://ibug.doc.ic.ac.uk/resources/affect-recognition-wild-unimulti-modal-analysis-va/

Paper Submission Deadline: February 1, 2020

Abstract: This workshop aims at advancing the state of the art in the analysis of human affective behavior in-the-wild. Representing human emotions has been a fundamental research topic. The most frequently used emotion representation is the categorical one, comprising the seven basic categories: Anger, Disgust, Fear, Happiness, Sadness, Surprise, and Neutral. Discrete emotions can also be described in terms of the Facial Action Coding System, in which all possible facial actions are expressed as Action Units (AUs). Finally, the dimensional model of affect has been proposed as a means to distinguish between subtly different displays of affect and to encode small changes in the intensity of each emotion on a continuous scale. The 2-D Valence and Arousal space (VA-space) is the most common dimensional representation: valence indicates how positive or negative an emotional state is, while arousal indicates how passive or active it is.

The workshop comprises the following. First, it features three Challenges which are, for the first time, based on the same database; these target (i) dimensional affect recognition (in terms of valence and arousal), (ii) categorical affect classification (in terms of the seven basic emotions), and (iii) facial action unit detection, all in-the-wild. These Challenges represent a significant step forward compared to previous events: in particular, they use Aff-Wild2, the first comprehensive benchmark for all three affect recognition tasks in-the-wild. Beyond the Challenges, the workshop solicits any original paper on databases, benchmarks, and technical contributions related to affect recognition in unconstrained conditions, using audio, visual, or other modalities (e.g., EEG). Both uni-modal and multi-modal approaches will be considered. Methodologies that detect action units from audio data would be of particular interest.
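To make the three target representations concrete, here is a minimal PyTorch sketch of a shared feature vector feeding three task heads, one per challenge task. The 512-d feature size and the 12-AU count are illustrative assumptions, not the challenge specification.

    import torch
    import torch.nn as nn

    class MultiTaskAffectHead(nn.Module):
        """Three heads over shared backbone features: VA regression,
        7-way expression classification, and per-AU detection.
        feat_dim=512 and n_aus=12 are illustrative assumptions."""
        def __init__(self, feat_dim=512, n_expr=7, n_aus=12):
            super().__init__()
            self.va   = nn.Linear(feat_dim, 2)       # valence, arousal
            self.expr = nn.Linear(feat_dim, n_expr)  # basic-emotion logits
            self.aus  = nn.Linear(feat_dim, n_aus)   # per-AU presence logits

        def forward(self, feats):
            va = torch.tanh(self.va(feats))          # continuous, in [-1, 1]
            expr_logits = self.expr(feats)           # softmax at loss time
            au_logits = self.aus(feats)              # sigmoid at loss time
            return va, expr_logits, au_logits

    head = MultiTaskAffectHead()
    feats = torch.randn(4, 512)                      # dummy backbone features
    va, expr_logits, au_logits = head(feats)
    print(va.shape, expr_logits.shape, au_logits.shape)  # (4,2) (4,7) (4,12)

The point of the sketch is the contrast in output types: a bounded continuous pair for the dimensional model, one mutually exclusive label for the categorical model, and independent binary decisions for AUs.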
** Privacy-aware Computer Vision (PaCV 2020) **

Organizers:

Albert Clapés, Computer Vision Center (Universitat Autònoma de Barcelona), aclapes@cvc.uab.es
Carla Morral, Universitat de Barcelona, carla.morral@gmail.com
Julio C. S. Jacques Junior, Universitat Oberta de Catalunya & Computer Vision Center (Universitat Autònoma de Barcelona), juliojj@gmail.com
Sergio Escalera, Universitat de Barcelona & Computer Vision Center (Universitat Autònoma de Barcelona), sergio@maia.ub.es

Website: http://chalearnlap.cvc.uab.es/workshop/35/description

Paper Submission Deadline: February 15, 2020

Abstract: Preserving people's privacy is a crucial issue faced by many computer vision applications. While exploiting video data from RGB cameras has proven successful in many human-analysis scenarios, it may come at the cost of compromising observed individuals' sensitive data. This negatively affects the popularity, and hence the deployment, of visual information systems despite their enormous potential to help people in their everyday lives. Privacy-aware systems need to minimize the amount of potentially sensitive information about observed subjects that is collected and/or handled through their pipelines while still achieving reliable performance. We aim to compile the latest efforts and research advances from the scientific community on all aspects of privacy in computer vision and pattern recognition algorithms at the data collection, learning, and inference stages. In addition, we are organizing a competition associated with this workshop on identity-preserving human detection (please refer to http://chalearnlap.cvc.uab.es/challenge/34/description).
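As one simple illustration of reducing sensitive information at the data-collection stage (an illustrative sketch only, unrelated to the official challenge protocol), faces can be redacted before any further processing. A minimal OpenCV version:

    import cv2

    # Identity-redaction sketch: blur detected faces before downstream use.
    # The Haar cascade is a stock OpenCV model; a production system would
    # likely use a stronger detector.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def redact_faces(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame

    # Usage: frame = redact_faces(cv2.imread("input.jpg"))

Blurring is only the crudest option; the workshop scope also covers learning- and inference-stage approaches that never expose identity in the first place.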
** Faces and Gestures in E-health and Welfare (FaGEW) **

Organizers: Cristina Palmero, Sergio Escalera, Maria Inés Torres, Anna Esposito, Alexa Moseguí

Website: http://chalearnlap.cvc.uab.es/workshop/36/description/

Paper Submission Deadline: February 1, 2020

Abstract: The Faces and Gestures in E-health and Welfare workshop aims to provide a common venue for the multidisciplinary researchers and practitioners of this area to share their latest approaches and findings, and to discuss the current challenges of machine-learning- and computer-vision-based e-health and welfare applications. The focus is on the use of single- or multi-modal face, gesture, and pose analysis. We expect this workshop to increase the visibility and importance of the area and to contribute, in the short term, to pushing the state of the art in the automatic analysis of human behavior for health and wellbeing applications.
** The Third Facial Micro-Expressions Grand Challenge (MEGC): New Learning Methods for Spotting and Recognition **

Organizers: Su-Jing Wang, Moi Hoon Yap, John See, Xiaopeng Hong

Website: http://megc2020.psych.ac.cn:81/

Paper Submission Deadline: January 31, 2020

Abstract: Facial micro-expressions (MEs) are involuntary movements of the face that occur spontaneously when a person experiences an emotion but attempts to suppress or repress the facial expression, most likely in a high-stakes environment. Computational analysis and automation of tasks on micro-expressions is an emerging area of face research, with strong interest appearing as recently as 2014. Only recently has the availability of a few spontaneously induced facial micro-expression datasets provided the impetus to advance the computational side further. CAS(ME)2 and SAMM Long Videos are two facial macro- and micro-expression databases that contain long video sequences. While much research has been done on short videos, there have been few attempts to spot micro-expressions in long videos. This workshop aims to promote interaction between researchers and scholars within this niche area and those from the broader computer vision and psychology communities. The workshop has a two-fold agenda: 1) to organize the third Grand Challenge for facial micro-expression research, involving spotting macro- and micro-expressions in long videos from CAS(ME)2 and SAMM; and 2) to solicit original works that address the modern challenges of ME research, such as ME recognition by self-supervised learning.
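To illustrate what spotting in long videos entails (a hypothetical sketch, not the challenge baseline), one can threshold a per-frame facial-motion score and then separate micro- from macro-expressions by interval duration, since micro-expressions are commonly taken to last under roughly 0.5 seconds:

    import numpy as np

    def spot_intervals(motion_score, fps, threshold, max_micro_s=0.5):
        """Hypothetical spotting sketch: find contiguous runs of frames
        whose motion score exceeds a threshold, then label each run as
        micro or macro by its duration."""
        above = np.concatenate(([False], motion_score > threshold, [False]))
        edges = np.flatnonzero(np.diff(above.astype(int)))
        starts, ends = edges[::2], edges[1::2]   # [start, end) frame ranges
        intervals = []
        for s, e in zip(starts, ends):
            kind = "micro" if (e - s) / fps <= max_micro_s else "macro"
            intervals.append((int(s), int(e), kind))
        return intervals

    # Toy signal at 30 fps: a 9-frame burst (0.3 s -> micro) and a
    # 45-frame rise (1.5 s -> macro) in an otherwise flat score.
    score = np.zeros(300)
    score[50:59] = 1.0
    score[150:195] = 1.0
    print(spot_intervals(score, fps=30, threshold=0.5))
    # -> [(50, 59, 'micro'), (150, 195, 'macro')]

The hard part, and the subject of the challenge, is producing a motion score robust enough that such thresholding works on hours of unscripted video; the interval logic above is only the trivial final step.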
** International Workshop on Automated Assessment for Pain (AAP 2020) **

Organizers: Steffen Walter, Zakia Hammal, Nadia Berthouze

Website: http://aap-2020.net

Paper Submission Deadline: January 10, 2020

Abstract: Pain is typically measured by patient self-report, but self-reported pain is difficult to interpret and may be impaired or, in some circumstances, impossible to obtain, for instance in patients with restricted verbal abilities such as neonates and young children, or in patients with certain neurological or psychiatric impairments (e.g., dementia). Additionally, subjectively experienced pain may be partly or even completely unrelated to the somatic pathology of tissue damage and other disorders. The standard self-assessment of pain therefore does not always allow for an objective and reliable assessment of the quality and intensity of pain. Given individual differences among patients, their families, and healthcare providers, pain is often poorly assessed, underestimated, and inadequately treated. Improving pain assessment requires objective, valid, and efficient measurement of the onset, intensity, and pattern of occurrence of pain. To address these needs, several efforts have been made in the machine learning and computer vision communities toward automatic and objective assessment of pain from video as a powerful alternative to self-report.