[visionlist] FG 2020 - call for workshop papers (reminder)

Daniel Acevedo dacevedo at dc.uba.ar
Thu Jan 9 18:07:03 -04 2020


* DEADLINE APPROACHING *



Accepted workshops at FG 2020:

- 1st Workshop on Applied Multimodal Affect Recognition (AMAR)

- Affect Recognition in-the-wild: Uni/Multi-Modal Analysis & VA-AU-Expression Challenges

- Privacy-aware Computer Vision (PaCV)

- Faces and Gestures in E-health and Welfare 

- The Third Facial Micro-Expressions Grand Challenge (MEGC): New Learning Methods for Spotting and Recognition

- International Workshop on Automated Assessment for Pain (AAP 2020)


See information below.


** 1st Workshop on Applied Multimodal Affect Recognition (AMAR 2020) **

Organizers: 

Shaun Canavan, Tempestt Neal, Marvin Andujar, and Lijun Yin

Website: 

http://www.csee.usf.edu/~tjneal/AMAR2020/index.html

Paper Submission Deadline: February 1, 2020 

Abstract:

Novel applications of affective computing have emerged in recent years in domains ranging from health care to the 5th generation mobile network. Many of these applications have achieved improved emotion classification performance by fusing multiple sources of data (e.g., audio, video, brain, face, thermal, physiological, environmental, positional, and text data). Multimodal affect recognition has the potential to revolutionize the way various industries and sectors utilize information gained from recognition of a person's emotional state, particularly considering the flexibility in the choice of modalities and measurement tools (e.g., surveillance versus mobile device cameras). Multimodal classification methods have proven highly effective at minimizing misclassification error in practice and in dynamic conditions. Further, multimodal classification models tend to be more stable over time than models relying on a single modality, increasing their reliability in sensitive applications such as mental health monitoring and automobile driver state recognition. To continue the field's trend from lab to practice and encourage new applications of affective computing, this workshop provides a forum for the exchange of ideas on future directions, including novel fusion methods and databases, innovations through interdisciplinary research, and emerging emotion sensing devices.
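[Editor's illustration, not part of the call: one common way to combine modalities is decision-level (late) fusion, where class probabilities produced independently by each modality are averaged. The modality names, class count, and uniform weighting below are assumptions for the example.]

import numpy as np

def late_fusion(prob_per_modality, weights=None):
    # Weighted average of per-modality class probabilities
    # over the same emotion label set.
    probs = np.stack(list(prob_per_modality.values()))  # (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()

# Example: face video and physiological signals over three emotion classes.
fused = late_fusion({
    "video": np.array([0.6, 0.3, 0.1]),
    "physio": np.array([0.4, 0.4, 0.2]),
})
print(fused.argmax())  # index of the predicted emotion class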



** Affect Recognition in-the-wild: Uni/Multi-Modal Analysis & VA-AU-Expression Challenges **

Organizers: 

Stefanos Zafeiriou, Dimitrios Kollias, and Attila Schulc 

Website: 

https://ibug.doc.ic.ac.uk/resources/affect-recognition-wild-unimulti-modal-analysis-va/
Paper Submission Deadline: February 1, 2020 

 
Abstract:

This workshop aims at advancing the state of the art in the analysis of human affective behavior in-the-wild. Representing human emotions has been a basic topic of research. The most frequently used emotion representation is the categorical one, comprising the seven basic categories, i.e., Anger, Disgust, Fear, Happiness, Sadness, Surprise and Neutral. Discrete emotion representation can also be described in terms of the Facial Action Coding System model, in which all possible facial actions are described in terms of Action Units (AUs). Finally, the dimensional model of affect has been proposed as a means to distinguish between subtly different displays of affect and encode small changes in the intensity of each emotion on a continuous scale. The 2-D Valence and Arousal Space (VA-Space) is the most common dimensional emotion representation; valence shows how positive or negative an emotional state is, whilst arousal shows how passive or active it is. The workshop is composed of the following: First, it comprises three Challenges, which are based, for the first time, on the same database; these target (i) dimensional affect recognition (in terms of valence and arousal), (ii) categorical affect classification (in terms of the seven basic emotions) and (iii) facial action unit detection, all in-the-wild. These Challenges will produce a significant step forward compared to previous events. In particular, they use Aff-Wild2, the first comprehensive benchmark for all three affect recognition tasks in-the-wild. In addition, the Workshop does not focus only on the three Challenges: it will also solicit any original paper on databases, benchmarks and technical contributions related to affect recognition, using audio, visual or other modalities (e.g., EEG), in unconstrained conditions. Both uni-modal and multi-modal approaches will be considered. It would be of particular interest to see methodologies that study detection of action units based on audio data.
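[Editor's illustration, not from the workshop materials: the three annotation schemes described above can be sketched as a single per-frame record. The AU ids shown and the [-1, 1] value ranges are assumptions for the example.]

from dataclasses import dataclass, field
from typing import Dict

BASIC_EMOTIONS = ["Anger", "Disgust", "Fear", "Happiness",
                  "Sadness", "Surprise", "Neutral"]

@dataclass
class FrameAnnotation:
    expression: str  # one of the seven basic categories
    action_units: Dict[int, bool] = field(default_factory=dict)  # AU id -> active?
    valence: float = 0.0  # [-1, 1]: negative .. positive
    arousal: float = 0.0  # [-1, 1]: passive .. active

# e.g., a smiling frame: AU6 (cheek raiser) + AU12 (lip corner puller)
frame = FrameAnnotation(expression="Happiness",
                        action_units={6: True, 12: True},
                        valence=0.8, arousal=0.5)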



** Privacy-aware Computer Vision (PaCV 2020) **

Organizers: 

Albert Clapés, Computer Vision Center (Universitat Autònoma de Barcelona), aclapes at cvc.uab.es
Carla Morral, Universitat de Barcelona, carla.morral at gmail.com
Julio C. S. Jacques Junior, Universitat Oberta de Catalunya & Computer Vision Center (Universitat Autònoma de Barcelona), juliojj at gmail.com
Sergio Escalera, Universitat de Barcelona & Computer Vision Center (Universitat Autònoma de Barcelona), sergio at maia.ub.es
Website: http://chalearnlap.cvc.uab.es/workshop/35/description
Paper Submission Deadline: February 15, 2020 

 
Abstract:

Preserving people’s privacy is a crucial issue faced by many computer vision applications. While exploiting video data from RGB cameras has proven successful in many human analysis scenarios, it may come at the higher cost of compromising observed individuals' sensitive data. This negatively affects the popularity -- and hence the deployment -- of visual information systems despite their enormous potential to help people in their everyday life. Privacy-aware systems need to minimize the amount of potentially sensitive information about observed subjects that is collected and/or handled through their pipelines while still achieving reliable performance. We aim to compile the latest efforts and research advances from the scientific community in all aspects of privacy in computer vision/pattern recognition algorithms at the data collection, learning, and inference stages. In addition, we are organizing a competition associated with this workshop on identity-preserving human detection (please refer to http://chalearnlap.cvc.uab.es/challenge/34/description).
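[Editor's illustration, not the competition protocol: a minimal sketch of one identity-preserving pipeline stage, blurring sensitive regions before a frame is stored or analyzed. detect_faces is a hypothetical placeholder for any face detector.]

import cv2

def anonymize(frame, detect_faces, ksize=(51, 51)):
    # Blur detected face regions in place before the frame is
    # stored or passed to downstream analysis.
    for (x, y, w, h) in detect_faces(frame):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, ksize, 0)
    return frame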



** Faces and Gestures in E-health and Welfare (FaGEW) **

Organizers: Cristina Palmero, Sergio Escalera, Maria Inés Torres, Anna Esposito, Alexa Moseguí

Website: http://chalearnlap.cvc.uab.es/workshop/36/description/
Paper Submission Deadline: February 1, 2020 

Abstract: The Faces and Gestures in E-health and Welfare workshop aims to provide a common venue for multidisciplinary researchers and practitioners in this area to share their latest approaches and findings, as well as to discuss the current challenges of machine learning and computer vision-based e-health and welfare applications. The focus is on the use of single- or multi-modal face, gesture and pose analysis. We expect this workshop to increase the visibility and importance of this area and to contribute, in the short term, to pushing the state of the art in the automatic analysis of human behaviors for health and wellbeing applications.



** The Third Facial Micro-Expressions Grand Challenge (MEGC): New Learning Methods for Spotting and Recognition **

Organizers: Su-Jing Wang, Moi Hoon Yap, John See, Xiaopeng Hong

Website: http://megc2020.psych.ac.cn:81/
Paper Submission Deadline: January 31, 2020 

 
Abstract: Facial micro-expressions (MEs) are involuntary movements of the face that occur spontaneously when a person experiences an emotion but attempts to suppress or repress the facial expression, most likely in a high-stakes environment. Computational analysis and automation of tasks on micro-expressions are an emerging area of face research, with strong interest appearing as recently as 2014. Only recently has the availability of a few spontaneously induced facial micro-expression datasets provided the impetus to advance further on the computational side. CAS(ME)2 and SAMM Long Videos are two facial macro- and micro-expression databases that contain long video sequences. While much research has been done on short videos, there have been few attempts to spot micro-expressions in long videos. This workshop is organized with the aim of promoting interactions between researchers and scholars from within this niche area of research, as well as those from broader, general areas of computer vision and psychology research. This workshop has a two-fold agenda: 1) to organize the third Grand Challenge for facial micro-expression research, involving the spotting of macro- and micro-expressions in long videos in CAS(ME)2 and SAMM; and 2) to solicit original works that address a variety of modern challenges of ME research, such as ME recognition by self-supervised learning.
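[Editor's illustration, not the challenge protocol: a naive spotting baseline frames the long-video task as thresholding a per-frame motion signal into candidate intervals. The motion signal and parameters below are assumptions.]

import numpy as np

def spot_intervals(motion, threshold, min_len=3):
    # motion: 1-D array of per-frame motion magnitudes
    # (e.g., mean optical-flow magnitude); returns (start, end)
    # frame-index pairs where motion stays above threshold.
    active = motion > threshold
    intervals, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len:
                intervals.append((start, i - 1))
            start = None
    if start is not None and len(active) - start >= min_len:
        intervals.append((start, len(active) - 1))
    return intervals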



** International Workshop on Automated Assessment for Pain (AAP 2020) **

Organizers: Steffen Walter, Zakia Hammal, Nadia Berthouze

Website: http://aap-2020.net
Paper Submission Deadline: January 10, 2020 

 
Abstract: Pain is typically measured by patient self-report, but self-reported pain is difficult to interpret and may be impaired or, in some circumstances, impossible to obtain, for instance in patients with restricted verbal abilities, such as neonates and young children, or in patients with certain neurological or psychiatric impairments (e.g., dementia). Additionally, the subjectively experienced pain may be partly or even completely unrelated to the somatic pathology of tissue damage and other disorders. Therefore, the standard self-assessment of pain does not always allow for an objective and reliable assessment of the quality and intensity of pain. Given individual differences among patients, their families, and healthcare providers, pain is often poorly assessed, underestimated, and inadequately treated. To improve the assessment of pain, objective, valid, and efficient assessment of the onset, intensity, and pattern of occurrence of pain is necessary. To address these needs, several efforts have been made in the machine learning and computer vision communities toward automatic and objective assessment of pain from video as a powerful alternative to self-reported pain.