[visionlist] 2nd CfP: New challenge & workshop on multimodal computing (ACL 2018)
Louis-Philippe Morency
morency at cs.cmu.edu
Mon Mar 26 13:00:15 -05 2018
First Workshop on Computational Modeling of Human Multimodal Language
Co-located with ACL 2018 conference, Melbourne, Australia
Date: 20 July 2018 (submission deadline: April 20, 2018)
http://multicomp.cs.cmu.edu/acl2018multimodalworkshop
The first ACL 2018 Workshop on Computational Modeling of Human Multimodal Language offers a unique opportunity for interdisciplinary researchers to study and model interactions between language, vision and voice. The workshop also hosts a Grand Challenge that introduces new shared tasks built on the recently released CMU-MOSEI dataset: more than 23,000 annotated videos from more than 1,000 different speakers, covering more than 200 topics. Two shared tasks are presented: (1) multimodal sentiment analysis and (2) multimodal emotion recognition. The workshop also introduces the CMU Multimodal Data SDK to the scientific community for conveniently loading large-scale multimodal datasets into formats readily usable with TensorFlow and PyTorch.
The focus of this workshop is on joint analysis of language (spoken text), vision (gestures and expressions) and acoustic (paraverbal) modalities. We seek the following types of submissions:
* Grand challenge papers: Papers summarizing research efforts on the CMU-MOSEI shared tasks on multimodal sentiment analysis and/or emotion recognition. Grand challenge papers are up to 8 pages, including references.
* Full and short papers: These papers present substantial, original and unpublished research on human multimodal language. Full papers are up to 8 pages including references; short papers are 4 pages plus 1 page for references.
Topics of interest for full and short papers include:
* Multimodal sentiment analysis
* Multimodal emotion recognition
* Multimodal affective computing
* Multimodal speaker traits recognition
* Dyadic multimodal interactions
* Multimodal dialogue modeling
* Cognitive modeling and multimodal interaction
* Statistical analysis of human multimodal language
Submissions must be formatted according to the ACL 2018 style files: http://acl2018.org/call-for-papers/#paper-submission-and-templates
Important Dates
* Grand challenge data release: 18 January 2018
* Grand challenge test set available: 9 March 2018
* Paper deadline [grand challenge, full and short]: 20 April 2018
* Notification of Acceptance: 14 May 2018
* Camera ready: 28 May 2018
* Workshop date and location: 20 July 2018, ACL 2018, Melbourne, Australia
Workshop Organizers
Amir Zadeh (Language Technologies Institute, Carnegie Mellon University)
Louis-Philippe Morency (Language Technologies Institute, Carnegie Mellon University)
Paul Pu Liang (Machine Learning Department, Carnegie Mellon University)
Soujanya Poria (Temasek Laboratories, Nanyang Technological University)
Erik Cambria (Temasek Laboratories, Nanyang Technological University)
Stefan Scherer (Institute for Creative Technologies, University of Southern California)