[visionlist] Fwd: Faces 2019: Second CFP

Adrian Muscat adrian.muscat at um.edu.mt
Thu Jun 13 11:00:12 -04 2019

Apologies for cross-postings

Reminder: Deadline for the Faces 2019 workshop is 7th July, 2019

Workshop to be held at the 22nd Nordic Conference on Computational
Linguistics (NoDaLiDa 2019) Conference, Turku University, Turku, Finland,
on 30 September 2019.


Workshop goals
The workshop will provide a forum to present and discuss current research
focusing on the way human faces are perceived, understood and described by
humans, as well as the way computational models represent (and therefore
‘understand’) human faces and generate descriptions of faces, or facial
images corresponding to descriptions, for different purposes.

Recent research on multimodal analysis of images and text or the generation
of image descriptions has focussed on general data sets, involving everyday
scenes and objects, or on domain-specific data. While some of these might
contain faces as a subset, we argue that the specific challenges
associated with faces are not adequately represented. Descriptions of faces
are frequent in human communication, for example when one seeks to identify
an individual or distinguish one person from another. They are also
pervasive in descriptive or narrative text. Depending on the context, they
may focus on physical attributes, or incorporate inferred characteristics
and emotional elements.

The ability to adequately model and describe faces is interesting for a
variety of language technology applications, e.g., conversational agents
and interactive narrative generation, as well as forensic applications in
which faces need to be identified or generated from textual or spoken
descriptions. Such systems would need to process the images associated with
human faces together with their linguistic descriptions, therefore the
research needed to develop them is placed at the interface between vision
and language research, a cross-disciplinary area which has received
considerable attention in recent years, e.g. through the series of
workshops on Vision and Language organised by the European Network on
Integrating Vision and Language (iV&L Net), the 2015–2018 Language and
Vision Workshops, or recently the Workshop on Shortcomings in Vision and
Language.

Human faces are being studied by researchers from different research
communities, including those working with vision and language modeling,
natural language generation and understanding, cognitive science, cognitive
psychology, multimodal communication and embodied conversational agents.
The workshop aims to reach out to all these communities to explore the many
different aspects of research on human faces and foster cross-disciplinary
collaboration.

Relevant topics
We are inviting short and long papers of original research, surveys,
position papers, and demos on relevant topics that include, but are not
limited to, the following:

- Datasets of facial images and descriptions
- Experimental studies of facial expression understanding by humans
- Discovery of face descriptions in corpora
- Automatic structuring of descriptions and semantic web representations
and databases
- Algorithms for automatic facial description generation
- Emotion recognition by humans
- Emotion recognition, perception and interpretation of face descriptions
- Multimodal automatic emotion recognition from images and text
- Subjectivity in face perception
- Communicative, relational and intentional aspects of head pose and gaze
- Collection and annotation methods for facial descriptions
- Inferential aspects of facial descriptions
- Understanding and description of the human face in different contexts,
including commercial applications, art, forensics, etc.
- Modelling of the human face and facial expressions for embodied
conversational agents
- Generation of facial images from descriptions

Important dates
Paper submission deadline: July 7
Notification of acceptance: July 25
Camera-ready papers: September 1
Workshop schedule: September 18
Workshop: September 30 (half day)

Submission guidelines
Submission URL: https://easychair.org/conferences/?conf=faces2019

Short paper submissions may consist of up to 4 pages of content, while long
papers may have up to 8 pages of content. References do not count towards
these page limits.

All submissions must follow the NoDaLiDa 2019 style files, which are
available for LaTeX (preferred) and MS Word and can be retrieved from the
following address:


Submissions must be anonymous, i.e. they must not reveal the author(s) on
the title page or through self-references. Papers must be submitted
digitally, in PDF format, and uploaded through the online submission
system. The authors of accepted papers will be required to submit a
camera-ready version for inclusion in the final proceedings; further
details will follow the notification of acceptance.

Accepted papers will be published in the ACL Anthology.

Organisers
Patrizia Paggio, University of Copenhagen and University of Malta
Albert Gatt, University of Malta
Roman Klinger, University of Stuttgart

Programme committee
Adrian Muscat, University of Malta
Andreas Hotho, University of Würzburg
Andrew Hendrickson, Tilburg University
Anja Belz, University of Brighton
Costanza Navarretta, CST, University of Copenhagen
David Hogg, University of Leeds
Diego Frassinelli, University of Stuttgart
Emiel van Miltenburg, Tilburg University and VU Amsterdam
Francesca D’Errico, Roma Tre University
Gerard de Melo, Rutgers University
Gholamreza Anbarjafari, University of Tartu
Isabella Poggi, Roma Tre University
Jan Snajder, University of Zagreb
Michael Tarr, Carnegie Mellon University
Jordi Gonzalez, Universitat Autònoma de Barcelona
Lonneke van der Plas, University of Malta
Paul Buitelaar, National University of Ireland, Galway
Raffaella Bernardi, CiMEC Trento
Sebastian Padó, University of Stuttgart
Spandana Gella, University of Edinburgh

Albert Gatt
Institute of Linguistics and Language Technology
University of Malta
