[visionlist] The 6th Workshop on Vision and Language (VL'17): Call for Posters and Demos

Erkut Erdem erkut at cs.hacettepe.edu.tr
Wed Feb 8 02:51:48 EST 2017

The 6th Workshop on Vision and Language (VL’17)
At EACL’17 in Valencia, Spain

Computational vision-language integration is commonly taken to mean the process of associating visual and 
corresponding linguistic pieces of information. Fragments of natural language, in the form of tags, captions, subtitles,
surrounding text or audio, can aid the interpretation of image and video data by adding context or disambiguating
visual appearance. Labeled images are essential for training object or activity classifiers. Visual data can help
resolve challenges in language processing such as word sense disambiguation, language understanding, machine
translation and speech recognition. Sign language and gestures are languages that require visual interpretation.
Studying language and vision together can also provide new insight into cognition and universal representations of
knowledge and meaning. The focus of researchers in these areas is increasingly turning towards models for grounding language in action and perception, and there is growing interest in models that are capable of learning from, and exploiting, multi-modal data, constructing semantic representations from both linguistic and visual or
perceptual input.

The 6th Workshop on Vision and Language (VL’17) aims to address all the above, with a particular focus on the
integrated modelling of vision and language. We welcome papers describing original research combining language
and vision. To encourage the sharing of novel and emerging ideas we also welcome papers describing new datasets,
grand challenges, open problems, benchmarks and work in progress as well as survey papers.
Topics of interest include, but are not limited to, the following (in alphabetical order):

* Computational modelling of human vision and language
* Computer graphics generation from text
* Cross-lingual image captioning
* Detection/Segmentation by referring expressions
* Human-computer interaction in virtual worlds
* Human-robot interaction
* Image and video description and summarisation
* Image and video labelling and annotation
* Image and video retrieval
* Language-driven animation
* Machine translation with visual enhancement
* Medical image processing
* Models of distributional semantics involving vision and language
* Multi-modal discourse analysis
* Multi-modal human-computer communication
* Multi-modal machine translation
* Multi-modal temporal and spatial semantics recognition and resolution
* Recognition of narratives in text and video
* Recognition of semantic roles and frames in text, images and video
* Retrieval models across different modalities
* Text-to-image generation
* Visual question answering / visual Turing challenge
* Visually grounded language understanding
* Visual storytelling

Accepted poster submissions will be presented in the form of brief 'teaser' presentations, followed by a poster presentation during the workshop poster session, and will be published in the VL'17 proceedings.

Poster Abstract Submission

Abstracts for posters should be up to 2 pages long, plus references. Submissions should adhere to the EACL 2017 format (style files available at http://eacl2017.org/index.php/calls/call-for-papers) and should be in PDF format.

Please make your submission via the workshop submission page: https://www.softconf.com/eacl2017/VL2017

Important Dates

Feb 28, 2017: Workshop Poster Abstracts Due Date
Mar 5, 2017: Notification of Acceptance
Mar 10, 2017: Camera-ready Abstracts Due
April 4, 2017: VL'17 Workshop

Programme Committee

Raffaella Bernardi, University of Trento, Italy
Darren Cosker, University of Bath, UK
Aykut Erdem, Hacettepe University, Turkey
Jacob Goldberger, Bar Ilan University, Israel
Jordi Gonzalez, Autonomous University of Barcelona, Spain
Frank Keller, University of Edinburgh, UK
Douwe Kiela, University of Cambridge, UK
Adrian Muscat, University of Malta, Malta
Arnau Ramisa, IRI UPC Barcelona, Spain
Carina Silberer, University of Edinburgh, UK
Caroline Sporleder, Germany
Josiah Wang, University of Sheffield, UK
Further members t.b.c.


Organisers

Anya Belz, University of Brighton, UK
Katerina Pastra, Cognitive Systems Research Institute (CSRI), Athens, Greece
Erkut Erdem, Hacettepe University, Turkey
Krystian Mikolajczyk, Imperial College London, UK


a.s.belz at brighton.ac.uk
http://vision.cs.hacettepe.edu.tr/vl2017/

This Workshop is organised by European COST Action IC1307: The European Network on Integrating Vision and Language (iV&L Net)

The explosive growth of visual and textual data (both on the World Wide Web and held in private repositories by
diverse institutions and companies) has led to urgent requirements in terms of search, processing and management
of digital content. Solutions for providing access to or mining such data depend on the semantic gap between
vision and language being bridged, which in turn calls for expertise from two hitherto unconnected fields: Computer
Vision (CV) and Natural Language Processing (NLP). The central goal of iV&L Net is to build a European
CV/NLP research community, targeting 4 focus themes: (i) Integrated Modelling of Vision and Language for CV
and NLP Tasks; (ii) Applications of Integrated Models; (iii) Automatic Generation of Image & Video Descriptions;
and (iv) Semantic Image & Video Search. iV&L Net will organise annual conferences, technical meetings,
partner visits, data/task benchmarking, and industry/end-user liaison. Europe has many of the world's leading
CV and NLP researchers. Tapping into this expertise, and bringing the collaboration, networking and community
building enabled by COST Actions to bear, iV&L Net will have substantial impact, in terms of advances in both
theory/methodology and real-world technologies.