[visionlist] [CfP] [Extended Deadline and Updates] Workshop on Sign Language Recognition, Translation & Production @ ECCV 2020

Necati Cihan Camgöz cihancamgoz at gmail.com
Sun Jul 5 10:52:23 -04 2020

 *** Please accept our apologies if you receive multiple copies of this CfP

We cordially invite your submissions to our Sign Language Recognition,
Translation & Production (SLRTP) Workshop, which will be held in
conjunction with ECCV 2020 (Virtual).



Sign Language Recognition, Translation & Production (SLRTP) Workshop @ ECCV
2020 (Virtual)

*Website:* www.slrtp.com


Important Dates:

- Paper submission: *July 19, 2020*

- Notification of acceptance: *July 26, 2020*

- Preprint and presentation submission***: *August 6, 2020*

- Workshop date: *August 23, 2020 (virtual)*

- Camera ready: *September 15, 2020*

*** *The presentation submission deadline is final and will not be extended*,
as the presentations will be translated into ASL and BSL.



- Each accepted paper will have a pre-recorded video presentation, translated
into both ASL and BSL, which will be made available to attendees along with
the preprint version of the paper before the workshop.

- The workshop will include a live Q&A session with ASL and BSL interpretation.


This workshop brings together researchers working on different aspects of
vision-based sign language research (including body posture, hands and
face) and sign language linguists. The aims are to increase the linguistic
understanding of sign languages within the computer vision community, and
also to identify the strengths and limitations of current work and the
problems that need solving. Finally, we hope that the workshop will
cultivate future collaborations.

Recent developments in image captioning, visual question answering and
visual dialogue have stimulated significant interest in approaches that
fuse visual and linguistic modelling. As spatio-temporal linguistic
constructs, sign languages represent a unique challenge where vision and
language meet. Computer vision researchers have been studying sign
languages in isolated recognition scenarios for the last three decades.
However, now that large-scale continuous corpora are becoming available,
research has moved towards continuous sign language recognition.
More recently, the new frontier has become sign language translation and
production where new developments in generative models are enabling
translation between spoken/written language and continuous sign language
videos, and vice versa. In this workshop, we propose to bring together
researchers to discuss the open challenges that lie at the intersection of
sign language and computer vision.


Confirmed Speakers:

- Lale Akarun, Bogazici University

- Matt Huenerfauth, Rochester Institute of Technology

- Oscar Koller, Microsoft

- Bencie Woll, Deafness Cognition and Language Research Centre (DCAL),
University College London

Call for Papers:

Papers can be submitted to CMT at
https://cmt3.research.microsoft.com/SLRTP2020/ *by the end of July 19
(Anywhere on Earth)*. We welcome submissions of both new work and
work that has been accepted at other venues. In line with the Sign
Language Linguistics Society (SLLS) Ethics Statement for Sign Language
Research <https://slls.eu/slls-ethics-statement/>, we encourage submissions
from Deaf researchers or from teams which include Deaf individuals,
particularly as co-authors but also in other roles (advisor, research
assistant, etc).

Suggested topics for contributions include, but are not limited to:

- Continuous Sign Language Recognition and Analysis

- Multi-modal Sign Language Recognition and Translation

- Generative Models for Sign Language Production

- Non-manual Features and Facial Expression Recognition for Sign Language

- Hand Shape Recognition

- Lip-reading/speechreading

- Sign Language Recognition and Translation Corpora

- Semi-automatic Corpora Annotation Tools

- Human Pose Estimation

Paper Format & Proceedings: See our webpage slrtp.com for detailed
formatting and proceedings information.

Workshop languages/accessibility: The languages of this workshop are
English, British Sign Language (BSL) and American Sign Language (ASL).
Interpretation between BSL/English and ASL/English will be provided, as
will English subtitles, for all pre-recorded and live Q&A sessions. If you
have questions about this, please contact dcal at ucl.ac.uk.


Organisers:

Necati Cihan Camgoz, University of Surrey

Gul Varol, University of Oxford

Samuel Albanie, University of Oxford

Richard Bowden, University of Surrey

Andrew Zisserman, University of Oxford

Kearsy Cormier, DCAL