<div dir="ltr"><div class="gmail_default" style=""><span id="gmail-m_-3619212393421385275m_-7970119898652032136gmail-m_-897496477774376951gmail-docs-internal-guid-b64abadc-7fff-cbdf-701a-a68e7f64ecb1" style=""><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> <span style="color:rgb(33,33,33);font-family:"Segoe UI","Segoe WP","Segoe UI WPC",Tahoma,Arial,sans-serif">*** Please accept our apologies if you receive multiple copies of this CfP ***</span><br></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;font-weight:700;white-space:pre-wrap"><br></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><font face="verdana, sans-serif"><span style="color:rgb(33,33,33)">We cordially invite your submissions to our </span>Sign Language Recognition, Translation & Production (SLRTP) Workshop, which will be held in conjunction with ECCV 2020 (Virtual).</font></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> <br></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt">---------------------------------------------------------------------------------------  <br></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;font-size:11pt;font-weight:700;white-space:pre-wrap">CALL FOR PAPERS</span><br></p><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;font-size:11pt;font-weight:700;white-space:pre-wrap"><br></span></p><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;font-size:11pt;font-weight:700;white-space:pre-wrap">Sign Language Recognition, Translation & Production (SLRTP) Workshop @ ECCV 2020 
(Virtual)</span><br></p><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><br></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b>Website: </b><a href="https://www.slrtp.com/" target="_blank" style="text-decoration-line:none;font-weight:700">www.slrtp.com</a></span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Dates:</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Paper submission: <b>July 19, 2020</b></span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Notification of acceptance: <b>July 26, 2020</b></span></p><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Preprint and presentation submission<b>*</b>: <b>August 6, 
2020</b></span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Workshop date: <b>August 23, 2020 (virtual)</b></span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Camera ready: <b>September 15, 2020</b></span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><br></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><b>*</b> <b>The presentation submission deadline is final and will not be extended</b>, as presentations will be translated into ASL and BSL.</p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><br></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt">------------------------------------------------------</p><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><b>Updates:</b></p><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><b><br></b></p><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt">- Each accepted paper will have a pre-recorded video presentation, translated into both ASL and BSL, made available to attendees along with the preprint version of the paper before the workshop.</p><p style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt">- The workshop will include a live Q&A session with ASL and BSL interpretation.</p><p 
style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><br></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt">------------------------------------------------------</p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><br></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">This workshop brings together researchers working on different aspects of <span class="gmail-il">vision</span>-based sign language research (including body posture, hands and face) and sign language lingui</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">sts. The aims are to increase the linguistic understanding of sign languages within the computer <span class="gmail-il">vision</span> community, and also to identify the strengths and limitations of current work and the problems that need solving. 
Finally, we hope that the workshop will cultivate future collaborations.</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Recent developments in image captioning, visual question answering and visual dialogue have stimulated significant interest in approaches that fuse visual and linguistic modelling. As spatio-temporal linguistic constructs, sign languages represent a unique challenge where <span class="gmail-il">vision</span> and language meet. Computer <span class="gmail-il">vision</span> researchers have been studying sign languages in isolated recognition scenarios for the last three d</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">ecades. However, now that large scale continuous corpora are beginning to become available, research has moved towards continuous sign language recognition. More recently, the new frontier has become sign language translation and production where new developments in generative models are enabling translation between spoken/written language and continuous sign language videos, and vice versa. 
In this workshop, we propose to bring together researchers to discuss the open challenges that lie at the intersection of sign language and computer <span class="gmail-il">vision</span>.</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Confirmed Speakers:</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Lale Akarun, Bogazici University</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Matt Huenerfauth, Rochester Institute of Technology</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Oscar Koller, Microsoft</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Bencie Woll,</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> Deafness Cognition and Language Research Centre (DCAL), University College London</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Call for Papers:</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Papers can be submitted to CMT at </span><a href="https://cmt3.research.microsoft.com/SLRTP2020/" target="_blank" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://cmt3.research.microsoft.com/SLRTP2020/</span></a><span 
style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> <b>by the end of July 19 (Anywhere on Earth)</b>. We are happy to receive submissions of both new work and work that has been accepted at other venues. In line with the </span><a href="https://slls.eu/slls-ethics-statement/" target="_blank" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">Sign Language Linguistics Society (SLLS) Ethics Statement for Sign Language Research</span></a><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">, we encourage submissions from Deaf researchers or from teams that include Deaf individuals, particularly as co-authors but also in other roles (advisor, research assistant, etc.).</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Suggested topics for contributions include, but are not limited to:</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- 
Continuous Sign Language Recognition and Analysis</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Multi-modal Sign Language Recognition and Translation</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Generative Models for Sign Language Production</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Non-manual Features and Facial Expression Recognition for Sign Language</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Hand Shape Recognition</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Lip-reading/speechreading</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Sign Language Recognition and Translation Corpora</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Semi-automatic Corpora Annotation Tools</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">- Human Pose Estimation</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Paper Format & Proceedings:</span><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> See our webpage </span><a href="http://slrtp.com/" target="_blank" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">slrtp.com</span></a><span 
style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> for detailed information.</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Workshop languages/accessibility: </span><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The languages of this workshop are English, British Sign Language (BSL) and American Sign Language (ASL). Interpretation between BSL/English and ASL/English will be provided, as will English subtitles, for all pre-recorded a</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">nd live Q&A sessions. 
If you have questions about this, please contact </span><a href="mailto:dcal@ucl.ac.uk" target="_blank" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">dcal@ucl.ac.uk</span></a><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">.</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Organizers:</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Necati Cihan Camgoz, University of Surrey</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Gul Varol, University of Oxford</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Samuel Albanie, University of Oxford</span></p><p 
dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Richard Bowden, University of Surrey</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Andrew Zisserman, University of Oxford</span></p><p dir="ltr" style="font-family:verdana,sans-serif;line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Kearsy Cormier, DCAL</span></p></span><font color="#888888" style="font-family:verdana,sans-serif"><br style="font-family:Arial,Helvetica,sans-serif"></font></div></div>