<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Call For Papers<div class=""><b class="">The 1st Workshop on Future Video Conferencing (FVC)</b><br class="">In Conjunction with IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021<br class=""><br class=""></div><div class=""><br class=""></div><div class=""><b class="">Motivation and Aims</b><br class=""><br class=""></div><div class="">Demand for teleconferencing is booming, and the need for video-based remote collaboration has grown at an unprecedented pace during the COVID-19 global pandemic. Video-enabled communication is more important than ever for ensuring that organizations can continue to operate productively. Meanwhile, CV/AI techniques are quickly taking a central role in driving this growth by enabling video conferencing applications that deliver more natural, contextual, and relevant meeting experiences. For example, high-quality video matting and synthesis are crucial to the now-essential virtual-background feature; gaze correction and gesture tracking can increase interactive user engagement; and automatic color and light correction can improve the user’s visual appearance and self-image. All of these must be backed by high-efficiency video compression/transmission and efficient edge processing, which can likewise benefit from recent AI advances. 
While mainstream adoption of AI-based video collaboration already appears to be underway, we recognize that building the next-generation video conferencing system involves manifold interdisciplinary challenges and faces many technical gaps that remain to be closed.</div><div class=""><br class="">In this workshop, we aim to collectively address a core question: which CV techniques are, or will be, ready for next-generation video conferencing, and how will they fundamentally change the experience of remote work, education, and more? We aim to bring together experts and researchers from interdisciplinary fields to discuss recent advances on these topics and to explore new directions. As one outcome, we expect to produce a joint report defining the key CV problems, characterizing the technical demands and barriers, and discussing potential solutions and open questions. Centered on this theme, the workshop aims to provide the first comprehensive forum for CVPR researchers to systematically discuss the relevant techniques that we, as a community, can contribute. 
<br class="">For more details, please refer to <a href="https://fvc-workshop.github.io" class="">https://fvc-workshop.github.io</a>.</div><div class=""><br class=""></div><div class=""><b class="">Topics of Interest</b><br class=""><br class=""></div><div class="">Topics include but are not limited to:<br class=""><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>• Image display and quality enhancement for teleconferencing<br class=""></div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>• Video compression and transmission for teleconferencing<br class=""></div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>• Video object segmentation, matting, and synthesis (for virtual background, etc.)<br class=""></div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>• HCI (gesture recognition, head tracking, gaze tracking, etc.), AR and VR applications in video conferencing<br class=""></div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>• Efficient video processing on the edge and IoT camera devices<br class=""></div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>• Multi-modal information processing and fusion in video conferencing (audio transcription, image to text, video captioning, etc.)<br class=""></div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>• Societal and ethical aspects: privacy intrusion & protection, attention engagement, fatigue avoidance, etc.<br class=""></div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span>• Emerging applications where video conferencing would be the cornerstone: remote education, telemedicine, etc.<br class=""><br class=""></div>... 
and many more interesting features.<br class=""><br class=""></div><div class=""><br class=""></div><div class=""><b class="">Important Dates</b><br class=""><br class=""></div><div class="">Please prepare your submissions in the CVPR submission format; papers should be between 4 and 8 pages long. All accepted papers will be published in the workshop proceedings.<br class=""><br class="">Paper Submission Deadline:<span class="Apple-tab-span" style="white-space:pre"> </span>April 2, 2021 (11:59 pm PDT)<br class="">Notification of Paper Acceptance: April 16, 2021 (11:59 pm PDT)<br class="">Camera-Ready Deadline<span class="Apple-tab-span" style="white-space:pre"> </span>: April 18, 2021 (11:59 pm PDT)<br class="">Submission Website: <a href="https://cmt3.research.microsoft.com/FVC2021" class="">https://cmt3.research.microsoft.com/FVC2021</a></div><div class=""><br class=""><br class=""><b class="">Organizers</b><br class=""><br class=""></div><div class=""><b class="">Yunchao Wei</b> University of Technology Sydney<br class="">Email: <a href="mailto:Yunchao.Wei@uts.edu.au" class="">Yunchao.Wei@uts.edu.au</a></div><div class=""> <br class=""><b class="">Humphrey Shi</b> University of Oregon<br class="">Email: <a href="mailto:shihonghui3@gmail.com" class="">shihonghui3@gmail.com</a></div><div class=""> <br class=""><b class="">Zhangyang (Atlas) Wang</b> University of Texas at Austin<br class="">Email: <a href="mailto:atlaswang@utexas.edu" class="">atlaswang@utexas.edu</a></div><div class=""><br class=""><b class="">Jiaying Liu</b> Peking University<br class="">Email: <a href="mailto:liujiaying@pku.edu.cn" class="">liujiaying@pku.edu.cn</a></div><div class=""> <br class=""><b class="">Vicky Kalogeiton</b> École Polytechnique<br class="">Email: <a href="mailto:vicky.kalogeiton@polytechnique.edu" class="">vicky.kalogeiton@polytechnique.edu</a></div><div class=""><br class=""><b class="">Jiashi Feng</b> National University of Singapore<br class="">Email: <a href="mailto:elefjia@nus.edu.sg" 
class="">elefjia@nus.edu.sg</a></div><div class=""><br class=""><b class="">Marta Mrak</b> Queen Mary University of London<br class="">Email: <a href="mailto:Marta.Mrak@bbc.co.uk" class="">Marta.Mrak@bbc.co.uk</a></div></body></html>