<div dir="ltr"><br><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt" id="gmail-docs-internal-guid-fca000b2-f992-c6be-8ba1-6f5f45872793"><b><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">====================================================== </span></b></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><b><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">                                        1st Call for Papers               <br></span></b></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><b><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">====================================================== </span></b></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br></span></p><p style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"></span></p><p style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"></span></p><p style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">ECCV TASK-CV Workshop on Transferring and Adapting Source Knowledge <br></span></p><p style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">in Computer Vision & VisDA Challenge <br></span></p><p style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Munich, September 14th 2018</span></p><br><div><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Workshop site: <a href="https://sites.google.com/view/task-cv2018/home">https://sites.google.com/view/task-cv2018/home</a></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Challenge site: <a href="http://ai.bu.edu/visda-2018/">http://ai.bu.edu/visda-2018/</a></span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><b><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Important Dates <br></span></b></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><b><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br></span></b></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Paper Track<br>Submission: July 2​nd​ , 2018<br><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Notification</span>: July 15​th​ , 2018<br>Camera Readay: July 25​th​ , 2018<br></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Challenge<br>Registration: April 21st​ , 2018<br>Data release: May 7th, 2018<br>Submission: August 15th, 2018<br>Notification win.: September 1​st​, 2018</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><b>Workshop Topics </b><br></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">A key ingredient of the recent successes in computer vision has 
been the availability of visual data with annotations, both for training and testing, and well-established protocols for evaluating the results. However, this traditional supervised learning framework is limited when it comes to deployment on new tasks and/or operating in new domains. In order to scale to such situations, we must find mechanisms to reuse the available annotations or the models learned from them, and to generalize to new domains and tasks.

Accordingly, TASK-CV aims to bring together research in transfer learning and domain adaptation for computer vision, and invites the submission of research contributions on the following topics:

■ TL/DA focusing on specific computer vision tasks (e.g., image classification, object detection, semantic segmentation, recognition, retrieval, tracking, etc.) and applications (biomedical, robotics, multimedia, autonomous driving, etc.)
■ TL/DA focusing on specific visual features, models, or learning algorithms for challenging paradigms such as unsupervised, reinforcement, or online learning
■ TL/DA in the era of convolutional neural networks (CNNs): adaptation effects of fine-tuning, regularization techniques, transfer of architectures and weights, etc. (a minimal fine-tuning sketch follows this list)
■ Comparative studies of different TL/DA methods, and transferring part representations between categories and 2D/3D modalities
■ Working frameworks with appropriate CV-oriented datasets and evaluation protocols to assess TL/DA

This is not a closed list; we welcome other research related to TASK-CV.
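For readers less familiar with the fine-tuning paradigm named in the CNN topic above, here is a minimal sketch of transferring a pretrained architecture and its weights to a new task. The backbone (ResNet-18), the 10-class target task, and the hyperparameters are illustrative assumptions only, not part of the workshop or the challenge.

# Minimal transfer-learning sketch (illustrative): reuse ImageNet weights
# and fine-tune only a new classifier head on an assumed 10-class task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)  # transfer architecture + weights

# Freeze the transferred feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer for the new task (10 classes is an assumption).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is updated; the transferred source knowledge stays fixed.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Freezing versus fully fine-tuning the backbone is exactly the kind of design choice whose adaptation effects the workshop invites studies of.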
style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><b>VisDA Challenge</b><br>The VisDA challenge aims to test domain adaptation methods’ ability to transfer<br>source knowledge and adapt it to novel target domains.</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br></span></p><p style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><b><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Organizers</span></b></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"></p>Tatiana Tommasi , IIT Milan-Italy<br>David Vázquez, Element AI<br>Kate Saenko, Boston University<br>Ben Usman, Boston University<br>Xingchao Peng, Boston University<br>Judy Hoffman, UC Berkeley<br>Neela Kaushik, Boston University<br>Kuniaki Saito, Boston University<br>Antonio M. López, UAB/CVC<br>Wen Li, ETH Zurich<br>Francesco Orabona, Boston University<br></div></div>