<div dir="ltr"><span style="color:rgb(46,46,46);font-family:arial;font-size:12.8px">Workshop on Domain Adaptation for Visual Understanding (DAVU)</span><br><div class="gmail_quote"><div dir="ltr"><div style="font-size:12.8px;font-variant-ligatures:normal;font-family:tahoma,sans-serif"><span style="color:rgb(46,46,46);font-family:arial;font-size:12.8px">Joint </span><span style="color:rgb(46,46,46);font-family:arial;font-size:12.8px">IJCAI/ECAI/AAMAS/ICML 2018 Workshop</span></div><div style="font-size:12.8px;font-variant-ligatures:normal;font-family:tahoma,sans-serif"><br style="color:rgb(46,46,46);font-family:arial;font-size:12.8px"><a href="https://cmt3.research.microsoft.com/DAVU2018/" style="font-family:arial;font-size:12.8px" target="_blank">https://cmt3.research.microsof<wbr>t.com/DAVU2018/</a><br style="color:rgb(46,46,46);font-family:arial;font-size:12.8px"><br style="color:rgb(46,46,46);font-family:arial;font-size:12.8px"><a href="http://iab-rubric.org/ijcai-davu.html" style="font-family:arial;font-size:12.8px" target="_blank">http://iab-rubric.org/ijcai-da<wbr>vu.html</a><br style="color:rgb(46,46,46);font-family:arial;font-size:12.8px"><span style="color:rgb(46,46,46);font-family:arial;font-size:12.8px"> </span><br style="color:rgb(46,46,46);font-family:arial;font-size:12.8px"><span style="color:rgb(46,46,46);font-family:arial;font-size:12.8px">Paper Submission Deadline:<span class="m_5610047669152160350gmail-m_-4494234800847525900gmail-Apple-converted-space"> </span></span><span class="m_5610047669152160350gmail-m_-4494234800847525900gmail-aBn" style="border-bottom-width:1px;border-bottom-style:dashed;border-bottom-color:rgb(204,204,204);color:rgb(46,46,46);font-family:arial;font-size:12.8px"><span class="m_5610047669152160350gmail-m_-4494234800847525900gmail-aQJ">May 10, 2018</span></span></div><div style="font-size:12.8px;font-variant-ligatures:normal"><span style="font-family:arial;color:rgb(46,46,46);font-size:12.8px">Note: Extended version of accepted </span><span style="font-family:arial;color:rgb(46,46,46);font-size:12.8px">papers will be invited for consideration in one of the prestigious journals </span><span style="font-family:arial;color:rgb(46,46,46);font-size:12.8px">(approval pending).</span><br style="color:rgb(46,46,46);font-family:arial;font-size:12.8px"><br style="color:rgb(46,46,46);font-family:arial;font-size:12.8px">Visual understanding is a fundamental cognitive ability in humans which is essential for identifying objects/people and interacting in social space. This cognitive skill makes interaction with the environment extremely effortless and provides an evolutionary advantage to humans as a species. In our daily routines, we, humans, not only learn and apply knowledge for visual recognition, we also have intrinsic abilities of transferring knowledge between related visual tasks, i.e., if the new visual task is closely related to the previous learning, we can quickly transfer this knowledge to perform the new visual task. In developing machine learning based automatedvisual recognition algorithms, it is desired to utilize these capabilities to make the algorithms adaptable. Generally traditional algorithms, given some prior knowledge in a related visual recognition task, do not adapt to a new task and have to learn the new task from the beginning. These algorithms do not consider that the two visual tasks may be related and the knowledge gained in one may be used to learn the new task efficiently in lesser time. 
Domain adaptation for visual understanding is the area of research that attempts to mimic this human behavior by transferring knowledge learned in one or more source domains and using it to learn a related visual processing task in a target domain. Recent advances in domain adaptation, particularly in co-training, transfer learning, and online learning, have benefited computer vision significantly. For example, learning from high-resolution source domain images and transferring that knowledge to low-resolution target domain data has helped build improved cross-resolution face recognition algorithms. This workshop will focus on recent advances in domain adaptation for visual recognition. The organizers invite researchers to participate and submit their research papers to the Domain Adaptation workshop. Topics of interest include but are not limited to:

A. Novel algorithms for visual recognition using
1. Co-training
2. Transfer learning
3. Online (incremental/decremental) learning
4. Covariate shift
5. Heterogeneous domain adaptation
6. Dataset bias

B. Domain adaptation in visual representation learning using
1. Deep learning
2. Shared representation learning
3. Online (incremental/decremental) learning
4. Multimodal learning
5. Evolutionary computation-based domain adaptation algorithms
C. Applications in computer vision such as
1. Object recognition
2. Biometrics
3. Hyper-spectral imaging
4. Surveillance
5. Road transportation
6. Autonomous driving

*Submission Format:* Authors should follow the IJCAI paper preparation instructions, including page length (e.g., 6 pages + 1 extra page for references).
*Important Dates:*
Submission deadline: May 10, 2018
Decision notification: May 25, 2018

*Paper Submission Page:* https://cmt3.research.microsoft.com/DAVU2018/

Best regards,

--
Mayank Vatsa, PhD
Vice President (Publications), IEEE Biometrics Council
Head, Infosys Center for Artificial Intelligence
Associate Professor, IIIT-Delhi, India
Adjunct Associate Professor, West Virginia University, USA
http://iab-rubric.org/
http://cai.iiitd.ac.in/
http://ieee-biometrics.org/
</div>
</div><br></div>