<div dir="ltr"><div dir="ltr" style="margin:0px;padding:0px;border:0px">[Apologies for cross-posting]</div><div dir="ltr" style="margin:0px;padding:0px;border:0px">--------------------------------------<br></div><div style="margin:0px;padding:0px;border:0px"><br>Deadline has been extended to <b>Apr. 23, 2020</b>.</div><div dir="ltr" style="margin:0px;padding:0px;border:0px"><br></div><div dir="ltr" style="margin:0px;padding:0px;border:0px"><b>Call For Papers</b><div style="margin:0px;padding:0px;border:0px"><br></div><div style="margin:0px;padding:0px;border:0px">The Second International Workshop on Bringing Semantic Knowledge into Vision and Text Understanding<br></div><div style="margin:0px;padding:0px;border:0px"><br></div><div style="margin:0px;padding:0px;border:0px">@IJCAI-2020, July 11-17, Yokohama, Japan<br></div><div style="margin:0px;padding:0px;border:0px"><br></div><div style="margin:0px;padding:0px;border:0px">**Workshop website: <a href="http://cobweb.cs.uga.edu/~shengli/Tusion2020.html" rel="nofollow" target="_blank" style="margin:0px;padding:0px;border:0px;text-decoration-line:none">http://cobweb.cs.uga.edu/~shengli/Tusion2020.html</a></div><div style="margin:0px;padding:0px;border:0px"><br></div><div style="margin:0px;padding:0px;border:0px">**Submission website: <a href="https://cmt3.research.microsoft.com/Tusion2020" rel="nofollow" target="_blank" style="margin:0px;padding:0px;border:0px;text-decoration-line:none">https://cmt3.research.microsoft.com/Tusion2020</a> </div><div style="margin:0px;padding:0px;border:0px"><br></div><div style="margin:0px;padding:0px;border:0px">Extracting and understanding the high-level semantic information in vision and text data is considered as one of the key capabilities of effective artificial intelligence (AI) systems, which has been explored in many areas of AI, including computer vision, natural language processing, machine learning, data mining, knowledge representation, etc. Due to the success of deep representation learning, we have observed increasing research efforts in the intersection between vision and language for a better understanding of semantics, such as image captioning, visual question answering, etc. Besides, exploiting external semantic knowledge (e.g., semantic relations, knowledge graphs) for vision and text understanding also deserves more attention: The vast amount of external semantic knowledge could assist in having a “deeper” understanding of vision and/or text data, e.g., describing the contents of images in a more natural way, constructing a comprehensive knowledge graph for movies, building a dialog system equipped with commonsense knowledge, etc. </div><div style="margin:0px;padding:0px;border:0px"><br></div><div style="margin:0px;padding:0px;border:0px">This one-day workshop will provide a forum for researchers to review the recent progress of vision and text understanding, with an emphasis on novel approaches that involve a deeper and better semantic understanding of version and text data. The workshop is targeting a broad audience, including the researchers and practitioners in computer vision, natural language processing, machine learning, data mining, etc. </div><div style="margin:0px;padding:0px;border:0px"><br></div><div style="margin:0px;padding:0px;border:0px">This workshop will include several invited talks and peer-reviewed papers (oral and poster presentations). We encourage submissions on a variety of research topics. 
Topics of interest include (but are not limited to):
(1). Image and Video Captioning
(2). Visual Question Answering and Visual Dialog
(3). Scene Graph Generation from Visual Data
(4). Video Prediction and Reasoning
(5). Scene Understanding
(6). Multimodal Representation and Fusion
(7). Pretrained Models and Meta-Learning
(8). Explainable Text and Vision Understanding
(9). Knowledge Graph Construction
(10). Knowledge Graph Embedding
(11). Representation Learning
(12). KBQA: Question Answering over Knowledge Bases
(13). Dialog Systems using Knowledge Graphs
(14). Adversarial Generation of Natural Language and Images
(15). Transfer Learning and Domain Adaptation across Vision and Text
(16). Graphical Causal Models

**Important Dates
Submission Deadline: April 23, 2020
Notification: May 15, 2020
Camera Ready: June 1, 2020

**Submission Guidelines
Three types of submissions are invited to the workshop: long papers (up to 7 pages), short papers (up to 4 pages), and demo papers (up to 4 pages).

All submissions should be formatted according to the IJCAI-2020 Formatting Instructions and Templates. Authors are required to submit their papers electronically in PDF format through the Microsoft CMT submission site: https://cmt3.research.microsoft.com/Tusion2020

At least one author of each accepted paper must register for the workshop; registration information can be found on the IJCAI-2020 website. The authors of accepted papers are expected to present their work at the workshop.
For any questions regarding paper submission, please email us at sheng.li@uga.edu or yaliang.li@alibaba-inc.com

**Organizers
Sheng Li, University of Georgia, Athens, GA, USA
Yaliang Li, Alibaba Group, Bellevue, WA, USA
Jing Gao, University at Buffalo, Buffalo, NY, USA
Yun Fu, Northeastern University, Boston, MA, USA