[visionlist] [CFP] IJCAI 2020 Workshop on Bringing Semantic Knowledge into Vision and Text Understanding

Sheng Li lisheng1989 at gmail.com
Wed Apr 1 10:39:03 -04 2020

[Apologies for cross-posting]

The deadline has been extended to *Apr. 23, 2020*.

*Call For Papers*

The Second International Workshop on Bringing Semantic Knowledge into
Vision and Text Understanding

@IJCAI-2020, July 11-17, Yokohama, Japan

**Workshop website: http://cobweb.cs.uga.edu/~shengli/Tusion2020.html

**Submission website: https://cmt3.research.microsoft.com/Tusion2020

Extracting and understanding high-level semantic information in vision
and text data is considered one of the key capabilities of effective
artificial intelligence (AI) systems, and it has been explored in many areas
of AI, including computer vision, natural language processing, machine
learning, data mining, and knowledge representation. Owing to the success of
deep representation learning, we have observed increasing research efforts
at the intersection of vision and language for a better understanding
of semantics, such as image captioning and visual question answering.
In addition, exploiting external semantic knowledge (e.g., semantic
relations, knowledge graphs) for vision and text understanding also
deserves more attention: the vast amount of external semantic knowledge
could assist in achieving a "deeper" understanding of vision and/or text
data, e.g., describing the contents of images in a more natural way,
constructing a comprehensive knowledge graph for movies, or building a
dialog system equipped with commonsense knowledge.

This one-day workshop will provide a forum for researchers to review the
recent progress of vision and text understanding, with an emphasis on novel
approaches that involve a deeper and better semantic understanding of
vision and text data. The workshop targets a broad audience,
including researchers and practitioners in computer vision, natural
language processing, machine learning, and data mining.

This workshop will include several invited talks and peer-reviewed papers
(oral and poster presentations). We encourage submissions on a variety of
research topics. The topics of interest include (but are not limited to):
(1). Image and Video Captioning
(2). Visual Question Answering and Visual Dialog
(3). Scene Graph Generation from Visual Data
(4). Video Prediction and Reasoning
(5). Scene Understanding
(6). Multimodal Representation and Fusion
(7). Pretrained Models and Meta-Learning
(8). Explainable Text and Vision Understanding
(9). Knowledge Graph Construction
(10). Knowledge Graph Embedding
(11). Representation Learning
(12). KBQA: Question Answering over Knowledge Bases
(13). Dialog Systems using Knowledge Graph
(14). Adversarial Generation of Natural Language and Images
(15). Transfer Learning and Domain Adaptation across Vision and Text
(16). Graphical Causal Models

**Important Dates
Submission Deadline: April 23, 2020
Notification: May 15, 2020
Camera Ready: June 1, 2020

**Submission Guidelines
Three types of submissions are invited to the workshop: long papers (up to
7 pages), short papers (up to 4 pages), and demo papers (up to 4 pages).

All submissions should be formatted according to the IJCAI-2020 Formatting
Instructions and Templates. Authors are required to submit their papers
electronically in PDF format via the Microsoft CMT submission site listed
above.

At least one author of each accepted paper must register for the workshop,
and the registration information can be found on the IJCAI-2020 website.
The authors of accepted papers should present their work at the workshop.

For any questions regarding paper submission, please email us: sheng.li at
uga.edu or yaliang.li at alibaba-inc.com

**Organizers
Sheng Li, University of Georgia, Athens, GA, USA
Yaliang Li, Alibaba Group, Bellevue, WA, USA
Jing Gao, University at Buffalo, Buffalo, NY, USA
Yun Fu, Northeastern University, Boston, MA, USA