[visionlist] [CfP] ICCV Workshop and (SMART-101) Challenge on Vision and Language Algorithmic Reasoning (VLAR 2023)

Anoop Cherian anoop.cherian at gmail.com
Fri Jul 14 11:27:41 -04 2023

Vision-and-Language Algorithmic Reasoning (VLAR 2023)

Workshop and Challenge

October 3, 2023, Paris, France

Held in conjunction with ICCV 2023



The focus of this workshop is to bring together researchers in multimodal
reasoning and cognitive models of intelligence, and to position current
research progress in AI within the overarching goal of achieving machine
intelligence. An important aspect is to bring to the forefront problems in
perception, language modeling, and cognition that are often overlooked in
state-of-the-art research yet are important for making true progress in
artificial intelligence. One specific problem that motivated this workshop
is the question of how well current deep models learn broad yet simple
skills, and how well they generalize beyond their training set to solve
novel problems; these are skills that even children learn and use
effortlessly (e.g., see the paper “Are Deep Neural Networks SMARTer than
Second Graders?” <https://arxiv.org/abs/2212.09993>). In this workshop, we
plan to bring together outstanding researchers to showcase their
cutting-edge research on the above topics, inspiring the audience to
identify the missing pieces in our quest to solve the puzzle of artificial
intelligence.



* Paper Track

Submission deadline: July 20, 2023 (11:59PM EDT)

Paper decisions to authors: August 7, 2023

Camera-ready deadline: August 18, 2023 (11:59PM EDT)

* SMART-101 Challenge Track

Challenge open: June 15, 2023.

Submission deadline: September 1, 2023 (11:59PM EDT).

Arxiv paper deadline to be considered for awards: September 1, 2023
(11:59PM EDT).

Public winner announcement: October 3, 2023 (11:59PM EDT).



We invite submissions of original and high-quality research papers on
topics related to vision-and-language algorithmic reasoning. The topics for
VLAR 2023 include, but are not limited to:

* Large language models, vision, and cognition, including children’s
cognition
* Foundation models of intelligence, including vision, language, and other
modalities
* Artificial general intelligence / general-purpose problem solving

* Neural architectures for solving vision & language or language-based IQ
puzzles
* Embodiment and AI

* Large language models, neuroscience, and vision

* Functional and algorithmic / procedural learning in vision

* Abstract visual-language reasoning, e.g., using sketches, diagrams, etc.

* Perceptual reasoning and decision making

* Multimodal cognition and learning

* New vision-and-language abstract reasoning tasks and datasets

* Vision-and-language applications



* We invite only original and previously unpublished work. Dual
submissions are not allowed.

* All submissions are handled via the workshop’s CMT website.
* Submissions should not exceed four (4) pages in length (excluding
references).
* Submissions should be made in PDF format and should follow the official
ICCV template and guidelines.

* All submissions should maintain author anonymity and should abide by the
ICCV conference guidelines for double-blind review.

* Accepted papers will be presented as either an oral, spotlight, or poster
presentation. At least one author of each accepted submission must present
the paper at the workshop.

* Presentation of accepted papers at our workshop will follow the same
policy as that for accepted papers at the ICCV main conference.

* Accepted papers will also be part of the ICCV 2023 workshop proceedings.

* Authors may optionally upload supplementary materials; the deadline for
submitting them is the same as that of the main paper.


As part of VLAR 2023, we are hosting a challenge based on the Simple
Multimodal Algorithmic Reasoning Task (SMART-101) dataset, which is
available for download here: https://smartdataset.github.io/smart101/. The
accompanying CVPR 2023 paper “Are Deep Neural Networks SMARTer than Second
Graders?” is available here: https://arxiv.org/abs/2212.09993.

* The challenge is hosted on EvalAI and is open to submissions; see the
challenge website for details.
* Challenge participants are required to make arXiv submissions detailing
their approach. These will be used only to judge the competition; they will
not be peer reviewed and will not be part of the workshop proceedings.

* Winners of the challenge are determined both by performance on the
leaderboard over a private test set and by the novelty of the proposed
method (as detailed in the arXiv submission). Details are available on the
challenge website.

* Prizes will be awarded on the day of the workshop.


Invited Speakers

Prof. Anima Anandkumar <https://www.eas.caltech.edu/people/anima>, NVIDIA &
Caltech

Dr. François Chollet <https://fchollet.com/>, Google

Prof. Jitendra Malik <http://people.eecs.berkeley.edu/~malik/>, Meta & UC
Berkeley

Prof. Elizabeth Spelke, Harvard University

Prof. Jiajun Wu <https://jiajunwu.com/>, Stanford University


Organizers

Anoop Cherian <http://users.cecs.anu.edu.au/~cherian/>, Mitsubishi Electric
Research Laboratories

Kuan-Chuan Peng <https://www.merl.com/people/kpeng>, Mitsubishi Electric
Research Laboratories

Suhas Lohit <https://www.merl.com/people/slohit>, Mitsubishi Electric
Research Laboratories

Kevin A. Smith <http://www.mit.edu/~k2smith/>, Massachusetts Institute of
Technology

Ram Ramrakhya <https://ram81.github.io/>, Georgia Institute of Technology

Honglu Zhou <https://sites.google.com/view/hongluzhou/>, NEC Laboratories
America, Inc.

Tim K. Marks <https://www.merl.com/people/tmarks>, Mitsubishi Electric
Research Laboratories

Joanna Matthiesen <https://www.linkedin.com/in/joanna-matthiesen-61a52a35/>,
Math Kangaroo USA

Joshua B. Tenenbaum <http://web.mit.edu/cocosci/josh.html>, Massachusetts
Institute of Technology



Email: vlariccv23 at googlegroups.com

SMART-101 project: https://smartdataset.github.io/smart101/

Workshop Website: https://wvlar.github.io/iccv23