<div dir="ltr"><div>Dear General Vision Sciences & Engineering communities,<br></div><div><br></div><div>The goal of the <b>Fourth Workshop on Shared Visual Representations in Human and Machine Intelligence (SVRHM) at NeurIPS 2022 </b>is
to discuss and disseminate relevant findings and parallels between the
computational neuro/cognitive science and machine learning/artificial
intelligence communities.</div><br>In the past few years, machine
learning tools — especially deep neural networks — have permeated the
vision/cognitive/neuroscience communities to become the leading
computational models that describe many cognitive tasks. Huge strides
are also being made in the machine learning/artificial intelligence
community, where biologically inspired algorithms are providing large
gains in both computational efficiency and learning capability.
However, many mysteries remain with regard to the alignment of human
and machine perception, and there are cases where we see divergent
rather than convergent representations. To resolve such questions, this
workshop aims to foster fruitful discussion among scientists and
engineers with multi-disciplinary backgrounds, to review recent
progress on shared visual representations in humans and machines,
and in doing so to identify roadblocks and areas of interest that can further
accelerate the growth of both fields.<br><br>The workshop will include a
series of talks and panel discussions from a diverse group of speakers
from both industry and academia who will share their research at the
intersection of human and machine vision, pushing the field
forward. The aim of our Call for Papers is to bring together scientists
and engineers to <u>share their work in progress at the Poster Session</u>, provided it is applicable to the scope of the Workshop. <b>This year in SVRHM, the <u>6 highest-scoring papers</u> will also be awarded an Oral Presentation slot in the program.</b><br><br><b><u>The following areas provide a sense of suitable topics for 4 to 5 page paper submissions:</u></b><br><ul><li> Biological inspiration and inductive bias in vision</li><li> Human-relevant strategies for robustness and generalization</li><li> New datasets (e.g., for comparing humans/animals and machines)</li><li> Biologically-driven self-supervision</li><li> Perceptual invariance and metamerism</li><li> Biologically-informed strategies to mitigate adversarial vulnerability</li><li> Foveation, active perception, and attention models</li><li> Intuitive physics</li><li> Biologically inspired generative models</li><li> Perceptual and cognitive robustness</li><li> Nuances and noise in perceptual and cognitive systems</li><li> Creative problem-solving</li><li> Differences and similarities between humans and deep neural networks</li><li> Canonical computations in biological and artificial systems</li><li> Alternative architectures for deep neural networks</li><li> Reverse engineering of the human visual system via deep neural networks</li><li> <b>New:</b> Biologically-inspired applications to augmented and/or virtual reality</li><li> <b>New:</b> Understanding of novel hand-engineered models, e.g., Transformers and Capsule Networks</li><li> <b>New:</b> Computational aesthetics/memorability/humor/virality</li></ul><br>We
will be awarding a new NVIDIA Titan GPU as the AI Best Paper Award and,
depending on attendance levels, will also provide reimbursements
for first authors of accepted papers (to be confirmed), in addition to a
tentative evening reception.<br><br>Links to the workshop, with additional details on the Call for Papers:<br><div><a href="https://www.svrhm.com" target="_blank">https://www.svrhm.com</a></div><div></div><a href="https://www.svrhm.com/call-for-papers" target="_blank">https://www.svrhm.com/call-for-papers</a><br>Link to the workshop paper submission: <a href="https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/SVRHM" target="_blank">https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/SVRHM</a><br><br><b>The submission deadline is September 22nd, 2022, 11:59pm PST.</b><br><br><u>Paper Acceptance:</u>
There will be no cap on the acceptance percentage, and our criterion
for acceptance will continue to be accepting all papers with an average
(1-10) <b>score of 5.5 and above</b>. This strict cut-off removes many
conflicts of interest and avoids extended discussion periods, which
the review timeline does not allow. Note: paper and reviewing
standards remain very high; the acceptance rate is currently around 60%,
and many accepted works of very high quality are later published in
Tier-1 venues.<br><br><u>Reviewing:</u> If you are interested in
becoming a Reviewer and have published/reviewed for this or similar
venues before, please feel free to contact us; we are aiming to
provide 3-4 reviews per submitted paper, where each reviewer will
review no more than 2 papers.<br><br>We are also continually looking for
more sponsors that may provide Best Paper or Best Reviewer Awards. If
you are interested in getting involved, let us know.<br><br>All questions & inquiries regarding the workshop should be sent to: <a href="mailto:svrhm2022@gmail.com" target="_blank">svrhm2022@gmail.com</a> , our Twitter account: @svrhm_workshop (preferred) and/or by directly contacting any of the organizers.<br><br><b>The
workshop will happen *in-person* this year on December 2nd 2022 in New
Orleans, Louisiana; USA. Virtual Attendance options will also be
provided.</b><br><br>Looking forward to seeing you all!<br><br>Sincerely,<br><br>The Organizers<br>Arturo Deza, Joshua Peterson, Apruva Ratan Murty, Tom Griffiths<br><br><i>The
SVRHM workshop is currently sponsored by MIT’s Center for Brains,
Minds and Machines (CBMM), the National Science Foundation (NSF),
Artificio & NVIDIA.</i><font color="#888888"><br clear="all"></font><br>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr">Arturo Deza, Ph.D.</div><div>PostDoctoral Research Associate<br></div>Center for Brains, Minds and Machines</div><div>Massachusetts Institute of Technology</div><div><a href="https://cbmm.mit.edu/about/people/deza" target="_blank">https://cbmm.mit.edu/about/people/deza</a><br></div></div></div></div></div></div></div></div></div></div>