[visionlist] MBCC Call for Papers

kehinger at yorku.ca
Wed Feb 7 14:49:26 -05 2018


## CALL FOR PAPERS (CFP) ##
 CVPR International Workshop on
 ‘Mutual Benefits of Cognitive and Computer Vision (MBCC)’
 22 June 2018
 Salt Lake City
 Website: https://sites.google.com/site/mbcc2018w/home
 _____________
 Aim and Scope
 -----------------------
 As researchers working at the intersection of biological and machine
vision, we have noticed an increasing interest in both communities to
understand and improve on each other’s insights. Recent advances in
machine learning (especially deep learning) have led to unprecedented
improvements in computer vision. These deep learning algorithms have
revolutionized computer vision, and now rival humans at some narrowly
defined tasks such as object recognition (e.g., the ImageNet Large Scale
Visual Recognition Challenge). In spite of these advances, the existence of
adversarial images (some with perturbations imperceptible to humans) and
rather poor generalization across datasets expose the flaws in these
networks. The human visual system, by contrast, remains highly efficient
and robust across a wide range of real-world visual tasks. We believe that
the time is ripe to have extended
discussions and interactions between researchers from both fields in order
to steer future research in more fruitful directions. This workshop will
compare human vision to state-of-the-art machine perception methods, with
specific emphasis on deep learning models and architectures.
 Our workshop will address several important questions, including: 1) What
are the representational differences between human and machine perception?
2) What makes human vision so effective? and 3) What can we learn from
human vision research? Addressing these questions is not as difficult as
previously thought due to technological advancements in both computational
science and neuroscience. We can now measure human behavior precisely and
collect huge amounts of neurophysiological data using EEG and fMRI. This
places us in a unique position to compare state-of-the-art computer vision
models and human behavioral/neural data, which was impossible to do a few
years ago. However, this advantage also comes with its own set of questions:
Which task or metric should be used for comparison? What are the representational
similarities? How different are the computations in a biological visual
system when compared to an artificial vision system? How does human vision
achieve invariance?
 This workshop is a great opportunity for researchers working on human
and/or machine perception to come together and discuss plausible solutions
to some of the aforementioned problems.
  __________________________________________
 Topics for submission include but are not limited to:
 -------------------------------------------------------------------------
 o architectures for processing visual information in the human brain and
computer vision (e.g. feedforward vs feedback, shallow vs deep networks,
residual, recurrent, etc)
 o limitations of existing computer vision/deep learning systems compared
to human vision
 o learning rules employed in computer vision and by the brain (e.g.
unsupervised/semi-supervised learning, Hebbian learning, spike-timing-dependent
plasticity)
 o representations/features in humans and computer vision
 o tasks/metrics to compare human and computer vision (e.g. eye fixation,
reaction time, rapid categorization, visual search)
 o new benchmarks (e.g. datasets)
 o generalizability of machine representations to other tasks
 o new techniques to measure and analyze human psychophysics and neural
signals
 o the problem of invariant learning
 o conducting large-scale behavioral and physiological experiments (e.g.,
fMRI, cell recording)
 ______________
 Invited Speakers
 ------------------------
 We have invited leading researchers from both Cognitive Science and
Computer Vision to inspire discussions and collaborations.
  1. TBA
 2. TBA
 ___________________
 Submission Guidelines
 ---------------------------------
 We are inviting both full paper (5-8 pages) and extended abstract (2-4
pages) submissions to the workshop.
Submitted papers must follow the CVPR paper format and guidelines
(available on the CVPR 2018 webpage). All submissions will be handled via
the CMT website: https://cmt3.research.microsoft.com/MBCC2018/
 Full papers: The submitted papers should have a maximum length of 8 pages,
including figures and tables; additional pages must contain only cited
references. The review will be double-blind; please make sure that author
names and any self-identifying references are anonymized. Full paper
submissions must not have been published previously.
 Extended abstracts: We invite submissions of extended abstracts of ongoing
or already published work as well as demos or prototype systems (CVPR
format). Authors are given the opportunity to present their work to the
right audience. The review will be single-blind.
 ______________
 Important Dates
 ------------------------
 Full Paper submission:  March 10th, 2018
 Extended Abstract submission: March 20th, 2018
 Notification of acceptance:  April 1st, 2018
 Camera-ready paper due: April 7th, 2018
 Workshop:  June 22nd, 2018 
  _____________________________
  Workshop Organizing Committee
 ---------------------------------------------------
 Ali Borji, University of Central Florida
 Krista A. Ehinger, York University
 Odelia Schwartz, University of Miami
 Gregory Zelinsky, Stony Brook University
 Hamed R. Tavakoli, Aalto University
  _______
 Contact
 -------------
 Ali Borji, aborji at crcv.ucf.edu
 Krista A. Ehinger, kehinger at yorku.ca
 Hamed R. Tavakoli, hamed.r-tavakoli at aalto.fi
