[visionlist] Workshop announcement: Hierarchical Multisensory Integration: Theory and Experiments
Max Riesenhuber
Max.Riesenhuber at georgetown.edu
Thu Mar 16 06:46:32 -05 2017
*Hierarchical Multisensory Integration: Theory and Experiments*
Barcelona, Spain, June 18-19, 2017
The ability to map sensory inputs to meaningful semantic labels, i.e., to
recognize objects, is foundational to cognition, and the human brain excels
at object recognition tasks across sensory domains. Examples include
perceiving spoken language, reading written words, and even recognizing tactile
Braille patterns. In each sensory modality, processing appears to be
realized by multi-stage processing hierarchies in which tuning complexity
grows gradually from simple features in primary sensory areas to complex
representations in higher-level areas that ultimately interface with
task-related circuits in prefrontal/premotor cortices.
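As a purely illustrative computational analogue of such a hierarchy (the layer sizes, random weights, and rectifying nonlinearity below are assumptions chosen for the sketch, not a description of any model to be presented at the workshop), one can stack stages in which each level recombines the features of the level below, so that tuning complexity grows from stage to stage:

import numpy as np

rng = np.random.default_rng(0)

def stage(x, w):
    """One hierarchical stage: recombine lower-level features, then rectify."""
    return np.maximum(w @ x, 0.0)

# Toy sizes (assumptions): raw input -> simple features -> complex features -> task readout
x_raw     = rng.normal(size=256)                # a sensory input vector
w_simple  = rng.normal(size=(128, 256)) * 0.06  # primary-area-like simple features
w_complex = rng.normal(size=(32, 128)) * 0.09   # higher-level, more complex tuning
w_task    = rng.normal(size=(4, 32)) * 0.18     # interface with task-related circuits

h1 = stage(x_raw, w_simple)             # simple features
h2 = stage(h1, w_complex)               # complex representations
response = int((w_task @ h2).argmax())  # pick one of four task responses
print("chosen response:", response)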
Crucially, real-world stimuli usually do not have sensory signatures in
just one modality but activate representations in different sensory
domains, and successfully integrating these different hierarchical
representations appears to be of key importance for cognition. Prior
theoretical work has mostly focused on tackling multisensory integration at
isolated processing stages, and the computational functions and benefits of
*hierarchical* multisensory interactions are still unclear. For instance,
what characteristics of the input determine at which levels of two linked
sensory processing hierarchies cross-sensory integration occurs? Can these
connections form through unsupervised learning, just based on temporal
coincidence? Which stages are connected? For instance, is there selective
audio-visual integration only at a low level of the hierarchy, e.g., to
enable letter-by-letter reading, or even earlier, at the level of primary
sensory cortices, with multisensory selectivity at higher hierarchical
levels then resulting from feedforward processing within each hierarchy?
Or are there selective connections at multiple hierarchical levels? What
are the computational advantages of different cross-sensory connection
schemes? What are the roles of “top-down” vs. “lateral” inputs in learning
cross-hierarchical connections? And what are computationally efficient
ways to leverage prior learning in one modality when learning hierarchical
representations in a new modality?
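To make the temporal-coincidence question above concrete, here is a minimal Python sketch (the layer sizes, learning rate, and simple Hebbian outer-product rule are assumptions for illustration only): co-occurring activity in two unimodal hierarchies is linked by an unsupervised rule that strengthens connections between units that are active at the same time.

import numpy as np

rng = np.random.default_rng(1)
n_audio, n_visual, n_latent, eta = 64, 64, 16, 0.01  # toy sizes and learning rate (assumptions)

# Fixed unimodal encodings of a shared underlying event (e.g., a spoken + written word)
M_audio  = rng.normal(size=(n_audio, n_latent))
M_visual = rng.normal(size=(n_visual, n_latent))

W_cross = np.zeros((n_visual, n_audio))  # learned audio -> visual cross-modal links

for _ in range(2000):
    event = rng.normal(size=n_latent)      # one multisensory event
    a = np.maximum(M_audio @ event, 0.0)   # activity in the auditory hierarchy
    v = np.maximum(M_visual @ event, 0.0)  # simultaneous activity in the visual hierarchy
    W_cross += eta * np.outer(v, a)        # Hebbian: strengthen coincidently active units

# After learning, auditory activity alone evokes a correlated visual pattern
a_test = np.maximum(M_audio @ rng.normal(size=n_latent), 0.0)
v_pred = W_cross @ a_test                  # crude cross-modal "fill-in"
print("predicted visual pattern (first 5 units):", np.round(v_pred[:5], 2))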
The workshop will gather a small group of experts to informally exchange
the latest ideas and findings, both experimental and theoretical, in the
field of multisensory integration. It will consist of two days packed with
talks by invited speakers as well as discussions. There will also be a
poster session. Researchers, postdocs and graduate students interested in
multisensory integration and hierarchical processing are all invited to
apply. For more information on the event and to register, see
http://eventum.upf.edu/event_detail/8963/sections/6797/event-details.html
(http://eventum.upf.edu/event_detail/8963/detail/pire-workshop_summer-school-2017.html).
This event is jointly organized by the Center for Brain and Cognition at
the Universitat Pompeu Fabra in Barcelona and Georgetown University, with
funding from the National Science Foundation and the Spanish Ministry of
Economy, Industry and Competitiveness.
--
Maximilian Riesenhuber
Lab for Computational Cognitive Neuroscience
Department of Neuroscience
Georgetown University Medical Center
Research Building Room WP-12
3970 Reservoir Rd., NW
Washington, DC 20007
phone: 202-687-9198 * email: max.riesenhuber at georgetown.edu
http://maxlab.neuro.georgetown.edu
public key ID 0x8696063709CCE3BB