[visionlist] Cognitive Vision @ Adv. in Cognitive Systems 2019 / MIT, Cambridge, Massachusetts

Mehul Bhatt Mehul.Bhatt at oru.se
Sat Apr 13 04:13:18 -04 2019


####  COGNITIVE VISION  /  2019  ####


COGNITIVE VISION
— Integrated Vision and AI for Embodied Perception and Interaction
http://www.codesign-lab.org/cogsys2019/


# CHAIRS:

Mehul Bhatt (Örebro University, Sweden)
Daniel Levin (Vanderbilt University, United States)
Parisa Kordjamshidi (Tulane University, United States)


# AS PART OF:

ACS 2019 - Advances in Cognitive Systems, August 2-5, 2019
Massachusetts Institute of Technology in Cambridge, Massachusetts / United States
http://www.cogsys.org/conference/2019


The workshop on COGNITIVE VISION is organised as part of ACS 2019, the Seventh Annual Conference on Advances in Cognitive Systems, to be held at the Massachusetts Institute of Technology (Cambridge, Massachusetts). The workshop will be held as a full-day event on Friday, August 2, 2019.


# ABOUT THE WORKSHOP #
The workshop on COGNITIVE VISION solicits contributions addressing computational vision and perception at the interface of language, logic, cognition, and artificial intelligence. The workshop brings together a unique combination of academics and research methodologies encompassing AI, Cognition, and Interaction. The workshop will feature invited and contributed research advancing the practice of Cognitive Vision, particularly from the viewpoints of theories and methods developed within the fields of:

—  Artificial Intelligence
—  Computer Vision
—  Spatial Cognition and Computation
—  Cognitive Linguistics
—  Cognitive Science and Psychology
—  Visual Attention, Perception, and Awareness
—  Neuroscience

Application domains being addressed include, but are not limited to:

—  autonomous driving
—  cognitive robotics - social robotics
—  vision for psychology, human behaviour studies
—  visuo-auditory perception in multimodality studies
—  vision for social science, humanities
—  social signal processing, social media
—  visual art, fashion, cultural heritage
—  vision in biology (e.g., animal, plant)
—  vision and VR / AR
—  vision for UAVs
—  remote sensing, GIS
—  medical imaging


# TECHNICAL FOCUS #

The principal emphasis of the workshop is on the integration of vision and artificial intelligence from the viewpoints of embodied perception, interaction, and autonomous control. In addition to basic research questions, the workshop addresses
diverse application areas where, for instance, the processing and semantic interpretation of (potentially large volumes of) highly dynamic visuo-spatial imagery is central: autonomous systems, cognitive robotics, medical & biological computing,
social media, cultural heritage & art, and psychology and behavioural research domains where data-centred analytical methods are gaining momentum. One particular challenge in these domains is to balance large-scale statistical analyses with much more selective, rule-governed analysis of sparse data. These analyses may be guided by, for instance, recent research exploring the role of knowledge in constraining dynamic changes in awareness during the perception of meaningful events. Particular themes of high interest solicited by the workshop include:

—  methodological integrations between Vision and AI
—  declarative representation and reasoning about spatio-temporal dynamics
—  deep semantics and explainable visual computing (e.g., about space and motion)
—  vision and computational models of narrative
—  cognitive vision and multimodality (e.g., multimodal semantic interpretation)
—  visual perception (e.g., high-level event perception, eye-tracking, biological motion)
—  applications of visual sensemaking for social science, humanities, and human behaviour studies

The workshop emphasises application areas where explainability and semantic interpretation of dynamic visuo-spatial imagery are central, e.g., for commonsense scene understanding; vision for robotics and HRI; narrative interpretation
from the viewpoints of visuo-auditory perception & digital media; and sensemaking from (possibly multimodal) human-behaviour data where the principal component is visual imagery.

We welcome contributions addressing the workshop themes from formal, cognitive, computational, engineering, empirical,
psychological, and philosophical perspectives. Indicative topics are:

—  deep visuo-spatial semantics
—  commonsense scene understanding
—  semantic question-answering with image, video, point-clouds
—  explainable visual interpretation
—  concept learning and inference from visual stimuli
—  learning relational knowledge from dynamic visuo-spatial stimuli
—  knowledge-based vision systems
—  ontological modelling for scene semantics
—  visual analysis of sketches
—  motion representation (e.g., for embodied control)
—  action, anticipation, joint attention, and visual stimuli
—  vision, AI, and eye-tracking
—  neural-symbolic integration (for cognitive vision)

—  high-level visual perception and eye-tracking
—  high-level event perception
—  egocentric vision and perception
—  declarative reasoning about space and motion
—  computational models of narratives
—  narrative models for storytelling (from stimuli)
—  vision and linguistic summarization (e.g., of social interaction, human behavior)
—  visual perception and embodiment research (e.g., involving eye-tracking)
—  biological and artificial vision
—  biological motion
—  visuo-auditory perception
—  multimodal media annotation tools



# SUBMISSION REQUIREMENTS #

Submitted papers must be formatted according to the ACS 2019 guidelines (details at http://www.cogsys.org/conference/2019). Contributions may be submitted as:

1.  technical papers (max 12 pages)
2.  position / vision statements (max 7 pages)
3.  work in progress reports or "new" project / initiative positioning (max 7 pages)
4.  poster abstract (e.g., for early stage PhD candidates) (max 4 pages)
5.  system demonstrations (max 4 pages)

The above page lengths DO NOT include references: contribution categories (1-3) may include a maximum of two pages of references, whereas contribution categories (4-5) may add one page of references to their respective page limits. Each contribution type will be allocated an adequate presentation duration, to be determined by the workshop committees. Authors of poster contributions are additionally expected to bring their poster for presentation and discussion during a poster session.

Submissions should include a label describing the category of submission (as per 1-5 above) as a footnote on the first page of the paper.
All submissions should be made in English, electronically, as PDF documents via the paper submission site at: https://easychair.org/conferences/?conf=cogvis2019


# IMPORTANT DATES IN 2019 #

We encourage an expression of interest and / or registering an abstract and title (anytime before the full submission deadline).

—  Submissions: June 3
—  Notification: June 24
—  Camera Ready: July 8
—  Workshop Date: August 2
—  ACS 2019 Conference: August 2-5, 2019


# WORKSHOP CHAIRS #

Mehul Bhatt  (Örebro University, Sweden)   /   http://www.mehulbhatt.org
Daniel Levin (Vanderbilt University, United States)   /   https://www.vanderbilt.edu/psychological_sciences/bio/daniel-levin
Parisa Kordjamshidi (Tulane University, United States)   /   http://www.cs.tulane.edu/~pkordjam/

An initiative of:

CoDesign Lab   >  Cognitive Vision
www.codesign-lab.org  |  www.cognitive-vision.org

Contact:   Please direct all inquiries to Mehul Bhatt via email at  >  [ mehul.bhatt AT oru.se ]

===================================================================================

