[visionlist] Cognitive Vision @ Adv. in Cognitive Systems 2019 / MIT, Cambridge, Massachusetts

Mehul Bhatt Mehul.Bhatt at oru.se
Fri May 10 12:24:26 -04 2019

++++  CALL FOR PAPERS  ++++

— Integrated Vision and AI for Embodied Perception and Interaction

Mehul Bhatt (Örebro University, Sweden)
Daniel Levin (Vanderbilt University, United States)
Parisa Kordjamshidi (Tulane University, United States)

ACS 2019 - Advances in Cognitive Systems, August 2-5 2019
Massachusetts Institute of Technology in Cambridge, Massachusetts / United States

The workshop on COGNITIVE VISION is organised as part of ACS 2019, the Seventh Annual Conference on Advances in Cognitive Systems, to be held at the Massachusetts Institute of Technology (Cambridge, Massachusetts). The workshop will be held as a full-day event on Friday, August 2, 2019.

The workshop on COGNITIVE VISION solicits contributions addressing computational vision and perception at the interface of language, logic, cognition, and artificial intelligence. The workshop brings together academics and research methodologies spanning AI, Cognition, and Interaction. It will feature invited and contributed research advancing the practice of Cognitive Vision, particularly from the viewpoints of theories and methods developed within the fields of:

—  Artificial Intelligence
—  Computer Vision
—  Spatial Cognition and Computation
—  Cognitive Linguistics
—  Cognitive Science and Psychology
—  Visual Attention, Perception, and Awareness
—  Neuroscience

Application domains being addressed include, but are not limited to:

—  autonomous driving
—  cognitive robotics, social robotics
—  vision for psychology, human behaviour studies
—  visuo-auditory perception in multimodality studies
—  vision for social science, humanities
—  social signal processing, social media
—  visual art, fashion, cultural heritage
—  vision in biology (e.g., animal, plant)
—  vision and VR / AR
—  vision for UAVs
—  remote sensing, GIS
—  medical imaging


The principal emphasis of the workshop is on the integration of vision and artificial intelligence from the viewpoints of embodied perception, interaction, and autonomous control. In addition to basic research questions, the workshop addresses
diverse application areas where, for instance, the processing and semantic interpretation of (potentially large volumes of) highly dynamic visuo-spatial imagery is central: autonomous systems, cognitive robotics, medical & biological computing, social media, cultural heritage & art, psychology, and behavioural research domains where data-centred analytical methods are gaining momentum. One particular challenge in these domains is to balance large-scale statistical analyses with much more selective, rule-governed analysis of sparse data. These analyses may be guided by, for instance, recent research exploring the role of knowledge in constraining dynamic changes in awareness during the perception of meaningful events. Particular themes of high interest solicited by the workshop include:

—  methodological integrations between Vision and AI
—  declarative representation and reasoning about spatio-temporal dynamics
—  deep semantics and explainable visual computing (e.g., about space and motion)
—  vision and computational models of narrative
—  cognitive vision and multimodality (e.g., multimodal semantic interpretation)
—  visual perception (e.g., high-level event perception, eye-tracking, biological motion)
—  applications of visual sensemaking for social science, humanities, and human behaviour studies

The workshop emphasises application areas where explainability and semantic interpretation of dynamic visuo-spatial imagery are central, e.g., commonsense scene understanding; vision for robotics and HRI; narrative interpretation from the viewpoints of visuo-auditory perception & digital media; and sensemaking from (possibly multimodal) human-behaviour data where the principal component is visual imagery.

We welcome contributions addressing the workshop themes from formal, cognitive, computational, engineering, empirical,
psychological, and philosophical perspectives. Indicative topics are:

—  deep visuo-spatial semantics
—  commonsense scene understanding
—  semantic question-answering with image, video, point-clouds
—  explainable visual interpretation
—  concept learning and inference from visual stimuli
—  learning relational knowledge from dynamic visuo-spatial stimuli
—  knowledge-based vision systems
—  ontological modelling for scene semantics
—  visual analysis of sketches
—  motion representation (e.g., for embodied control)
—  action, anticipation, joint attention, and visual stimuli
—  vision, AI, and eye-tracking
—  neural-symbolic integration (for cognitive vision)
—  high-level visual perception and eye-tracking
—  high-level event perception
—  egocentric vision and perception
—  declarative reasoning about space and motion
—  computational models of narratives
—  narrative models for storytelling (from stimuli)
—  vision and linguistic summarization (e.g., of social interaction, human behavior)
—  visual perception and embodiment research (e.g., involving eye-tracking)
—  biological and artificial vision
—  biological motion
—  visuo-auditory perception
—  multimodal media annotation tools


Submitted papers must be formatted according to the ACS 2019 author guidelines. Contributions may be submitted as:

1.  technical papers (max 12 pages)
2.  position / vision statements (max 7 pages)
3.  work-in-progress reports or "new" project / initiative positioning (max 7 pages)
4.  poster abstract (e.g., for early stage PhD candidates) (max 4 pages)
5.  system demonstrations (max 4 pages)

The above page limits DO NOT include references: contribution categories (1-3) may contain a maximum of two pages of references, whereas contribution categories (4-5) may add one page of references to their respective page limits. Each contribution type will be allocated an appropriate presentation duration, to be determined by the workshop committees. Authors of poster contributions are additionally expected to bring their poster for presentation and discussion during a poster session.

Submissions should include a label describing the category of submission (as per 1-5 above) as a footnote on the first page of the paper.
All submissions must be in English and should be made electronically as PDF documents via the paper submission site at: https://easychair.org/conferences/?conf=cogvis2019


We encourage expressions of interest and / or registering a title and abstract (any time before the full submission deadline).

—  Submissions: June 3
—  Notification: June 24
—  Camera Ready: July 8
—  Workshop Date: August 2
—  ACS 2019 Conference: August 2 - 5 2019



—  Mehul Bhatt  (Örebro University, Sweden)   /   http://www.mehulbhatt.org
—  Daniel Levin (Vanderbilt University, United States)   /   https://www.vanderbilt.edu/psychological_sciences/bio/daniel-levin
—  Parisa Kordjamshidi (Tulane University, United States)   /   http://www.cs.tulane.edu/~pkordjam/


— Somak Aditya (Adobe Research - BEL, India)
— Amir Aly (Ritsumeikan University, Japan)
— Bonny Banerjee (University of Memphis, United States)
— Chitta Baral (Arizona State University, United States)
— Andrei Barbu (Massachusetts Institute of Technology, United States)
— Melissa Beck (Louisiana State University, United States)
— Ralph Ewerth (Leibniz Universität, Germany)
— Zoe Falomir (University of Bremen, Germany)
— Roberta Ferrario (Italian National Research Council - CNR, Italy)
— Hiranmay Ghosh (Tata Consultancy Services, India)
— Shyamanta M. Hazarika (Indian Institute of Technology Guwahati, India)
— Paul Hemeren (University of Skövde, Sweden)
— Maithilee Kunda (Vanderbilt University, United States)
— Francesca Lisi (Università degli Studi di Bari "Aldo Moro", Italy)
— Antonio Lieto (University of Turin, Italy)
— Yanxi Liu (Penn State University, United States)
— Rudolf Mester (Norwegian University of Science and Technology, Norway)
— Paulo E. Santos (Centro Universitário da FEI, Brazil)
— Michael Spranger (Sony CSL, Japan)
— Mohan Sridharan (University of Birmingham, United Kingdom)
— Jakob Suchan (University of Bremen, Germany)
— David Vernon (Carnegie Mellon University Africa, Rwanda)

An initiative of:

CoDesign Lab   >  Cognitive Vision
www.codesign-lab.org  |  www.cognitive-vision.org

Contact:   Please direct all inquiries to Mehul Bhatt via email at  >  [ mehul.bhatt AT oru.se ]


Professor - School of Science and Technology - Örebro University, Sweden
www.mehulbhatt.org  |  SE:  +46 19 303251  /  mehul.bhatt at oru.se

CoDesign Lab EU   /   Cognition. AI. Interaction. Design.
www.codesign-lab.org   /   info at codesign-lab.org

DesignSpace Group  |  Cognitive Vision  |  Spatial Reasoning
www.design-space.org  |  www.cognitive-vision.org  |  www.spatial-reasoning.com

