[visionlist] Celebrating Semantics >> IJCAI (Montreal) / ACAI (Berlin) / IROS (Prague) / Spatial Cognition (Riga) / RAS (Elsevier).
Mehul Bhatt
mehul.bhatt at oru.se
Wed Jul 7 02:59:24 -04 2021
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CELEBRATING SEMANTICS — August 2021 to February 2022
> IJCAI (Montreal) — ACAI (Berlin) — IROS (Prague) — Spatial Cognition (Riga) — RAS (Elsevier)
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
> Tutorial: “Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing”
@ IJCAI 2021 - International Joint Conference on Artificial Intelligence (Canada) - August 2021
@ ACAI 2021 - Advanced Course on Artificial Intelligence (Germany) - October 2021
> Tutorial: “Spatial Cognition and Artificial Intelligence: Methods for In-The-Wild Behavioural Research in Visual Perception”
@ Spatial Cognition 2020/1 (Latvia) - August 2021
> Workshop: “Semantic Policy and Action Representation”
@ IROS 2021 - IEEE/RSJ International Conference on Intelligent Robots and Systems (Czech Republic) - September 2021
> RAS Special Issue: “Semantic Policy and Action Representation”
@ Robotics and Autonomous Systems (Elsevier) - December 2021 to February 2022
Details below, and also via:
CoDesign Lab EU / Cognition. AI. Interaction. Design.
https://codesign-lab.org/2021.html
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
==============================================================================
TUTORIAL: COGNITIVE VISION / IJCAI 2021. ACAI 2021.
==============================================================================
@ International Joint Conference on Artificial Intelligence (IJCAI 2021)
Montreal, Canada — August 21 to 26, 2021
@ ACAI 2021 - Advanced Course on Artificial Intelligence
Berlin, Germany — October 11-15, 2021
Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing
About. The tutorial on cognitive vision addresses computational vision and perception at the interface of language, logic, cognition, and artificial intelligence. The tutorial focusses on application areas where the processing and explainable semantic interpretation of (potentially large volumes of) dynamic visuospatial imagery is central, e.g., commonsense scene understanding; visual cognition for cognitive robotics / HRI and autonomous driving; narrative interpretation from the viewpoints of visuoauditory perception and digital media design; and semantic interpretation of multimodal human-behavioural data.
The tutorial highlights Deep (Visuospatial) Semantics, denoting the existence of systematically formalised declarative AI methods (e.g., pertaining to reasoning about space and motion) supporting semantic (visual) question-answering, relational learning, non-monotonic (visuospatial) abduction, and simulation of embodied interaction. The tutorial demonstrates the integration of methods from knowledge representation and computer vision with a focus on (combining) reasoning & learning about space, action, motion, and interaction. This is presented against the backdrop of areas as diverse as autonomous driving, cognitive robotics, eye-tracking driven visual perception research (e.g., for visual art, architecture design, cognitive film studies), and psychology & behavioural research domains where data-centred analytical methods are gaining momentum. The tutorial covers both applications and basic methods concerned with topics such as: explainable visual perception, semantic video understanding, language generation from video, declarative spatial reasoning, and computational models of narrative. The tutorial positions an emerging line of research that brings together a novel & unique combination of research methodologies, academics, and communities encompassing AI, ML, Vision, Cognitive Linguistics, Psychology, Visual Perception, and Spatial Cognition and Computation.
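As a purely illustrative sketch (not part of the tutorial materials), the following Python fragment conveys the flavour of declarative-style spatial question-answering over the output of a hypothetical visual detector; the object names, relations, coordinates, and queries are assumptions made for the example, not the tutorial's actual methods.

# Illustrative sketch only: declarative-style qualitative spatial querying
# over (hypothetical) object detections from a single video frame.

from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box (x1, y1, x2, y2) of a detected object."""
    name: str
    x1: float
    y1: float
    x2: float
    y2: float

def left_of(a: Box, b: Box) -> bool:
    """Qualitative relation: a lies entirely to the left of b."""
    return a.x2 < b.x1

def overlaps(a: Box, b: Box) -> bool:
    """Qualitative relation: a and b share some area."""
    return not (a.x2 <= b.x1 or b.x2 <= a.x1 or a.y2 <= b.y1 or b.y2 <= a.y1)

# Toy "scene" standing in for detector output (names and coordinates are made up).
scene = [
    Box("person",  100, 50, 180, 300),
    Box("cyclist", 200, 60, 320, 280),
    Box("car",     300, 80, 500, 260),
]

# Declarative-style queries over the scene.
car = next(o for o in scene if o.name == "car")
print([o.name for o in scene if o is not car and left_of(o, car)])    # -> ['person']
print([o.name for o in scene if o is not car and overlaps(o, car)])   # -> ['cyclist']

Systems in this line of work typically express such relations and queries in a logic-programming or constraint-based setting rather than plain Python; the sketch only illustrates the question-answering style.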
Tutorial Presenters:
— Mehul Bhatt (Örebro University, Sweden)
— Jakob Suchan (University of Bremen, Germany)
Tutorial Info > https://codesign-lab.org/cognitive-vision/
IJCAI 2021 / https://ijcai-21.org
ACAI 2021 / https://www.humane-ai.eu/event/acai2021
=====================================================================
TUTORIAL: SPATIAL COGNITION AND AI / Spatial Cognition 2020/1
=====================================================================
@ Spatial Cognition Conference 2020/1
Riga, Latvia — August 1 to 4, 2021
Spatial Cognition and Artificial Intelligence: Methods for In-The-Wild Behavioural Research in Visual Perception
About. The tutorial on “Spatial Cognition and Artificial Intelligence” addresses the confluence of empirically based behavioural research in the cognitive and psychological sciences with computationally driven analytical methods rooted in artificial intelligence and machine learning. This confluence is addressed against the backdrop of human behavioural research concerned with “in-the-wild” naturalistic embodied multimodal interaction. The tutorial presents:
• an interdisciplinary perspective on conducting evidence-based (possibly large-scale) human behaviour research from the viewpoints of visual perception, environmental psychology, and spatial cognition.
• artificial intelligence methods for the semantic interpretation of embodied multimodal interaction (e.g., rooted in behavioural data), and the (empirically driven) synthesis of interactive embodied cognitive experiences in real-world settings relevant both to everyday life and to professional creative-technical spatial thinking.
• the relevance and impact of research in cognitive human factors (e.g., in spatial cognition) for the design and implementation of next-generation human-centred AI technologies.
Keeping in mind an interdisciplinary audience, the focus of the tutorial is to provide a high-level demonstration of the potential of general AI-based computational methods and tools that can be used for multimodal human behavioural studies concerned with visuospatial, visuo-locomotive, and visuo-auditory cognition in everyday and specialised visuospatial problem solving. The presented methods are rooted in foundational research in artificial intelligence, spatial cognition and computation, spatial informatics, human-computer interaction, and design science. We highlight practical examples involving the analysis and synthesis of human cognitive experiences in the context of application areas such as (evidence-based) architecture and built environment design, narrative media design, product design, and visual sensemaking in autonomous cognitive systems (e.g., social robotics, autonomous vehicles).
Tutorial Presenters:
— Mehul Bhatt (Örebro University, Sweden)
— Jakob Suchan (University of Bremen, Germany)
— Vasiliki Kondyli (Örebro University, Sweden)
— Vipul Nair (University of Skövde, Sweden)
Tutorial Info > http://sc2020.lu.lv/satellite-events/tutorial-spatial-cognition-and-artificial-intelligence-methods-for-in-the-wild-behavioural-research-in-visual-perception/
=====================================================================
WORKSHOP: Semantic Policy and Action Representation / IROS 2021
=====================================================================
@ IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)
Prague, Czech Republic — September 27, 2021
5th International Workshop on:
Semantic Policy and Action Representation for Autonomous Robots (SPAR)
Workshop Chairs:
— Chris Paxton (NVIDIA, United States)
— Karinne Ramirez-Amaro (Chalmers, Sweden)
— Jesse Thomason (University of Southern California, United States)
— Maria Eugenia Cabrera (University of Washington, United States)
— Mehul Bhatt (Örebro University, Sweden)
About. In this full-day workshop, we aim to discuss two main questions:
— How can we learn scalable and general semantic representations? In recent years, there have been substantial contributions to semantic policy and action representation in the fields of robotics, computer vision, and machine learning. In this respect, we would like to invite experts from academia to comment on recent advances in semantic reasoning, addressing the problem of linking continuous sensory experiences with symbolic constructions in order to couple the perception and execution of actions. In particular, we want to explore how such representations can make robot learning more scalable and generalizable to new tasks and environments.
— How can semantic information be used to create Explainable AI? We would like to invite researchers from a broad range of areas, including task and motion planning, language learning, general-purpose machine learning, and human-robot interaction. Much of action semantics is definitionally tied to how robots and humans communicate, and one fundamental feature of these approaches should be that they allow a broad variety of people to benefit from advances in robotics and to work alongside robots outside of laboratory environments. Building more understandable action representations is thus an important step towards robotic systems that benefit society.
Call for Papers > https://sites.google.com/view/spar-2021/
=====================================================================
SPECIAL ISSUE: Semantic Policy and Action Representation / RAS (Elsevier)
=====================================================================
@ Robotics and Autonomous Systems (Elsevier)
Submission Window: December 2021 to Feb 2022
About RAS. The journal Robotics and Autonomous Systems (RAS) publishes research on fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory-based robot control and learning in the context of autonomous systems.
Call. We solicit original research contributions for the upcoming RAS special issue, directly addressing the scientific scope of the SPAR workshop (see above; https://sites.google.com/view/spar-2021/). Please note that submission to the special issue is open to all interested contributors; participation / presentation in the SPAR workshop is not a prerequisite for submitting a paper.
Key Topics of interest:
• Task and Motion Planning
• Explainable and Interpretable Robot Decision-Making methods
• Active and Context-based Vision
• Cognitive Vision and Perception - Semantic Representations
• Commonsense reasoning about space and motion (e.g., for policy learning)
• Task-oriented and Perception-informed Language Grounding
• Task and Environment Semantics
• Robot Learning from Demonstration and Exploration
Applicable Dates:
• Paper submissions open (through the Elsevier system): December 1, 2021
• Final paper submission deadline: February 15, 2022
Reviews will commence as papers are submitted; earlier submissions can expect a quicker overall turnaround.
At the latest, we expect all accepted papers to be published in 2022.
Guest Editors:
— Karinne Ramirez-Amaro (Chalmers, Sweden)
— Chris Paxton (NVIDIA, United States)
— Jesse Thomason (University of Southern California, United States)
— Maria Eugenia Cabrera (University of Washington, United States)
— Mehul Bhatt (Örebro University, Sweden)
www > https://sites.google.com/view/spar-2021/special-issue
@RAS (Elsevier) > https://www.journals.elsevier.com/robotics-and-autonomous-systems/call-for-papers/semantic-policy-and-action-representations-for-autonomous-ro
============================================================================================================================
CoDesign Lab EU / https://codesign-lab.org — info at codesign-lab.org
Direct contact / Mehul Bhatt ( mehul.bhatt at oru.se )
============================================================================================================================
[ Sincere apologies for cross-postings. We appreciate your help in disseminating this message further in your network. ]