//////////////////////////////////////////////////////////////////////////////
CELEBRATING SEMANTICS   —   August 2021 to February 2022
   >   IJCAI (Montreal)  —  ACAI (Berlin)  —  IROS (Prague)  —  Spatial Cognition (Riga)  —  RAS (Elsevier)
//////////////////////////////////////////////////////////////////////////////

>  Tutorial:  “Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing”

    @ IJCAI 2021 - International Joint Conference on Artificial Intelligence (Canada) - August 2021
    @ ACAI 2021 - Advanced Course on Artificial Intelligence (Germany) - October 2021

>  Tutorial:  “Spatial Cognition and Artificial Intelligence: Methods for In-The-Wild Behavioural Research in Visual Perception”

    @ Spatial Cognition 2020-1 (Latvia) - August 2021

>  Workshop:  “Semantic Policy and Action Representation”

    @ IROS 2021 - IEEE/RSJ International Conference on Intelligent Robots and Systems (Czech Republic) - September 2021

>  RAS Special Issue:  “Semantic Policy and Action Representation”

    @ Robotics and Autonomous Systems (Elsevier) - December 2021 to February 2022


Details below, and also via:

CoDesign Lab EU  /  Cognition. AI. Interaction. Design.
https://codesign-lab.org/2021.html

//////////////////////////////////////////////////////////////////////////////



==============================================================================
TUTORIAL:  COGNITIVE VISION   /   IJCAI 2021.  ACAI 2021.
==============================================================================

@ International Joint Conference on Artificial Intelligence (IJCAI 2021)
Montreal, Canada — August 21 to 26, 2021

@ ACAI 2021 - Advanced Course on Artificial Intelligence
Berlin, Germany — October 11 to 15, 2021


Cognitive Vision:  On Deep Semantics for Explainable Visuospatial Computing


About.  The tutorial on cognitive vision addresses computational vision and perception at the interface of language, logic, cognition, and artificial intelligence. The tutorial focusses on application areas where the processing and explainable semantic interpretation of (potentially large volumes of) dynamic visuospatial imagery is central, e.g., for commonsense scene understanding; visual cognition for cognitive robotics / HRI and autonomous driving; narrative interpretation from the viewpoints of visuoauditory perception and digital media design; and the semantic interpretation of multimodal human-behavioural data.

The tutorial highlights Deep (Visuospatial) Semantics, denoting the existence of systematically formalised declarative AI methods (e.g., pertaining to reasoning about space and motion) supporting semantic (visual) question-answering, relational learning, non-monotonic (visuospatial) abduction, and simulation of embodied interaction. The tutorial demonstrates the integration of methods from knowledge representation and computer vision, with a focus on (combining) reasoning and learning about space, action, motion, and interaction. This is presented against the backdrop of areas as diverse as autonomous driving, cognitive robotics, eye-tracking driven visual perception research (e.g., for visual art, architecture design, cognitive film studies), and psychology and behavioural research domains where data-centred analytical methods are gaining momentum. The tutorial covers both applications and basic methods concerned with topics such as: explainable visual perception, semantic video understanding, language generation from video, declarative spatial reasoning, and computational models of narrative.
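As a purely illustrative aside (not drawn from the tutorial materials themselves): the flavour of declarative spatial reasoning over visual data can be sketched in a few lines of Python, computing qualitative spatial relations over detected bounding boxes and answering a simple relational query. All names and the toy scene below are hypothetical.

    # Illustrative sketch only: qualitative spatial relations over detected boxes,
    # in the spirit of declarative spatial reasoning / semantic question-answering.
    # All names (Box, the relations, the sample scene) are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class Box:                      # axis-aligned bounding box from a detector
        name: str
        x1: float; y1: float; x2: float; y2: float

    def left_of(a: Box, b: Box) -> bool:
        return a.x2 < b.x1          # a ends before b begins on the x-axis

    def overlaps(a: Box, b: Box) -> bool:
        return a.x1 < b.x2 and b.x1 < a.x2 and a.y1 < b.y2 and b.y1 < a.y2

    # A toy "scene": detections for a single video frame.
    scene = [Box("person", 10, 20, 60, 120), Box("car", 80, 30, 200, 110)]

    # A minimal relational query: which objects are left of the car?
    car = next(o for o in scene if o.name == "car")
    print([o.name for o in scene if o is not car and left_of(o, car)])  # ['person']

Systems of the kind the tutorial covers operate on far richer relational structure (time, occlusion, motion, abduction over missing observations), but the query-over-relations shape is the same.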
The tutorial will position an emerging line of research that brings together a novel and unique combination of research methodologies, academics, and communities encompassing AI, ML, Vision, Cognitive Linguistics, Psychology, Visual Perception, and Spatial Cognition and Computation.


Tutorial Presenters:

    —  Mehul Bhatt (Örebro University, Sweden)
    —  Jakob Suchan (University of Bremen, Germany)


Tutorial Info  >  https://codesign-lab.org/cognitive-vision/
    IJCAI 2021  /  https://ijcai-21.org
    ACAI 2021  /  https://www.humane-ai.eu/event/acai2021



=====================================================================
TUTORIAL:  SPATIAL COGNITION AND AI   /   Spatial Cognition 2020-1
=====================================================================

@ Spatial Cognition Conference 2020/1
Riga, Latvia — August 1 to 4, 2021


Spatial Cognition and Artificial Intelligence: Methods for In-The-Wild Behavioural Research in Visual Perception

About.  The tutorial on “Spatial Cognition and Artificial Intelligence” addresses the confluence of empirically based behavioural research in the cognitive and psychological sciences with computationally driven analytical methods rooted in artificial intelligence and machine learning. This confluence is addressed against the backdrop of human behavioural research concerned with “in-the-wild” naturalistic embodied multimodal interaction.
The tutorial presents:

    • an interdisciplinary perspective on conducting evidence-based (possibly large-scale) human behaviour research from the viewpoints of visual perception, environmental psychology, and spatial cognition;

    • artificial intelligence methods for the semantic interpretation of embodied multimodal interaction (e.g., rooted in behavioural data), and the (empirically driven) synthesis of interactive embodied cognitive experiences in real-world settings relevant both to everyday life and to professional creative-technical spatial thinking;

    • the relevance and impact of research in cognitive human-factors (e.g., in spatial cognition) for the design and implementation of next-generation human-centred AI technologies.

Keeping in mind an interdisciplinary audience, the focus of the tutorial is to provide a high-level demonstration of the potential of general AI-based computational methods and tools that can be used for multimodal human behavioural studies concerned with visuospatial, visuo-locomotive, and visuo-auditory cognition in everyday and specialised visuospatial problem solving (a toy illustration of one such analysis step follows after the tutorial details below). Presented methods are rooted in foundational research in artificial intelligence, spatial cognition and computation, spatial informatics, human-computer interaction, and design science. We highlight practical examples involving the analysis and synthesis of human cognitive experiences in the context of application areas such as (evidence-based) architecture and built environment design, narrative media design, product design, and visual sensemaking in autonomous cognitive systems (e.g., social robotics, autonomous vehicles).


Tutorial Presenters:

    —  Mehul Bhatt (Örebro University, Sweden)
    —  Jakob Suchan (University of Bremen, Germany)
    —  Vasiliki Kondyli (Örebro University, Sweden)
    —  Vipul Nair (University of Skövde, Sweden)


Tutorial Info  >  http://sc2020.lu.lv/satellite-events/tutorial-spatial-cognition-and-artificial-intelligence-methods-for-in-the-wild-behavioural-research-in-visual-perception/
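The toy illustration promised above: a minimal dispersion-based fixation detector in the style of the classic I-DT algorithm, the kind of low-level step that eye-tracking driven behavioural analyses build on. This is a sketch under assumed thresholds and toy data, not any specific tool's implementation.

    # Illustrative sketch only: dispersion-based fixation detection (I-DT style)
    # over raw (x, y) gaze samples. Thresholds and the sample trace are hypothetical.

    def fixations(gaze, max_disp=25.0, min_len=5):
        """Group consecutive gaze samples into fixations.

        A window of at least `min_len` samples counts as a fixation while its
        bounding-box dispersion (width + height) stays under `max_disp` pixels.
        Returns (start_index, end_index, centroid) triples.
        """
        def disp(w):
            xs, ys = [p[0] for p in w], [p[1] for p in w]
            return (max(xs) - min(xs)) + (max(ys) - min(ys))

        out, start = [], 0
        while start + min_len <= len(gaze):
            end = start + min_len
            if disp(gaze[start:end]) <= max_disp:
                # grow the window while dispersion stays under the threshold
                while end < len(gaze) and disp(gaze[start:end + 1]) <= max_disp:
                    end += 1
                w = gaze[start:end]
                cx = sum(p[0] for p in w) / len(w)
                cy = sum(p[1] for p in w) / len(w)
                out.append((start, end - 1, (cx, cy)))
                start = end
            else:
                start += 1
        return out

    # Toy trace: a fixation near (100, 100), a saccade, a fixation near (300, 200).
    trace = [(100, 100), (102, 99), (101, 103), (98, 100), (100, 101),
             (180, 150), (260, 180),
             (300, 200), (301, 202), (299, 198), (302, 201), (300, 199)]
    print(fixations(trace))     # two fixations, separated by the saccade

Downstream analyses of the kind the tutorial discusses would map such fixations onto semantic regions of interest (objects, people, architectural features) rather than raw pixels.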
class=""></div><div class="" style="font-family: HelveticaNeue;"><br class=""></div><div class="" style="font-family: HelveticaNeue;">=====================================================================</div><div class="" style="font-family: HelveticaNeue;"><b class="">WORKSHOP:  </b><b class="">Semantic Policy and Action Representation   </b><b class="">/   IROS 2021</b></div><div class="" style="font-family: HelveticaNeue;">=====================================================================</div><div class="" style="font-family: HelveticaNeue;"><br class=""><b class="">@</b> IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)<br class="">Prague, Czech Republic. September 27, 2021<br class=""><br class=""><b class="">5th International Workshop on:<br class="">Semantic Policy and Action Representation for Autonomous Robots (SPAR)<br class=""></b><br class=""><br class="">Workshop Chairs:<br class=""><br class=""></div><div class="" style="font-family: HelveticaNeue;"><span class="Apple-tab-span" style="white-space: pre;">     </span>—  Chris Paxton (NVIDIA, United States)<br class=""><span class="Apple-tab-span" style="white-space: pre;">       </span>—  Karinne Ramirez-Amaro (Chalmers, Sweden)<br class=""><span class="Apple-tab-span" style="white-space: pre;">   </span>—  Jesse Thomason (University of Southern California, United States)<br class=""><span class="Apple-tab-span" style="white-space: pre;">  </span>—  Maria Eugenia Cabrera (University of Washington, United States)<br class=""><span class="Apple-tab-span" style="white-space: pre;">    </span>—  Mehul Bhatt (Örebro University, Sweden)<br class=""><br class=""><br class=""><b class="">About.  </b>In this full-day workshop, we aim to discussion two main questions:<br class=""><br class="">—  How can we learn scalable and general semantic representations? In recent years, there has been a substantial contribution in semantic policy and action representation in the fields of robotics, computer vision, and machine learning. In this respect, we would like to invite experts in academia and motivate them to comment on the recent advances in semantic reasoning by addressing the problem of linking continuous sensory experiences and symbolic constructions to couple perception and execution of actions. In particular, we want to explore how these can make robot learning more scalable and generalizable to new tasks and environments. <br class=""><br class="">—  How can semantic information be used to create Explainable AI? We would like to invite researchers from a broad range of areas including task and motion planning, language learning, general-purpose machine learning, and human-robot interaction. Much of action semantics is definitionally tied to how robots and humans communicate, and one fundamental feature of these approaches should be that they allow a broad variety of people to benefit from advances in robotics, and to work alongside robots outside of laboratory environments. 
Call for Papers  >  https://sites.google.com/view/spar-2021/



=====================================================================
SPECIAL ISSUE:  Semantic Policy and Action Representation   /   RAS (Elsevier)
=====================================================================

@ Robotics and Autonomous Systems (Elsevier)
Submission Window: December 2021 to February 2022

About RAS.  The journal Robotics and Autonomous Systems (RAS) focusses on fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory-based robot control and learning in the context of autonomous systems.


Call.  We solicit original research contributions as part of the upcoming RAS special issue directly addressing the scientific scope of the SPAR workshop (see above; https://sites.google.com/view/spar-2021/).
Please note that submissions to the special issue remain open to all interested contributors; participation / presentation in the SPAR workshop is not a prerequisite for submitting a paper to the special issue.


Key Topics of Interest:

    • Task and Motion Planning
    • Explainable and Interpretable Robot Decision-Making Methods
    • Active and Context-based Vision
    • Cognitive Vision and Perception - Semantic Representations
    • Commonsense Reasoning about Space and Motion (e.g., for Policy Learning)
    • Task-oriented and Perception-informed Language Grounding
    • Task and Environment Semantics
    • Robot Learning from Demonstration and Exploration


Applicable Dates:

    • Paper submissions open (through the Elsevier system): December 1, 2021
    • Final paper submission deadline: February 15, 2022

Reviews of submitted papers will commence as papers are submitted; earlier submissions may expect a quicker overall turn-around time.
Even in the worst case, we expect all accepted papers to be published in 2022.


Guest Editors:

    —  Karinne Ramirez-Amaro (Chalmers, Sweden)
    —  Chris Paxton (NVIDIA, United States)
    —  Jesse Thomason (University of Southern California, United States)
    —  Maria Eugenia Cabrera (University of Washington, United States)
    —  Mehul Bhatt (Örebro University, Sweden)

www   >   https://sites.google.com/view/spar-2021/special-issue
@RAS (Elsevier)   >   https://www.journals.elsevier.com/robotics-and-autonomous-systems/call-for-papers/semantic-policy-and-action-representations-for-autonomous-ro



============================================================================================================================
CoDesign Lab EU  /  https://codesign-lab.org  —  info@codesign-lab.org
Direct contact / Mehul Bhatt ( mehul.bhatt@oru.se )
============================================================================================================================
[ Sincere apologies for cross-postings. We appreciate your help in disseminating this message further in your network. ]