[visionlist] [Meetings] CfP: RSS 2023 Workshop on Robot Representations For Scene Understanding, Reasoning and Planning

Julian Förster julian.foerster at mavt.ethz.ch
Wed May 10 04:53:10 -04 2023


Dear colleagues,

We are happy to announce our RSS 2023 workshop titled "*Robot 
Representations For Scene Understanding, Reasoning and Planning*", 
scheduled for July 10 in Daegu, Republic of Korea.

We invite contributions (extended abstracts or short papers) focusing on 
novel advances in 3D scene understanding, predicate/affordance 
reasoning, high-level planning, and work at the boundary between these 
research areas.

*Workshop website*: 
https://mit-spark.github.io/robotRepresentations-RSS2023/
*Submission site*: 
https://cmt3.research.microsoft.com/robrepworkshop2023/Submission/Index
*Submission deadline*: May 22, 2023, Anywhere on Earth (AoE)
*Acceptance notification*: June 16

For more details, see below, visit the workshop website, or contact 
Julian at fjulian at ethz.ch.

Kind regards,
Jen Jen Chung, Luca Carlone, Federico Tombari, Julian Förster

———————————————————

*Abstract*
Robots now have advanced perception, navigation, grasping and 
manipulation capabilities, so why is it still exceedingly difficult 
to bring these skills together and get a robot to autonomously tidy a 
room? A key limiting factor is that robots still lack the contextual 
scene understanding that allows humans to reason efficiently and 
compactly about our world and our actions within it. Metric (where) 
and semantic (what) representations are now common, but contextual 
(how) representations–how do objects interrelate, and how can a robot 
interact with objects to achieve a task?–are still missing. How should 
we formulate these representations, and crucially, how can we enable 
robots–embodied agents–to learn and update their contextual scene 
understanding from live experience? Researchers in AI knowledge 
representation and reasoning, as well as in the more distant field of 
linguistics, have long grappled with similar questions. The goal of 
this workshop is to bring together those experts with researchers in 
robot scene understanding and long-horizon planning to discuss the 
state of the art and uncover synergies across these currently 
disparate disciplines.

*Speakers*
Shuran Song (Columbia University), Jiayuan Mao (MIT), Janet Wiles (The 
University of Queensland), Manolis Savva (Simon Fraser 
University), Rajat Talak (MIT), Helisa Dhamo (Huawei)

*Call for papers*
Participants are invited to submit an extended abstract or a short paper 
(up to 4 pages in RSS format) focusing on novel advances in 3D scene 
understanding, predicate/affordance reasoning, high-level planning, and 
work at the boundary between these research areas. Topics of interest 
include but are not limited to:
- Novel algorithms for spatial perception that combine geometry, 
semantics, and context;
- Approaches to learning and structuring contextual knowledge from 
complex sensory inputs;
- Techniques for reasoning over spatial, semantic, and temporal aspects 
for long-horizon planning;
- Approaches that combine learning-based techniques with geometric and 
model-based estimation methods; and
- Position papers and unconventional ideas on how to reach human-level 
performance in robot scene understanding, task planning and execution.
Contributed papers will be reviewed by the organizers and a program 
committee of invited reviewers. Accepted papers will be published on the 
workshop website and will be featured in spotlight presentations and 
poster sessions.