Dear all,
We are delighted to announce our upcoming Robotics: Science and Systems (RSS) 2024 Workshop on Semantics for Robotics: From Environment Understanding and Reasoning to Safe Interaction, to be held at the Delft University of Technology, Delft, Netherlands, on July 15, 2024. We would like to invite you to participate and contribute your research in the form of short paper submissions. Below, you will find a description of the workshop along with paper submission details.
Workshop Overview
For robots to safely interact with people and the real world, they need the capability to not only perceive but also understand their surroundings in a semantically meaningful way (i.e., understanding implications or pertinent properties associated with the objects in the scene). Advanced perception methods coupled with learning algorithms have made significant progress in enabling semantic understanding. Recent breakthroughs in foundation models have further exposed opportunities for robots to contextually reason about their operating environments. Semantics is ingrained in every aspect of robotics, from perception to action; reliably exploiting semantic information in embodied systems requires tightly coupled perception, learning, and control algorithm design (e.g., a robot in a warehouse must recognize objects on the floor and reason whether it is safe to run over them). By organizing this workshop, we hope to foster discussions on innovative approaches that harness semantic understanding for the design and deployment of intelligent embodied systems. We aim to facilitate an interdisciplinary exchange between researchers in robot learning, perception, mapping, and control to identify the opportunities and pressing challenges when incorporating semantics into robotic applications.
Call for Papers
We are inviting researchers from different disciplines to share novel ideas on topics pertinent to the workshop themes, which include but are not limited to:
- Spatial perception methods incorporating semantic, geometric, and multi-modal information into 3D mapping and state estimation algorithms
- Efficient 3D object and environment representations from multi-modal sensor inputs
- Uncertainty estimation for robust 3D perception
- Contextual reasoning about 3D environments (e.g., object relations, affordance, traversability)
- Safe and risk-aware robot motion planning and control under geometric and/or semantic uncertainties
- Robot skill acquisition and learning leveraging semantic information
- Multi-agent collaboration through semantic information
- Demonstration or position papers on foundation-model-based perception and decision-making methods
The review process will be single-blind. Accepted papers will be published on the workshop webpage and presented as spotlight talks or posters.
Paper Format
Important Dates (all deadlines are at 11:59 pm AoE)
- Initial Submission: May 15, 2024
- Author Notification: May 31, 2024
- Camera Ready: June 15, 2024
Submission Link
http://tiny.cc/RSS24SfR
Organizing Committee
Angela P. Schoellig, Technical University of Munich
SiQi Zhou, Technical University of Munich
Lukas Brunke, Technical University of Munich
Adam Hall, University of Toronto
Federico Pizarro Bejarano, University of Toronto
Jingxing Qian, University of Toronto
Sepehr Samavi, University of Toronto