The goal of this workshop is to bring together researchers from robotics, computer vision, machine learning, and neuroscience to examine the challenges and opportunities emerging from the design of environment representations and perception algorithms that unify semantics, geometry, and physics. This goal is motivated by two fundamental observations. First, the development of advanced perception and world understanding is a key requirement for robot autonomy in complex, unstructured environments, and an enabling technology for robot use in transportation, agriculture, mining, construction, security, surveillance, and environmental monitoring. Second, despite the unprecedented progress over the past two decades, there is still a large gap between robot and human perception (e.g., expressiveness of representations, robustness, latency). The workshop aims to bring forward the latest breakthroughs and cutting-edge research on multimodal representations, as well as novel perception, inference, and learning algorithms that can generate such representations.

The workshop is endorsed by the IEEE RAS Technical Committee for Computer & Robot Vision.

To encourage rigorous, innovative submissions, this year we will award a monetary prize for the best paper and the best demo presented during the workshop. The quality and impact of the submissions will be evaluated by the program committee. The best workshop paper and best demo awards are sponsored by:

Participants are invited to submit a full paper (following ICRA formatting guidelines) or an extended abstract (up to 2 pages) related to key challenges in unified geometric, semantic, topological, and temporal representations, and associated perception, inference, and learning algorithms. Topics of interest include but are not limited to:

Contributed papers will be reviewed by the organizers and a program committee of invited reviewers. Accepted papers will be published on the workshop website and featured in spotlight presentations and poster sessions. We strongly encourage the preparation of live demos to accompany the papers. We plan to select the best submissions and invite their authors to contribute to a special issue of the IEEE Transactions on Robotics on the topic of the workshop.

Submission link: https://easychair.org/conferences/?conf=icramrp18

Feel free to post thought-provoking questions and ideas related to joint metric-semantic-physical perception:

Your questions and ideas will be discussed during the panel sessions.

Time Topic
8:45 - 9:00 Registration, welcome, and opening remarks
9:00 - 9:30 Invited talk: Michael Milford (Queensland University of Technology), "Adventures in multi-modal, sometimes bio-inspired perception, mapping and navigation for robots and autonomous vehicles"
9:30 - 10:00 Invited talk: Jana Kosecka (George Mason University), "Semantic Understanding for Robot Perception and Navigation"
10:00 - 10:30 Poster spotlights
10:30 - 11:00 Coffee break
11:00 - 11:30 Poster spotlights
11:30 - 12:00 Invited talk: Ian Reid (University of Adelaide), "SLAM in the Era of Deep Learning"
12:00 - 12:30 Morning wrap-up: panel discussion
12:30 - 2:00 Lunch break
2:00 - 2:30 Invited talk: Dieter Fox / Arun Byravan (University of Washington), "Learning to Predict and Control Objects from Low-level Supervision"
2:30 - 3:00 Invited talk: Srini Srinivasan (Queensland Brain Institute), "Facets of vision, perception, learning and 'cognition' in a small brain"
3:00 - 3:30 Coffee break & poster session
3:30 - 4:00 Poster and demo session
4:00 - 4:30 Invited talk: Torsten Sattler (ETH Zurich), "Challenges in Long-Term Visual Localization"
4:30 - 5:00 Afternoon wrap-up: panel discussion & closing remarks
5:00 - 5:15 Award ceremony

Should you have any questions, please do not hesitate to contact the organizers Nikolay Atanasov (natanasov@ucsd.edu) or Luca Carlone (lcarlone@mit.edu). Please include "ICRA 2018 Workshop Submission" in the subject of the email.