Intelligent autonomous systems hold significant promise to revolutionize industries such as transportation, agriculture, mining, construction, security and surveillance, and environmental monitoring. The effectiveness of an autonomous system in the complex, unstructured environments characteristic of these applications depends critically on the system's ability to understand and model uncertain, dynamically changing operational conditions. Recent years have seen impressive progress in Simultaneous Localization And Mapping (SLAM), which has been instrumental in transitioning robots from the factory floor to unstructured environments. Indeed, state-of-the-art SLAM approaches can track the pose of a single camera and IMU over long trajectories in real time, while simultaneously providing an accurate dense metric reconstruction of the environment. Surprisingly, however, SLAM has advanced mostly in isolation from the recent, equally impressive progress in object recognition, enabled by structured (deformable part) models and automated feature extraction via deep learning. Traditional SLAM relies on low-level geometric features and assumes specific distributional (e.g., near-linear Gaussian) models that are generally difficult to combine with discrete, semantic content. Few approaches combine spatial and semantic information, despite the tremendous scientific and practical promise of multimodal representations. Designing and inferring representations that model dynamic, articulated, or deformable objects, that draw inspiration from neuroscience, and that remain human-interpretable is even more compelling for expanding the capabilities of autonomous systems and bridging the gap between robot and human perception.

The goal of this workshop is to bring together researchers from robotics, computer vision, machine learning, and neuroscience to examine the challenges and opportunities emerging from the design of representations that unify semantics, geometry, and physics, as well as from the development of novel perception, inference, and learning algorithms that can generate such representations in a human-interpretable manner. Of particular interest are: 1) novel representations that combine spatial, semantic, and temporal information; 2) contextual inference techniques that can produce maximum-likelihood estimates over such hybrid, multi-modal representations; 3) learning techniques that can produce cognitive representations directly from complex sensory inputs; 4) approaches that combine learning-based techniques with geometric estimation methods; and 5) position papers, practical demonstrations, and unconventional ideas on how to reach human-level performance across the broad spectrum of perceptual problems arising in robotics.

Participants are invited to submit extended abstracts (up to 5 pages, following ICRA formatting guidelines) related to key challenges in unified geometric, semantic, topological, and physical representations, and in the associated perception, inference, and learning algorithms. Topics of interest include, but are not limited to, the themes listed above.

Contributed papers will be reviewed by the organizers and a program committee of invited reviewers. Accepted papers will be featured through spotlight presentations, poster sessions, and live demos! Camera-ready versions of the accepted papers will be published on the workshop website and compiled into a single PDF file to serve as the workshop's proceedings. Invited speakers, contributing authors, and participants will receive a copy of the proceedings via email.

Submissions and questions should be directed to natanasov@ucsd.edu by April 10, 2018. Please include "ICRA 2018 Abstract Submission" in the subject line of the email. Notifications of acceptance will be sent by April 21, 2018.

Time            Topic
08:45 - 09:00   Registration, welcome, and opening remarks
09:00 - 09:30   Invited talk: Dieter Fox (Confirmed)
09:30 - 10:00   Poster spotlights
10:00 - 10:30   Coffee break
10:30 - 11:00   Invited talk: Jitendra Malik (Confirmed)
11:00 - 11:30   Invited talk: Jana Kosecka (Confirmed)
11:30 - 12:00   Morning wrap-up: panel discussion
12:00 - 13:30   Lunch break
13:30 - 14:00   Invited talk: Andrew Davison (Confirmed)
14:00 - 14:30   Invited talk: Srini Srinivasan (Confirmed)
14:30 - 15:00   Poster and demo session
15:00 - 15:30   Coffee break and poster session
15:30 - 16:00   Invited talk: Marc Pollefeys (Tentative)
16:00 - 16:30   Invited talk: Bart Anderson (Confirmed)
16:30 - 17:00   Afternoon wrap-up: panel discussion and closing remarks
Should you have any questions, please contact the organizers via email.