UCSD ECE276B: Planning & Learning in Robotics (Winter 2018)

Time and Location

Tuesday and Thursday, 5:00-6:20 PM, in CSB 001.

Overview

This course covers the fundamentals of optimal control and reinforcement learning and their application to planning and decision making in robotics. Topics include Markov decision processes (MDPs), dynamic programming, search-based and sampling-based planning, value and policy iteration, linear quadratic regulation (LQR), and model-free reinforcement learning.
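To give a flavor of the topics above, here is a minimal sketch of value iteration on a toy MDP. The 4-state chain, its rewards, and transition probabilities are illustrative assumptions, not material from the course:

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9

# P[a, s, s'] = transition probability; R[s, a] = expected reward.
# Hypothetical deterministic chain: action 0 moves left, action 1 moves right.
P = np.zeros((n_actions, n_states, n_states))
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, n_states - 1)] = 1.0
R = np.zeros((n_states, n_actions))
R[n_states - 1, 1] = 1.0  # reward for staying at the right end

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman backup: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < 1e-6:
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
```

With a discount of 0.9, the rightmost state converges to a value of 1/(1 - 0.9) = 10, and the greedy policy moves right from every state.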

Prerequisites

Students are expected to have a background in Bayesian estimation theory at the level of ECE276A: Sensing & Estimation in Robotics, as well as reasonable programming experience.

Requirements

The class assignments consist of four homeworks, each including theoretical problems, programming problems (in Python), and a report. Each homework is worth 25% of the final grade.

Students are expected to sign up on Piazza and Gradescope (entry code: MWYKBB). Discussions and important announcements will take place on Piazza. Homework should be submitted and will be graded on Gradescope.

Collaboration and Academic Integrity

Please note that an important element of academic integrity is fully and correctly attributing any materials taken from the work of others. You are encouraged to work with other students and to discuss the assignments in general terms (e.g., “Do you understand the A* algorithm?” or “What is the update equation for value iteration?”). However, the work you turn in must be your own: you should not split parts of the assignments with other students, and you should certainly not copy other students’ code or reports. All projects in this course are individual assignments.

More generally, please familiarize yourself with UCSD's Code of Academic Integrity, which applies to this course. Instances of academic dishonesty will be referred to the Office of Student Conduct for adjudication.