30.508 Optimization and Control

Static and dynamic optimization. Constrained and unconstrained optimal control. Method of Lagrange Multipliers: minimization of a function subject to algebraic constraints. Calculus of Variations: minimization of a functional subject to differential, integral, and terminal constraints. Pontryagin’s Maximum Principle: optimization under control constraints, locally optimal feedforward controllers, bang-bang control, time-optimal control, and singular optimal control. Bellman’s Dynamic Programming: the Hamilton-Jacobi-Bellman equation and globally optimal feedback controllers. Linear Quadratic Regulators: the Riccati equation for time-varying and time-invariant systems. Numerical methods for solving nonlinear optimal control problems.
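
As a small illustration of the Linear Quadratic Regulator topic, the following is a minimal sketch of infinite-horizon, time-invariant LQR design using SciPy's algebraic Riccati solver (scipy.linalg.solve_continuous_are). The double-integrator dynamics and the weights Q and R are illustrative assumptions, not course material.

```python
# Hedged sketch: infinite-horizon, time-invariant LQR for a double integrator.
# The matrices A, B, Q, R are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator dynamics: x1' = x2, x2' = u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost weights on state and control.
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

# Solve the continuous-time algebraic Riccati equation:
#   A^T P + P A - P B R^{-1} B^T P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Globally optimal state-feedback gain: u = -K x with K = R^{-1} B^T P.
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - B K should have eigenvalues in the left half-plane.
print("Gain K:", K)
print("Closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

For the time-varying, finite-horizon case covered in the course, P(t) instead solves the Riccati differential equation, integrated backward from the terminal time.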

Learning Objectives

By the end of the course, students will be able to:

  1. Understand and appreciate the analogy between differentiation and variation.
  2. Understand and appreciate the analogy between the minimization of functions and the minimization of functionals.
  3. Formulate unconstrained and constrained optimization problems.
  4. Derive first-order optimality conditions for unconstrained and constrained static and dynamic optimization problems.
  5. Demonstrate understanding of Pontryagin’s Maximum Principle and Bellman’s Dynamic Programming; find locally optimal feedforward and globally optimal feedback controllers.
  6. Explain the working principles of the numerical methods introduced for solving nonlinear optimal control problems, and apply them to practical engineering problems (a sketch of one such method follows this list).
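
As a concrete instance of objective 6, below is a minimal sketch of direct single shooting, one standard numerical method for nonlinear optimal control: the control is discretized into piecewise-constant segments, the state is rolled out by explicit Euler integration, and a general-purpose optimizer minimizes the accumulated cost. The pendulum dynamics, horizon, weights, and control bounds are illustrative assumptions.

```python
# Hedged sketch: direct single shooting for a nonlinear optimal control problem.
# Dynamics, horizon, cost weights, and bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

N = 40          # number of piecewise-constant control intervals
T = 4.0         # time horizon
dt = T / N

def dynamics(x, u):
    # Simple nonlinear pendulum: theta' = omega, omega' = -sin(theta) + u.
    return np.array([x[1], -np.sin(x[0]) + u])

def rollout_cost(u_seq):
    # Integrate the dynamics with explicit Euler and accumulate a quadratic cost.
    x = np.array([np.pi, 0.0])               # start at theta = pi; goal is the origin
    cost = 0.0
    for u in u_seq:
        cost += dt * (x @ x + 0.1 * u * u)   # running cost on state and control
        x = x + dt * dynamics(x, u)
    cost += 10.0 * (x @ x)                   # terminal penalty
    return cost

# Optimize the control sequence; the bounds act as control constraints.
u0 = np.zeros(N)
res = minimize(rollout_cost, u0, method="L-BFGS-B",
               bounds=[(-2.0, 2.0)] * N)
print("Optimal cost:", res.fun)
```

The bound constraints on u play the role of the control constraints under which Pontryagin’s Maximum Principle is derived; a production implementation would typically use a higher-order integrator and adjoint-based gradients rather than finite differences.
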
12 Credits

Components

Final exam, Mid-term, Projects, Assignments, Participation
