Examples of optimal control problems

Authors: Marco Frego (Libera Università di Bolzano) and Enrico Bertolazzi (University of Trento, Department of Industrial Engineering)

The Brachistochrone Problem

One of the earliest and most renowned challenges in the calculus of variations is the brachistochrone problem, originally posed by Johann Bernoulli in 1696. This problem can be approached from various perspectives, particularly through the lenses of the calculus of variations and optimal control theory, which involve ordinary differential equations and differential-algebraic equations (ODEs and DAEs), as well as the principle of energy conservation.

Problem Statement

The goal of the brachistochrone problem is to determine the path of minimum time that connects two points under the influence of gravity alone. Specifically, we consider motion in two dimensions, transitioning from a starting point A to a fixed endpoint B. The challenge is to find the shape of the curve along which a particle, subject solely to gravitational force, travels from A to B in the least amount of time.

To facilitate analysis, we can establish a coordinate system with the origin at point A. In this system, the vertical axis y points upward (opposite to the direction of gravitational force), while the horizontal axis x is perpendicular to y, ensuring that the x-coordinate of point B is positive. Given that the optimal path \gamma must be a continuous curve without loops, we can represent \gamma as a function y(x).

For simplicity in calculations, we assume that y > 0, i.e., we momentarily measure y as the positive drop below A; this removes the sign complications arising from the downward direction of gravity and allows a straightforward interpretation of heights. By applying the principle of conservation of energy, we can relate the height y, gravitational acceleration g, and particle velocity v:

mgy = \frac{1}{2} mv^2 \quad \implies \quad v(y(x)) = \sqrt{2gy(x)}.

This relationship indicates that the solution is independent of the mass m.

Path Length and Travel Time

The distance L traveled by the particle as it approaches endpoint B can be expressed using a line integral:

L(y,x) = \int_{\gamma} \mathrm{d}s = \int_{0}^{x_B} \sqrt{1 + (y'(x))^2} \,\mathrm{d}x.

The time taken to traverse this path is given by:

T(y,x) = \int_{0}^{x_B} \frac{\sqrt{1 + (y'(x))^2}}{\sqrt{2gy(x)}}\,\mathrm{d}x. \tag{1}

Constraints

The solution to this problem involves minimizing the functional defined in Equation 1 under the following constraints:

y(0) = 0, \qquad y(x_B) = -y_B. \tag{2}

Solution Using Calculus of Variations

To solve the brachistochrone problem using calculus of variations, we employ the Euler-Lagrange equation. Since the functional

T(y,x) = \int F(y,x) \,\mathrm{d}x

does not explicitly depend on x, the Euler-Lagrange equation simplifies to Beltrami’s identity:

\frac{\partial F}{\partial y} - \frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial F}{\partial y'} = 0 \quad \implies \quad F - y' \frac{\partial F}{\partial y'} = c, \tag{3}

where c is a constant determined by the constraints given in Equation 2.

For the brachistochrone problem, we can express F as:

\sqrt{2g}T = \int_{0}^{x_B} \sqrt{\frac{1+y'(x)^2}{y(x)}} \,\mathrm{d}x \quad \implies \quad F(y,x) = \sqrt{\frac{1+y'(x)^2}{y(x)}}.

Thus, Equation 3 reduces to:

\sqrt{\frac{1+y'^2}{y}} - \frac{y'^2}{\sqrt{y(1+y'^2)}} = c \quad \implies \quad y(1+y'^2) = \frac{1}{c^2} = C > 0.

This results in an autonomous nonlinear differential equation that can be solved using separation of variables:

\int_{y_A}^{y_B} \sqrt{\frac{y}{C-y}}\,\mathrm{d}y = \int_{x_A}^{x_B} 1\,\mathrm{d}x.

To compute this integral, we use the substitution y = C \left(\sin\frac{\phi}{2}\right)^2, which gives us the differential

\mathrm{d}y = C \sin \frac{\phi}{2} \cos \frac{\phi}{2} \,\mathrm{d}\phi.

Thus, we have: \begin{aligned} \int \sqrt{\frac{y}{C-y}}\,\mathrm{d}y &= C \int \sqrt{\frac{ \left(\sin\frac{\phi}{2}\right)^2 }{1-\left(\sin\frac{\phi}{2}\right)^2}} \sin \frac{\phi}{2} \cos \frac{\phi}{2} \,\mathrm{d}\phi \\ &= C \int \left(\sin\frac{\phi}{2}\right)^2\mathrm{d}\phi \\ &= \frac{C}{2}\left(\phi - \sin\phi\right) + k, \end{aligned}

where k is the integration constant.

Evaluating this integral at the extrema defined by the constraints in Equation 2 allows us to determine the constants C and k. Since the starting point A coincides with the origin, we have k = 0. The constant C depends on the coordinates of point B = (x_B, y_B). Specifically, we find:

\begin{aligned} x_B &= \frac{C}{2}\left(\phi_B - \sin\phi_B\right), \\ y_B &= -\frac{C}{2}(1 - \cos\phi_B). \end{aligned} \tag{4}

In conclusion, the parametric equations for the brachistochrone are given by:

\begin{aligned} x(\phi) &= \frac{C}{2}\left(\phi - \sin\phi\right), \\ y(\phi) &= -\frac{C}{2}(1 - \cos\phi). \end{aligned} \tag{5}

These equations describe a cycloid curve.
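As a quick numerical check (a sketch, not part of the derivation), one can verify that the parametric cycloid satisfies the first integral y(1+y'^2)=C of Beltrami's identity; the magnitude |y| is used below since y(\phi) is negative in this parametrization, and the value C = 3.97 is an arbitrary choice for illustration:

```python
import math

def beltrami_invariant(C, phi):
    # Cycloid of Equation 5: x = (C/2)(phi - sin(phi)), y = -(C/2)(1 - cos(phi))
    # Slope along the curve: dy/dx = (dy/dphi)/(dx/dphi) = -sin(phi)/(1 - cos(phi))
    yp = -math.sin(phi) / (1.0 - math.cos(phi))
    y_abs = (C / 2.0) * (1.0 - math.cos(phi))   # |y(phi)|
    return y_abs * (1.0 + yp**2)                 # should equal C for every phi

C = 3.97
values = [beltrami_invariant(C, phi) for phi in (0.5, 1.0, 2.0, 3.0, 4.0)]
print(values)   # every entry equals C up to rounding
```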


The geometric proof provided by Bernoulli contains spurious solutions, as noted in Sussmann and Willems (2001). While the variational approach presents challenges in proving the existence of the brachistochrone, applying optimal control theory simplifies this task. The existence of a solution can be directly established using the Ascoli-Arzelà theorem.

The brachistochrone as an optimal control problem

This example can also be treated as an optimal control problem, where the control is the angle \vartheta of descent of the particle. Splitting the velocity v into its two components along the x and y axes, the problem can be stated as

\begin{aligned} \dfrac{\mathrm{d}x}{\mathrm{d}t}&= v\sin \vartheta, \\[0.5em] \dfrac{\mathrm{d}y}{\mathrm{d}t}&= -v\cos \vartheta, \\[0.5em] \dfrac{\mathrm{d}v}{\mathrm{d}t}&= g\cos \vartheta \end{aligned} \tag{6}

with border conditions, if T is the time used to travel from A to B,

\begin{aligned} x(0)&=0, & y(0)&=0, & v(0)&=0,\\ x(T)&=x_B, & y(T)&=y_B, & v(T)&=v_T,\\ \end{aligned} \tag{7}

for a free final velocity v_T. The functional to be minimized is T.

A Numerical Example

Consider the brachistochrone problem for a particle released from the origin and traveling to the point B = (10, -3). In the variational formulation, we need to determine the constant C. This can be achieved using Equation 4. Solving the nonlinear system for C and \phi_B yields:

\phi_B \approx 4.17, \qquad C \approx 3.97.
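These values can be reproduced with a few lines of code (a sketch using plain bisection rather than a library solver; dividing the two relations of Equation 4 eliminates C and leaves a scalar equation in \phi_B):

```python
import math

xB, yB = 10.0, 3.0   # B = (10, -3); yB is the magnitude of the drop

# Ratio of the two relations of Equation 4: (phi - sin(phi))/(1 - cos(phi)) = xB/yB
f = lambda phi: (phi - math.sin(phi)) / (1.0 - math.cos(phi)) - xB / yB

lo, hi = 1.0, 2.0 * math.pi - 1e-6   # f < 0 at lo, f > 0 near 2*pi
for _ in range(100):                  # plain bisection
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        hi = mid
    else:
        lo = mid

phiB = 0.5 * (lo + hi)
C = 2.0 * xB / (phiB - math.sin(phiB))
print(phiB, C)   # ~4.17 and ~3.97
```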

The solution is illustrated in Figure 1.

Figure 1: Variational solution to the brachistochrone problem: The left plot shows the trajectory of the particle from the origin to point B, while the right plot displays the two components of the solution: x in red and y in green.

When treated as an optimal control problem, as stated in Equation 6 with boundary conditions Equation 7, finding an analytical solution becomes challenging. For instance, the Hamiltonian function is given by:

\mathcal{H}(x,y,v,\lambda,\mu,\xi,\vartheta) = 1 + \lambda v \sin \vartheta - \mu v \cos \vartheta + \xi g \cos \vartheta.

According to Pontryagin’s Maximum Principle (PMP), the associated boundary value problem is:

\begin{aligned} \lambda' &= -\frac{\partial \mathcal{H}}{\partial x} = 0, \\ \mu' &= -\frac{\partial \mathcal{H}}{\partial y} = 0, \\ \xi' &= -\frac{\partial \mathcal{H}}{\partial v} = \lambda \sin \vartheta - \mu \cos \vartheta. \end{aligned} \tag{8}

Additionally, we have:

0 = \frac{\partial \mathcal{H}}{\partial \vartheta} = \lambda v \cos \vartheta + \mu v \sin \vartheta - \xi g \sin \vartheta.

To avoid solving this analytically, we can use the results of the variational formulation to inform our optimal control problem (OCP) solution. However, since Equation 5 expresses x and y as functions of the angle \phi rather than of time t, we need to establish a relationship between them through the time functional:

T(x,y) = \int_{0}^{\phi_B} \frac{\sqrt{x'(\phi)^2 + y'(\phi)^2}}{\sqrt{-2gy(\phi)}}\,\mathrm{d}\phi. \tag{9}

A brief calculation shows that:

T(x,y) = \int_{0}^{\phi_B} \sqrt{\frac{C}{2g}}\,\mathrm{d}\phi = \phi_B\sqrt{\frac{C}{2g}} = k\phi_B \approx 1.87, \qquad \text{where } k = \sqrt{\frac{C}{2g}},

demonstrating the tautochrone (isochrone) property of the cycloid. Thus, we can parameterize time as t = k\phi, for t \in [0, k\phi_B].
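A direct midpoint-rule evaluation of Equation 9 confirms this value (a numeric sketch assuming g = 9.81 and the rounded constants of the example; note that -y(\phi) appears under the square root because y(\phi) \le 0 along Equation 5):

```python
import math

g = 9.81
C, phiB = 3.97, 4.17   # constants of the numerical example

def integrand(phi):
    # x'(phi), y'(phi) from Equation 5; the speed is sqrt(-2 g y) since y <= 0
    xp = (C / 2.0) * (1.0 - math.cos(phi))
    yp = -(C / 2.0) * math.sin(phi)
    y  = -(C / 2.0) * (1.0 - math.cos(phi))
    return math.sqrt(xp**2 + yp**2) / math.sqrt(-2.0 * g * y)

N = 1000               # midpoint rule on [0, phiB]
h = phiB / N
T = h * sum(integrand((i + 0.5) * h) for i in range(N))
print(T)               # ~1.87; the integrand is the constant sqrt(C/(2g))
```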

From this perspective, the control \vartheta(t) must satisfy:

\begin{aligned} \vartheta(t) &= \arctan\left(\frac{y'(t)}{x'(t)}\right) + \frac{\pi}{2} \\ &= \arctan\left(-\frac{\sin(t/k)}{1 - \cos(t/k)}\right) + \frac{\pi}{2}. \end{aligned}

This leads to:

\begin{aligned} \sin\vartheta &= \frac{1 - \cos(t/k)}{\sqrt{2}\sqrt{1 - \cos(t/k)}}, \\ \cos\vartheta &= \frac{\sin(t/k)}{\sqrt{2}\sqrt{1 - \cos(t/k)}}. \end{aligned}

Consequently, the velocity becomes:

v(t) = \frac{\sqrt{2}C}{2k}\sqrt{1 - \cos(t/k)}.

The plots of velocity v(t) and control \vartheta(t) are shown in Figure 2.

Figure 2: On the left is the plot of velocity v(t), and on the right is the control variable \vartheta(t).

The differential equations for the spatial components become:

\begin{aligned} x'(t) &= v(t)\sin\vartheta = \frac{C}{2k}(1 - \cos(t/k)), \\ y'(t) &= -v(t)\cos\vartheta = -\frac{C}{2k}\sin(t/k). \end{aligned}

To compute the multipliers \lambda, \mu, and \xi, we can use the Hamiltonian function. By substituting for \xi, we are left with a single equation in the single unknown \lambda, which gives \lambda = -k/C. Thus, the three multipliers are:

\begin{aligned} \lambda &= -\frac{k}{C} \approx -0.11, \\ \mu &= -\frac{\lambda \sin(T/k)}{1 - \cos(T/k)} \approx -0.06, \\ \xi(t) &= \frac{\sqrt{2}C}{2gk}\frac{\lambda \sin(t/k) +\mu(1-\cos(t/k))}{\sqrt{1-\cos(t/k)}}. \end{aligned}
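These expressions can be checked numerically (a sketch with g = 9.81 and the rounded constants of the example): along the extremal the Hamiltonian vanishes identically, as expected for an autonomous minimum-time problem:

```python
import math

g = 9.81
C, phiB = 3.97, 4.17
k = math.sqrt(C / (2.0 * g))
T = k * phiB

lam = -k / C
mu  = -lam * math.sin(T / k) / (1.0 - math.cos(T / k))

def hamiltonian(t):
    # H = 1 + lam*v*sin(theta) - mu*v*cos(theta) + xi*g*cos(theta)
    phi = t / k
    c, s = math.cos(phi), math.sin(phi)
    v = math.sqrt(2.0) * C / (2.0 * k) * math.sqrt(1.0 - c)
    sin_th = math.sqrt(1.0 - c) / math.sqrt(2.0)
    cos_th = s / (math.sqrt(2.0) * math.sqrt(1.0 - c))
    xi = (math.sqrt(2.0) * C / (2.0 * g * k)) * (lam * s + mu * (1.0 - c)) / math.sqrt(1.0 - c)
    return 1.0 + lam * v * sin_th - mu * v * cos_th + xi * g * cos_th

H_values = [hamiltonian(t) for t in (0.3, 0.8, 1.3, 1.8)]
print([round(h, 12) for h in H_values])   # all ~0
```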

The plot of these costate variables is shown in Figure 3.

Figure 3: The three multipliers are represented as follows: \lambda in red, \mu in blue, and \xi in black.

Finally, using the ACADO toolkit to solve this same problem yields the solution depicted in Figure 4.

Figure 4: Solution provided by ACADO.

The DIDO Problem

Consider a 2D curve defined parametrically, with the following conditions:

  • The curve begins and ends on the x-axis.

  • The total length of the curve is fixed.

The objective is to maximize the area enclosed between the curve and the x-axis, while maintaining the constraint that the curve’s length remains constant.

Formally, we seek the shape of the curve that maximizes the enclosed area, given that:

  1. The curve starts and ends on the x-axis.

  2. The total length of the curve is fixed.

This problem is often referred to as the DIDO problem, and it is a well-known problem in variational calculus and optimal control theory.

To compute the area inside a curve we require the Divergence Theorem:

The Divergence Theorem

The Divergence Theorem is a fundamental result in vector calculus, connecting the flow of a vector field through a closed surface to the behavior of the field within the volume enclosed by the surface. It provides a powerful tool for converting complex surface integrals into simpler volume integrals, and vice versa.

Statement of the Divergence Theorem (2D version)

Let S be a solid region in 2-dimensional space with a smooth (or piecewise smooth), closed boundary curve C = \partial S, oriented outward. If \bm{F} is a continuously differentiable vector field defined on S, the Divergence Theorem states:

\int_C \bm{F} \cdot \bm{n} \,\mathrm{d}s = \iint_S (\nabla \cdot \bm{F}) \,\mathrm{d}S,

where:

  • \bm{F}: The vector field (e.g., \bm{F}(x, y)).

  • \bm{n}: The unit normal vector to the curve C, oriented outward.

  • \nabla \cdot \bm{F}: The divergence of \bm{F}, defined as: \nabla \cdot \bm{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}.

Code
# Load the required packages
library(ggplot2)
library(dplyr)

# Build a deformed oval (an ellipse with a mild deformation)
theta <- seq(0, 2 * pi, length.out = 500)
a <- 3   # Semi-major axis
b <- 2   # Semi-minor axis
x <- a * cos(theta)
y <- b * sin(theta)

# Apply a mild deformation
deform <- function(x, y) {
  x <- x + 0.2 * sin(2 * y)
  y <- y + 0.1 * sin(x)
  return(data.frame(x = x, y = y))
}
oval <- deform(x, y)

# Approximate tangent vectors along the deformed boundary
dx <- c(diff(oval$x), oval$x[1] - oval$x[length(oval$x)]) # wrap around to close the curve
dy <- c(diff(oval$y), oval$y[1] - oval$y[length(oval$y)])
norm <- sqrt(dx^2 + dy^2)
dx <- dx / norm  # normalization
dy <- dy / norm

# Outward-pointing normals (tangent rotated by -90 degrees)
normal_x <- dy
normal_y <- -dx

# Subset of points at which the normals are drawn
n_normals <- 20
idx <- seq(1, nrow(oval), length.out = n_normals)

# Build a vector field on a regular grid
grid_size <- 20
x_grid <- seq(-4, 4, length.out = grid_size)
y_grid <- seq(-3, 3, length.out = grid_size)
field <- expand.grid(x = x_grid, y = y_grid)
field$u <- -field$y      # x-component of the field
field$v <- field$x       # y-component of the field

# Keep only the field arrows that lie inside the plot limits
field <- field %>%
  mutate(xend = x + 0.2 * u, yend = y + 0.2 * v) %>%
  filter(x >= -4 & x <= 4 & y >= -3 & y <= 3,  # arrow tail
         xend >= -4 & xend <= 4 & yend >= -3 & yend <= 3)  # arrow head

# Keep only the normals that lie inside the plot limits
normals <- data.frame(
  x = oval$x[idx],
  y = oval$y[idx],
  xend = oval$x[idx] + 0.5 * normal_x[idx],
  yend = oval$y[idx] + 0.5 * normal_y[idx]
) %>%
  filter(x >= -4 & x <= 4 & y >= -3 & y <= 3,  # normal tail
         xend >= -4 & xend <= 4 & yend >= -3 & yend <= 3)  # normal head

# Generate the plot
ggplot() +
  # Transparent fill of the deformed oval
  geom_polygon(data = oval, aes(x = x, y = y), fill = "blue", alpha = 0.2) +
  # Outline of the deformed oval
  geom_path(data = oval, aes(x = x, y = y), color = "blue", linewidth = 2) +
  # Vector field (filtered arrows)
  geom_segment(data = field, aes(x = x, y = y, xend = xend, yend = yend),
               arrow = arrow(length = unit(0.1, "cm")), color = "gray") +
  # Filtered normals
  geom_segment(data = normals, aes(x = x, y = y, xend = xend, yend = yend),
               arrow = arrow(length = unit(0.1, "cm")), color = "red") +
  # Clip the plot and keep a 1:1 aspect ratio
  coord_fixed(xlim = c(-4, 4), ylim = c(-3, 3), ratio = 1) +
  theme_minimal() +
  labs(title = "Divergence Theorem", x = "X", y = "Y")

Figure 5: 2D sketch of the divergence theorem: the vector field, the boundary curve, and the outward normals.

Application of the Divergence Theorem for area computation

Consider a simple two-dimensional vector field:

\bm{F}(x, y) = \begin{pmatrix} \alpha x \\ (1-\alpha) y \end{pmatrix}

In this case, the vector field is a linear combination of x and y coordinates, where \alpha is a constant that determines the relative contribution of the x-component and the y-component of the vector field.

Divergence of the Vector Field

The divergence of a vector field is the sum of the partial derivatives of its components. For the vector field \bm{F}(x, y), we compute:

\nabla \cdot \bm{F}(x, y) = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}

Substituting the components of \bm{F}(x, y):

\nabla \cdot \bm{F}(x, y) = \frac{\partial (\alpha x)}{\partial x} + \frac{\partial ((1-\alpha) y)}{\partial y}

Taking the derivatives:

\nabla \cdot \bm{F}(x, y) = \alpha + (1-\alpha) = 1

Thus, the divergence of \bm{F}(x, y) is 1 everywhere in the plane.

Surface Integral and the Divergence Theorem

Now, we apply the Divergence Theorem to compute the flux of the vector field through a closed curve C, parametrized by:

C: (x(t), y(t)), \quad t \in [0, L]

The area enclosed by this curve can be computed as a surface integral of the divergence of the vector field over the region enclosed by C. Using the Divergence Theorem, the flux through the closed curve is:

|S| = \iint_S 1 \, \mathrm{d}S

This is the same as integrating the divergence \nabla \cdot \bm{F} = 1 over the surface, which reduces to the boundary integral:

|S| = \int_0^L \bm{F}(x(t), y(t)) \cdot \frac{\begin{pmatrix} y'(t) \\ -x'(t) \end{pmatrix}}{\sqrt{x'(t)^2 + y'(t)^2}} \sqrt{x'(t)^2 + y'(t)^2} \, \mathrm{d}t

In this expression:

  • The first term is the vector field \bm{F}(x(t), y(t)).
  • The second term is the normal to the curve, which is given by the vector \frac{\begin{pmatrix} y'(t) \\ -x'(t) \end{pmatrix}}{\sqrt{x'(t)^2 + y'(t)^2}} (the unit normal).
  • The third term is the length element of the curve, \sqrt{x'(t)^2 + y'(t)^2} \, \mathrm{d}t.

Expanding the integral:

|S| = \int_0^L \left( \alpha x(t) y'(t) - (1-\alpha) y(t) x'(t) \right) \mathrm{d}t
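As a sanity check (a sketch; the ellipse and the value of \alpha are arbitrary choices), evaluating this integral on the ellipse x = a\cos t, y = b\sin t returns the exact area \pi a b regardless of \alpha:

```python
import math

a, b, alpha = 3.0, 2.0, 0.3   # alpha is arbitrary: the result must not depend on it
N = 2000
h = 2.0 * math.pi / N

area = 0.0
for i in range(N):
    t = (i + 0.5) * h                 # midpoint rule over one full period
    x,  y  = a * math.cos(t), b * math.sin(t)
    xp, yp = -a * math.sin(t), b * math.cos(t)
    area += (alpha * x * yp - (1.0 - alpha) * y * xp) * h

print(area, math.pi * a * b)          # both ~18.8496
```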

Simplifying for a Specific Curve

Suppose the boundary includes a segment lying on the x-axis (i.e., y(t) = 0 along that segment, and hence y'(t) = 0 there as well). Choosing \alpha = 1 removes the term (1-\alpha)\,y(t)x'(t), and the integral becomes:

|S| = \int_0^L x(t) y'(t) \, \mathrm{d}t

This formula computes the area enclosed between the curve and the x-axis; the segment lying on the axis contributes nothing to the integral, since x(t)y'(t) vanishes there.

Final Expression for the Area

Finally, if the curve is closed and the initial and final points on the curve satisfy y(0) = y(L) = 0, the total enclosed area |S| is given by:

|S| = \pm \int_0^L x(t) y'(t) \, \mathrm{d}t

The sign of the area depends on the orientation of the curve:

  • Positive if the curve is counterclockwise (the normal points outward).

  • Negative if the curve is clockwise (the normal points inward).

This sign convention reflects the direction of the normal vector and is crucial for consistent computation of area in physics and engineering applications.

Formulation of Dido problem

Now we are ready to formulate the Dido problem.

Minimize

s\int_0^L x \sin\theta\, \mathrm{d}t,\quad s=\textrm{sign}(b-a)

Under the constraints

\begin{aligned} x' = \cos\theta \\ y' = \sin\theta \end{aligned}

And boundary conditions

x(0) = b, \quad y(0) = 0, \quad x(L) = a, \quad y(L)=0

Solution of Dido problem

We start by defining two key functions:

\begin{aligned} H(x, y, \lambda_1, \lambda_2, \theta) &= \lambda_1 \cos\theta + (s x + \lambda_2) \sin\theta, \\ B(x_0, y_0, x_1, y_1, \omega_1, \omega_2, \omega_3, \omega_4) &= \omega_1(x_0 - b) + \omega_2(y_0) + \omega_3(x_1 - a) + \omega_4(y_1), \end{aligned}

where H represents the Hamiltonian function and B encodes boundary conditions. These functions are used to compute the adjoint equations and boundary conditions for the control problem.

Deriving the Adjoint Equations

From the Hamiltonian H, the adjoint equations are given by:

\begin{aligned} \lambda_1^\prime + H_x &= \lambda_1^\prime + s \sin\theta = 0, \\ \lambda_2^\prime + H_y &= \lambda_2^\prime = 0, \end{aligned}

which simplify to:

  1. \lambda_2(t) = \text{constant} = -s c_x,

  2. \lambda_1^\prime = -s \sin\theta = -s y^\prime,

yielding:

\lambda_1(t) = s(c_y - y(t)).

Boundary Conditions

The adjoint boundary conditions are derived from the B-function:

\begin{aligned} B_{x_0} + \lambda_1(0) &= \omega_1 + \lambda_1(0) = 0, \\ B_{y_0} + \lambda_2(0) &= \omega_2 + \lambda_2(0) = 0, \\ B_{x_1} - \lambda_1(L) &= \omega_3 - \lambda_1(L) = 0, \\ B_{y_1} - \lambda_2(L) &= \omega_4 - \lambda_2(L) = 0. \end{aligned}

Since these equations can all be satisfied simply by choosing the \omega parameters accordingly, they impose no further restrictions and can be eliminated.

Control Equation

The control equation is derived from the partial derivative of H with respect to \theta:

\begin{aligned} H_\theta &= -\lambda_1 \sin\theta + (s x + \lambda_2) \cos\theta, \\ &= -s(c_y - y) \sin\theta + s(x - c_x) \cos\theta = 0. \end{aligned}

This eliminates \lambda_1(t) and \lambda_2(t) entirely. The reduced system is a Boundary Value Problem (BVP) that follows directly.

Reduced BVP

The resulting BVP is:

\left\{ \begin{aligned} x^\prime &= \cos\theta, \\ y^\prime &= \sin\theta, \\ 0 &= (x - c_x) \cos\theta + (y - c_y) \sin\theta, \\ x(0) &= b, \quad y(0) = 0, \quad x(L) = a, \quad y(L) = 0. \end{aligned} \right.


Geometric Interpretation

Define X = x - c_x and Y = y - c_y. The third equation becomes:

\begin{aligned} 0 &= (x - c_x) x^\prime + (y - c_y) y^\prime, \\ &= X X^\prime + Y Y^\prime, \\ &= \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \left(X^2 + Y^2\right). \end{aligned}

Integrating this relation shows that:

R^2 = X^2 + Y^2 = (x - c_x)^2 + (y - c_y)^2,

which is the equation of a circle with radius R, centered at (c_x, c_y).


The trajectory described by (x(t), y(t)) is therefore constrained to lie on a circle of radius R centered at (c_x, c_y). The constants are obtained by matching the boundary conditions.
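Matching the constants numerically can be sketched as follows: for a circular arc of half-angle \psi, the length is L = 2R\psi and the chord is 2R\sin\psi, so \sin\psi/\psi = |b-a|/L determines \psi by bisection. The data a = 0, b = 2, L = \pi below are an illustrative choice that recovers the unit semicircle:

```python
import math

a, b, L = 0.0, 2.0, math.pi   # endpoints on the x-axis and prescribed length
chord = abs(b - a)

# Half-angle psi of the circular arc: L = 2*R*psi, chord = 2*R*sin(psi)
f = lambda psi: math.sin(psi) / psi - chord / L   # decreasing on (0, pi)

lo, hi = 1e-9, math.pi - 1e-9
for _ in range(100):          # bisection
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        lo = mid
    else:
        hi = mid

psi = 0.5 * (lo + hi)
R = L / (2.0 * psi)
print(psi, R)   # psi ~ pi/2 and R ~ 1: the optimal curve is a semicircle
```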

An Optimal Control Problem with constraints

The following problem involves a control that enters linearly and is bounded, together with a mixed constraint on both state and control. The formulation is

\begin{aligned} \textrm{minimize:}\quad & \int_{0}^3(t-4)u\,\mathrm{d}t, \\ \textrm{subject to the dynamic:}\quad & x'=u, \\ \textrm{constrained by:}\quad & x(0)=0,\quad x(3)=3, \quad 0\leq u \leq 2,\\ \textrm{and:}\quad & g(t,x,u)=x-t-u+1\leq 0. \end{aligned}

The Hamiltonian for the problem is given by:

\begin{aligned} \mathcal{H} &= (t - 4)u + \lambda u + \mu (x - t - u + 1) \\ &= (t - 4 + \lambda - \mu)u + \mu(x - t + 1). \end{aligned}

At t = 0, we find that the constraint g is satisfied only if g(0) = 1 - u(0) \leq 0. This implies that the initial control must be u(0) \geq 1.

From the Hamiltonian, we derive the equations necessary to form the boundary value problem:

\begin{aligned} \frac{\partial \mathcal{H}}{\partial u} &= t - 4 + \lambda - \mu = 0, \\ \lambda' &= -\frac{\partial \mathcal{H}}{\partial x} = -\mu, \quad \mu \geq 0. \end{aligned}

The multiplier of the constraint, \mu, should be non-negative when the bound is active (i.e., when g = 0) and should equal zero when the bound is inactive.

Suppose the constraint is inactive, which implies \mu = 0. In this case \lambda' = -\frac{\partial \mathcal{H}}{\partial x} = 0, so \lambda = c for some constant c. However, from the equation \frac{\partial \mathcal{H}}{\partial u} = 0, we get \lambda = 4 - t.

This leads to a contradiction: there cannot be an arc where the constraint is inactive. Therefore, on the interval [0,3], the constraint must be active. By combining the two equations \lambda' = -\mu and t - 4 + \lambda - \mu = 0, we obtain the differential equation:

t - 4 + \lambda + \lambda' = 0.

Its solution is \lambda(t)=\alpha \mathrm{e}^{-t}-t+5 for an unknown constant \alpha. From the multiplier we get \mu=-\lambda'=\alpha \mathrm{e}^{-t}+1, which is non-negative on [0,3] provided \alpha\geq -1 (for negative \alpha the minimum of \mu is attained at t=0, where \mu(0)=\alpha+1).

The control is obtained by differentiating g=0 and substituting x'=u, which gives x'-1-u'=u-u'-1=0. The initial value u(0)=1 allows us to solve this differential equation, yielding u(t)=1. From g=0 we then obtain x(t)=t.
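The candidate u(t) = 1, x(t) = t can be verified directly on a grid (a sketch): it satisfies the active constraint g = 0 and the control bounds, and its cost is \int_0^3 (t-4)\,\mathrm{d}t = -15/2:

```python
# Grid check of the candidate u(t) = 1, x(t) = t (no libraries needed)
N = 3000
h = 3.0 / N

cost = 0.0
for i in range(N):
    t = (i + 0.5) * h
    x, u = t, 1.0                  # candidate state and control
    g = x - t - u + 1.0            # mixed state-control constraint
    assert abs(g) < 1e-12          # the constraint is active: g = 0
    assert 0.0 <= u <= 2.0         # control bounds
    cost += (t - 4.0) * u * h      # running cost, midpoint rule

print(cost)   # ~ -7.5, the integral of (t - 4) over [0, 3]
```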

That the candidate is a minimum follows from the Weierstrass condition \mathcal{H}(u)-\mathcal{H}(u^\star)>0, where u^\star is the optimal control and u is any other admissible control.

Observing that \mu g=0, the Weierstrass condition reduces to checking (t-4+\lambda)(u-u^\star)>0. Notice that t-4+\lambda=1+\alpha\mathrm{e}^{-t} is strictly positive for \alpha>-1, while any other admissible control satisfies u>u^\star=1, see Figure 6.

Figure 6: The plot of t-4+\lambda.

Dubins Car

The Dubins car model represents a simple car with three degrees of freedom, conceptualized as a rigid body moving in a two-dimensional plane. In this model the wheels can roll and turn but cannot slide sideways, which is what complicates maneuvers like parallel parking.

If all wheels could rotate freely, parking would be a straightforward task.

The position of the car is described by the triplet (x, y, \theta) \in \mathbb{R}^2 \times \mathbb{S}, where x and y represent the coordinates in the plane, and \theta denotes the angle of the car relative to the x-axis.

From the geometry of this model (with certain constants simplified to unity), we can derive the following system of differential equations:

\begin{aligned} x' & = \cos(\theta),\\ y' & = \sin(\theta),\\ \theta' & = u. \end{aligned}

The objective is to drive the car from a specified position to the origin in minimum time. The control variable u, which represents the rate of change of the heading angle, is constrained to lie within the interval [-2, 2]. Therefore, the optimal control problem can be formulated as follows: \min T = \min \int_0^T 1 \,\mathrm{d}t \qquad s.t.\qquad |u|\leq 2 \qquad\mathrm{with}

\begin{aligned} x' & = \cos(\theta), & \quad x(0) &=4, & \quad & x(T)&=0,\\ y' & = \sin(\theta), & \quad y(0) &=0, & \quad & y(T)&=0,\\ \theta' & = u, & \quad \theta(0)&= \frac{\pi}{2}. \end{aligned}

The control variable u appears linearly in the equations, indicating the potential for a singular arc in the solution. The Hamiltonian for this problem is given by: \mathcal{H}(x,y,\theta,\lambda_1,\lambda_2,\lambda_3,u) = 1+\lambda_1\cos(\theta)+\lambda_2\sin(\theta)+\lambda_3u.

From the PMP,

u=\mathop{\textrm{argmin}}\limits_{u\in[-2,2]} \mathcal{H}(x,y,\theta,\lambda_1,\lambda_2,\lambda_3,u)= \mathop{\textrm{argmin}}\limits_{u\in[-2,2]} \lambda_3 u

therefore we can write

u=\left\{ \begin{array}{ccl} 2 & \mathrm{ if } & \lambda_3 <0,\\ ? & \mathrm{ if } & \lambda_3 =0,\\ -2 & \mathrm{ if } & \lambda_3 >0.\\ \end{array} \right.

The equations of the costates are derived from the Hamiltonian,

\begin{aligned} -\lambda_1' & = \dfrac{\partial \mathcal{H}} {\partial x} = 0,\\ -\lambda_2' & = \dfrac{\partial \mathcal{H}} {\partial y} = 0,\\ -\lambda_3' & = \dfrac{\partial \mathcal{H}} {\partial \theta} = -\lambda_1\sin\theta+\lambda_2\cos\theta. \\ \end{aligned}

From the previous equations, the multipliers \lambda_1 and \lambda_2 are real constants. Within a singular arc \lambda_3 \equiv 0, and hence \lambda_3'' = 0 as well. Differentiating \lambda_3' once more, we conclude that: \lambda_3''=\lambda_1\theta'\cos\theta+\lambda_2\theta'\sin\theta = \lambda_1u\cos\theta+\lambda_2u\sin\theta=0.

In the singular arc we therefore have the condition u(\lambda_1 \cos \theta + \lambda_2 \sin \theta) = 0, which implies u = 0: the alternative \lambda_1 \cos\theta + \lambda_2 \sin\theta = 0, together with \lambda_3' = \lambda_1\sin\theta - \lambda_2\cos\theta = 0, would force \lambda_1 = \lambda_2 = 0 and hence \mathcal{H} = 1 \neq 0. Given this information about u, we can reconstruct the singular arc. Since \theta' = u = 0, it follows that \theta(t) = K, where K is a constant. Consequently, this leads to:

x' = \cos K \quad \text{and} \quad y' = \sin K.

Integrating these equations yields the singular arc:

\begin{aligned} x(t) &= t \cos K + K_1, \\ y(t) &= t \sin K + K_2, \end{aligned}

where K_1 and K_2 are two real constants.

Next, we analyze the non-singular arc, where we have:

\theta(t) = -2\,t\,\mathrm{sign}(\lambda_3) + k_3.

Using the initial condition \theta(0) = \frac{\pi}{2}, we can solve for k_3:

k_3 = \frac{\pi}{2}.

Letting m = -2\,\mathrm{sign}(\lambda_3), the associated arc becomes:

\begin{aligned} x'(t) & = \cos(mt+k_3) \quad\implies & x(t) & = \frac{1}{m}\sin(mt+\pi/2)+k_1,\\ y'(t) & = \sin(mt+k_3) \quad\implies & y(t) & = -\frac{1}{m}\cos(mt+\pi/2)+k_2. \end{aligned}

We now assume that there is only a singular arc in the interval (t_A, T], where t_A represents the unknown switching instant. This assumption is supported by the fact that the final condition for \theta is free, leading to the conclusion that the associated multiplier \lambda_3 is zero at t = T.

Consequently, the arc is non-singular in the interval [0, t_A]. By examining the initial conditions, we can infer that \lambda_3 < 0 on this interval, so that u = 2 and m = 2. This results in:

k_1 = 4 - \frac{1}{2} = \frac{7}{2} \quad \text{and} \quad k_2 = \frac{1}{2}\cos\left(\frac{\pi}{2}\right) = 0.

Thus, the associated trajectory is given by: \begin{aligned} x(t)&=\dfrac{1}{2}\cos(2t)+\dfrac{7}{2}, \\ y(t)&=\dfrac{1}{2}\sin(2t), \\ \theta(t)&=2t+\dfrac{\pi}{2}. \end{aligned}

We can now connect the first arc with the second, as the trajectory is a continuous function at t = t_A. This ensures a smooth transition between the two arcs. \begin{aligned} x(t_A) &=t_A\cos K + K_1=\dfrac{1}{2}\cos(2t_A)+\dfrac{7}{2},\\ y(t_A) &=t_A\sin K +K_2 = \dfrac{1}{2}\sin(2t_A),\\ \theta(t_A) &= K = 2t_A+\dfrac{\pi}{2}. \end{aligned}

We can also impose the final conditions

\begin{aligned} x(T) &=T\cos K + K_1=0,\\ y(T) &=T\sin K +K_2 = 0. \end{aligned}

This forms a nonlinear system of five equations involving the five unknowns K, K_1, K_2, t_A, and T. We will solve this system next.

By performing some algebraic manipulations, we can simplify the relationships among K, K_1, and K_2. This reduction leads us to two equations with two unknowns: T and t_A. \begin{aligned} -T\sin(2t_A)+t_A\sin(2t_A)+\dfrac{1}{2}\cos(2t_A)+\dfrac{7}{2}&=0,\\ T\cos(2t_A)-t_A\cos(2t_A)+\dfrac{1}{2}\sin(2t_A)&= 0. \end{aligned}

Multiplying the first equation by \cos(2t_A), the second by \sin(2t_A), and adding them eliminates T - t_A; multiplying the first by \sin(2t_A) and subtracting the second multiplied by \cos(2t_A) isolates it. We obtain: \begin{aligned} \dfrac{1}{2}+\dfrac{7}{2}\cos(2t_A)&=0,\\ -T+t_A+\dfrac{7}{2}\sin(2t_A)&= 0. \end{aligned}

From this pair of expressions, it becomes straightforward to solve for the switching instant t_A and the final (minimum) time T. Consequently, we can also determine the three constants K, K_1, and K_2. \begin{aligned} t_A & = \dfrac{\pi}{2}-\dfrac{1}{2}\arccos\dfrac{1}{7} &\approx & 0.857071948,\\ T & = \dfrac{\pi}{2}-\dfrac{1}{2}\arccos\dfrac{1}{7}+2\sqrt{3} & \approx & 4.321173564, \\ K & = \dfrac{3\pi}{2}-\arccos\dfrac{1}{7} & \approx & 3.284940223, \\ K_1 & = \dfrac{2\sqrt{3}}{7}\left(\pi-\arccos\dfrac{1}{7}+4\sqrt{3}\right) & \approx & 4.276852666, \\ K_2 & = \dfrac{\pi}{14}-\dfrac{1}{14}\arccos\dfrac{1}{7}+\dfrac{2\sqrt{3}}{7} & \approx & 0.6173105091. \end{aligned}
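These closed-form constants can be checked against the five matching conditions (a numerical sketch; all residuals should vanish):

```python
import math

acos7 = math.acos(1.0 / 7.0)
tA = math.pi / 2.0 - acos7 / 2.0
T  = tA + 2.0 * math.sqrt(3.0)
K  = 3.0 * math.pi / 2.0 - acos7
K1 = 2.0 * math.sqrt(3.0) / 7.0 * (math.pi - acos7 + 4.0 * math.sqrt(3.0))
K2 = math.pi / 14.0 - acos7 / 14.0 + 2.0 * math.sqrt(3.0) / 7.0

residuals = (
    # continuity at t = tA between the circular and the straight arc
    tA * math.cos(K) + K1 - (0.5 * math.cos(2.0 * tA) + 3.5),
    tA * math.sin(K) + K2 - 0.5 * math.sin(2.0 * tA),
    K - (2.0 * tA + math.pi / 2.0),
    # final conditions at t = T
    T * math.cos(K) + K1,
    T * math.sin(K) + K2,
)
print([round(r, 12) for r in residuals])   # all ~0
```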

These constants allow us to fully characterize the state of the system at any time t \in [0, T]. Next, we need to specify the costate. We have already established that \lambda_1 and \lambda_2 are constant. For \lambda_3, we have the differential equation given by

-\frac{\partial \mathcal{H}}{\partial \theta} = 0

for the singular arc. To set up a nonlinear system for the unknowns \lambda_1 and \lambda_2, we require two additional equations. One is the Hamiltonian itself: since the problem is autonomous and the final time is free, \mathcal{H} is identically zero. The other is the expression of the multiplier \lambda_3 on the interval [0, t_A]:

\lambda_3(t) = -\frac{\lambda_1}{2}\cos(2t + k_3) - \frac{\lambda_2}{2}\sin(2t + k_3) + \lambda_3(0).

Here, we introduce a further unknown, \lambda_3(0), but we do not require an additional equation for it. (Strictly, \lambda_3(0) plays the role of an integration constant: since \cos k_3 = 0, the true initial value of the multiplier is \lambda_3(0) - \lambda_2/2.) Consequently, the nonlinear system at t = t_A is given by:

\begin{aligned} \mathcal{H} &= 1+\lambda_1\cos(\theta)+\lambda_2\sin(\theta)+\lambda_3u = 0,\\ \lambda_3'(t_A) & = -\dfrac{\partial \mathcal{H}} {\partial \theta} = \lambda_1\sin\theta-\lambda_2\cos\theta=0,\\ 0 = \lambda_3(t_A) & = -\dfrac{\lambda_1}{2}\cos(2t_A+k_3)-\dfrac{\lambda_2}{2}\sin(2t_A+k_3)+\lambda_3(0). \end{aligned}

The solution of this system gives the following expressions,

\begin{aligned} \lambda_1 & = -\cos K = \dfrac{4\sqrt{3}}{7} &\approx & 0.9897433188 ,\\ \lambda_2 & = -\sin K = \dfrac{1}{7}&\approx & 0.1428571429 ,\\ \lambda_3(0) & = -\dfrac{1}{2}. \end{aligned}

Figure 7 shows the plots of the state, costate, control, and trajectory.

Figure 7: Variational solution to the Dubins car problem. On the top left the states, on the top right the costates; below, the control and the trajectory.

Four Singular Optimal Control Problems

In this chapter, we present four optimal control problems (OCPs) that feature singular arcs. Three of these examples are drawn from the PROPT handbook, which accompanies a commercial software tool for solving OCPs.

Most of these problems are discussed in Luus (2000), though only a few provide exact solutions. Here, we present the analytical solutions to some of these problems, enabling a precise comparison with the numerical solutions generated by the software used for benchmarking.

Optimal Control Problem with Singular Controls

In optimal control theory, when the Hamiltonian \mathcal{H} is linear in the control variable u, the solution may exhibit discontinuities. If the switching function associated with the control vanishes only at isolated points, that is, the coefficient of u in \mathcal{H} equals zero only at isolated times, the control is classified as bang-bang. A bang-bang control always takes extreme values from the control set.

Conversely, if the coefficient of u in \mathcal{H} vanishes over an entire interval, the control is termed singular. In this case, the choice of u must be derived from information beyond Pontryagin's maximum principle. The instants when the optimal control transitions between extreme values or onto singular arcs are referred to as switch times, which can be challenging to identify.

Singular 1

Consider the problem of minimizing the functional:

\begin{aligned} \text{minimize:} \quad & \int_{-1}^1 (x - 1 + t^2)^2 \,\mathrm{d}t, \\ \text{subject to:} \quad & x' = u, \\ \text{with control bounded by:} \quad & |u| \leq 1. \end{aligned}

The Hamiltonian for this problem is given by:

\mathcal{H} = (x - 1 + t^2)^2 + \lambda u + p_\varepsilon(u),

where p_\varepsilon(u) is a penalty function designed to enforce the control constraint u \in [-1, 1]. An example of such a function is

p_\varepsilon(u) = -\varepsilon \ln\left(\cos\left(\frac{\pi}{2} u\right)\right).

As \varepsilon \to 0, this function tends to zero in the interior of [-1, 1] and to infinity as |u| \to 1, so the control bound is enforced. Note that without the penalty the Hamiltonian is linear in u, and the stationarity condition of the Pontryagin maximum principle alone would give no information about the control.

The stationarity condition for the optimal control problem is:

\frac{\partial \mathcal{H}}{\partial u} = \lambda + \frac{\partial p_\varepsilon(u)}{\partial u} = 0.

This leads to:

\begin{aligned} \lambda(t) &= -\varepsilon \frac{\pi}{2} \tan\left(\frac{\pi}{2} u(t)\right) \implies \\ u(t) &= \lim_{\varepsilon \to 0} -\frac{2}{\pi} \arctan\left(\frac{2 \lambda(t)}{\pi\varepsilon}\right) = -\mathrm{sign}(\lambda(t)). \end{aligned}

Thus, when \lambda \neq 0, the control takes values u = \pm 1. When \lambda = 0, the control is singular.
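The limiting behaviour of the penalized control can be illustrated with a short numerical sketch (plain Python; the function name u_eps is ours), which inverts the stationarity condition \lambda = -\varepsilon\frac{\pi}{2}\tan(\frac{\pi}{2}u) for decreasing values of \varepsilon:

```python
import math

# Penalized stationary control: invert lambda = -eps*(pi/2)*tan(pi*u/2).
# As eps -> 0 the control approaches -sign(lambda).
def u_eps(lam, eps):
    return -(2.0 / math.pi) * math.atan(2.0 * lam / (math.pi * eps))

for eps in (1e-1, 1e-3, 1e-6):
    print(eps, u_eps(0.5, eps))   # tends to -1 as eps shrinks
```

For any fixed \lambda \neq 0, the printed values approach the bang value \mp 1, while u_eps(0, \varepsilon) = 0, matching the singular case.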

The state is not specified at the boundaries, leading to the transversality conditions \lambda(-1) = \lambda(1) = 0. Differentiating the Hamiltonian with respect to x provides information about the multiplier \lambda:

\lambda' = -\frac{\partial \mathcal{H}}{\partial x} = -2(x - 1 + t^2).

Integrating this expression yields:

\lambda(t) = -2\int_{-1}^t (x(s) - 1 + s^2) \,\mathrm{d}s + k, \tag{10}

where k \in \mathbb{R} can be determined using the initial condition \lambda(-1) = 0:

0 = \lambda(-1) = -2\int_{-1}^{-1} (x(s) - 1 + s^2) \,\mathrm{d}s + k \implies k = 0.

Substituting k = 0 into Equation 10 allows us to write:

\begin{aligned} 0 &= \lambda(1) = -2\int_{-1}^{1} (x(s) - 1 + s^2) \,\mathrm{d}s \implies \\ & \int_{-1}^{1} (x(s) - 1 + s^2) \,\mathrm{d}s = 0. \end{aligned}

For t in the interval [t_A, t_B] \subset [-1, 1], where \lambda(t) = 0, we have:

\begin{aligned} 0 &= \lambda(t) \\ &= -2\int_{-1}^{t_A} (x(s) - 1 + s^2)\,\mathrm{d}s - 2\int_{t_A}^{t} (x(s) - 1 + s^2)\,\mathrm{d}s \\ &= \lambda(t_A) - 2\int_{t_A}^{t} (x(s) - 1 + s^2)\,\mathrm{d}s. \end{aligned} \tag{11}

Differentiating the right-hand side gives:

\begin{aligned} & 0 = \frac{\mathrm{d}}{\mathrm{d}t}\left(-2\int_{t_A}^{t} (x(s) - 1 + s^2)\,\mathrm{d}s\right) \\ & \Downarrow \\ & x(t) - 1 + t^2 = 0, \quad t \in [t_A, t_B]. \end{aligned}

Thus, when the control is singular, we find that

x(t) = 1 - t^2,

and from the dynamics of the problem,

u(t) = -2t \quad \text{for } t \in [t_A, t_B].

It is important to note that for |t| > 1/2, the control u(t) = -2t does not satisfy the imposed bounds. Therefore, we need to identify two switching times, t_A and t_B, such that x' = u and |u| \leq 1. For t \in [-1, t_A), we have u(t) = 1, and for t \in (t_B, 1], we have u(t) = -1. The corresponding state equations are then given by:

  • For t < t_A: x(t) = t + a
  • For t > t_B: x(t) = -t + b.

From Equation 11 and the fact that x(t) = 1 - t^2 for t \in [t_A, t_B], it follows that \lambda(t_A) = 0, i.e.,

\begin{aligned} 0=\lambda(t_A)& = -2\int_{-1}^{t_A} (x(s)-1+s^2)\,\mathrm{d}s\\ & = -2 \int_{-1}^{t_A} (s+a-1+s^2)\,\mathrm{d}s\\ & = -2\left[\dfrac{1}{3}(t_A+1)\left(t_A^2+\dfrac{1}{2}t_A-\dfrac{7}{2}+3a\right)\right]. \end{aligned}

This expression is equal to zero for

a= -\dfrac{1}{3}t_A^2-\dfrac{1}{6}t_A+\dfrac{7}{6} \implies x(t)=t-\dfrac{1}{3}t_A^2-\dfrac{1}{6}t_A+\dfrac{7}{6}. \tag{12}

Now, because x is continuous, at the switching point we must have t_A + a = 1 - t_A^2. Solving this equation yields t_A = -\frac{1}{4} and a = \frac{19}{16}.
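These values can be double-checked with exact rational arithmetic; a minimal sketch in Python:

```python
from fractions import Fraction as F

t_A = F(-1, 4)
# Equation 12: a as a function of the switching time t_A
a = -F(1, 3) * t_A**2 - F(1, 6) * t_A + F(7, 6)
assert a == F(19, 16)
# continuity of x at the switching point: t_A + a = 1 - t_A^2
assert t_A + a == 1 - t_A**2
print(t_A, a)
```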

A similar argument yields b and t_B. Indeed,

\begin{aligned} 0&=\lambda(1)=\lambda(t_B)-2\int_{t_B}^1\big(x(s)-1+s^2\big)\,\mathrm{d}s \implies \\ &\quad\int_{t_B}^1\big(x(s)-1+s^2\big)\,\mathrm{d}s = 0. \end{aligned}

Using the expression x(t)=-t+b for t\in(t_B,1], the previous integral yields

\begin{aligned} 0&=\int_{t_B}^1\big(-s+b-1+s^2\big)\,\mathrm{d}s \\ &=-\dfrac{1}{3}(t_B-1)\left(t_B^2-\dfrac{1}{2}t_B-\dfrac{7}{2}+3b\right), \end{aligned}

which yields the analogue of Equation 12:

b= -\dfrac{1}{3}t_B^2+\dfrac{1}{6}t_B+\dfrac{7}{6} \implies x(t)=-t-\dfrac{1}{3}t_B^2+\dfrac{1}{6}t_B+\dfrac{7}{6}. \tag{13}

Again, continuity of x requires -t_B + b = 1 - t_B^2, which implies t_B = \frac{1}{4} and b = \frac{19}{16}. This also shows the symmetry of the solution. With this optimal control u and state x, the minimum of the functional is

\begin{aligned} \int_{-1}^1(x-1+t^2)^2\,\mathrm{d}t &= \int_{-1}^{-1/4}\left(t+\dfrac{19}{16}-1+t^2\right)^2\,\mathrm{d}t\\ &+\int_{1/4}^{1}\left(-t+\dfrac{19}{16}-1+t^2\right)^2\,\mathrm{d}t\\ & = \dfrac{9}{1280}= 0.00703125. \end{aligned}

Collecting the three phases, the analytical solution of the optimal control problem is:

\begin{aligned} x(t) & = \left\{ \begin{aligned} t+a & \textrm{ for } & t\in[-1,t_A)& = \left[-1,-\frac{1}{4}\right)\\ 1-t^2 & \textrm{ for } & t\in[t_A,t_B]& = \left[-\frac{1}{4},\frac{1}{4}\right],\\ -t+b & \textrm{ for } & t\in(t_B,1]& = \left(\frac{1}{4},1\right] \end{aligned}\right. \quad a=b=\frac{19}{16},\\[1em] % u(t)& = \left\{ \begin{aligned} 1 & \textrm{ for } & t\in[-1,t_A) & = \left[-1,-\frac{1}{4}\right)\\ -2t & \textrm{ for } & t\in[t_A,t_B] & = \left[-\frac{1}{4},\frac{1}{4}\right]\\ -1 & \textrm{ for } & t\in(t_B,1] & = \left(\frac{1}{4},1\right] \end{aligned}\right.\\[1em] \lambda(t)& = \left\{ \begin{aligned} -\frac{2}{3}t^3-t^2-\frac{3}{8}t-\frac{1}{24} & \textrm{ for } & t\in[-1,t_A)& = \left[-1,-\frac{1}{4}\right)\\ 0 & \textrm{ for } & t\in[t_A,t_B]& = \left[-\frac{1}{4},\frac{1}{4}\right],\\ -\frac{2}{3}t^3+t^2-\frac{3}{8}t+\frac{1}{24} & \textrm{ for } & t\in(t_B,1]& =\left(\frac{1}{4},1\right]. \end{aligned}\right. \end{aligned}
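As a sanity check, the cost of this piecewise solution can be integrated numerically; the following sketch (plain Python, composite Simpson rule, no external libraries) reproduces the analytical value 9/1280:

```python
# Integrate the cost (x - 1 + t^2)^2 over [-1, 1] along the piecewise
# optimal state of the Singular 1 problem and compare with 9/1280.

def x_opt(t, tA=-0.25, tB=0.25, a=19/16, b=19/16):
    if t < tA:
        return t + a        # first bang arc, u = +1
    if t <= tB:
        return 1 - t * t    # singular arc, u = -2t
    return -t + b           # last bang arc, u = -1

def cost(n=20000):
    # composite Simpson rule on [-1, 1]
    f = lambda t: (x_opt(t) - 1 + t * t) ** 2
    h = 2.0 / n
    s = 0.0
    for i in range(n):
        t0, t1 = -1 + i * h, -1 + (i + 1) * h
        s += h / 6 * (f(t0) + 4 * f((t0 + t1) / 2) + f(t1))
    return s

print(cost())   # approximately 0.00703125 = 9/1280
```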

A graphical representation of the state x, the control u and the costate \lambda is shown in Figure 8.

Figure 8: Variational solution to the singular arc problem. On the left the trajectory of the state x(t), in the middle the control u(t) and on the right the costate \lambda(t).

Figure 9 shows the numerical solution obtained with ACADO.

Figure 9: Numerical solution given by ACADO.

Singular 2

This example is relatively straightforward from an analytical standpoint and could be approached using intuition alone. However, it has been explored by several researchers, including Jacobson, Chen, Huang, and Luus (refer to Example 10.2.1), who have shown that achieving a reliable numerical solution is more challenging than it initially appears. The dynamical system is characterized by two equations and a functional:

\begin{aligned} \textrm{minimize:}\quad & x_{2}(T) \\ \textrm{subject to the dynamics:}\quad & x_{1}'(t)=u(t), \\ & x_{2}'(t)=\dfrac{1}{2}x_{1}(t)^{2}, \\ \textrm{with control bounded by:}\quad & |u|\leq 1. \end{aligned}

The boundary conditions are x_1(0) = 1 and x_2(0) = 0; since both states are free at the final time T = 2 and the cost is x_2(T), the transversality conditions are \lambda_1(T) = 0 and \lambda_2(T) = 1. The Hamiltonian for this problem is given by: \mathcal{H}=\lambda_{1} u+\dfrac{1}{2}\lambda_{2}x_{1}^{2}.

The costate equations are derived from the Hamiltonian:

\begin{aligned} -\lambda_1' & = \dfrac{\partial \mathcal{H}} {\partial x_{1}} = \lambda_{2}x_{1},\\ -\lambda_2' & = \dfrac{\partial \mathcal{H}} {\partial x_{2}} = 0. \end{aligned}

Thus, \lambda_{2}(t) = 1 is constant. Assuming there is only one switching point, we can infer that the singular arc is the second one. On a singular arc the switching function \lambda_{1} vanishes identically, and differentiating it yields the singular control: from \lambda_{1}' = -x_{1} we obtain \lambda_{1}'' = -x_{1}' = -u = 0, so the control is zero within the singular arc.

Next, we can integrate the dynamical system to determine both the state and the costate. The resulting expressions are:

\begin{aligned} x_{1}(t) & = K, \\ x_{2}(t) & = \dfrac{1}{2}K^{2}t+K_{2},\\ \lambda_1(t) & = -Kt+2K,\\ \lambda_2(t) & = 1, \end{aligned}

for some constants K and K_2 that need to be determined, and for t \in [t_A, T], where t_A is the unknown switching time.

In the first arc, the control is derived using Pontryagin’s maximum principle, resulting in a constant control of u(t) = -1 for t \in [0, t_A). By applying the initial conditions, we can solve for the state, yielding: \begin{aligned} x_{1}(t) & = -t+1, \\ x_{2}(t) & = \dfrac{1}{6}t^{3}-\dfrac{1}{2}t^{2}+\dfrac{1}{2}t,\\ \lambda_1(t) & = \dfrac{1}{2}t^{2}-t+\lambda_{0},\\ \lambda_2(t) & = 1, \end{aligned}

for t \in [0, t_A). By applying the continuity of the states at the switching point, we can establish a nonlinear system involving the unknowns K, K_2, \lambda_0, and t_A. This system yields two solutions: one lacks physical significance, while the other provides t_A = 1, K = 0, K_2 = \frac{1}{6}, and \lambda_0 = \frac{1}{2}. Consequently, the corresponding value for x_2(T) is \frac{1}{6}. The plots of the system are illustrated in Figure 10.
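The claimed solution can also be checked by a forward simulation of the dynamics under the piecewise control u = -1 on [0, t_A) and u = 0 on [t_A, T]; a rough explicit-Euler sketch (step size and function name are ours):

```python
# Simulate x1' = u, x2' = x1^2/2 with u = -1 before t_A = 1, u = 0 after,
# and compare the final cost x2(2) with the analytical value 1/6.
def simulate(n=200_000, T=2.0, t_A=1.0):
    h = T / n
    x1, x2, t = 1.0, 0.0, 0.0
    for _ in range(n):
        u = -1.0 if t < t_A else 0.0
        x1, x2, t = x1 + h * u, x2 + h * 0.5 * x1 * x1, t + h
    return x2

print(simulate())   # close to 1/6
```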

Figure 10: Variational solution to Singular Problem 1. On the left the states, on the right the costates and the control.

Singular 3

Constrained 1d vehicle

In this section, we analyze the model of a one-dimensional vehicle. We explore two types of control strategies: a cost linear in the jerk and a cost quadratic in the jerk. The linear cost results in both singular and bang-bang arcs, while the quadratic cost provides a smoother control profile.

The optimal control problem can be formulated as follows: \min \int_0^T \big(1+ wJ^2\big) \,\mathrm{d}t

where w is set to zero in the first case, and w=0.1 in the second case, subject to the following dynamic and path constraints:

\begin{aligned} s' & = v\\ v' & = a(p)\\ p' & = J \end{aligned}

with boundary conditions:

\begin{aligned} s(0) & = 0, \quad & s(T) & = 200\\ v(0) & = 10, \quad & v(T) & = 10\\ p(0) & = 0, \quad & p(T) & = \mathrm{free}.\\ \end{aligned}

The path constraint is

a_{\min} -a(p) \leq 0, \qquad a(p)= p\cdot \begin{cases} a_1 & p\geq 0,\\ a_2 & p<0. \end{cases}

where the constants are a_{\min} =-5, a_1=3 and a_2=10. The state p is constrained in [-1,1] and the control J\in[-1,1].
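The piecewise acceleration map and the path constraint translate directly into code; a minimal sketch (the function names are ours, constants as given above):

```python
A_MIN, A1, A2 = -5.0, 3.0, 10.0

def a(p):
    # a(p) = p*a1 for p >= 0 and p*a2 for p < 0
    return p * (A1 if p >= 0 else A2)

def feasible(p):
    # state bound p in [-1, 1] and path constraint a_min - a(p) <= 0
    return -1.0 <= p <= 1.0 and A_MIN - a(p) <= 0.0

print(a(1.0), a(-1.0))   # 3.0 -10.0
```

Note that the path constraint is active well inside the state bound: a(p) = a_min already at p = -0.5, so values p < -0.5 are excluded by the acceleration limit rather than by the bound on p.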

Jerk as quadratic control

Figure 11: In red, green and blue respectively, on top the three states s, v and p, below the corresponding multipliers \lambda_1, \lambda_2, \lambda_3.

Figure 12: On the left the control J, on the right the acceleration a(t).

Singular 4

Dadebo example 1

Here we analyze a problem considered by many researchers, as stated in the paper by Dadebo and McAuley.

\textrm{minimize:}\quad y(1)

subject to the dynamical evolution

\begin{aligned} x' &=u\\ y' & = x^2+u^2, \end{aligned}

with boundary conditions x(0)=x(1)=1 and y(0)=0.

The Hamiltonian for this problem is

\mathcal{H}=\lambda u +\mu (x^2+u^2).

The adjoint equations are given by \lambda' = -2\mu x and \mu' = 0, with the boundary condition \mu(1) = 1. The control is defined as u = -\frac{\lambda}{2\mu}. Consequently, the multiplier \mu is constant, so we have \mu(t) = 1.

Differentiating the equation for \lambda', we obtain \lambda'' = -2x' = -2u = \lambda. This linear second-order equation has the general solution:

\lambda(t) = \alpha e^t + \beta e^{-t},

where \alpha and \beta are unknown constants. From the multiplier \lambda, we can express the control as:

u(t) = -\frac{\alpha}{2} e^t - \frac{\beta}{2} e^{-t}.

Finally, the state x is determined by the differential equation x' = u, yielding:

x(t) = -\frac{\alpha}{2} e^t + \frac{\beta}{2} e^{-t} + c,

where c is a constant. The state y can be obtained from the equation y' = x^2 + u^2, which results in:

\begin{aligned} x^2 & = \dfrac{\alpha^2}{4}\mathrm{e}^{2t}+\dfrac{\beta^2}{4}\mathrm{e}^{-2t}+c^2-\dfrac{\alpha\beta}{2}-\alpha c \mathrm{e}^t+\beta c \mathrm{e}^{-t}\\ u^2 & = \dfrac{\alpha^2}{4}\mathrm{e}^{2t}+\dfrac{\beta^2}{4}\mathrm{e}^{-2t}+\dfrac{\alpha\beta}{2}\\ x^2+u^2 & = \dfrac{\alpha^2}{2}\mathrm{e}^{2t}+\dfrac{\beta^2}{2}\mathrm{e}^{-2t}+c^2 -\alpha c \mathrm{e}^t+\beta c \mathrm{e}^{-t}. \end{aligned}

Therefore, by integrating the last equation, we can compute the state y, which is given by: y(t)=\dfrac{\alpha^2}{4}\mathrm{e}^{2t}-\dfrac{\beta^2}{4}\mathrm{e}^{-2t}+c^2t-\alpha c \mathrm{e}^t-\beta c \mathrm{e}^{-t}+k

for an unknown constant k, which can be determined using the boundary conditions. We can find the four constants \alpha, \beta, c, and k by solving the following nonlinear system:

\begin{aligned} x(0) &=-\dfrac{\alpha}{2}+\dfrac{\beta}{2}+c=1\\ x(1) &= -\dfrac{\alpha}{2}e+\dfrac{\beta}{2}\mathrm{e}^{-1}+c=1\\ y(0) &= \dfrac{\alpha^2}{4} -\dfrac{\beta^2}{4} -\alpha c -\beta c +k = 0\\ \mathcal{H}(t=0)&=\mathcal{H}(t=1) \quad\implies\quad -\alpha c+\beta c = -\alpha ce+\beta c\mathrm{e}^{-1}. \end{aligned}

The last equation arises from the constancy of the Hamiltonian along an optimal control and can be simplified to c (\beta - \alpha) = c (\beta e^{-1} - \alpha e). Thus, if c \neq 0, we find that \alpha = -\beta = 0 and c = 1. Substituting these values into the third equation of the nonlinear system yields k = 0, resulting in one extremal solution given by x(t) = 1, y(t) = t, and u(t) = 0. Consequently, the minimized functional is y(1) = 1.

However, the nonlinear system also admits another solution when c = 0. From the first equation, we have \beta = \alpha + 2, which, when substituted into the second equation, gives \alpha = \frac{-2}{1 + e} and \beta = \frac{2e}{1 + e}. Plugging these values into the third equation results in k = \frac{e - 1}{e + 1}.

This alternative solution yields a lower minimized functional compared to the previous one, achieving an optimal value of:

y(1)=2\dfrac{e-1}{e+1}\approx 0.9242343144.
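The closed-form constants of this second extremal can be checked numerically; a short sketch (plain Python):

```python
import math

# Second extremal (c = 0) of the Dadebo-McAuley problem: evaluate y(t)
# and compare the cost y(1) with the closed form 2(e-1)/(e+1).
e = math.e
alpha, beta, c = -2.0 / (1.0 + e), 2.0 * e / (1.0 + e), 0.0
k = (e - 1.0) / (e + 1.0)

def y(t):
    return (alpha**2 / 4) * math.exp(2 * t) \
         - (beta**2 / 4) * math.exp(-2 * t) \
         + c * c * t - alpha * c * math.exp(t) \
         - beta * c * math.exp(-t) + k

print(y(0.0), y(1.0))   # y(0) = 0 and y(1) about 0.9242343
```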

Free time problems

Example 1

Given the free time optimal control problem

\begin{aligned} \textrm{minimize:} \qquad & \int_0^T \Big(\big(1+t^2\big) u^2 + y\Big)\, \mathrm{d}t \\ \textrm{subject to:} \qquad & x' = y + u \qquad y' = xy \\ & x(0) = 0\qquad y(T) = 1 \end{aligned}

the first step is to rewrite the problem in a form where the integration bounds are fixed. We introduce a new independent variable \zeta, arbitrarily defined on the domain [0,1], via the transformation \zeta = t/T. The optimal control problem can then be stated as

\begin{aligned} \textrm{minimize:} \quad & \int_0^1 \Big( \big(1+\zeta^2 T(\zeta)^2\big) u^2(\zeta) + y(\zeta) \Big) T(\zeta)\mathrm{d}\zeta \\ \textrm{subject to:} \quad & \left\{ \begin{aligned} x'(\zeta) & = T(\zeta)\big( y(\zeta) + u(\zeta)\big) \\ y'(\zeta) & = T(\zeta)\,x(\zeta)\,y(\zeta) \\ T'(\zeta) & = 0 \end{aligned} \right. \\ \textrm{and BC:} \quad & \left\{ \begin{aligned} x(0) & = 0 \\ y(1) & = 1 \end{aligned} \right. \end{aligned}

The Hamiltonian \mathcal{H} associated with this problem is then

\mathcal{H}\left( \begin{pmatrix}x\\y\\T\end{pmatrix}, \begin{pmatrix}\lambda_1\\\lambda_2\\\lambda_3\end{pmatrix},u,\zeta\right) = T \big(1+\zeta^2 T^2\big) u^2 + T y - \lambda_1 T \big(y+u\big) - \lambda_2 T x y + 0 \lambda_3

Combining the original ordinary differential equations of the problem with the adjoint equations, the system that needs to be solved is

\left\{ \begin{aligned} x'(\zeta) & = T(\zeta)\big( y(\zeta) + u(\zeta)\big) \\ y'(\zeta) &= T(\zeta) x(\zeta)y(\zeta) \\ T'(\zeta) &= 0 \\ \lambda_1'(\zeta) &= -\dfrac{\partial\mathcal{H}}{\partial x}= \lambda_2(\zeta) T(\zeta) y(\zeta) \\ \lambda_2'(\zeta) &= -\dfrac{\partial\mathcal{H}}{\partial y}= -T(\zeta) +\lambda_1(\zeta)T(\zeta) +\lambda_2(\zeta) T(\zeta) x(\zeta) \\ \lambda_3'(\zeta) &= -\dfrac{\partial\mathcal{H}}{\partial T} = -u^2(1+3\zeta^2T^2) - y + \lambda_1(y+u) + \lambda_2 x y \end{aligned} \right.

on which the boundary conditions are

\begin{aligned} x(0) &= 0 \\ y(1) &= 1 \\ \lambda_1(1) &= 0 \\ \lambda_2(0) &= 0 \\ \lambda_3(0) &= \lambda_3(1) = 0 \end{aligned}

We also need to consider the control law

\frac{\partial\mathcal{H}}{\partial u}

that becomes

2T\big( 1 + \zeta^2 T^2\big)u - \lambda_1 T = 0

and so the control u can be made explicit as

u(\zeta) = \frac{\lambda_1(\zeta)}{2 \big(1 + \zeta ^2T(\zeta)^2\big)}

Example 2 (exam simulation)

Given the optimal control problem in the control u and states x,y

\begin{aligned} \textrm{minimize:} \qquad & \int_0^1 \big(1+x^2\big) u^2 \, \mathrm{d}t \\ \textrm{subject to:} \qquad & x' = xy - u \\ & y' = x + u \\ & \int_0^1\big(x+u\big) \,\mathrm{d}t = 2 \\ & y(0) = 2 \end{aligned}

the first thing to do is remove the integral constraint by adding a new state variable z such that

z' = x+u

with boundary conditions

z(0) = 0\quad\textrm{and}\quad z(1) =2

with that the problem becomes

\begin{aligned} \textrm{minimize:} \qquad & \int_0^1 \big(1+x^2\big) u^2 \, \mathrm{d}t \\ \textrm{subject to:} \qquad & x' = xy - u \\ & y' = x + u \\ & z' = x+u \\ & y(0) = 2 \quad z(0) = 0 \quad z(1) = 2 \end{aligned}

We can now compute the Hamiltonian \mathcal{H} and the utility function \mathcal{B} of the problem as

\begin{aligned} \mathcal{H}\left( \begin{pmatrix}x\\y\\z\end{pmatrix}, \begin{pmatrix}\lambda_1\\\lambda_2\\\lambda_3\end{pmatrix}, u, t \right) & = \big(1+x^2\big)u^2 + \lambda_1\big(xy -u\big) + \lambda_2 \big(x+u\big) + \lambda_3 \big(x+u\big) \\ \mathcal{B}\big(y_0,z_0,z_1, \mu_1, \mu_2,\mu_3\big) & = \mu_1\big(y_0-2\big) + \mu_2 z_0 + \mu_3\big(z_1-2\big) \end{aligned}

The control obtained from the Pontryagin maximum principle is the value of u satisfying \frac{\partial\mathcal{H}}{\partial u}= 0, and so we have

\begin{aligned} \frac{\partial\mathcal{H}}{\partial u} &= 2\big(1+x^2\big)u -\lambda_1 + \lambda_2 +\lambda_3\\ & \Downarrow \\ u(t) &= \frac{\lambda_1(t) - \lambda_2(t) - \lambda_3(t)}{2\big(1+x(t)^2\big)} \end{aligned}

The additional adjoint ODE are described by the following derivatives:

\begin{aligned} \lambda_1' & = -\frac{\partial\mathcal{H}}{\partial x} = -2xu^2 - \lambda_1 y - \lambda_2 - \lambda_3 \\ \lambda_2' & = -\frac{\partial\mathcal{H}}{\partial y} = - \lambda_1 x \\ \lambda_3' & = -\frac{\partial\mathcal{H}}{\partial z} = 0 \end{aligned}

while the adjoint boundary conditions are

\begin{aligned} &\lambda_1(0) = - \frac{\partial\mathcal{B}}{\partial x_0} = 0 && \lambda_1(1) = \frac{\partial\mathcal{B}}{\partial x_1} = 0 \\ &\cancel{\lambda_2(0) = - \frac{\partial\mathcal{B}}{\partial y_0} = -\mu_1} && \lambda_2(1) = \frac{\partial\mathcal{B}}{\partial y_1} = 0 \\ &\cancel{\lambda_3(0)= - \frac{\partial\mathcal{B}}{\partial z_0} = -\mu_2} && \cancel{\lambda_3(1) = \frac{\partial\mathcal{B}}{\partial z_1} = \mu_3} \end{aligned}

Some expressions have been crossed out because they are trivially solved for the multipliers \mu_i and impose no constraint on the costates.

Thus the BVP becomes

\left\{\begin{aligned} & \left.\begin{aligned} x' & = xy-u\\ y' & = x+u \\ z' & = x+u \end{aligned}\right\} \quad\textrm{original ODE}\\ & \left.\begin{aligned} \lambda'_1 & = -2xu^2-\lambda_1 y -\lambda_2-\lambda_3\\ \lambda'_2 & = -\lambda_1 x\\ \lambda'_3 & = 0 \end{aligned}\right\} \quad\textrm{adjoint ODE}\\ & \left.\begin{aligned} y(0) & = 2 \\ z(0) & = 0 \\ z(1) & = 2 \end{aligned}\right\} \quad\textrm{original BC}\\ & \left.\begin{aligned} \lambda_1(0) & = 0 \\ \lambda_1(1) & = 0 \\ \lambda_2(1) &= 0 \end{aligned}\right\} \quad\textrm{adjoint BC} \\ u &= \frac{\lambda_1 - \lambda_2 - \lambda_3}{2\big(1+x^2\big)}\quad\textrm{Pontryagin} \end{aligned}\right.
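For a numerical solution, the BVP above maps directly onto a right-hand-side function with the control eliminated through the stationarity condition; a sketch in Python (the state ordering [x, y, z, \lambda_1, \lambda_2, \lambda_3] is our choice):

```python
def rhs(t, s):
    x, y, z, l1, l2, l3 = s
    # Pontryagin stationarity: u = (l1 - l2 - l3) / (2*(1 + x^2))
    u = (l1 - l2 - l3) / (2.0 * (1.0 + x * x))
    return [x * y - u,                            # x'
            x + u,                                # y'
            x + u,                                # z'
            -2.0 * x * u * u - l1 * y - l2 - l3,  # lambda_1'
            -l1 * x,                              # lambda_2'
            0.0]                                  # lambda_3'

print(rhs(0.0, [0.0, 2.0, 0.0, 0.0, 0.0, 1.0]))
```

Any two-point BVP solver (shooting or collocation) can then enforce the six boundary conditions listed above on this system.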

Example 3 (free time problem, 2 controls)

Given the free time optimal control problem in the controls u,w and states x,y

\begin{aligned} \textrm{minimize:} \qquad & \int_0^T \big(x^2 + y^2\big) \, \mathrm{d}t \\ \textrm{subject to:} \qquad & x' = y u \\ & y' = x + w \\ & x(0) = 0 \quad y(0) = 0 \quad y(T) = 1 \\ & u,w \in [-1,1] \end{aligned}

the first step is to transform it into a fixed-boundary optimal control problem through the change of variable t = \zeta\,T(\zeta) (with \zeta\in [0,1]), obtaining

\begin{aligned} \textrm{minimize:} \qquad & \int_0^1 T(\zeta)\big(x^2(\zeta) + y^2(\zeta)\big) \, \mathrm{d}\zeta \\ \textrm{subject to:} \qquad & x'(\zeta) = T(\zeta) y(\zeta) u(\zeta) \\ & y'(\zeta) = T(\zeta) \Big( x(\zeta) + w(\zeta) \Big) \\ & T'(\zeta) = 0 \\ & x(0) = 0 \quad y(0) = 0 \quad y(1) = 1 \\ & u(\zeta),w(\zeta) \in [-1,1] \end{aligned}

From now on, all functions are understood to depend on the independent variable \zeta, which is no longer written explicitly. With that said, we can compute the Hamiltonian \mathcal{H} and the utility function \mathcal{B} of the problem as

\begin{aligned} \mathcal{H}\left( \begin{pmatrix} x\\y\\T \end{pmatrix}, \begin{pmatrix} \lambda_1\\\lambda_2\\\lambda_3 \end{pmatrix}, \begin{pmatrix} u \\ w \end{pmatrix} \right) & = T\big(x^2+y^2\big) + \lambda_1 T yu + \lambda_2 T\big(x+w\big) + 0\cdot\lambda_3 \\ \mathcal{B}\big(x_0,y_0,y_1, \mu_1, \mu_2, \mu_3\big)& = \mu_1 x_0 + \mu_2 y_0 + \mu_3 \big(y_1-1\big) \end{aligned}

The solution of the controls obtained by the Pontryagin principle becomes

\begin{aligned} ({\color{red}u},{\color{orange}w}) & = \mathop{\textrm{argmin}}_{({\color{red}u},{\color{orange}w})\in[-1,1]\times[-1,1]} \mathcal{H}\left( \begin{pmatrix} x\\y\\T \end{pmatrix}, \begin{pmatrix} \lambda_1\\\lambda_2\\\lambda_3 \end{pmatrix}, \begin{pmatrix} {\color{red}u} \\ {\color{orange}w} \end{pmatrix} \right) \\[2em] &=\mathop{\textrm{argmin}}_{({\color{red}u},{\color{orange}w})\in[-1,1]\times[-1,1]} {\color{DarkGreen}T\big(x^2+y^2\big)+ \lambda_2 Tx} + T(\lambda_1 y{\color{red}u} + \lambda_2 {\color{orange}w}) \\[2em] &=\mathop{\textrm{argmin}}_{({\color{red}u},{\color{orange}w})\in[-1,1]\times[-1,1]} T(\lambda_1 y{\color{red}u} + \lambda_2 {\color{orange}w}) \\[2em] &=\mathop{\textrm{argmin}}_{({\color{red}u},{\color{orange}w})\in[-1,1]\times[-1,1]} \lambda_1 y{\color{red}u} + \lambda_2 {\color{orange}w} \end{aligned}

To find the minimum, we can use the KKT conditions:

minimize

f({\color{red}u},{\color{orange}w})= \lambda_1 y{\color{red}u} + \lambda_2 {\color{orange}w}

subject to

-1 \leq {\color{red}u}\leq 1,\qquad -1 \leq {\color{orange}w}\leq 1,

Lagrangian

\mathcal{L}(u,w,\mu_1,\mu_2,\mu_3,\mu_4) = \lambda_1 y u + \lambda_2 w - \mu_1 (1-u) - \mu_2 (1+u) - \mu_3 (1-w) - \mu_4 (1+w)

Nonlinear system for Pontryagin

\left\{ \begin{aligned} \partial_u \mathcal{L} = \lambda_1 y + \mu_1 - \mu_2 & = 0 \\ \partial_w \mathcal{L} = \lambda_2 + \mu_3 - \mu_4 & = 0 \\ \mu_1 (1-u) & = 0 \\ \mu_2 (1+u) & = 0 \\ \mu_3 (1-w) & = 0 \\ \mu_4 (1+w) & = 0 \end{aligned} \right. \tag{14}

The solution of Equation 14 is the following:

\begin{aligned} {\color{red}u(\zeta)} &= \begin{cases} -1 \quad & \lambda_1 y > 0 \\[0.5em] \textrm{undefined} & \lambda_1 y = 0 \\[0.5em] +1 \quad & \lambda_1 y < 0 \end{cases}\\[1em] {\color{orange}w(\zeta)} & = \begin{cases} -1 \quad & \lambda_2 > 0 \\[0.5em] \textrm{undefined} & \lambda_2 = 0 \\[0.5em] +1 \quad & \lambda_2 < 0 \end{cases} \end{aligned}
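The sign logic of the two bang-bang controls amounts to a small helper function; a sketch (returning None in the singular case is our own convention):

```python
def bang(sigma):
    # minimizes sigma * v over v in [-1, 1]
    if sigma > 0:
        return -1.0
    if sigma < 0:
        return 1.0
    return None   # singular: undetermined by the sign alone

# u depends on the switching quantity lambda_1*y, w on lambda_2
u = bang(0.3 * 2.0)   # lambda_1*y > 0  ->  u = -1
w = bang(-0.7)        # lambda_2 < 0    ->  w = +1
print(u, w)
```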

Recall that T > 0 must hold: any solution with T \leq 0 is discarded.

With that stated, we can build the boundary value problem for the free time optimal control problem by adding to the initial ODEs and boundary conditions the adjoint ones and the control law (related to the Pontryagin minimum principle):

\left\{ \begin{aligned} & \left.\begin{aligned} x' & = T y u \\ y' & = T (x+w) \\ T' & = 0 \end{aligned}\right\} \quad\textrm{original ODE} \\[1em] & \left.\begin{aligned} \lambda_1' &= -\dfrac{\partial\mathcal{H}}{\partial x} = - 2xT - \lambda_2 T \\[1em] \lambda_2' & = - \dfrac{\partial\mathcal{H}}{\partial y} = -2Ty - \lambda_1 T u \\[1em] \lambda_3' & = - \dfrac{\partial\mathcal{H}}{\partial T} = -x^2 - y^2 - \lambda_1 yu - \lambda_2 (x+w) \end{aligned}\right\} \quad\textrm{adjoint ODE} \\[1em] & \left.\begin{aligned} x(0) &= 0 \\ y(0) &= 0 \\ y(1) & = 1 \end{aligned}\right\} \quad\textrm{original BC} \\ & \left.\begin{aligned} \lambda_1(1) & = 0 \\ \lambda_3(0) & = 0 \\ \lambda_3(1) & = 0 \end{aligned}\right\} \quad\textrm{adjoint BC} \\[1em] & \left. \begin{aligned} u &= \begin{cases} -1 \quad & \lambda_1 y > 0 \\ \textrm{undefined} & \lambda_1 y = 0 \\ 1 \quad & \lambda_1 y < 0 \\ \end{cases}\\[1em] w & = \begin{cases} -1 \quad & \lambda_2 > 0 \\ \textrm{undefined} & \lambda_2 = 0 \\ 1 \quad & \lambda_2 < 0 \\ \end{cases} \end{aligned} \right\} \quad\textrm{Pontryagin} \end{aligned} \right.

The solution of the boundary value problem is expressed in the independent variable \zeta; to obtain the final solution in the time variable t, every control and state law must be evaluated at t = \zeta\, T(\zeta).

References

Luus, Rein. 2000. Iterative Dynamic Programming. 1st ed. Boca Raton, FL, USA: CRC Press, Inc.
Sussmann, Hector J., and Jan C. Willems. 2001. “The Brachistochrone Problem and Modern Control Theory.” In Contemporary Trends in Nonlinear Geometric Control Theory and Its Applications, Monroy-Pérez et al. (Eds.). World Scientific Publishers.