Laplace Transform

Authors
Affiliation

Enrico Bertolazzi

University of Trento, Department of Industrial Engineering

Matteo Dalle Vedove

University of Trento, Department of Industrial Engineering

Introduction

The Laplace Transform, denoted as \mathscr{L}, is a powerful mathematical operator that converts a time-dependent function, f(t), into a function of a complex variable, \hat{f}(s). This transformation enables the analysis of functions in a new domain, often simplifying complex problems. The transition from the time domain to the complex domain is expressed as:

f(t) \mapsto \hat{f}(s) = \mathscr{L}\{f(t)\}(s)

However, not all functions f(t) are suitable for transformation. The existence of the Laplace Transform hinges on the convergence of the following integral:

\hat{f}(s) = \int_{0^-}^{\infty} f(t) \, e^{-st} \, \mathrm{d}t = \lim_{\varepsilon \to 0^+} \lim_{M \to \infty} \int_{-\varepsilon}^M f(t) \, e^{-st} \, \mathrm{d}t

This tool is especially powerful because it can transform differential equations from the time domain into algebraic equations in the s domain, making them significantly easier to solve.

Note

An analogy can be made with logarithms: the product of two numbers can be simplified using logarithmic rules, which facilitates calculations:

a \cdot b \mapsto \log a + \log b

Similarly, the Laplace Transform converts complex differential operations into simpler algebraic forms.

In mechanical systems, solving linear differential equations is often necessary to describe the system’s time response. This can be challenging with standard analytical techniques, such as the method of variation of parameters. However, the Laplace Transform streamlines this process:

  1. The differential equation is transformed into an algebraic equation using the Laplace Transform.
  2. The algebraic equation is analyzed to determine the frequency response of the system.
  3. The solution is converted back to the time domain using the Inverse Laplace Transform.

Visualization of the Laplace Transform

To visualize the effect of the Laplace Transform, consider the function f(t) = t\,e^{-at}. The Laplace Transform of this function is well-known:

f(t) = te^{-at} \quad \mapsto \quad \hat{f}(s) = \frac{1}{(s + a)^2}

Here’s a plot to illustrate the function and its Laplace Transform.

Code
# Define time and Laplace domain
t <- seq(0, 5, length=100)
a <- 1
f_t <- t * exp(-a * t)
s <- seq(0.1, 5, length=100)
F_s <- 1 / (s + a)^2

# Set up the plotting area with larger figure size
par(mfrow=c(1, 2), cex=0.8, cex.axis=0.8, cex.lab=0.8, cex.main=0.9)  # Two plots side by side

# Plot the time domain function
plot(t, f_t, type="l", col="blue", lwd=2,
     ylab="f(t)", xlab="Time (t)",
     main="Time Domain: f(t) = t * e^(-at)",
     ylim=c(0, max(f_t) * 1.1))  # Set y-limits for better visibility
grid()  # Add grid lines
legend("topright", legend=expression(f(t) == t * e^{-at}), col="blue", lwd=2)  # Add legend with LaTeX formatting

# Plot the Laplace Transform
plot(s, F_s, type="l", col="red", lwd=2,
     ylab="F(s)", xlab="s",
     main="Laplace Domain: F(s) = 1 / (s + a)^2",
     ylim=c(0, max(F_s) * 1.1))  # Set y-limits for better visibility
grid()  # Add grid lines

# Adjust y position for better vertical centering of the legend
legend("topright", legend=expression(F(s) == frac(1, (s + a)^2)), col="red", lwd=2, inset=c(0, -0.05))  # Add legend with the correct transform and adjust position

Figure 1: The left plot shows the function f(t) = te^{-at} in the time domain, and the right plot shows the corresponding Laplace Transform \hat{f}(s) = \frac{1}{(s + a)^2}.
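Beyond plotting, the transform pair itself can be sanity-checked numerically: base R's `integrate` approximates the defining integral, which we compare with the closed form at one sample point (a quick sketch, assuming the same a = 1 used in the plot).

```r
# Numerically approximate the Laplace integral of f(t) = t e^{-at}
# and compare it with the closed form F(s) = 1/(s + a)^2 at s = 2.
a <- 1
laplace_num <- function(f, s) {
  integrate(function(t) f(t) * exp(-s * t),
            lower = 0, upper = Inf, rel.tol = 1e-10)$value
}
f <- function(t) t * exp(-a * t)
s0 <- 2
F_num   <- laplace_num(f, s0)
F_exact <- 1 / (s0 + a)^2
abs(F_num - F_exact)  # tiny: quadrature agrees with the closed form
```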

Transform Properties

The Laplace Transform possesses several important properties that facilitate calculations. One of the key properties to remember is that it acts as a linear operator:

Theorem 1 (linearity) The Laplace Transform exhibits a fundamental property known as linearity. For any functions

f(t) \mapsto \hat{f}(s) \quad \text{and} \quad g(t) \mapsto \hat{g}(s),

and for any real constants a, b \in \mathbb{R}, the linearity of the Laplace Transform can be expressed as follows:

h(t) := a \, f(t) + b \, g(t) \quad \mapsto \quad \hat{h}(s) := a \, \hat{f}(s) + b \, \hat{g}(s).

This means that the Laplace Transform of a linear combination of functions is equal to the same linear combination of their respective transforms.

Proof. The linearity property of the Laplace Transform can be demonstrated by applying the transform to a linear combination of two functions:

\mathscr{L}\left\{a \, f(t) + b \, g(t)\right\}(s) = \int_{0^-}^{\infty} \left( a \, f(t) + b \, g(t) \right) e^{-st} \, \mathrm{d}t.

Using the linearity of the integral, we can separate this into two distinct integrals:

\begin{aligned} &= a \int_{0^-}^{\infty} f(t) e^{-st} \, \mathrm{d}t + b \int_{0^-}^{\infty} g(t) e^{-st} \, \mathrm{d}t \\ &= a \, \hat{f}(s) + b \, \hat{g}(s). \end{aligned}

This confirms that the Laplace Transform of a linear combination of functions is equal to the same linear combination of their respective transforms.
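As a concrete illustration (our own sketch, not part of the original notes), linearity can be verified numerically for two standard transform pairs:

```r
# Verify L{a f + b g}(s) = a F(s) + b G(s) at a real sample point s
laplace_num <- function(f, s) {
  integrate(function(t) f(t) * exp(-s * t),
            lower = 0, upper = Inf, rel.tol = 1e-10)$value
}
f <- function(t) exp(-t)   # known pair: F(s) = 1/(s + 1)
g <- function(t) sin(t)    # known pair: G(s) = 1/(s^2 + 1)
a <- 3; b <- -2; s0 <- 1.5
lhs <- laplace_num(function(t) a * f(t) + b * g(t), s0)
rhs <- a * laplace_num(f, s0) + b * laplace_num(g, s0)
abs(lhs - rhs)  # tiny: both sides agree
```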

However, it is important to note that not all functions allow this property to hold. For specific cases where the integral may not converge, refer to Example 1 for further clarification.

Example 1 (Integrable and Non-Integrable Functions) Consider a piecewise function f_n(t), parameterized by n, defined for t \geq 0 as follows:

f_n(t) = \begin{cases} nt & 0 \leq t \leq \frac{1}{n} \\ 2 - nt & \frac{1}{n} < t \leq \frac{2}{n} \\ 0 & \text{otherwise} \end{cases}

As n \to \infty, this function converges to f(t), which is zero everywhere:

f(t) = \lim_{n \to \infty} f_n(t).

To confirm this, we can compute the integral of f_n(t) over the interval [0, \infty):

F_n = \int_0^\infty f_n(t) \, \mathrm{d}t = \frac{1}{n} \quad \xrightarrow{n \to \infty} \quad 0.

Thus, as n \to \infty, the area under the curve tends to zero, consistent with the function becoming zero everywhere.

Misleading Intuition

At first glance, one might assume that the limit and the integral can be interchanged:

F = \int_0^\infty \lim_{n \to \infty} f_n(t) \, \mathrm{d}t = \lim_{n \to \infty} \int_0^\infty f_n(t) \, \mathrm{d}t.

However, this is not always true. To illustrate this, consider another piecewise function g_n(t), defined as:

g_n(t) = \begin{cases} n^2 t & 0 \leq t < \frac{1}{n} \\ 2n - n^2 t & \frac{1}{n} < t < \frac{2}{n} \\ 0 & \text{otherwise} \end{cases}

Taking the limit as n \to \infty, we find that the function g(t) = \lim_{n \to \infty} g_n(t) = 0 everywhere. However, the computed area differs:

\lim_{n\to\infty} \int_0^\infty g_n(t)\, \mathrm{d}t = 1 \quad \neq \quad \int_0^\infty g(t)\, \mathrm{d}t = 0.

This demonstrates that, in general, the limit of the integral and the integral of the limit are not equivalent:

\lim_{n\to\infty} \int g_n(t) \mathrm{d}t \neq \int \lim_{n\to\infty} g_n(t) \mathrm{d}t.
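The two areas can be checked numerically; the sketch below (base R only) integrates both triangle functions for a large n:

```r
# Areas under f_n and g_n: the first vanishes as n grows, the second stays 1
fn <- function(t, n) ifelse(t < 1/n, n * t, ifelse(t < 2/n, 2 - n * t, 0))
gn <- function(t, n) ifelse(t < 1/n, n^2 * t, ifelse(t < 2/n, 2 * n - n^2 * t, 0))
n <- 1000
area_f <- integrate(fn, 0, 2/n, n = n, rel.tol = 1e-10)$value  # = 1/n -> 0
area_g <- integrate(gn, 0, 2/n, n = n, rel.tol = 1e-10)$value  # = 1 for every n
c(area_f, area_g)
```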

Visualization

To further illustrate these concepts, we can visualize the behavior of f_n(t) and g_n(t) for various values of n. As n increases, observe how each function behaves and how their areas change.

Code
# The TeX() labels below require the latex2exp package
library(latex2exp)

# Define function f_n(t)
fn <- function(t, n) {
  ifelse(t < 1/n, n * t, ifelse(t < 2/n, 2 - n * t, 0))
}

# Set up time axis
t <- seq(0, 2, length = 1000)

# Plot f_n(t) for n = 5, 20, and 100
plot(t, fn(t, 5), type = "l", col = "blue", ylim = c(0, 1.5),
     main = TeX("Behavior of $f_n(t)$ for $n = 5$, $20$, $100$"),
     ylab = TeX("$f_n(t)$"),
     xlab =  TeX("$t$"))
lines(t, fn(t, 20), col = "red")
lines(t, fn(t, 100), col = "orange")
legend("topright",
       legend = c(TeX("$n = 5$"), TeX("$n = 20$"), TeX("$n = 100$")),
       col = c("blue", "red", "orange"), lty = 1)
Code
# Define function g_n(t)
gn <- function(t, n) {
  ifelse(t < 1/n, n^2 * t, ifelse(t < 2/n, 2 * n - n^2 * t, 0))
}

# Set up time axis
t <- seq(0, 2, length = 1000)

# Plot g_n(t) for n = 5, 20, and 100
plot(t, gn(t, 5), type = "l", col = "blue", ylim = c(0, 40),
     main = TeX("Behavior of $g_n(t)$ for $n = 5$, $20$, $100$"),
     ylab = TeX("$g_n(t)$"), xlab = TeX("$t$"))
lines(t, gn(t, 20), col = "red")
lines(t, gn(t, 100), col = "orange")
legend("topright", legend = c(TeX("$n = 5$"), TeX("$n = 20$"), TeX("$n = 100$")),
       col = c("blue", "red", "orange"), lty = 1)

Figure 2: In these plots, we observe the behavior of both f_n(t) and g_n(t) as n increases. While both functions appear to converge to zero for large n, their areas behave differently. This distinction highlights the important concept that integrating limits and taking limits of integrals are not always interchangeable.

Scale Change and Translation Properties

In the study of Laplace transforms, the scaling and translation properties of the time axis play a significant role. These properties are essential for manipulating functions, allowing us to simplify their transformations and gain deeper insights into their behavior in the frequency domain.

Scaling of the Time Axis

Theorem 2 (Stretching/Expanding the Time Axis) When we stretch or expand the time axis of a function f(t) by a factor a > 0, the corresponding Laplace Transform \hat{f}(s) is modified as follows:

f(at) \mapsto \frac{1}{a} \hat{f}\left(\frac{s}{a}\right).

Proof. The proof directly follows from the definition of the Laplace transform by applying a change of variables. Let at = z, which implies t = \frac{z}{a}:

\begin{aligned} \mathscr{L}\{f(at)\}(s) &= \int_{0^-}^{\infty} f(at)\, \mathrm{e}^{-st}\, \mathrm{d}t \\ &= \int_{0^-}^{\infty} f(z) \, \mathrm{e}^{-s\frac{z}{a}} \, \frac{\mathrm{d}z}{a} \\ &= \frac{1}{a} \, \hat{f}\left(\frac{s}{a}\right) \end{aligned}
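As a numerical sanity check (our addition, using the same damped sine that appears in the plot below), the scaling rule can be tested at a sample point:

```r
# Check L{f(at)}(s) = (1/a) * F(s/a) numerically for f(t) = e^{-t} sin(4t)
laplace_num <- function(f, s) {
  integrate(function(t) f(t) * exp(-s * t),
            lower = 0, upper = Inf, rel.tol = 1e-10)$value
}
f <- function(t) exp(-t) * sin(4 * t)
a <- 2; s0 <- 1.5
lhs <- laplace_num(function(t) f(a * t), s0)
rhs <- (1 / a) * laplace_num(f, s0 / a)
abs(lhs - rhs)  # tiny: both sides agree
```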

Code
# Load plotting libraries (ggplot2 for plots, latex2exp for TeX() labels)
library(ggplot2)
library(latex2exp)

# Define the function f(t)
f <- function(t) {
  ifelse(t >= 0, exp(-t) * sin(4 * t), 0)  # Exponential decay for t >= 0
}

# Create a sequence of time values
t_vals <- seq(0, 5, by = 0.01)
a_values <- c(0.5, 1, 2)  # Different stretching factors
data <- data.frame()

# Create data for each stretching factor
for (a in a_values) {
  data <- rbind(data, data.frame(t = t_vals, f_t = f(a * t_vals), a = paste("a =", a)))
}

# Plot with ggplot2 using TeX for labels
ggplot(data, aes(x = t, y = f_t, color = a)) +
  geom_line(linewidth = 1.2) +
  labs(title = TeX("Effect of Stretching the Time Axis on $f(t)$"),
       subtitle = TeX("Visualizing how different stretching factors affect the function"),
       x = TeX("$t$"),
       y = TeX("$f(at)$"),
       color = "Stretching Factor") +
  theme_minimal() +
  theme(text = element_text(size=12),
        plot.title = element_text(hjust = 0.5, size=14),
        plot.subtitle = element_text(hjust = 0.5, size=12))

Figure 3: Visualization of the effect of stretching the time axis of f(t) by a factor a, leading to the corresponding change in the Laplace transform.

Translation in the s and t Axes

The translation property is a crucial aspect of Laplace transforms, affecting both the s axis and the t axis by a coefficient a > 0. These translations lead to the following relationships:

Theorem 3 (Translation in the s and t Axes) \mathrm{e}^{at} f(t) \mapsto \hat{f}(s-a) \qquad \text{and} \qquad f(t-a) \mapsto \mathrm{e}^{-as} \hat{f}(s)

Proof (Translation in the s Axis). To prove the translation property in the s complex variable, we start with the Laplace Transform of the function e^{at} f(t):

\begin{aligned} \mathscr{L}\{\mathrm{e}^{at}f(t)\}(s) &= \int_{0^-}^{\infty} \mathrm{e}^{at} f(t) \mathrm{e}^{-st} \, \mathrm{d}t \\ &= \int_{0^-}^{\infty} f(t) \mathrm{e}^{(a-s)t} \, \mathrm{d}t \\ &= \hat{f}(s-a). \end{aligned}

This shows that multiplying by e^{at} translates the Laplace Transform along the s-axis.

Proof (Translation in the t Axis). For translation along the t-axis, we apply a change of variables with z = t - a:

\begin{aligned} \mathscr{L}\{f(t-a)\}(s) &= \int_{0^-}^{\infty} f(t-a) \, e^{-st} \, \mathrm{d}t \\ &= \int_{-a}^{\infty} f(z) e^{-s(z+a)}\, dz \\ &= e^{-as} \int_{0^-}^{\infty} f(z)e^{-sz}\, dz \\ &= e^{-as} \hat{f}(s). \end{aligned}

In this derivation, the portion of the integral over [-a, 0^-) vanishes because we assume f(t) = 0 for t < 0, which allows the lower limit to be restored to 0^-.

Summary

These properties illustrate how translations in time and frequency domains impact the behavior of functions under Laplace transformations. The translation in the s-axis shifts the transform horizontally, while translation in the t-axis introduces an exponential decay factor.
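Both translation rules can be verified numerically; the sketch below (our addition) uses the known pair f(t) = e^{-t} with F(s) = 1/(s + 1):

```r
# Check both translation rules for f(t) = e^{-t} (t >= 0), F(s) = 1/(s + 1)
laplace_num <- function(f, s) {
  integrate(function(t) f(t) * exp(-s * t),
            lower = 0, upper = Inf, rel.tol = 1e-10)$value
}
f  <- function(t) ifelse(t >= 0, exp(-t), 0)
Fs <- function(s) 1 / (s + 1)
a <- 0.5; s0 <- 2
shift_s <- laplace_num(function(t) exp(a * t) * f(t), s0)  # -> F(s - a)
shift_t <- laplace_num(function(t) f(t - a), s0)           # -> e^{-as} F(s)
c(abs(shift_s - Fs(s0 - a)), abs(shift_t - exp(-a * s0) * Fs(s0)))
```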

Code
# Load necessary libraries
library(ggplot2)
library(latex2exp)  # provides TeX() for labels

# Define the function f(t) as before
f <- function(t) {
  ifelse(t >= 0, exp(-t), 0)  # Exponential decay for t >= 0
}

# Create a data frame for plotting with time shifts
t_vals <- seq(-2, 5, by = 0.01)
a_shifts <- c(-1, 0, 2)  # Different shifts in time
data_translation <- data.frame()

# Create data for each time shift
for (a in a_shifts) {
  data_translation <- rbind(data_translation,
                            data.frame(t = t_vals, f_t = f(t_vals - a), a = paste("a =", a)))
}

# Plot with ggplot2
ggplot(data_translation, aes(x = t, y = f_t, color = a)) +
  geom_line(linewidth = 1.2) +
  labs(title = TeX("Translation of the Time Axis for $f(t)$"),
       subtitle = TeX("Visualizing how different time shifts affect the function"),
       x = TeX("$t$"),
       y = TeX("$f(t - a)$"),
       color = "Time Shift") +
  theme_minimal() +
  theme(text = element_text(size=12),
        plot.title = element_text(hjust = 0.5, size=14),
        plot.subtitle = element_text(hjust = 0.5, size=12))

Figure 4: Visual representation of the effect of translating the function f(t) in both the s and t axes.

Existence of the Laplace Transform

Not every function can be transformed using the Laplace operator \mathscr{L}. For instance, consider the function f(t) = \mathrm{e}^{t^2}. If we apply the Laplace transform to this function, we obtain the following integral:

\mathscr{L}\{\mathrm{e}^{t^2}\}(s) = \int_{0^-}^\infty \mathrm{e}^{t^2 - st} \, \mathrm{d}t = \int_{0^-}^T \mathrm{e}^{(t-s)t} \, \mathrm{d}t + \int_T^\infty \mathrm{e}^{(t-s)t} \, \mathrm{d}t.

If we choose a constant T > \Re(s), we can observe that the second integral \int_T^\infty \mathrm{e}^{(t-s)t} \, \mathrm{d}t does not converge for any value s \in \mathbb{C}.

Proof. To prove this result, we start with the function f(t) = \mathrm{e}^{t^2}. If its Laplace transform exists (denoted by \hat{f}(s)), it must equal:

\hat{f}(s) = \lim_{M \to \infty} \int_{0^-}^M \mathrm{e}^{t(t-s)} \, \mathrm{d}t.

Since s is a complex variable, we express it as s = a + \mathrm{i}b, where a, b \in \mathbb{R} and \mathrm{i} = \sqrt{-1}. Substituting this into the Laplace transform expression gives:

\mathrm{e}^{t(t-a-\mathrm{i}b)} = \mathrm{e}^{t(t-a)} \mathrm{e}^{-\mathrm{i}bt}.

Using Euler's formula, we rewrite the exponential with the imaginary term:

\mathrm{e}^{-\mathrm{i}bt} = \cos(bt) - \mathrm{i} \sin(bt).

Thus, the integral becomes:

\lim_{M \to \infty} \int_{0^-}^M \mathrm{e}^{t(t-a)} \left( \cos(bt) - \mathrm{i} \sin(bt) \right) \, \mathrm{d}t.

This expression can be split into real and imaginary parts:

  • Real part: R = \lim_{M \to \infty} \int_{0^-}^M \mathrm{e}^{t(t-a)} \cos(bt) \, \mathrm{d}t.

  • Imaginary part: I = \lim_{M \to \infty} \int_{0^-}^{M} \mathrm{e}^{t(t-a)} \sin(bt) \, \mathrm{d}t.

Focusing on the real part R, we note that it involves the product of a cosine function and an exponentially growing function. By analyzing this product at large values of t, we can see that the integral does not converge. To formalize this observation:

\begin{aligned} \int_0^{M + \Delta M} \mathrm{e}^{t(t-a)} \cos(bt) \, \mathrm{d}t &\geq \int_0^{M} \mathrm{e}^{t(t-a)} \cos(bt) \, \mathrm{d}t \\ &+ \mathrm{e}^{M(M-a)} \int_M^{M+\Delta M} \cos(bt) \, \mathrm{d}t. \end{aligned}

Here the interval [M, M + \Delta M] can be chosen so that \cos(bt) \geq 0 throughout it. Since the exponential term \mathrm{e}^{t(t-a)} grows without bound as t increases, these added contributions do not vanish, and the integral does not converge. Consequently, we conclude that the Laplace transform of f(t) = \mathrm{e}^{t^2} does not exist.

Exponential Order Functions

A function f(t) is said to be of exponential order if there exist constants M, N \in \mathbb{R} such that:

|f(t)| \leq M e^{Nt}

for all t \geq T, where T is a finite constant.

This definition implies that the growth of the function f(t) is bounded by an exponential function as t approaches infinity. Specifically, the term M e^{Nt} serves as an upper limit for the absolute value of f(t).

If a function is of exponential order, its Laplace transform exists in at least part of the complex plane. This means that we can find values of s for which the Laplace transform converges, providing valuable insights into the behavior of the function in the frequency domain.

Proof. To demonstrate that functions of exponential order always have a Laplace transform, we focus on the residual part of the integral from T to infinity:

\int_T^\infty \left| f(t) \, \mathrm{e}^{-st} \right| \mathrm{d}t = \int_T^\infty |f(t)| \, \mathrm{e}^{-at} \, \mathrm{d}t,

where s = a + \mathrm{i}b and a = \Re(s), since |\mathrm{e}^{-st}| = \mathrm{e}^{-at}. Using the bound |f(t)| \leq M \mathrm{e}^{Nt}, we can write:

\int_T^\infty |f(t)| \, \mathrm{e}^{-at} \, \mathrm{d}t \leq \int_T^\infty M \mathrm{e}^{Nt} \mathrm{e}^{-at} \, \mathrm{d}t = M \int_T^\infty \mathrm{e}^{(N - a)t} \, \mathrm{d}t.

For convergence of the integral, we require a > N. Thus, if a > N, the term \mathrm{e}^{(N - a)t} tends to zero as t \to \infty, ensuring that the integral has a finite value.

Consequently, the domain of s for which the Laplace transform of an exponential order function exists is at least:

\Re(s) > N.

This result confirms that functions of exponential order have well-defined Laplace transforms within specific regions of the complex plane.
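The finite tail of the dominating integral can be checked directly; the sketch below (our addition, with arbitrary sample constants) compares the numerical tail against its closed form:

```r
# Tail of the dominating integral: for a = Re(s) > N,
#   int_T^Inf M e^{(N - a) t} dt = M e^{(N - a) T} / (a - N)
M <- 2; N <- 1; a <- 3; T0 <- 1  # T0 stands for the constant T in the text
tail_num   <- integrate(function(t) M * exp((N - a) * t), T0, Inf,
                        rel.tol = 1e-10)$value
tail_exact <- M * exp((N - a) * T0) / (a - N)
abs(tail_num - tail_exact)  # tiny: the tail is finite whenever a > N
```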

Code
# Load necessary libraries
library(ggplot2)
library(latex2exp)  # provides TeX() for labels

# Define the parameters for the plot
t <- seq(0, 5, length.out = 100)
N <- 2
f <- exp(N * t)

# Create a data frame for ggplot
data <- data.frame(t = t, f_t = f)

# Plot using ggplot2 for better aesthetics
ggplot(data, aes(x = t, y = f_t)) +
  geom_line(color = "red", linewidth = 1.2) +
  labs(title = TeX("Exponential Growth: $f(t) = e^{2t}$"),
       x = TeX("Time ($t$)"),
       y = TeX("Function Value ($f(t)$)")) +
  theme_minimal() +
  theme(text = element_text(size=12),
        plot.title = element_text(hjust = 0.5, size=14))

Figure 5: Exponential Growth of e^{Nt} with N=2

According to the definition, we can also observe that the domain of the variable s for the Laplace transform of a function of exponential order is at least given by:

\forall s = a + \mathrm{i}b \in \mathbb{C}, \quad \text{such that} \quad a = \mathrm{Re}(s) > N.

This means that the real part of s must be strictly greater than N for the Laplace transform to exist. In other words, the Laplace transform converges in the half-plane where the real part of s meets this condition.

Usage of the Laplace Transform

Consider the problem of solving the following differential equation:

\begin{cases} y'(t) = ay(t) + t \\ y(0) = 1 \end{cases}

Solving this equation in the time domain can be challenging. This is where the Laplace transform comes into play, allowing us to analyze the problem in the complex variable s. By applying the Laplace transform to the differential equation, we obtain:

\begin{aligned} \mathscr{L}\{y'(t)\}(s) & = \mathscr{L}\{a y(t) + t\}(s) \\ & = a \mathscr{L}\{y(t)\}(s) + \mathscr{L}\{t\}(s). \end{aligned}

At this point, we need to highlight an important Laplace transform property. This property states that the transform of the first derivative f'(t) of a function f(t) is given by:

f'(t) \mapsto s \hat{f}(s) - f(0^+),

where f(0^+) represents the value of the function f(t) at the origin of the time axis and corresponds to the initial (Cauchy) condition of the ordinary differential equation.

This property allows us to convert differential equations in the time domain into algebraic equations in the complex variable domain. Returning to our initial problem, we can rewrite the differential equation as:

s \hat{y}(s) - \underbrace{1}_{y(0^+)} = a \hat{y}(s) + \underbrace{\frac{1}{s^2}}_{\mathscr{L}\{t\}(s)}.

This simplifies to:

\hat{y}(s) = \frac{1 + \frac{1}{s^2}}{s - a}.
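Anticipating the inversion step, a partial fraction expansion of this \hat{y}(s) (our own computation, not carried out in the text) gives y(t) = \frac{a^2+1}{a^2} \mathrm{e}^{at} - \frac{t}{a} - \frac{1}{a^2}, which we can check against the original differential equation by finite differences:

```r
# Candidate closed-form solution obtained by inverting y_hat(s)
# (our partial fraction computation): check y' = a y + t and y(0) = 1
a <- -1
y  <- function(t) ((a^2 + 1) / a^2) * exp(a * t) - t / a - 1 / a^2
dy <- function(t, h = 1e-6) (y(t + h) - y(t - h)) / (2 * h)  # central difference
tt <- seq(0.1, 2, by = 0.1)
resid <- max(abs(dy(tt) - (a * y(tt) + tt)))  # ODE residual, ~0
c(resid, y(0))  # y(0) = 1 matches the initial condition
```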

Example 2 (System of Ordinary Differential Equations) Consider the following system of two ordinary differential equations, subject to their initial conditions:

\begin{cases} x'(t) = y(t) + 1 \\ y'(t) = x(t) + \cos t \\ x(0) = 1 \\ y(0) = 1 \end{cases}

To solve this system, we can apply the Laplace transform to convert the equations into the complex variable domain. The Laplace transforms yield the following equations:

\begin{aligned} (i) & \quad \underbrace{s \tilde{x}(s) - x(0)}_{\mathscr{L}\{x'(t)\}(s)} &&= \tilde{y}(s) + \frac{1}{s} \\ (ii) & \quad \underbrace{s \tilde{y}(s) - y(0)}_{\mathscr{L}\{y'(t)\}(s)} &&= \tilde{x}(s) + \frac{s}{1+s^2} \end{aligned}

Knowing the initial conditions of the system allows us to express the system of functions \tilde{x} and \tilde{y} using matrix notation:

\underbrace{\begin{bmatrix} s & -1 \\ -1 & s \end{bmatrix}}_{A(s)} \begin{pmatrix} \tilde{x}(s) \\ \tilde{y}(s) \end{pmatrix} = \begin{pmatrix} 1 + \dfrac{1}{s} \\ 1 + \dfrac{s}{1+s^2} \end{pmatrix}

To find explicit values for \tilde{x}(s) and \tilde{y}(s), we can invert the matrix A(s):

\tilde{x}(s) = \dfrac{s + 2 + \dfrac{s}{1+s^2}}{s^2 - 1}, \qquad \tilde{y}(s) = \dfrac{s + 1 + \dfrac{1}{s} + \dfrac{s^2}{1+s^2}}{s^2 - 1}.

Example 3 (Computation of the Laplace Transform) Consider the ordinary differential equation:

\begin{cases} x'(t) + x(t) = \cos t \\ x(0) = 2 \end{cases}

To find the solution in the Laplace domain, we apply the Laplace transform operator \mathscr{L} to the entire equation. We utilize known transforms, specifically:

\begin{aligned} \mathscr{L}\{x(t)\}(s) &= \hat{x}(s), \\ \mathscr{L}\{x'(t)\}(s) &= s \hat{x}(s) - x(0), \\ \mathscr{L}\{\cos(t)\}(s) &= \frac{s}{s^2 + 1}. \end{aligned}

Applying the Laplace transform to the differential equation yields:

\mathscr{L}\{x'(t) + x(t)\}(s) = \mathscr{L}\{\cos t\}(s),

which translates to:

s \hat{x}(s) - 2 + \hat{x}(s) = \frac{s}{s^2 + 1}.

Now, we can combine terms to solve for \hat{x}(s):

(s + 1) \hat{x}(s) - 2 = \frac{s}{s^2 + 1}.

Rearranging gives us:

(s + 1) \hat{x}(s) = \frac{s}{s^2 + 1} + 2.

Finally, solving for \hat{x}(s) results in:

\hat{x}(s) = \frac{s}{(s+1)(s^2 + 1)} + \frac{2}{s+1} = \frac{2s^2 + s + 2}{(s+1)(s^2 + 1)}.

This expression represents the Laplace transform of the solution to the original ordinary differential equation.
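The algebra above can be checked numerically by confirming that the combined expression for \hat{x}(s) satisfies the transformed equation at a few sample points:

```r
# Verify (s + 1) x_hat(s) - 2 = s / (s^2 + 1) at sample points
x_hat <- function(s) (2 * s^2 + s + 2) / ((s + 1) * (s^2 + 1))
s <- c(0.5, 1.7, 3.2)
max(abs((s + 1) * x_hat(s) - 2 - s / (s^2 + 1)))  # ~0 up to rounding
```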

Inversion of the Laplace Transform

Partial Fraction Expansion

The inversion of a Laplace transform is a crucial step that allows us to transport the algebraic solution found in the s domain back into a time-domain description. This process is represented by the operator \mathscr{L}^{-1}.

Rational Polynomial Representation

When dealing with (systems of) ordinary differential equations, the result of the Laplace transform (such as \hat{x}(s)) is typically expressed as a rational function in the form:

\frac{P(s)}{Q(s)},

where P(s) and Q(s) are polynomials with real coefficients in the variable s.

An important observation is that, in many practical applications, the degree of the numerator polynomial \partial P is less than that of the denominator polynomial \partial Q. If this condition does not hold naturally, we can apply polynomial long division. Specifically, if P(s) can be expressed as:

P(s) = T(s)Q(s) + R(s),

where T(s) is the quotient and R(s) is the remainder, we can rewrite the rational function as:

\frac{P(s)}{Q(s)} = T(s) + \frac{R(s)}{Q(s)}.

The term represented by the polynomial T(s), which has real coefficients, corresponds to the Laplace transform of Dirac delta functions. For example:

\begin{aligned} \mathscr{L}\{\delta(t)\}(s) & = \int_{0^-}^\infty \delta(t) e^{-st} \, \mathrm{d}t = e^{-st} \Big|_{t=0} = 1, \\ \mathscr{L}\{\delta'(t)\}(s) & = -\int_{0^-}^\infty \delta'(t) e^{-st} \, \mathrm{d}t = -\frac{\mathrm{d}}{\mathrm{d}t}\left(e^{-st}\right) \Big|_{t=0} = s. \end{aligned}
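The long division P(s) = T(s)Q(s) + R(s) can be carried out mechanically. The sketch below is our own helper (`poly_divide` is not a standard R function; coefficients are stored in increasing powers of s, and Q must have a nonzero leading coefficient):

```r
# Minimal polynomial long division: returns quotient T and remainder R
# with P = T*Q + R. Coefficient vectors are in increasing powers of s.
poly_divide <- function(P, Q) {
  R <- P
  Tq <- numeric(max(length(P) - length(Q) + 1, 1))
  while (length(R) >= length(Q)) {
    d <- length(R) - length(Q)              # degree gap
    coef <- R[length(R)] / Q[length(Q)]     # leading-coefficient ratio
    Tq[d + 1] <- coef
    R <- R - coef * c(numeric(d), Q)        # cancel the leading term
    R <- R[-length(R)]                      # drop the now-zero leading term
  }
  list(quotient = Tq, remainder = R)
}

# Example: P(s) = s^3 + 2s + 1 divided by Q(s) = s^2 + 1
# gives T(s) = s and R(s) = s + 1
poly_divide(c(1, 2, 0, 1), c(1, 0, 1))
```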

Inverting the Transform

Assuming we have a transform solution expressed as a simplified rational polynomial

\frac{P(s)}{Q(s)} \quad\textrm{such that}\quad \partial P < \partial Q

we can compute its inverse to express it in the time domain. This is commonly done using partial fraction expansion, which allows us to rewrite the solution as a sum of simpler elements that can be easily inverted using a Laplace transform table.

Steps for Partial Fraction Expansion

To properly perform partial fraction expansion, it is essential to rewrite the denominator of the polynomial ratio in factored form:

\frac{P(s)}{Q(s)} = \frac{b_0 + b_1 s + b_2 s^2 + \dots + b_m s^m} {(s - p_1)^{m_1} (s - p_2)^{m_2} \cdots (s - p_n)^{m_n}},

where p_i are the roots of the denominator and each root has a multiplicity m_i.

Depending on the multiplicity and nature (real or complex conjugate) of these roots, the procedure for performing partial fraction expansion may vary.

In the following sections, we will present examples illustrating each case of partial fraction expansion based on different root types and multiplicities.

Real roots

Single real root

Let’s consider the following rational polynomial with no multiple roots:

G(s) = \frac{s}{(s-1)(s+3)(s-4)}

its partial fraction expansion is based on determining the coefficients \alpha_i such that

(i): \qquad \frac{\alpha_1}{s-1} + \frac{\alpha_2}{s+3} + \frac{\alpha_3}{s-4} = \frac{s}{(s-1)(s+3)(s-4)}

This can be accomplished in two ways. The first method involves computing the sum of the three basic elements in a parametric form that depends on \alpha_i, and then solving the resulting linear system to determine the parameters that satisfy the equality.

A second, more efficient approach is to recognize that each coefficient can be determined as follows:

\alpha_i = \lim_{s\rightarrow p_i} \big(s-p_i\big) G(s)

In the context of the example, to compute the coefficient \alpha_1 associated with the root p = 1, we can multiply relation (i) by the factor (s - 1). This yields the following equation:

\frac{s}{(s+3)(s-4)} = \alpha_1 + (s-1) \left( \frac{\alpha_2}{s+3} + \frac{\alpha_3}{s-4} \right)

Evaluating this expression at the point s = 1 will yield an equation in which the only unknown coefficient is \alpha_1. This occurs because the coefficients \alpha_2 and \alpha_3 are effectively eliminated due to the multiplication by (s - 1), which equals zero at this point. Therefore, we have:

\xrightarrow{s=1} \quad \alpha_1 = \frac{1}{(1+3)(1-4)} = -\frac 1 {12}

By applying the same method to the other two roots, -3 and 4, we can derive the coefficients:

  • For the root -3, we obtain \alpha_2 = -\frac{3}{28}.
  • For the root 4, we find \alpha_3 = \frac{4}{21}.

With these coefficients in hand, we can now verify that:

-\frac{1}{12} \frac 1 {s-1} -\frac 3 {28} \frac 1 {s+3} + \frac 4 {21} \frac{1}{s-4} = \frac{s}{(s-1)(s+3)(s-4)}

Each basic element can be easily transformed using the Laplace table; in particular,

\mathscr{L}^{-1}\{G(s)\}(t) = -\frac{1}{12}\, \mathrm{e}^{t} - \frac{3}{28}\, \mathrm{e}^{-3t} + \frac{4}{21}\, \mathrm{e}^{4t}
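The computed coefficients can be verified by comparing the expansion against G(s) at a few sample points (a quick numerical sketch):

```r
# Numerical check of the expansion of G(s) = s / ((s-1)(s+3)(s-4))
G   <- function(s) s / ((s - 1) * (s + 3) * (s - 4))
pfe <- function(s) -1/12 / (s - 1) - 3/28 / (s + 3) + 4/21 / (s - 4)
s <- c(0.5, 2, 6, -1)
max(abs(G(s) - pfe(s)))  # ~0: the decomposition reproduces G
```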

Multiple real roots

Let’s now consider a case where the rational polynomial has multiple roots, specifically:

G(s) = \frac{1+s}{(s-4)^4}.

In this scenario, the partial fraction decomposition should not consist of four identical terms with the denominator s - 4. Instead, we need four distinct terms with increasing powers of (s - 4), represented as follows:

(ii) \qquad \frac{\alpha_1}{s-4} + \frac{\alpha_2}{(s-4)^2} + \frac{\alpha_3}{(s-4)^3} + \frac{\alpha_4}{(s-4)^4} = \frac{1+s}{(s-4)^4}

This problem can be addressed by summing all the elements and solving the resulting linear system for the coefficients \alpha_i. However, a more efficient approach is to recognize that the coefficient \alpha_{n-k}, where n denotes the multiplicity of the root (in this case, n = 4), can be expressed as follows:

\frac{1}{k!} \lim_{s\rightarrow p} \frac{\mathrm{d}^k}{\mathrm{d}s^k} \big[ (s-p)^n G(s) \big]

This process becomes clearer when we consider the example provided. The first step is to multiply equation (ii) by the polynomial (s - 4)^4, which leads us to the following relationship:

1+s = \alpha_1(s-4)^3 + \alpha_2(s-4)^2 + \alpha_3(s-4) + \alpha_4

Evaluating this expression at s = 4 allows us to determine the coefficient \alpha_4 = 5. To find the other coefficients, we can employ the proposed scheme, which involves differentiating the previous equation with respect to s and then evaluating it at the point s = 4.

\begin{aligned} & \downarrow \mathrm{d}/\mathrm{d}s \\[1em] 1 & = 3 \alpha_1 (s-4)^2 + 2 \alpha_2(s-4) + \alpha_3 \quad & \xrightarrow{s=4} \quad \alpha_3 = 1 \\[1em] & \downarrow \mathrm{d}/\mathrm{d}s \\[1em] 0 & = 6 \alpha_1 (s-4) + 2 \alpha_2 \qquad & \xrightarrow{s=4} \quad \alpha_2 = 0 \\[1em] & \downarrow \mathrm{d}/\mathrm{d}s \\[1em] 0 & = 6 \alpha_1 \quad & \implies \quad \alpha_1 = 0 \end{aligned}

Once the coefficients \alpha_i associated with the partial fraction expansion have been determined, we can compute the inverse transform of the original function G(s), which is given by:

\begin{aligned} \mathscr{L}^{-1}\{G(s)\}(t) &= \mathscr{L}^{-1}\left\{\frac{1}{(s-4)^3}\right\} +5 \mathscr{L}^{-1}\left\{\frac{1}{(s-4)^4} \right\} \\[1em] &= \frac{1}{2}\mathrm{e}^{4t}t^2 + 5 \frac{1}{6} \mathrm{e}^{4t}t^3 \end{aligned}
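The coefficients \alpha = (0, 0, 1, 5) can again be checked numerically against the original rational function:

```r
# Check alpha = (0, 0, 1, 5) for G(s) = (1 + s) / (s - 4)^4 at sample points
G   <- function(s) (1 + s) / (s - 4)^4
pfe <- function(s) 1 / (s - 4)^3 + 5 / (s - 4)^4
s <- c(0, 2, 5, 10)
max(abs(G(s) - pfe(s)))  # ~0: the decomposition reproduces G
```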

Example 4 (partial fraction expansion with different roots, one which is multiple) Let’s consider the following rational polynomial G(s) that’s already been split in the factor for the partial expansion:

\begin{aligned} G(s) & := \frac{1+s}{(s-1)(s+2)^3} \\ & = \frac{A}{s-1} + \frac{B}{s+2} + \frac{C}{(s+2)^2} + \frac{D}{(s+2)^3} \end{aligned}

To calculate the first coefficient A associated with the root s = 1, we can follow these straightforward steps:

  1. Multiply by a Factor: Multiply the last equation by the full denominator (s - 1)(s + 2)^3. This step clears all fractions and isolates the behavior of the function around the specified root.

  2. Evaluate at s = 1: After performing the multiplication, evaluate the resulting expression at s = 1. This evaluation will yield the value of the coefficient A.

By following these steps, we can effectively determine the coefficient A related to the root s = 1.

\begin{aligned} 1+s & = A(s+2)^3 + (s-1) \Big[ B(s+2)^2 + C(s+2) + D \Big] \\ & \downarrow\quad s=1 \\ 2 & = A \cdot 3^3 \qquad \implies \quad A = \frac{2}{27} \end{aligned}

The next step is to calculate the remaining coefficients B, C, D associated with the multiple root s = -2. However, the current form of the equation does not allow us to apply the previously demonstrated method effectively.

Deflating the Equation

To create a suitable equation for our analysis, we need to deflate it. This involves expanding the term (s + 2)^3, which corresponds to the coefficient A. By expanding this term and moving it to the left-hand side of the equation, we can simplify our calculations and isolate the coefficients B, C, D more effectively.

\begin{aligned} 1+s - \frac{2}{27} \Big( s^3 + 6s^2 + 12s + 8 \Big) & = (s-1) \Big[ B(s+2)^2 + C(s+2) + D \Big] \\ \frac{1}{27} \Big( -2s^3 - 12s^2 + 3s + 11 \Big) & = (s-1) \Big[ B(s+2)^2 + C(s+2) + D \Big] \end{aligned}

Once we have deflated the equation properly, we can proceed with determining the coefficients associated with the multiple root. This will provide clarity on the polynomial’s behavior around s = -2.

At this stage, we can divide both sides of the equation by the factor s-1. If division on the left-hand side is not possible, it indicates that an error has occurred in the calculations leading up to this point. By performing polynomial division, we can simplify the equation and obtain the following result:

\frac{1}{27} \left( -2s^2 -14s -11 \right) = B(s+2)^2 + C(s+2) + D

From this point onward, we can apply the method shown earlier for multiple roots. Evaluating the previous expression at the point s = -2 gives the value

D = \frac{-8+28-11}{27} = \frac{1}{3}

Continuing with differentiation we get

\begin{aligned} & \downarrow \mathrm{d}/\mathrm{d}s \\[1em] \frac{-4s -14}{27} & = 2B(s+2) + C \quad & \xrightarrow{s=-2} \quad C & = -\frac 2 9 \\[1em] & \downarrow \mathrm{d}/\mathrm{d}s \\[1em] \frac{-4}{27} & = 2B & \xrightarrow{s=-2} \quad B & = -\frac{2}{27} \end{aligned}

To conclude the inversion of the starting transform G(s), we can see that

\begin{aligned} \mathscr{L}^{-1}\{G(s)\}(t) & = \frac{2}{27} \mathscr{L}^{-1}\left\{\frac{1}{s-1}\right\}(t) - \frac{2}{27} \mathscr{L}^{-1}\left\{\frac{1}{s+2}\right\}(t) \\[1em] & - \frac{2}{9} \mathscr{L}^{-1}\left\{\frac{1}{(s+2)^2}\right\}(t) + \frac{1}{3} \mathscr{L}^{-1}\left\{\frac{1}{(s+2)^3}\right\}(t) \\[1em] & = \frac{2}{27} \mathrm{e}^t - \frac{2}{27}\mathrm{e}^{-2t} - \frac{2}{9} \mathrm{e}^{-2t}t + \frac{1}{3} \cdot \frac{1}{2}\mathrm{e}^{-2t} t^2 \end{aligned}
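The four coefficients can be sanity-checked numerically by comparing the original transform with its partial fraction expansion at a few sample points away from the poles; a minimal Python sketch:

```python
# Numerical check (a sketch, not part of the derivation): compare the original
# transform G(s) = (1+s)/((s-1)(s+2)^3) with its partial fraction expansion.
A, B, C, D = 2/27, -2/27, -2/9, 1/3

def G(s):
    # Original rational transform.
    return (1 + s) / ((s - 1) * (s + 2)**3)

def G_expanded(s):
    # Partial fraction expansion with the coefficients found above.
    return A/(s - 1) + B/(s + 2) + C/(s + 2)**2 + D/(s + 2)**3

for s in (0.0, 0.5, 3.0, -1.0):
    assert abs(G(s) - G_expanded(s)) < 1e-12
```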

Complex roots

Handling complex roots in the denominator Q(s) of the solution to an ordinary differential equation presents greater computational challenges compared to dealing with real roots. One important observation is that if the complex value

\alpha = a +\mathrm{i}b \in\Bbb{C}

is a root of the denominator, so that Q(\alpha) = 0, then its conjugate

\overline{\alpha} = a - \mathrm{i}b

is also a root, i.e. Q(\overline{\alpha}) = 0. In fact, the polynomial denominator Q(s), evaluated at the point s = \alpha, can be expressed as

Q(\alpha) = q_0 + \alpha q_1 + \alpha^2 q_2 + \dots + \alpha^n q_n = 0

where q_i are real coefficients. At this point we have to consider the following properties of the conjugate, which hold for every pair of complex numbers \alpha,\beta:

\begin{split} (i) & \qquad \overline{\big( \alpha + \beta \big)} = \overline{\alpha} + \overline{\beta} \\ (ii) & \qquad \overline{\big(\alpha \beta\big)} = \overline{\alpha}\overline{\beta} \\ (iii) & \qquad \alpha \overline{\alpha} = |\alpha|^2 \end{split}

Knowing that Q(\alpha) = 0, the conjugate \overline{Q(\alpha)} must also equal zero (because \overline{0} = 0), so by applying these properties we can see that

\begin{aligned} 0 & = Q(\alpha) = \overline{Q(\alpha)} = \overline{q_0 + \alpha q_1 + \alpha^2 q_2 + \dots + \alpha^n q_n} \\ & = \overline{q_0} + \overline{\alpha} \overline{q_1} + \overline{\alpha^2q_2} + \dots + \overline{\alpha^nq_n} \\ & = q_0 + \overline{\alpha} q_1 + \overline{\alpha}^{2} q_2 + \dots + \overline{\alpha}^{n} q_n = Q\big(\overline{\alpha}\big) \end{aligned}

This verifies that if \alpha is a root of Q(s), then its conjugate \overline{\alpha} is also a root of the same polynomial, and so we can rewrite the denominator in the form

Q(s) = \dots \underbrace{\big(s-\alpha\big) \big(s-\overline{\alpha}\big)}_\textrm{conj. roots} \dots
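The property is easy to verify numerically: for a polynomial with real coefficients, evaluation commutes with conjugation. A short Python sketch using Horner's scheme:

```python
# Sketch: a real-coefficient polynomial maps conjugate inputs to conjugate
# outputs, so its complex roots come in conjugate pairs.
def Q(s, coeffs):
    # Evaluate q_0 + q_1 s + ... + q_n s^n via Horner's scheme.
    acc = 0
    for q in reversed(coeffs):
        acc = acc * s + q
    return acc

coeffs = [10, 1, 1]          # Q(s) = s^2 + s + 10, real coefficients
alpha = complex(1.5, -2.25)  # an arbitrary complex point
assert abs(Q(alpha.conjugate(), coeffs) - Q(alpha, coeffs).conjugate()) < 1e-12
```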

Single conjugate complex roots

Let’s now assume that we have a rational polynomial presenting two conjugate complex roots, expressed in the form

\frac{P(s)}{Q(s)} = \frac{P(s)}{(s-\alpha)(s-\overline{\alpha})} = \frac A {s-\alpha} + \frac{B}{s-\overline{\alpha}}

By applying the partial fraction expansion as for the real roots previously described, we can show that the two coefficients A and B are conjugates, i.e. B = \overline{A}. In fact, by summing the two terms we can see that

\begin{aligned} \frac A {s-\alpha} + \frac{B}{s-\overline{\alpha}} & = \frac{A(s-\overline{\alpha}) + B(s-\alpha)}{(s-\alpha)(s-\overline{\alpha})} \\ &= \frac{(A+B)s - (A \overline{\alpha}+ B\alpha)}{(s-\alpha)(s-\overline{\alpha})} \end{aligned}

By expanding the definitions of the complex values (so A = a+\mathrm{i}b, B = c+\mathrm{i}d…) and considering that the polynomial P(s) at the numerator has only real coefficients, it’s possible to see that, in order for the equality to hold, we must have a = c and b = -d, and so the two complex values A and B must be conjugates.

Having proven that B = \overline{A}, the problem is now to find the complex value A that satisfies the partial fraction expansion; to do so we can use the same approach shown for real roots, multiplying both members of the equation by Q(s) and then evaluating the result at s = \alpha:

\frac{P(s)}{(s-\alpha)(s-\overline{\alpha})} = \frac A {s-\alpha} + \frac{B}{s-\overline{\alpha}}

\xrightarrow[Q(s)]{\times} \qquad P(s) = A(s-\overline{\alpha}) + \overline{A}(s-\alpha)

\implies \qquad P(\alpha) = A (\alpha-\overline{\alpha}) + \cancel{\overline{A}(\alpha-\alpha)}

Considering now that \alpha-\overline{\alpha} corresponds to a purely imaginary number equal to 2\mathrm{i} times the imaginary part of the root \alpha, we can invert that relation to obtain A explicitly:

A =\frac{P(\alpha)}{2\mathrm{i}\Im(\alpha)} = -\mathrm{i}\frac{P(\alpha)}{2\Im(\alpha)}

Let’s now consider a concrete case where we want to invert the following rational polynomial (shown alongside its expansion):

\frac{s}{ \big( s - (1+\mathrm{i}) \big)\big( s - (1-\mathrm{i}) \big) } = \frac A {s - (1+\mathrm{i})} + \frac{\overline{A}}{s-(1-\mathrm{i})}

We can now multiply both members of the equation by the denominator Q(s) and then evaluate the result at the point s = 1+\mathrm{i} (a root of the denominator):

\begin{aligned} s &= A \big(s-(1-\mathrm{i})\big) +\overline{A}\big(s-(1+\mathrm{i})\big) \\[1em] & \downarrow\quad s = 1+\mathrm{i}\\[1em] 1+\mathrm{i} & = A \big(1+\mathrm{i} - 1 + \mathrm{i}\big) \\[2em] \implies \qquad A & = \frac{1+\mathrm{i}}{2\mathrm{i}} = \frac{1}{2}\big(1 - \mathrm{i} \big) \end{aligned}

With this value retrieved we can now state that

\frac{s}{ \big( s - (1+\mathrm{i}) \big)\big( s - (1-\mathrm{i}) \big) } = \frac{1}{2}\left(\frac {1-\mathrm{i}} {s - (1+\mathrm{i})} + \frac {1+\mathrm{i}}{s-(1-\mathrm{i})} \right)
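The expansion just obtained can be sanity-checked numerically; the Python sketch below recomputes A via the formula A = P(\alpha)/(2\mathrm{i}\,\Im(\alpha)) and confirms the expansion matches the original transform:

```python
# Sketch: recover A = P(alpha)/(2i Im(alpha)) for
# P(s)/Q(s) = s/((s-(1+i))(s-(1-i))) and confirm A = (1-i)/2.
alpha = 1 + 1j
P = lambda s: s
A = P(alpha) / (2j * alpha.imag)
assert abs(A - (0.5 - 0.5j)) < 1e-12

# The expansion A/(s-alpha) + conj(A)/(s-conj(alpha)) must match the original.
for s in (0.0, 2.0, -1.5):
    lhs = s / ((s - alpha) * (s - alpha.conjugate()))
    rhs = A/(s - alpha) + A.conjugate()/(s - alpha.conjugate())
    assert abs(lhs - rhs) < 1e-12
```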

Unfortunately, looking at the Laplace table we can see there’s no direct match between the known transforms (whose numerators are purely real) and the complex coefficients retrieved. However, an interesting fact is that by merging the two components we obtain a rational polynomial with real coefficients; in general we can in fact state that

\begin{aligned} \frac A {s-\alpha} + \frac{\overline{A}}{s-\overline{\alpha}} & = \frac{A(s-\overline{\alpha}) + \overline{A}(s-\alpha)}{(s-\alpha)(s-\overline{\alpha})} \\[1em] & = \frac{ s\big(A+\overline{A}\big) - A\overline{\alpha} - \overline{A}\alpha }{s^2 - s \big(\alpha+\overline{\alpha}\big) + |\alpha|^2} \end{aligned}

This rational polynomial can be matched with the following transform:

\begin{aligned} \mathscr{L}\{C\mathrm{e}^{at}\cos(\omega t) + D \mathrm{e}^{at}\sin(\omega t)\} & =C \frac{s-a}{(s-a)^2+\omega^2} + D \frac{\omega}{(s-a)^2 + \omega^2} \\ & = \frac{Cs + \big(D\omega - Ca\big)}{(s-a)^2 + \omega^2 } \end{aligned}

Given now the values of \alpha and A, the problem is to find the expressions for the coefficients a,\omega,C and D; to do so we can perform two steps:

  1. match the denominators in order to find a and \omega; to compare them it’s useful to expand (s-a)^2 + \omega^2 = s^2 - 2sa + a^2 + \omega^2. At this point, by relating the two functions, we can see that \cancel {s^2} - s \big(\alpha+\overline{\alpha}\big) + |\alpha|^2 = \cancel{s^2} - 2sa + a^2 + \omega^2

    \begin{aligned} & \implies \quad \left\{\begin{aligned} -2sa &= - s\big(\alpha + \overline{\alpha}\big) \\ a^2+\omega^2 &= |\alpha|^2 \end{aligned}\right. \quad \implies \quad a = \frac{\alpha + \overline{\alpha}}{2} = \Re(\alpha) \\[1em] & \implies \quad \omega = \sqrt{|\alpha|^2-a^2} = \sqrt{|\alpha|^2-\Re(\alpha)^2} = \sqrt{\Im(\alpha)^2} = \pm \Im(\alpha) \end{aligned}

  2. at this point, knowing a and \omega, it’s possible to match the numerators in order to get the coefficients C and D: s\big(A+\overline{A}\big) - A\overline{\alpha} - \overline{A}\alpha = Cs + \big(D\omega - Ca\big)

    \begin{aligned} \implies & \quad \begin{cases} A + \overline{A} = C \\ -A\overline{\alpha} - \overline{A}\alpha = D\omega - Ca \end{cases} \\[2em] \implies & \quad C = A+\overline{A} = 2 \Re(A) \\[1em] \implies & \quad D = \frac{Ca - A \overline{\alpha} - \overline{A} \alpha}{\omega} \end{aligned}
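Applying these matching formulas to the earlier worked case (\alpha = 1+\mathrm{i}, A = \frac{1}{2}(1-\mathrm{i})) gives a = 1, \omega = 1, C = 1, D = 1; a short Python sketch confirms the real-coefficient form reproduces the original transform:

```python
# Sketch: compute a, omega, C, D from alpha and A, then check the
# real-coefficient form against s/((s-(1+i))(s-(1-i))) = s/(s^2-2s+2).
alpha = 1 + 1j
A = 0.5 - 0.5j

a = alpha.real
omega = abs(alpha.imag)
C = 2 * A.real                                               # C = A + conj(A)
D = (C*a - A*alpha.conjugate() - A.conjugate()*alpha).real / omega

for s in (0.0, 2.0, -1.0):
    lhs = s / ((s - alpha) * (s - alpha.conjugate()))
    rhs = C*(s - a)/((s - a)**2 + omega**2) + D*omega/((s - a)**2 + omega**2)
    assert abs(lhs - rhs) < 1e-12
```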

Example 5 (inversion of a transform with two conjugated complex roots) Let’s consider the following rational polynomial that has to be inverted:

\frac{s+1}{s^2+s+10}

At this point we can verify that the roots of the denominator are complex conjugates by simply applying the quadratic formula to retrieve them

s_{1,2} = \frac{-1\pm\sqrt{-39}}{2}.

At this point the problem of the inversion is finding the parameters a,\omega,C,D such that

\frac{s+1}{s^2+s+10} = C \frac{s-a}{(s-a)^2+\omega^2} + D \frac{\omega}{(s-a)^2 + \omega^2}

By comparing the denominators we can observe that -2sa = s, hence a = -\frac{1}{2}, and so

a^2+ \omega^2 = 10 \qquad \implies \quad \omega = \sqrt{10-a^2} = \frac{\sqrt{39}}{2}

At this point it’s possible to compare the numerators (considering that a,\omega are now known), which leads to

s + 1 = C\left( s - a\right) + D\omega \quad \implies \quad C = 1,\quad D = \frac{1+Ca}{\omega} = \frac{1} {\sqrt{39}}

Having retrieved all the coefficients, we can explicitly state the inversion of the original rational polynomial:

\mathscr{L}^{-1}\left\{\frac{s+1}{s^2+s+10} \right\}(t) = \mathrm{e}^{-\frac 1 2t} \left[ \cos\left( \frac{\sqrt{39}}{2} t \right) + \frac{1}{\sqrt{39}} \sin\left( \frac{\sqrt{39}}{2} t \right) \right]
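Example 5 can be double-checked numerically: with a = -1/2, \omega = \sqrt{39}/2, C = 1, D = 1/\sqrt{39}, the two sides of the matching relation must coincide. A quick Python sketch:

```python
import math

# Sketch verifying Example 5: (s+1)/(s^2+s+10) rewritten with the
# cosine/sine building blocks.
a = -0.5
omega = math.sqrt(39) / 2
C, D = 1.0, 1 / math.sqrt(39)

for s in (0.0, 1.0, -2.0, 5.0):
    lhs = (s + 1) / (s**2 + s + 10)
    rhs = C*(s - a)/((s - a)**2 + omega**2) + D*omega/((s - a)**2 + omega**2)
    assert abs(lhs - rhs) < 1e-12
```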

Multiple complex roots

Dealing with multiple complex roots requires even more computational steps; let’s consider the generic case of a complex root with multiplicity 2, already expanded:

\frac {P(s)}{(s-\alpha)^2(s-\overline{\alpha})^2} = \frac{A}{s-\alpha} + \frac{B}{(s-\alpha)^2} + \frac{\overline{A}}{s-\overline{\alpha}} + \frac{\overline{B}}{(s-\overline{\alpha})^2}

As in all the previous cases, in order to compute the first parameter B it’s possible to multiply both sides of the equation by the denominator Q(s) = (s-\alpha)^2(s-\overline{\alpha})^2 and then evaluate the relation at the root s = \alpha:

\begin{aligned} P(s) & = A (s-\alpha)(s-\overline{\alpha})^2 + B (s-\overline{\alpha})^2 \\ & + \overline{A}(s-\overline{\alpha})(s-\alpha)^2 + \overline{B}(s-\alpha)^2 \end{aligned}

\implies \quad P(\alpha) = B \underbrace{\big(\alpha-\overline{\alpha}\big)}_{=2\mathrm{i}\Im(\alpha)}^2 \qquad \implies \quad B = -\frac{P(\alpha)}{4 \Im(\alpha)^2}

Now, in order to calculate the second parameter A of the relation, we have to deflate the elements associated with B and \overline{B} and then, as for the multiple real roots, compute the derivative with respect to the variable s:

\begin{aligned} & \,\downarrow \textrm{deflation} \\[1em] P(s) - B(s-\overline{\alpha})^2 - \overline{B}(s-\alpha)^2 & = A(s-\alpha)(s-\overline{\alpha})^2 + \overline{A}(s-\overline{\alpha})(s-\alpha)^2 \\[1em] & \, \downarrow \ \mathrm{d}/\mathrm{d}s \\[1em] P'(s) - 2B\big(s-\overline{\alpha}\big) - 2\overline{B}\big(s-\alpha\big) & = A \big(s-\overline{\alpha}\big) \Big[ s-\overline{\alpha} + 2 \big(s-\alpha\big) \Big] \\[1em] & \ + \overline{A} \big(s-\alpha\big)\Big[ s-\alpha + 2\big(s-\overline{\alpha}\big) \Big] \end{aligned}

By now evaluating the expression at the point s = \alpha it’s possible to obtain an explicit definition of the complex value A:

P'(\alpha) - 2B \big(\alpha-\overline{\alpha}\big) = A \big(\alpha-\overline{\alpha}\big)^2 \quad \implies \quad A = \frac{2B\big(\alpha-\overline{\alpha}\big) - P'(\alpha)}{4\Im(\alpha)^2}

Example 6 (inversion of a transform with multiple complex roots) Let’s consider the following rational transform that has to be inverted:

\frac{s^2}{\big(s^2 + s + 1\big)^2}

The first thing to notice is that the multiple roots are complex; computing them with the quadratic formula gives

s_{1,2} = \frac{-1 \pm \sqrt{1-4}}{2} = -\frac{1}{2} \pm\mathrm{i}\frac{\sqrt{3}}{2}

For the sake of simplicity, from now on these roots will be denoted as z = -\frac{1}{2} + \mathrm{i}\frac{\sqrt{3}}{2} and \overline{z} = -\frac{1}{2}- \mathrm{i}\frac{\sqrt{3}}{2}. At this point it’s possible to express the parametric partial fraction expansion of the transform as

\frac{s^2}{\big(s^2 + s + 1\big)^2} = \frac{A}{s-z} + \frac{\overline{A}}{s-\overline{z}} + \frac{B}{(s-z)^2} + \frac{\overline{B}}{(s-\overline{z})^2}

In order to retrieve the value of the coefficient B we multiply both sides by the factor

Q(s) = (s^2+s+1)^2 = (s-z)^2(s-\overline{z})^2

and evaluate the result at the point s=z, obtaining the expression

z^2 = B \underbrace{\big(z-\overline{z}\big)}_{\mathrm{i}\sqrt 3}^2 \quad \implies \quad B = -\frac{z^2}{3} = - \frac{\left(-\frac{1}{2} + \mathrm{i}\frac{\sqrt{3}}{2}\right)^2}{3} = \frac{1}{6} + \mathrm{i} \frac{\sqrt{3}}{6}

By deflating the terms containing the parameter B just calculated we can get the relation

\begin{aligned} s^2 - B(s-\overline{z})^2 - \overline{B}(s-z)^2 &= A(s-z)(s-\overline{z})^2 + \overline{A}(s-\overline{z})(s-z)^2 \\[1em] & \downarrow\quad \mathrm{d}/\mathrm{d}s \\[1em] 2s -2B(s-\overline{z}) - 2\overline{B}(s-z) &= A(s-\overline{z})^2 + \overline{A}(s-z)^2\\ & +2(A + \overline{A})(s-z)(s-\overline{z}) \end{aligned}

Having computed the derivative of the relation with respect to the variable s, it’s possible to compute the parameter A just by evaluating the result at the point s = z:

2z - 2B\big(z-\overline{z}\big) = A \big(z-\overline{z}\big)^2 \quad \implies \quad A = \frac{4\mathrm{i}B \Im(z)-2z}{4\Im(z)^2} = - \frac{2\mathrm{i}}{3\sqrt 3}
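These values can be verified numerically; the following Python sketch rebuilds B = -z^2/3 and A from the relation above and checks the full expansion at a few sample points:

```python
import math

# Double complex root z of (s^2 + s + 1)^2.
z = complex(-0.5, math.sqrt(3) / 2)

B = -z**2 / 3                                            # = 1/6 + i*sqrt(3)/6
A = (2*B*(z - z.conjugate()) - 2*z) / (4 * z.imag**2)    # = -2i/(3*sqrt(3))

assert abs(B - complex(1/6, math.sqrt(3)/6)) < 1e-12
assert abs(A - complex(0, -2/(3*math.sqrt(3)))) < 1e-12

# The partial fraction expansion must reproduce s^2/(s^2+s+1)^2.
for s in (0.0, 1.0, -2.0):
    lhs = s**2 / (s**2 + s + 1)**2
    rhs = (A/(s - z) + A.conjugate()/(s - z.conjugate())
           + B/(s - z)**2 + B.conjugate()/(s - z.conjugate())**2)
    assert abs(lhs - rhs) < 1e-12
```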

Parametric solution for multiple complex roots

To solve the partial fraction expansion with multiple complex roots it’s possible to use a parametric formula where the numerators are defined as polynomials A,B in a generic variable t; in particular, for a double complex root the expansion can be expressed as

\frac{P(s)}{\big(s^2-2as +t\big)^2 Q(s)} = \frac{A(t)s + B(t)}{\big(s^2-2as+t\big)^2} - \frac{A'(t)s + B'(t)}{s^2-2as+t} - \frac{q_t(s,t)}{Q(s)}

where the term q_t(s,t) represents the derivative \partial q/\partial t of the remaining term of the expansion. If the complex root has multiplicity 3, the parametric decomposition becomes

\begin{aligned} 2\frac{P(s)}{\big(s^2-2as +t\big)^3 Q(s)} & = 2\frac{A(t)s + B(t)}{\big(s^2-2as+t\big)^3} - 2\frac{A'(t)s + B'(t)}{\big(s^2-2as+t\big)^2} \\ & + \frac{A''(t)s + B''(t)}{s^2-2as+t} + \frac{q_{tt}(s,t)}{Q(s)} \end{aligned} \tag{1}

where q_{tt}=\partial^2q/\partial t^2.

This process can be better seen by considering a numerical example like the one that follows:

\begin{aligned} \frac{s^2+1}{(s-1)(s^2-2s+2)^3} & = \frac{A}{s-1} + \frac{B_1s + B_2}{(s^2-2s+2)^3} \\[1em] &+\frac {C_1s + C_2}{(s^2-2s+2)^2} + \frac{D_1s+D_2}{s^2-2s+2} \end{aligned}

The first coefficient A of the partial fraction expansion (associated with the single real root) can be determined easily with the methods previously described, and so

A= \left. \frac{s^2+1}{(s^2-2s+2)^3} \right|_{s=1} = 2

At this point it’s important to understand how to compute the parametric functions A,B,q; in particular this can be done by writing the following relation:

\frac{s^2+1}{(s-1)(s^2-2s+t)} = \frac{A(t)s+B(t)}{s^2-2s+t} + \frac{q(t)}{s-1}

We can note that the left-hand side is equal to the original rational polynomial with the only difference that the term s^2-2s+2 now appears once and its last known coefficient (in this case the +2) has been replaced by the variable t; the right-hand side is the corresponding partial fraction expansion, with the coefficients A,B,q now functions of the variable t.

By multiplying both members of the equation by the denominator of the left-hand side we derive the following equation:

s^2+1= \Big(A(t)s + B(t)\Big) (s-1) + q(t) \big(s^2-2s+t\big)

This expression now allows us to compute the parametric form of the functions A,B,q; in particular, on the roots of s^2-2s+t, where s^2 = 2s-t, we can rewrite the previous expression as

2s-t+1 = A(t)(2s-t) - A(t) s + B(t)s -B(t) + \cancel{q(t) \big(2s-t - 2s +t\big)}

\begin{aligned} & \implies \quad \left\{\begin{aligned} 2 &= A(t) + B(t) \\ -t+1 &= -tA(t) - B(t) \end{aligned}\right. \\ &\implies \quad A(t) = 1 - \frac{2}{t-1} \ , \quad B(t) = 1 + \frac{2}{t-1} \end{aligned}

To complete the partial fraction expansion we need to compute the derivatives (up to the second order) and evaluate them at t=2 (noting that A(2) = -1 and B(2) = 3):

\begin{aligned} A' & = \left.\frac{2}{(t-1)^2}\right|_{t=2} = 2 & \quad B' & = \left.-\frac{2}{(t-1)^2}\right|_{t=2} = -2 \\ A'' & = \left. -\frac 4{(t-1)^3} \right|_{t=2} = -4 &\quad B'' & = \left. \frac 4{(t-1)^3} \right|_{t=2} = 4 \end{aligned}

With all the coefficients defined it is possible to express the partial fraction expansion numerically by using Equation 1:

\frac 2 {s-1} + \frac{-s + 3}{(s^2-2s+2)^3} -2 \frac{s-1}{(s^2-2s+2)^2} + \frac 12 \frac{-4s+4}{s^2-2s+2}
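The resulting expansion can be verified numerically by comparing it with the original transform at a few points away from the poles; a minimal Python sketch:

```python
# Sketch verifying the final expansion of (s^2+1)/((s-1)(s^2-2s+2)^3).
def lhs(s):
    return (s**2 + 1) / ((s - 1) * (s**2 - 2*s + 2)**3)

def rhs(s):
    u = s**2 - 2*s + 2
    return 2/(s - 1) + (-s + 3)/u**3 - 2*(s - 1)/u**2 + 0.5*(-4*s + 4)/u

for s in (0.0, 3.0, -1.0, 0.5):
    assert abs(lhs(s) - rhs(s)) < 1e-10
```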

Example 7 (computation of a partial fraction expansion) Considering the rational polynomial in s

\begin{aligned} \xi: \quad \frac{P(s)}{Q(s)} & = \frac{s(s^2+7)}{(s-1)(s+1)^3} \\ & = \frac A {s-1} + \frac B {s+1} + \frac C{(s+1)^2} + \frac D{(s+1)^3} \end{aligned}

finding the partial fraction expansion means determining the coefficients A,B,C,D \in\Bbb{R} that satisfy the equation. In order to solve this problem we can define \zeta as the multiplication of \xi by Q(s), and so

\zeta: \quad s(s^2+7) = A (s+1)^3 + B (s-1)(s+1)^2 + C (s-1)(s+1) + D (s-1)

In order to find some coefficients we can note that this polynomial equation must hold when \zeta is evaluated at any point; in particular for s = 1 (a root of Q) we have 8 = 2^3 A, which determines A=1, while evaluating at s = -1 we retrieve -8 = -2D and so D=4. We can now deflate \zeta by moving the terms associated with A and D to the left-hand side, determining the rewritten expression:

\begin{aligned} \zeta_r: \quad & s^3 + 7s - A (s^3 + 3s^2 + 3s + 1) - D(s-1) \\ & \qquad\qquad = B (s-1)(s+1)^2 + C (s-1)(s+1) \\[1em] & s^3 + 7s -s^3 -3s^2 - 3s - 1 -4s + 4 \\ & \qquad\qquad = B (s-1)(s^2+2s +1) + C (s^2-1) \\[2em] & -3s^2 + 3 = B (s^3 + s^2 -s - 1) + C(s^2-1) \end{aligned}

In order to determine the values of the other coefficients B,C we can start by differentiating \zeta_r with respect to the variable s:

\zeta' = \frac{\mathrm{d}\zeta_r}{\mathrm{d}s}: \qquad -6s = B(3s^2 + 2s -1) + 2Cs

Evaluating this expression at s = -1 (the root associated with both B and C) we have 6 = -2C, determining C=-3; after deflation we then have

\begin{aligned} \zeta'_r&: \qquad 0 = B (3s^2+2s-1) \\[1em] \zeta'' = \frac{\mathrm{d}\zeta'}{\mathrm{d}s}&: \qquad 0 = B(6s + 2) \end{aligned}

With the last evaluation at s=-1 we obtain 0 = -4B, requiring B=0; with that said, the overall partial fraction expansion is

\frac{s(s^2+7)}{(s-1)(s+1)^3} = \frac 1 {s-1} - \frac 3{(s+1)^2} + \frac 4{(s+1)^3}
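Example 7 can be confirmed numerically with a short Python sketch comparing both sides of the expansion:

```python
# Sketch verifying Example 7: s(s^2+7)/((s-1)(s+1)^3) with A=1, B=0, C=-3, D=4.
def lhs(s):
    return s*(s**2 + 7) / ((s - 1)*(s + 1)**3)

def rhs(s):
    # B = 0, so the 1/(s+1) term drops out.
    return 1/(s - 1) - 3/(s + 1)**2 + 4/(s + 1)**3

for s in (0.0, 2.0, 0.5, -3.0):
    assert abs(lhs(s) - rhs(s)) < 1e-12
```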

Boundary value problem

Some differential equation problems require conditions that are not defined at the initial time t=0 of the time axis, but at a more general point t=t^\star: to solve such problems a parametric approach is recommended. As an example let’s consider the following problem:

\begin{cases} x''(t) + y'(t) = \sin(t) \\ y'(t) + x'(t) = 1 + \sin(t) \\ x(0) = 0 , \quad y(0) = -1,\quad x(1) = 1 \end{cases}

As we can see, the last condition constrains the solution x(t),y(t) at the time t=1; in order to solve the problem in the Laplace domain we apply the transform to the system of differential equations, and so

\begin{cases} s^2 \hat x(s) - x'(0) - s x(0) + s\hat y(s) - y(0) = \frac 1 {s^2+1} \\ s \hat y(s) - y(0) + s\hat x(s) - x(0) = \frac 1 s + \frac{1}{s^2+1} \end{cases}

If we now substitute the boundary conditions into the problem we can see that we have no information for x'(t) at the initial time: the idea is to solve the problem parametrically, assigning a variable (in this case A) to the initial condition x'(0). At the end we have to find the constant value A such that x(1) = 1. So by substituting we get

\begin{cases} s^2 \hat x(s) - A + s\hat y(s) +1 = \frac 1 {s^2+1} \\ s \hat y(s) +1 + s\hat x(s) = \frac 1 s + \frac{1}{s^2+1} \end{cases}

To simplify the calculation of the algebraic solution for the functions \hat x,\hat y we rewrite the system in matrix form; by inversion (using Cramer’s rule) we can determine the solutions:

\begin{bmatrix} s^2 & s \\ s & s \end{bmatrix} \begin{pmatrix} \hat x(s) \\ \hat y(s) \end{pmatrix} = \begin{pmatrix} \frac{1}{s^2+1} + A - 1 \\ \frac 1 s + \frac{1}{s^2+1} - 1 \end{pmatrix}

\implies \qquad \hat x(s) = \dfrac{\det\begin{bmatrix} \frac{1}{s^2+1} + A - 1 & s \\ \frac{1}{s} + \frac{1}{s^2+1} - 1 & s\end{bmatrix} } {\mathrm{det}\begin{bmatrix} s^2 & s \\ s & s \end{bmatrix}} = \frac{sA - 1}{s^2(s-1)}

Since the boundary condition is set on x(t), our focus is to invert only the related transform \hat x(s) (determining y(t) is not relevant for this problem); by performing the parametric partial fraction expansion we have

\frac{sA - 1}{s^2(s-1)} = \frac{c_1}{s-1} + \frac{c_2}{s} + \frac{c_3}{s^2}

where, by solving the linear system, the coefficients are equal to c_1 = A-1, c_2 = 1-A and c_3 = 1. At this point we have everything needed to compute the inverse of \hat x(s):

x(t) = \mathscr{L}^{-1}\{\hat x(s)\}(t) = (A-1)\mathrm{e}^t + (1-A) + t

The last thing to do is evaluate the function x(t) at the time t=1 and set the result equal to 1, determining the coefficient A related to the initial first derivative of x:

x(1) = (A-1) e + (1-A) + 1 = 1 \qquad \implies \quad A = 1
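As a quick sanity check (Python sketch): substituting A = 1 into x(t) = (A-1)\mathrm{e}^t + (1-A) + t collapses the solution to x(t) = t, which indeed satisfies both x(0) = 0 and x(1) = 1:

```python
import math

A = 1  # value found from the boundary condition x(1) = 1

def x(t):
    # Inverse transform of (sA - 1)/(s^2 (s-1)) with c1 = A-1, c2 = 1-A, c3 = 1.
    return (A - 1)*math.exp(t) + (1 - A) + t

assert abs(x(0.0)) < 1e-12        # x(0) = 0
assert abs(x(1.0) - 1.0) < 1e-12  # x(1) = 1
```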

Example 8 (boundary value problem) Considering the parametric differential equation problem depending on the parameter A\in\Bbb{R} \begin{cases} x''(t) - x'(t) = t \\ x(0) = 1 \\ x'(0) = A \end{cases}

the solution can be found in the Laplace domain; starting off with the transformation of the problem using the \mathscr L operator we have that

\begin{aligned} \mathscr{L}\{x''(t)\}(s) &= s^2 \hat x(s) - x'(0) - s x(0) \\ \mathscr{L}\{x'(t)\}(s) &= s \hat x(s) - x(0) \\ \mathscr{L}\{t\}(s) &= \frac{1}{s^2} \end{aligned}

\mathscr{L}\{x''(t) - x'(t) = t\}(s) \quad \mapsto \quad s^2 \hat x(s) - A - s -s\hat x(s) +1 = \frac{1}{s^2}

Solving this expression for \hat x(s) and performing the partial fraction expansion on the result (whose computation is here skipped) we obtain

\begin{aligned} \hat x(s) & = \frac{\frac 1 {s^2} + s + A - 1}{s^2-s} = \frac{s^3 + As^2 - s^2 + 1}{s^3(s-1)} \\ & = \frac{A+1}{s-1} - \frac A s - \frac 1 {s^2} - \frac 1 {s^3} \end{aligned}

If we now want to compute the parameter A such that x(1) = - \frac 1 2, we need to invert \hat x(s) in order to have the solution in the time domain. Considering that

\begin{aligned} \mathscr{L}^{-1}\left\{\frac{1}{s-1}\right\}(t) & = \mathrm{e}^t \\ \mathscr{L}^{-1}\left\{\frac 1 s\right\}(t) &= 1 \\ \mathscr{L}^{-1}\left\{\frac 1 {s^2}\right\}(t) & = t \\ \mathscr{L}^{-1}\left\{ \frac 1 {s^3}\right\}(t) & = \frac {t^2}{2!} \end{aligned}

and so

\begin{aligned} x(t) & = \mathscr{L}^{-1}\left\{\hat x(s)\right\}(t) = (A+1)\mathrm{e}^t - A - t -\frac {t^2}2 \\ x(1) & = (A+1) e - A - 1 - \frac{1}{2} = -\frac{1}{2} \end{aligned}

Solving the equation we retrieve the value A = -1.
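A final sanity check (Python sketch): with A = -1 the exponential term vanishes and x(t) = 1 - t - t^2/2; central finite differences confirm that x'' - x' = t, together with the boundary values:

```python
import math

A = -1  # value satisfying x(1) = -1/2

def x(t):
    return (A + 1)*math.exp(t) - A - t - t**2/2

assert abs(x(0.0) - 1.0) < 1e-12   # x(0) = 1
assert abs(x(1.0) + 0.5) < 1e-12   # x(1) = -1/2

# Central differences: x''(t) - x'(t) should equal t.
h = 1e-5
for t in (0.25, 0.5, 1.5):
    d1 = (x(t + h) - x(t - h)) / (2*h)
    d2 = (x(t + h) - 2*x(t) + x(t - h)) / h**2
    assert abs((d2 - d1) - t) < 1e-4
```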

Table for the Laplace transform

| # | f(t) | \mathscr{L}\{f\}(s) |
|---|------|---------------------|
| 1 | a\,f(t)+b\,g(t) | a\,\hat{f}(s)+b\,\hat{g}(s) |
| 2 | f(at) | \dfrac{1}{a}\,\hat{f}\left(\dfrac{s}{a}\right)\qquad [a > 0] |
| 3 | 1 | \dfrac{1}{s} |
| 4 | t | \dfrac{1}{s^2} |
| 5 | t^k | \dfrac{k!}{s^{k+1}} |
| 6 | t^{n}f(t) | (-1)^{n}\dfrac{\mathrm{d}^{n}}{\mathrm{d}s^{n}}\hat{f}(s) |
| 7 | f'(t) | s\,\hat{f}(s)-f(0^{+}) |
| 8 | f''(t) | s^2\,\hat{f}(s)-f'(0^{+})-s\,f(0^{+}) |
| 9 | \dfrac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}f(t) | s^{n}\hat{f}(s)-\displaystyle\sum_{j=0}^{n-1}s^{n-j-1}f^{(j)}(0^{+}) |
| 10 | e^{\alpha t}-e^{\beta t} | \dfrac{\alpha-\beta}{(s-\alpha)(s-\beta)} |
| 11 | e^{at}f(t) | \hat{f}(s-a) |
| 12 | f(t-a) | e^{-as}\hat{f}(s) |
| 13 | (f\star g)(t) | \hat{f}(s)\,\hat{g}(s) |
| 14 | e^{at}\cos\omega t | \dfrac{s-a}{(s-a)^{2}+\omega^{2}} |
| 15 | e^{at}\sin\omega t | \dfrac{\omega}{(s-a)^{2}+\omega^{2}} |
| 16 | e^{at}\cosh\omega t | \dfrac{s-a}{(s-a)^{2}-\omega^{2}} |
| 17 | a^{bt} | \dfrac{1}{s-b\log a} |
| 18 | e^{at}\sinh\omega t | \dfrac{\omega}{(s-a)^{2}-\omega^{2}} |
| 19 | \displaystyle\int_{0}^{t}f(z)\,\mathrm{d}z | \dfrac{1}{s}\hat{f}(s) |
| 20 | e^{at}t^{n} | \dfrac{n!}{(s-a)^{n+1}} |


Footnotes

  1. \ \mathrm{e}^{A+\mathrm{i}B} = \mathrm{e}^A \mathrm{e}^{\mathrm{i}B} = \mathrm{e}^A(\cos B +\mathrm{i}\sin B).↩︎