Cram sheet for the written exam
Notes for the Numerical Analysis exam
Iterative methods for linear systems
General scheme \bm{P}\bm{x}_{k+1} = \bm{b}+\bm{Q}\bm{x}_k, from the splitting \bm{A}=\bm{P}-\bm{Q}.
Note: never invert \bm{P} explicitly; solve a linear system with \bm{P} at each step.
| | Jacobi | Gauss-Seidel | SOR |
|---|---|---|---|
| \boldsymbol{P} | \boldsymbol{D} | \boldsymbol{D}-\boldsymbol{L} | \dfrac{\boldsymbol{D}}{\omega}-\boldsymbol{L} |
| \boldsymbol{Q} | \boldsymbol{L}+\boldsymbol{U} | \boldsymbol{U} | \dfrac{1-\omega}{\omega}\boldsymbol{D}+\boldsymbol{U} |
| \boldsymbol{I}-\boldsymbol{P}^{-1}\boldsymbol{A} | \boldsymbol{D}^{-1}(\boldsymbol{L}+\boldsymbol{U}) | (\boldsymbol{D}-\boldsymbol{L})^{-1}\boldsymbol{U} | \left(\dfrac{\boldsymbol{D}}{\omega}-\boldsymbol{L}\right)^{-1}\left(\dfrac{1-\omega}{\omega}\boldsymbol{D}+\boldsymbol{U}\right) |
To compute the spectral radius without inverting the matrix \bm{P}:
0=\underbrace{|\bm{P}^{-1}\bm{Q}-\lambda\bm{I}|}_{p(\lambda)} =\underbrace{|\bm{P}^{-1}|}_{\neq 0}\underbrace{|\bm{Q}-\lambda\bm{P}|}_{q(\lambda)}
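As a minimal sketch of the recipe above, the spectral radius of the Jacobi iteration matrix can be computed with NumPy by solving with \bm{P} rather than inverting it (the 3x3 test matrix is a made-up example):

```python
import numpy as np

# Illustrative test matrix (an assumption, not from the notes).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)          # strictly lower part, sign convention A = D - L - U
U = -np.triu(A, 1)           # strictly upper part

P, Q = D, L + U              # Jacobi splitting

# Never form P^{-1} explicitly: solve P M = Q instead.
M = np.linalg.solve(P, Q)    # iteration matrix P^{-1} Q
rho = max(abs(np.linalg.eigvals(M)))
print(rho)                   # < 1, so Jacobi converges for this A
```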
Zeros of functions
Newton, secant, and Halley methods
\begin{aligned} x_{k+1} &= x_k - \frac{f(x_k)}{f'(x_k)},\\[1em] x_{k+1} &= x_k - \frac{(x_k-x_{k-1})f(x_k)}{f(x_k)-f(x_{k-1})},\\[1em] x_{k+1} &= x_k - \frac{f(x_k)f'(x_k)}{f'(x_k)^2-\frac{1}{2}f(x_k)f''(x_k)}, \end{aligned}
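The three updates above can be sketched in a few lines of Python; the test function f(x) = x² − 2 and the starting point are illustrative assumptions:

```python
f   = lambda x: x**2 - 2          # illustrative test function (assumption)
df  = lambda x: 2*x               # f'
d2f = lambda x: 2.0               # f''

def newton_step(x):
    return x - f(x) / df(x)

def secant_step(x, x_prev):
    return x - (x - x_prev) * f(x) / (f(x) - f(x_prev))

def halley_step(x):
    return x - f(x) * df(x) / (df(x)**2 - 0.5 * f(x) * d2f(x))

x = 1.5
for _ in range(5):                # Newton: quadratic convergence
    x = newton_step(x)
print(x)                          # ~ sqrt(2)
```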
Numerical check of the order when the root \alpha is known
\boxed{ p \approx\dfrac{\log\dfrac{\left\lvert x_{k+1}-\alpha\right\rvert}{\left\lvert x_{k+2}-\alpha\right\rvert} }{\log\dfrac{\left\lvert x_{k}-\alpha\right\rvert}{\left\lvert x_{k+1}-\alpha\right\rvert}}, }
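The boxed estimate can be checked on a short run of Newton iterates (the function x² − 2 and starting point are assumptions; for Newton the estimate should come out close to 2):

```python
import math

alpha = math.sqrt(2.0)                 # known root
f, df = (lambda x: x*x - 2), (lambda x: 2*x)

xs = [1.5]
for _ in range(4):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))

e = [abs(x - alpha) for x in xs]       # errors |x_k - alpha|
# p ≈ log(e_{k+1}/e_{k+2}) / log(e_k/e_{k+1})
p = math.log(e[2] / e[3]) / math.log(e[1] / e[2])
print(p)                               # close to 2 for Newton
```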
Polynomial interpolation
Interpolation error
f(x) - p(x) = \frac{f^{(n+1)}(\zeta)}{(n+1)!}(x-x_0)(x-x_1)\cdots(x-x_n)
with \zeta\in I(x,x_0,x_1,\ldots,x_n)
Lagrange interpolation
L_{i}(x) = \frac{\ell_{i}(x)}{\ell_{i}(x_i)}, \qquad \ell_{i}(x) = \prod_{{j=0}\atop{j\neq i}}^n (x-x_j)
p(x) = y_0 L_0(x) + y_1 L_1(x)+\cdots+y_n L_n(x)
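A direct translation of the formulas above into Python (the nodes and values are made-up sample data, taken from f(x) = x² + x + 1):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate p(x) = sum_i y_i L_i(x) with the Lagrange basis."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0                          # L_i(x) = prod_{j != i} (x-x_j)/(x_i-x_j)
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += yi * Li
    return total

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]                      # values of f(x) = x^2 + x + 1
print(lagrange_eval(xs, ys, 1.5))         # 4.75, since p reproduces f exactly here
```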
Newton interpolation
\begin{aligned} p(x) & = f[x_0]\omega_0(x) \\ & + f[x_0,x_1] \omega_1(x) \\ & + f[x_0,x_1,x_2] \omega_2(x) \\ & \vdots \\ & + f[x_0,x_1,x_2,\ldots,x_n] \omega_n(x) \end{aligned}
Polynomial basis for Newton's method
\begin{aligned} \omega_0(x) & = 1 \\ \omega_1(x) & = x-x_0 \\ \omega_2(x) & = (x-x_0)(x-x_1) = \omega_1(x)(x-x_1) \\ & \vdots \\ \omega_n(x) & = (x-x_0)(x-x_1)\cdots (x-x_{n-1}) = \omega_{n-1}(x)(x-x_{n-1}) \end{aligned}
Divided differences
f[a,\bullet,b] = \frac{f[a,\bullet]-f[\bullet,b]}{a-b}
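The recursion for the divided differences and the nested evaluation of the Newton form can be sketched as follows (same made-up sample data as for Lagrange, from f(x) = x² + x + 1):

```python
def divided_differences(xs, ys):
    """Return the coefficients f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]."""
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):
        # update in place, from the bottom of the table upwards
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate p(x) = sum_k f[x_0..x_k] omega_k(x) by nested multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]
c = divided_differences(xs, ys)
print(newton_eval(xs, c, 1.5))   # 4.75: same polynomial as the Lagrange form
```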
Numerical integration
Trapezoidal rule
\int_a^b f(x) dx = \underbrace{\frac{h}{2}\left[f(a) + f(b) + 2\sum_{k=1}^{n-1} f(x_k) \right]}_{\textrm{quadrature formula}} \quad \underbrace{-\frac{b-a}{12}h^2 f''(\zeta)}_{\textrm{error term}}
Simpson's rule
\int_a^b f(x) dx = \frac{h}{3}\left[ f(x_0) + 4f(x_1) + 2f(x_2)+4f(x_3) + \cdots + 4 f(x_{n-1}) + f(x_n) \right]-\frac{b-a}{180}h^4 f^{(4)}(\zeta)
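Both composite rules can be sketched in a few lines; the test integral ∫₀¹ eˣ dx is an illustrative assumption:

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * s

def simpson(f, a, b, n):              # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)   # 4 on odd, 2 on even nodes
    return h / 3 * s

exact = math.e - 1                    # integral of e^x over [0, 1]
print(abs(trapezoid(math.exp, 0, 1, 100) - exact))  # error O(h^2)
print(abs(simpson(math.exp, 0, 1, 100) - exact))    # error O(h^4)
```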
Integration error on the reference interval [-1,1]
\begin{aligned} \mathrm{Trapezoid:} \quad c & = -\frac{2}{3} g''(z)\\[1em] \mathrm{Simpson:} \quad c & = -\frac{1}{90} g^{(4)}(z)\\[1em] {\mathrm{Gauss}\atop\mathrm{Legendre}}\quad c & = \frac{2^{2m+1}(m!)^4}{(2m+1)((2m)!)^3} g^{(2m)}(z) \end{aligned}
Transformation to the interval [a,b]:
N \geq \left(\frac{b-a}{2}\right)\left(\frac{|c|(b-a)}{2\epsilon}M_p\right)^{1/p}
Runge-Kutta
y_{k+1} = y_k + \sum_{i=1}^s b_i K_i, \qquad K_i = h f\Big( t_k+c_i h, y_k + \sum_{j=1}^s a_{ij} K_j \Big), \qquad \def\arraystretch{1.5} \begin{array}{c|c} \bm{c} & \bm{A} \\ \hline & \bm{b}^T \end{array}
Order conditions (cumulative: order p requires all conditions of the lower orders as well):
\begin{aligned}
\text{order 1:}\quad & \bm{A}\mathbf{1}=\bm{c}\;(\text{consistency}), \quad \bm{b}\cdot\mathbf{1}=1 \\
\text{order 2:}\quad & \bm{b}\cdot\bm{c} = \frac{1}{2} \\
\text{order 3:}\quad & \bm{b}\cdot\bm{c}^2 = \frac{1}{3}, \quad \bm{b}\cdot\bm{d} = \frac{1}{6} \\
\text{order 4:}\quad & \bm{b}\cdot\bm{c}^3 = \frac{1}{4}, \quad (\bm{b}\otimes\bm{c})\cdot\bm{d} = \frac{1}{8}, \quad \bm{e}\cdot\bm{c}^2 = \frac{1}{12}, \quad \bm{e}\cdot\bm{d} = \frac{1}{24}
\end{aligned}
\bm{d} = \bm{A}\bm{c}, \qquad \bm{e} = \bm{A}^T\bm{b}, \qquad \bm{c}^2 = \bm{c}\otimes\bm{c},\qquad \bm{c}^3 = \bm{c}\otimes\bm{c}\otimes\bm{c}
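The order conditions can be verified numerically for a concrete tableau; below, a sketch using the classical RK4 tableau (a standard example, not taken from the notes):

```python
import numpy as np

# Classical RK4 Butcher tableau.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0.0, 0.5, 0.5, 1.0])
one = np.ones(4)

d = A @ c                                   # d = A c
e = A.T @ b                                 # e = A^T b

assert np.allclose(A @ one, c)              # consistency
assert np.isclose(b @ one, 1)               # order 1
assert np.isclose(b @ c, 1/2)               # order 2
assert np.isclose(b @ c**2, 1/3) and np.isclose(b @ d, 1/6)   # order 3
assert np.isclose(b @ c**3, 1/4)            # order 4 conditions
assert np.isclose((b * c) @ d, 1/8)
assert np.isclose(e @ c**2, 1/12)
assert np.isclose(e @ d, 1/24)
print("classical RK4 satisfies all order-4 conditions")
```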
Order of convergence
Asymptotic error
- y(x): exact (analytic) solution
- y_k: numerical solution at x_k
E_h = \max_{0\le k\le n} |y_k-y(x_k)| \approx C h^p.
Estimate of the order from two step sizes h_1 and h_2:
\boxed{ \quad p \approx \dfrac{\log\dfrac{E_{h_1}}{E_{h_2}}} {\log\dfrac{h_1}{h_2}},\quad }
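The boxed estimate can be tried on a concrete method; the sketch below uses explicit Euler on y' = y, y(0) = 1 over [0,1] (problem and method chosen as a simple illustration, not from the notes), and should give p close to 1:

```python
import math

def euler_error(h):
    """Max error of explicit Euler on y' = y, y(0) = 1 over [0, 1]."""
    n = round(1.0 / h)
    y, x, err = 1.0, 0.0, 0.0
    for _ in range(n):
        y += h * y                      # y_{k+1} = y_k + h f(x_k, y_k)
        x += h
        err = max(err, abs(y - math.exp(x)))
    return err

h1, h2 = 0.01, 0.005
p = math.log(euler_error(h1) / euler_error(h2)) / math.log(h1 / h2)
print(p)                                # close to 1: Euler is first order
```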
LU factorization
Frobenius matrix
\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} \quad \implies \quad \bm{F}_1 = \bm{I} - \begin{pmatrix} 0 \\ \bm{v}_1 \end{pmatrix} \bm{e}_1^\top \qquad \bm{v}_1= \begin{pmatrix} a_2/a_1 \\ \vdots \\ a_n/a_1 \end{pmatrix}
so that
\bm{F}_1 \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} a_1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
and
\begin{pmatrix} a_1 \\ \vdots\\ a_k \\ a_{k+1} \\ \vdots \\ a_n \end{pmatrix} \quad \implies \quad \bm{F}_k = \bm{I} - \begin{pmatrix} 0 \\ \vdots\\ 0 \\ \bm{v}_k \end{pmatrix} \bm{e}_k^\top \qquad \bm{v}_k= \begin{pmatrix} a_{k+1}/a_k \\ \vdots \\ a_n/a_k \end{pmatrix}
so that
\bm{F}_k \begin{pmatrix} a_1 \\ \vdots\\ a_k \\ a_{k+1} \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} a_1 \\ \vdots\\ a_k \\ 0 \\ \vdots \\ 0 \end{pmatrix}
moreover
\bm{L} = \begin{pmatrix} 1 & 0 & \cdots & & 0 \\ & 1 & & & \vdots \\ & & \ddots & & \vdots \\ & & & 1 & \vdots \\ \bm{v}_1 & \bm{v}_2 & & \bm{v}_{n-1} & 1 \\ \end{pmatrix}
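The elimination by Frobenius matrices can be sketched directly: each step applies \bm{F}_k implicitly via a rank-one update and stores the multipliers \bm{v}_k in column k of \bm{L} (no pivoting; the 3x3 test matrix with nonzero pivots is a made-up example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
n = A.shape[0]
U = A.copy()
L = np.eye(n)

for k in range(n - 1):
    v = U[k+1:, k] / U[k, k]                 # multipliers v_k
    L[k+1:, k] = v                           # column k of L stores v_k
    U[k+1:, k:] -= np.outer(v, U[k, k:])     # apply F_k: zero below the pivot

print(np.allclose(L @ U, A))                 # True: A = L U
```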
QR method
Givens rotation
\begin{pmatrix} a \\ b \end{pmatrix} \quad \implies \quad \begin{pmatrix} \sqrt{a^2+b^2} \\ 0 \end{pmatrix}
\bm{G} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \qquad \cos\theta = \dfrac{a}{\sqrt{a^2+b^2}},\quad \sin\theta = \dfrac{b}{\sqrt{a^2+b^2}}
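A direct check of the rotation above on a sample pair (a, b) = (3, 4), chosen as an illustration:

```python
import math

a, b = 3.0, 4.0
r = math.hypot(a, b)              # sqrt(a^2 + b^2)
cos_t, sin_t = a / r, b / r

# Apply G to (a, b)^T by hand.
top = cos_t * a + sin_t * b       # = r
bottom = -sin_t * a + cos_t * b   # = 0 (up to rounding)
print(top, bottom)
```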
Householder reflection
\bm{x}= \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \quad \implies \quad \begin{pmatrix} \pm\|\bm{x}\|_2 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\bm{H} = \bm{I} -2\dfrac{\bm{v}\bm{v}^\top}{\bm{v}\cdot\bm{v}}, \qquad \bm{v} = \bm{x}+s\|\bm{x}\|_2 \bm{e}_1 = \begin{pmatrix} x_1+s\|\bm{x}\|_2\\ x_2 \\ \vdots \\ x_n \end{pmatrix} \qquad s = \mathrm{sign}(x_1)
This choice of s avoids cancellation in the first component of \bm{v}; the resulting image is \bm{H}\bm{x} = -s\|\bm{x}\|_2\,\bm{e}_1.
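A minimal sketch of the reflection on a sample vector (the vector x is a made-up example); with s = sign(x_1) the image is -s‖x‖₂ e₁:

```python
import numpy as np

x = np.array([3.0, 4.0, 0.0])           # illustrative vector, ||x|| = 5
s = 1.0 if x[0] >= 0 else -1.0          # sign(x_1), taking sign(0) = +1
v = x.copy()
v[0] += s * np.linalg.norm(x)           # v = x + s ||x|| e_1
H = np.eye(len(x)) - 2 * np.outer(v, v) / (v @ v)

print(H @ x)                            # -> (-s ||x||, 0, ..., 0)
```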