Elementary numerical solution to ODEs and SDEs Part I

Instructions:

Use the left and right arrow keys to navigate the presentation forward and backward respectively. You can also use the arrows at the bottom right of the screen to navigate with a mouse.

FAIR USE ACT DISCLAIMER:
This site is for educational purposes only. This website may contain copyrighted material, the use of which has not been specifically authorized by the copyright holders. The material is made available on this website as a way to advance teaching, and copyright-protected materials are used to the extent necessary to make this class function in a distance learning environment. Under section 107 of the Copyright Act of 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, education, and research.

Outline

  • The following topics will be covered in this lecture:
    • The Euler scheme for ODEs
    • Convergence and error of the Euler scheme
    • The Euler-Maruyama scheme for SDEs
    • Strong versus weak convergence for numerical SDEs

The Euler scheme for ODEs

  • In the last section, we considered the construction of a Gauss-Markov model in continuous time as generated from SDEs.

  • Although we focused on the scalar case, these concepts generalize naturally to systems of stochastic differential equations on random vectors.

  • Rather than belaboring the mathematical machinery, in the next two lectures we will consider how one simulates such systems of equations in practice.

  • We will begin again with the analogy of deterministic systems of ordinary differential equations (ODEs).

  • After we introduce some core techniques for ODEs, we will introduce how these methods extend to systems whose governing laws are perturbed by random shocks.

The Euler scheme for ODEs

  • Recall the notion of the initial value problem, where

    \[ \begin{align} \frac{\mathrm{d}}{\mathrm{d}t}\pmb{x}&:= \pmb{f}(t, \pmb{x}), \\ \pmb{x}(0)&:= \pmb{x}_0. \end{align} \]

  • We note that the Lipschitz condition discussed last time guarantees the existence and uniqueness of a solution.

  • Suppose that the solution is defined over an interval \( [0, T] \) where we make a partition as

    \[ \begin{align} t_0 = 0 < t_1 < \cdots < t_n = T \end{align} \] where \( t_i - t_{i-1} = h \) for all \( i \).

  • Let's consider once again a tangent approximation to this solution, where for the perturbed time \( t_{i+1} = t_i + h \),

    \[ \begin{align} \pmb{x}(t_{i+1})&= \pmb{x}(t_{i}) + \pmb{x}'(t_i) h +\mathcal{O}(h^2) \\ &=\pmb{x}(t_{i}) + \pmb{f}(t_i, \pmb{x}(t_i)) h +\mathcal{O}(h^2). \end{align} \]

Euler scheme
Let \( \pmb{x}(t) \) satisfy the initial value problem above; the approximation of this solution by Euler's scheme is defined recursively as \[ \begin{align} \hat{\pmb{x}}(t_{i+1}) &=\hat{\pmb{x}}(t_{i}) + \pmb{f}(t_i, \hat{\pmb{x}}(t_i)) h, \end{align} \] where \( h \) is known as the step size in time.
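  • As a concrete illustration, the following is a minimal sketch of Euler's scheme in Python; the names (euler, f, x0, h) are illustrative choices, not part of the lecture's notation.

        import numpy as np

        def euler(f, x0, t0, T, h):
            # Euler's scheme: x_{i+1} = x_i + f(t_i, x_i) * h
            n = int(round((T - t0) / h))
            x = np.asarray(x0, dtype=float)
            history = [x.copy()]
            for i in range(n):
                t = t0 + i * h
                x = x + h * f(t, x)
                history.append(x.copy())
            return np.array(history)

        # example: dx/dt = -x with x(0) = 1, whose exact solution is exp(-t)
        traj = euler(lambda t, x: -x, x0=[1.0], t0=0.0, T=1.0, h=0.1)
        print(traj[-1], np.exp(-1.0))  # approximate vs exact value at T = 1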

Convergence and error of the Euler scheme

  • We note that Taylor's theorem guarantees that the difference of the single-step approximation

    \[ \begin{align} \pmb{x}(t_{i+1})&=\pmb{x}(t_{i}) + \pmb{f}(t_i, \pmb{x}(t_i)) h +\mathcal{O}(h^2) \end{align} \]
    from the true solution shrinks at order \( \mathcal{O}(h^2) \) as \( h\rightarrow 0 \).

  • If \( \pmb{f} \) has continuous second derivatives, we can actually write

    \[ \begin{align} \pmb{x}(t_{i+1})= &\pmb{x}(t_{i}) + \pmb{x}'(t_i) h + \frac{1}{2}\pmb{x}''(\tau_i) h^2 \end{align} \] for some \( \tau_i \in (t_i, t_{i+1}) \).

    • The term \( \frac{1}{2}\pmb{x}''(\tau_i) h^2 \) is known as the truncation error of the Taylor approximation.
  • However, iterating this rule accumulates error, due to the repeated mismatch between the numerical approximation and the true solution.

  • Consider, for the numerical solution defined again as \( \hat{\pmb{x}}(t) \),

    \[ \begin{align} \pmb{x}(t_{i+1}) - \hat{\pmb{x}}(t_{i+1}) = \pmb{x}(t_{i}) - \hat{\pmb{x}}(t_{i}) + h\left[\pmb{f}(t_{i}, \pmb{x}(t_i)) - \pmb{f}(t_i, \hat{\pmb{x}}(t_i)) \right]+ \frac{1}{2}\pmb{x}''(\tau_i) h^2 \end{align} \]

  • The terms \( \pmb{x}(t_{i}) - \hat{\pmb{x}}(t_{i}) + h\left[\pmb{f}(t_{i}, \pmb{x}(t_i)) - \pmb{f}(t_i, \hat{\pmb{x}}(t_i)) \right] \) in the above difference are known, in contrast, as the propagation error.
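  • To see how these errors accumulate, a standard sketch of the argument (assuming \( \pmb{f} \) is Lipschitz in \( \pmb{x} \) with constant \( L \), and writing \( e_i := \parallel \pmb{x}(t_i) - \hat{\pmb{x}}(t_i)\parallel \)) bounds the above recursion as

    \[ \begin{align} e_{i+1} \leq (1 + hL) e_i + \frac{1}{2} M h^2, \end{align} \] where \( M \) bounds \( \parallel \pmb{x}'' \parallel \); unrolling with \( e_0 = 0 \) yields \[ \begin{align} e_n \leq \frac{1}{2} M h^2 \sum_{j=0}^{n-1}(1 + hL)^j \leq \frac{M h}{2L}\left(e^{LT} - 1\right) = \mathcal{O}(h). \end{align} \]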

Convergence and error of the Euler scheme

  • Although the exact Taylor expansion means that each single step differs from the true solution only at second order,

    • the propagation error from the repeated numerical approximation of the true state means that the numerical and the true solutions differ at order \( \mathcal{O}(h) \).
  • This motivates the following definition for the numerical solution of ODEs.

Numerical convergence of ODEs
Let \( \pmb{x}(t) \) be an exact solution to an initial value problem, and let \( \hat{\pmb{x}}(t) \) be a numerical approximation using a maximum step size of \( h \). The approximation \( \hat{\pmb{x}} \) is said to converge to \( \pmb{x} \) at order \( \gamma \) if there exists a constant \( C \), independent of \( h \), such that \[ \begin{align} \max_{t \in [0, T]} \parallel \pmb{x}(t) - \hat{\pmb{x}}(t) \parallel \leq C h^\gamma. \end{align} \]
  • With the above definition in mind, it is a classic result that the Euler approximation converges at order \( 1.0 \) to the true solution of an initial value problem (a numerical check is sketched after this list).

  • The Euler scheme is not widely used in practice due to this low order of convergence, but it is extremely useful for understanding the principles of numerical simulation.

  • Particularly, higher-order Taylor expansions give better (higher-order) rates of convergence of the numerical solution to the true solution.

  • The Euler scheme also has a direct extension to SDEs that will likewise help explain the approximations made in numerical simulation of these random sample paths.
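  • As a check on the order-\( 1.0 \) claim, the following minimal Python sketch applies Euler's scheme to \( x' = -x \), \( x(0) = 1 \), whose exact solution is \( e^{-t} \); the ratio of error to \( h \) should level off near a constant.

        import numpy as np

        # Halve the step size repeatedly and track the error at T = 1;
        # for an order-1.0 scheme, error / h approaches a constant.
        for k in range(2, 9):
            n = 2**k
            h = 1.0 / n
            x = 1.0
            for _ in range(n):
                x = x + h * (-x)   # Euler step for f(t, x) = -x
            err = abs(x - np.exp(-1.0))
            print(f"h = 2^-{k}: error = {err:.2e}, error / h = {err / h:.3f}")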

The Euler-Maruyama scheme

  • We recall the form of our generic SDE, this time presented in vector notation,

    \[ \begin{align} \mathrm{d}\pmb{x}:= \pmb{f}(t, \pmb{x}(t)) \mathrm{d}t + \mathbf{S}(t, \pmb{x}(t))\mathrm{d}\pmb{W}_t \end{align} \] where in the above:

    1. The function \( \pmb{f} \) is a (possibly) nonlinear function representing the governing laws;
    2. \( \mathbf{S} \) is a matrix-valued function of \( \pmb{x} \) and \( t \);
    3. \( \pmb{W}_t \) is a vector of independent Wiener processes.
  • Euler's scheme extends directly to the above by formally writing the following.

Euler-Maruyama
For the system of SDEs defined above, the numerical approximation of the sample path by Euler-Maruyama is defined as \[ \begin{align} \hat{\pmb{x}}(t_{i+1}) := \hat{\pmb{x}}(t_{i}) + \pmb{f}(t_i, \hat{\pmb{x}}(t_i)) h + \mathbf{S}(t_i, \hat{\pmb{x}}(t_i)) \boldsymbol{\xi}_i \sqrt{h} \end{align} \] where \( \boldsymbol{\xi}_i \sim N\left(\pmb{0}, \mathbf{I}_{N_x}\right) \) is drawn independently at each step.
  • Note that, with \( \boldsymbol{\xi}_i \) distributed as a standard normal, \( \sqrt{h}\boldsymbol{\xi}_i \sim N\left(\pmb{0}, h \mathbf{I}_{N_x}\right) \), giving a representation of the increment of the discretized Wiener process.

  • Intuitively, this gives the direct extension of the ODE scheme by analogy, where we make a Gaussian perturbation to the mechanistic evolution of the state by the process laws (a sketch in code follows this list).

  • However, as usual, convergence is a more subtle concept…
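  • The following minimal Python sketch implements the scheme above for a generic drift \( \pmb{f} \) and diffusion \( \mathbf{S} \); the function and parameter names are illustrative choices.

        import numpy as np

        def euler_maruyama(f, S, x0, T, h, rng):
            # One approximate sample path: at each step, a deterministic
            # Euler increment plus a Gaussian shock S(t, x) @ xi * sqrt(h).
            n = int(round(T / h))
            x = np.asarray(x0, dtype=float)
            path = [x.copy()]
            for i in range(n):
                t = i * h
                xi = rng.standard_normal(x.shape)   # xi ~ N(0, I)
                x = x + f(t, x) * h + S(t, x) @ xi * np.sqrt(h)
                path.append(x.copy())
            return np.array(path)

        # example: scalar geometric Brownian motion dx = mu x dt + sigma x dW
        rng = np.random.default_rng(0)
        mu, sigma = 1.5, 1.0
        path = euler_maruyama(lambda t, x: mu * x,
                              lambda t, x: np.diag(sigma * x),
                              x0=[1.0], T=1.0, h=0.01, rng=rng)
        print(path[-1])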

Strong versus weak convergence for numerical SDEs

  • As with the earlier discussion of strong convergence, we understand this as a “path-wise” convergence, similar to the deterministic convergence above.
Strong convergence
Let \( \pmb{x}(t) \) be an exact sample path of a generic system of SDEs, and \( \hat{\pmb{x}}(t) \) be some numerical discretization with a maximum step size of \( h \) between time points. The approximation \( \hat{\pmb{x}} \) is said to strongly converge to \( \pmb{x} \) at order \( \gamma \) if there exists a constant \( C \) independent of \( h \) such that \[ \begin{align} \mathbb{E}\left[\parallel \pmb{x}(t) - \hat{\pmb{x}}(t) \parallel \right] \leq C h^\gamma \end{align} \] for all \( t \in [0, T] \).
  • Notice that this gives a direct analogy to the deterministic convergence;

    • i.e., averaging out over all possible outcomes for the Wiener process, the expected discrepancy between the exact realization and the approximate realization is bounded at order \( \mathcal{O}(h^\gamma) \).
  • For a specific sample path, given a particular realization of \( \pmb{W}_t \), the discrepancy may be less than or greater than this bound;

    • however, this says intuitively that the average discretization error over all possible realizations remains bounded.
  • This is contrasted with weak convergence, which describes a convergence in distribution.

    • Particularly, weak convergence does not guarantee convergence to any particular sample path, but rather the reconstruction of the statistics of the paths' distribution.

Strong versus weak convergence for numerical SDEs

Weak convergence
Let \( \pmb{x}(t) \) be an exact sample path of a generic system of SDEs, and \( \hat{\pmb{x}}(t) \) be some numerical discretization with a maximum step size of \( h \) between time points. The approximation \( \hat{\pmb{x}} \) is said to weakly converge to \( \pmb{x} \) at order \( \gamma \) if there exists a constant \( C \) independent of \( h \) such that, for any \( 2(\gamma+1) \)-times continuously differentiable function \( \pmb{g} \) of at most polynomial growth, \[ \begin{align} \parallel\mathbb{E}\left[ \pmb{g}(\pmb{x}(t)) - \pmb{g}(\hat{\pmb{x}}(t)) \right]\parallel \leq C h^\gamma \end{align} \] for all \( t \in [0, T] \).
  • In the simple case where \( \pmb{g} \) is the identity, we see a clear distinction between the two modes of convergence:

    • Strong convergence bounds the mean error between the sample path and the approximation; while
    • weak convergence bounds the error in reconstructing the mean for the sample paths.
  • Therefore, we say that weak convergence measures the accuracy of the empirical statistics computed from an ensemble of approximate path solutions.

  • Strong convergence guarantees weak convergence with at least the same order;

    • however, it is possible for a numerical scheme to converge only weakly, reproducing the statistics of the distribution but not any particular sample path (both error types are estimated in the sketch below).
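  • The following minimal Python sketch estimates both error types for Euler-Maruyama on geometric Brownian motion, whose exact solution \( x_T = x_0 \exp\left((\mu - \sigma^2/2)T + \sigma W_T\right) \) is known; the parameter values are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma, x0, T = 1.5, 1.0, 1.0, 1.0
        n_paths, n_fine = 5000, 2**10
        dt = T / n_fine

        # Fine Brownian increments are shared by the exact solution and the
        # scheme, so each approximate path is coupled to an exact path.
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
        x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))

        for k in (4, 5, 6, 7, 8):
            skip = n_fine // 2**k   # coarse step h = 2^-k sums fine increments
            h = dt * skip
            x = np.full(n_paths, x0)
            for i in range(0, n_fine, skip):
                x = x + mu * x * h + sigma * x * dW[:, i:i + skip].sum(axis=1)
            strong = np.mean(np.abs(x_exact - x))       # E[ ||x - x_hat|| ]
            weak = abs(np.mean(x_exact) - np.mean(x))   # || E[x] - E[x_hat] ||
            print(f"h = 2^-{k}: strong error {strong:.3f}, weak error {weak:.3f}")

  • Halving \( h \) should shrink the strong error by roughly a factor of \( \sqrt{2} \) and the weak error by roughly a factor of \( 2 \), up to Monte Carlo noise in the estimates.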

Strong versus weak convergence for numerical SDEs

  • A remarkable fact then arises which differentiates numerical ODEs from SDEs;

    • particularly, the Euler-Maruyama scheme generally has strong convergence only at order \( \gamma = 0.5 \), while it has weak convergence at order \( \gamma = 1.0 \).
  • The loss of a half order of convergence arises due to the difference between the deterministic Taylor expansion and the Itô-Taylor expansion:

    \[ \begin{align} f(W_T) - f(W_0) = \int_{0}^T f'(W_t)\mathrm{d}W_t +\frac{1}{2} \int_{0}^T f''(W_t) \mathrm{d}t. \end{align} \]

  • When one corrects for the mismatch in the Itô-Taylor expansion by including additional terms, one arrives at the Milstein scheme (a sketch is given at the end of this section).

  • It is important to note, however, that when the SDE has additive noise, i.e., \( \mathbf{S}(t, \pmb{x}(t)) := \mathbf{S}(t) \), the correction vanishes and the Euler-Maruyama scheme gains strong convergence order \( 1.0 \).

  • Nonetheless, the relatively poor accuracy of the Euler and Euler-Maruyama schemes leads to the need for schemes with better convergence properties.

  • In the next lecture, we will consider the development of the widely used 4-stage Runge-Kutta scheme and its extension to simulating SDEs.

  • We will also consider some general practicalities about simulating discrete models, and tangent-linear models, from continuous-time models.
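  • For reference, the following is a minimal Python sketch of the scalar Milstein scheme mentioned above; the update rule \( \hat{x}_{i+1} = \hat{x}_i + f h + S \Delta W + \frac{1}{2} S \partial_x S \left(\Delta W^2 - h\right) \) is standard, while the names and parameters here are illustrative choices.

        import numpy as np

        def milstein_step(f, S, dS_dx, t, x, h, rng):
            # Euler-Maruyama plus the leading Ito-Taylor correction term,
            # which restores strong convergence order 1.0 for scalar SDEs.
            dW = rng.normal(0.0, np.sqrt(h))
            return (x + f(t, x) * h + S(t, x) * dW
                    + 0.5 * S(t, x) * dS_dx(t, x) * (dW**2 - h))

        # example: geometric Brownian motion, f = mu x, S = sigma x, dS/dx = sigma
        rng = np.random.default_rng(0)
        mu, sigma, h, x = 1.5, 1.0, 0.01, 1.0
        for i in range(100):
            x = milstein_step(lambda t, x: mu * x, lambda t, x: sigma * x,
                              lambda t, x: sigma, i * h, x, h, rng)
        print(x)  # approximate sample value at T = 1.0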