Numerical differentiation#

REFS:

  • Ward Cheney and David Kincaid, Numerical Mathematics and Computing

  • Wikipedia

Computing derivatives is at the core of much of the numerical work done in computational physics, yet doing so with good accuracy is not a trivial problem. Numerical derivatives are the central computation in finite difference methods, optimization, interpolation, and many other applications.

First approach: forward difference#

This is based either on a Taylor expansion of the given function, or on the definition of the derivative itself (see https://en.wikipedia.org/wiki/Numerical_differentiation).

Definition

\[f'(x) \simeq \frac{f(x+h) - f(x)}{h} + O(h)\]

Graphical interpretation

https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/Derivative.svg/460px-Derivative.svg.png

where \(h\) is a parameter that ideally goes to zero, but in practice cannot, due to finite floating-point precision. You can also define the backward difference by using the previous point, \(f'(x) \simeq [f(x) - f(x-h)]/h\). The error estimate can be computed using a Taylor expansion.
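As a quick illustration (not taken from the references): take \(f(x) = x^2\) at \(x = 1\) with \(h = 0.1\). The forward difference gives

\[f'(1) \simeq \frac{1.1^2 - 1^2}{0.1} = \frac{0.21}{0.1} = 2.1,\]

while the exact value is 2. The error, \(0.1\), matches the leading Taylor term \(f''(x)\, h/2 = 0.1\), as expected for an \(O(h)\) method.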

A better estimation: Central difference#

If you compute a forward (\(+h\))

\[f(x+h) = f(x) + f'(x) h + f''(x) h^2/2 + f'''(x) h^3/6 + O(h^4),\]

and a backward (\(-h\)) Taylor expansion,

\[f(x-h) = f(x) - f'(x) h + f''(x) h^2/2 - f'''(x) h^3/6 + O(h^4),\]

and then subtract the second from the first (notice that adding them instead lets you estimate the second derivative, \(f''(x) \simeq [f(x+h) - 2f(x) + f(x-h)]/h^2\)), you get

\[f'(x) \simeq \frac{f(x+h) - f(x-h)}{2h} + O(h^2)\]

which is called the central difference. The order is improved by one power of \(h\), so this version is more accurate than the simple forward difference. See: https://en.wikipedia.org/wiki/Finite_difference
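To see the improvement concretely (again an illustrative example), take \(f(x) = x^3\) at \(x = 1\) with \(h = 0.1\); the exact derivative is 3. The forward difference gives \((1.331 - 1)/0.1 = 3.31\), an error of \(0.31\), while the central difference gives \((1.331 - 0.729)/0.2 = 3.01\), an error of only \(0.01 = h^2\), exactly the leading term \(f'''(x)\, h^2/6\).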

Comparison among methods

https://upload.wikimedia.org/wikipedia/commons/thumb/9/90/Finite_difference_method.svg/614px-Finite_difference_method.svg.png

Dependence on \(h\)

https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/AbsoluteErrorNumericalDifferentiationExample.png/600px-AbsoluteErrorNumericalDifferentiationExample.png

By using Richardson extrapolation, you can improve the central difference estimate even further,

\[f'(x) = \phi\left(\frac{h}{2}\right) + \frac{1}{3}\left[\phi\left(\frac{h}{2}\right) - \phi(h)\right] + O(h^4),\]

where \(\phi(h)\) is the central-difference estimate of the derivative for a given \(h\). NOTE: this particular form holds only for the central difference, since the coefficients depend on the order of the underlying algorithm.
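Continuing the \(f(x) = x^3\) example above: with \(\phi(0.1) = 3.01\) and \(\phi(0.05) = 3.0025\), the extrapolation gives \(3.0025 + \frac{1}{3}(3.0025 - 3.01) = 3.0\). Here the result is exact because the central-difference error for a cubic has no terms beyond \(h^2\).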

Other important cases#

  • Interpolating data and then estimating the derivative

  • Derivative of noisy data: first compute binned averages, then differentiate.

  • Higher order derivatives

  • N-dimensional derivatives.

Implementing forward and central derivatives#

Implement functions to compute both the forward and central derivatives. Explore the following case: Compute the derivative as a function of \(x\), with fixed \(h = 0.01\). Use \(f(x) = \sin(x^2)\). Compare with the exact derivative. Plot the relative difference.

The prototypes might be

double forward_diff(double x, double h);
double central_diff(double x, double h);
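
A minimal sketch of a possible implementation (the hard-coded \(f(x) = \sin(x^2)\) and its exact derivative \(2x\cos(x^2)\) come from the exercise statement; the output format and the \(x\) grid are my own choices):

#include <cmath>
#include <cstdio>

double f(double x) { return std::sin(x*x); }
double fprime_exact(double x) { return 2.0*x*std::cos(x*x); }

double forward_diff(double x, double h) {
  return (f(x + h) - f(x))/h;
}

double central_diff(double x, double h) {
  return (f(x + h) - f(x - h))/(2.0*h);
}

int main() {
  const double h = 0.01;
  // start at x = 0.1: the exact derivative vanishes at x = 0,
  // and the relative error blows up near its zeros
  for (double x = 0.1; x <= 10.0; x += 0.1) {
    const double exact = fprime_exact(x);
    // print x, then the relative difference for each method
    std::printf("%.2f %.6e %.6e\n", x,
                std::fabs(forward_diff(x, h) - exact)/std::fabs(exact),
                std::fabs(central_diff(x, h) - exact)/std::fabs(exact));
  }
  return 0;
}

Redirecting the output to a file makes it easy to plot, e.g. with gnuplot or matplotlib.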

File modularization#

Now try to modularize the code. Create a header and an implementation file, called derivatives.h and derivatives.cpp, where you are going to put the declarations and implementations for both the forward and the central derivative methods. What happens now? Where should the function to be derived go? We need a more generic approach. Use using fptr = double(double), an alias for a function type that represents what a function receives and returns. The idea is to pass the function to be derived as a parameter:

using fptr = double(double); // alias (C++11) for a function type taking a double and returning a double; as a parameter it decays to a function pointer
double forward_diff(double x, double h, fptr fun);
double central_diff(double x, double h, fptr fun);
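
One possible layout (the file names come from the exercise; the include guard style and the driver file name are my choices):

// derivatives.h
#pragma once
using fptr = double(double); // function type: takes a double, returns a double
double forward_diff(double x, double h, fptr fun);
double central_diff(double x, double h, fptr fun);

// derivatives.cpp
#include "derivatives.h"
double forward_diff(double x, double h, fptr fun) {
  return (fun(x + h) - fun(x))/h;
}
double central_diff(double x, double h, fptr fun) {
  return (fun(x + h) - fun(x - h))/(2.0*h);
}

// main_derivatives.cpp (hypothetical driver)
#include <cmath>
#include <cstdio>
#include "derivatives.h"
double f(double x) { return std::sin(x*x); }
int main() {
  std::printf("%.6f\n", central_diff(1.0, 0.01, f));
  return 0;
}

Compile with something like g++ main_derivatives.cpp derivatives.cpp. Now the function to be derived lives in the main file and is simply passed in.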

We can even go further and try to use C++ templates, so we will have

template <class funptr>
double forward_diff(double x, double h, funptr fun);
template <class funptr>
double central_diff(double x, double h, funptr fun);

Or even use functors or std::function. For now let’s just keep this simple with using.
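For reference, the template version accepts any callable, including a lambda (a sketch; the names are illustrative). Note that the template body must be visible at the point of instantiation, so it would go in the header:

#include <cmath>
#include <cstdio>

template <class funptr>
double central_diff(double x, double h, funptr fun) {
  return (fun(x + h) - fun(x - h))/(2.0*h);
}

int main() {
  auto f = [](double x) { return std::sin(x*x); }; // any callable works
  std::printf("%.6f\n", central_diff(1.0, 0.01, f));
  return 0;
}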

Error as a function of \(h\)#

Now fix \(x = 1.454335\), and compute (using another main function) the relative error as a function of \(h \in [10^{-16}, 10^{-15}, 10^{-14}, \ldots, 10^{-3}, 10^{-2}, 10^{-1}]\). Plot it. Explain the differences between the forward and central results.
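One possible driver (a sketch reusing the derivatives.h from above; the value of \(x\) and the exact derivative follow the exercise statement):

#include <cmath>
#include <cstdio>
#include "derivatives.h"

double f(double x) { return std::sin(x*x); }

int main() {
  const double x = 1.454335;
  const double exact = 2.0*x*std::cos(x*x);
  // h = 1e-16, 1e-15, ..., 1e-1
  for (int p = -16; p <= -1; ++p) {
    const double h = std::pow(10.0, p);
    std::printf("%.1e %.6e %.6e\n", h,
                std::fabs(forward_diff(x, h, f) - exact)/std::fabs(exact),
                std::fabs(central_diff(x, h, f) - exact)/std::fabs(exact));
  }
  return 0;
}

On a log-log plot you should see the error decrease as \(h\) (forward) or \(h^2\) (central) until round-off error takes over at small \(h\), where it grows again.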

Error as a function of both \(h\) and \(x\)#

Is the error also dependent on \(x\)? Explore variations in both \(x\) and \(h\) and plot the result as a 3D picture.
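A sketch of a driver that writes \((x, h, \text{error})\) triples on a grid; the blank line between \(x\)-blocks follows the convention of gnuplot's splot (an assumed plotting workflow, and the grid limits are my choices):

#include <cmath>
#include <cstdio>
#include "derivatives.h"

double f(double x) { return std::sin(x*x); }

int main() {
  for (double x = 0.1; x <= 3.0; x += 0.1) {
    const double exact = 2.0*x*std::cos(x*x);
    for (int p = -16; p <= -1; ++p) {
      const double h = std::pow(10.0, p);
      std::printf("%.2f %.1e %.6e\n", x, h,
                  std::fabs(central_diff(x, h, f) - exact)/std::fabs(exact));
    }
    std::printf("\n"); // blank line separates x-blocks for splot
  }
  return 0;
}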

Implementing Richardson extrapolation#

Now, use the general expression for Richardson extrapolation applied to an algorithm of order \(O(h^\alpha)\), and also plot its relative differences for fixed \(x\) (see https://en.wikipedia.org/wiki/Richardson_extrapolation?useskin=vector),

\[I \simeq \frac{t^\alpha I\left(\frac{h}{t}\right) - I(h)}{t^\alpha - 1},\]

where \(t\) is a factor larger than 1 (typically \(t=2\)), \(\alpha\) is the algorithm order, and \(I\) is the quantity you are computing. This implies additions to both the header and the implementation files, and a new main.

As you can see, the Richardson implementation is very similar for both the central and the forward algorithms, so there is some hope of reducing this duplication. Think about a new using ... construct. Implement it and plot all the errors as a function of \(h\).
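One way to do it (a sketch; the alias name algptr and the default \(t = 2\) are my own choices): introduce an alias for "a derivative algorithm", i.e., anything callable as \(\phi(x, h, f)\), and write Richardson extrapolation once in terms of it.

// additions to derivatives.h
using algptr = double(double, double, fptr); // a derivative algorithm: phi(x, h, f)
double richardson(double x, double h, fptr fun, algptr alg, int alpha, double t = 2.0);

// additions to derivatives.cpp
#include <cmath>
double richardson(double x, double h, fptr fun, algptr alg, int alpha, double t) {
  const double talpha = std::pow(t, alpha);
  return (talpha*alg(x, h/t, fun) - alg(x, h, fun))/(talpha - 1.0);
}

// usage in a main: alpha = 1 for forward, alpha = 2 for central
// double df = richardson(1.454335, 0.01, f, forward_diff, 1);
// double dc = richardson(1.454335, 0.01, f, central_diff, 2);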

Testing#

Can you think of some tests to write for your derivatives? Implement some.
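For instance (a sketch using plain assert; the tolerances are my choice): the central difference is exact, up to round-off, for polynomials of degree two or less, and the forward-difference error should match its leading Taylor term.

#include <cassert>
#include <cmath>
#include "derivatives.h"

double quadratic(double x) { return 3.0*x*x + 2.0*x + 1.0; } // f'(1) = 8

int main() {
  // central difference is exact (up to round-off) for quadratics
  assert(std::fabs(central_diff(1.0, 0.1, quadratic) - 8.0) < 1e-10);
  // forward difference error is exactly f''(x)*h/2 = 0.3 for a quadratic
  assert(std::fabs(forward_diff(1.0, 0.1, quadratic) - 8.3) < 1e-10);
  return 0;
}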