Dear students,
I would like to add a postscript here to our discussion of the numerical errors introduced by numerical differentiation (through finite difference schemes). My main message was that there are two sources of error – a truncation error and a round-off error caused by finite-precision arithmetic – and that the total error has a V-shaped dependence on the discretization interval length, with the minimum near \(\sqrt{\mathrm{eps}}\). We did not go into more detail, but perhaps it is worth mentioning these details at least here:
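To make the V-shape tangible, here is a minimal Python sketch (my own illustration, not something we ran in class, with the test function \(\sin x\) at \(x = 1\) chosen purely for convenience) that sweeps the interval length over many orders of magnitude:

```python
import math

# Forward-difference approximation of d/dx sin(x) at x = 1.0.
# The true derivative is cos(1.0).
x = 1.0
exact = math.cos(x)

# Sweep h over many orders of magnitude: the total error first
# shrinks (truncation error ~ h) and then grows again (round-off
# error ~ eps/h), with the minimum near sqrt(eps) ~ 1e-8.
for k in range(1, 16):
    h = 10.0 ** (-k)
    approx = (math.sin(x + h) - math.sin(x)) / h
    print(f"h = 1e-{k:02d}   error = {abs(approx - exact):.3e}")
```

Printing the errors is enough to see the V-shape: they decrease down to roughly \(h \approx 10^{-8}\) and then grow again.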
Linear dependence of the error of a simple FD scheme on the discretization interval length
We had some discussion about the character of the truncation error of the simplest finite difference (FD) scheme – the forward difference: is it linear or exponential in \(h\)? Showing this is in fact quite instructive, so let's do it at least here:
The Taylor expansion of a function \(f\) at some given \(x\) is
\(f(x+h) = f(x) + f'(x)h + \mathcal{O}(h^2),\)
from which it follows that
\(f(x+h) - f(x) + \mathcal{O}(h^2)= f'(x)h \)
(note that the \(\mathcal{O}\) term is sign-indifferent), which after dividing both sides by \(h\) gives
\(\frac{f(x+h) - f(x)}{h} + \frac{\mathcal{O}(h^2)}{h}= f'(x),\)
which finally reduces to
\(\frac{f(x+h) - f(x)}{h} + \mathcal{O}(h)= f'(x).\)
In other words, the error decreases linearly with decreasing \(h\).
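We can also confirm this linear (first-order) behavior numerically. The sketch below (again my own illustration, using \(\sin x\) at \(x = 1\)) stays in the regime where \(h\) is large enough for round-off to be negligible, so halving \(h\) should roughly halve the error:

```python
import math

x = 1.0
exact = math.cos(x)

def fd(h):
    # forward difference approximation of sin'(x)
    return (math.sin(x + h) - math.sin(x)) / h

# In the regime where round-off is negligible, halving h should
# roughly halve the error -- first-order (linear) convergence.
h = 1e-2
prev = abs(fd(h) - exact)
for _ in range(4):
    h /= 2
    err = abs(fd(h) - exact)
    print(f"h = {h:.1e}   error = {err:.3e}   ratio = {prev / err:.2f}")
    prev = err
```

The printed ratios hover around 2, which is exactly what \(\mathcal{O}(h)\) predicts.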
Will you be able to do the same analysis for, say, the central difference scheme?
\(f'(x)\approx \frac{f(x+h) - f(x-h)}{2h}\)
How fast will the truncation error decrease with decreasing \(h\) here?
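Once you have your pencil-and-paper answer, you can check it against a small numerical experiment like this one (my own sketch, again with \(\sin x\) at \(x = 1\)): for a scheme of order \(p\), halving \(h\) should divide the error by roughly \(2^p\).

```python
import math

x = 1.0
exact = math.cos(x)

def central(h):
    # central difference approximation of sin'(x)
    return (math.sin(x + h) - math.sin(x - h)) / (2 * h)

# Halve h and watch the error ratio: for a scheme of order p the
# ratio tends to 2**p. Compare with your Taylor-expansion answer.
h = 1e-1
prev = abs(central(h) - exact)
for _ in range(4):
    h /= 2
    err = abs(central(h) - exact)
    print(f"h = {h:.1e}   error = {err:.3e}   ratio = {prev / err:.2f}")
    prev = err
```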
Subtracting two similar numbers is trouble for floating-point arithmetic
I actually forgot to comment on why the round-off error (due to floating-point, hence finite-precision, arithmetic) grows as we decrease the discretization interval length. Do we observe a similar phenomenon for other computations, say, a product of two numbers? The explanation is that for floating-point arithmetic, perhaps the most hostile operation is the subtraction of two numbers that are almost equal – and the larger they are, the worse. And that is exactly what we commit in computing finite differences: we subtract \(f(x)\) from \(f(x+h)\). For a small interval \(h\), the two operands are nearly equal, and floating-point arithmetic cannot capture their difference accurately. We then make it even worse by dividing this inaccurate result by a smaller and smaller number. A pretty serious problem that everyone should be aware of.
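This loss of digits – usually called catastrophic cancellation – is easy to demonstrate directly, without any derivatives. A minimal sketch (my own example, with the numbers chosen arbitrarily):

```python
# Subtracting two nearly equal floating-point numbers destroys
# the leading significant digits (catastrophic cancellation).
a = 1.0 + 1e-12
b = 1.0

# The true difference is 1e-12, but storing a = 1 + 1e-12 in a
# double already rounded away most of the digits of 1e-12, so the
# subtraction returns that rounded remainder, not 1e-12 itself.
d = a - b
print(d)

# Relative error around 1e-4 -- enormous compared to the ~1e-16
# relative error a single well-conditioned operation would give.
print(abs(d - 1e-12) / 1e-12)
```

Note that the subtraction itself is exact here; the damage was done when \(1 + 10^{-12}\) was rounded to the nearest double. Dividing such a corrupted difference by a tiny \(h\), as a finite difference does, then amplifies the absolute error further.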