Section 5.3.3: Quadratic Approximation

The methods of section 5.3.2 relied only on comparing function values; no use was made of the values themselves. Compare solving a single non-linear algebraic equation (NLAE) by interval halving, which uses only the sign of the function.

Can we also use function values?

Compare quadratic approximation with golden section, the most efficient `dumb' (comparison-only) method.

Three points are known: the two boundary points and one interior point of the search interval. The interior point should have the lowest function value of the three. In general the points are not equally spaced.

The task: fit a quadratic function

\begin{displaymath}q(x) = A x^2 + B x + C \end{displaymath}

to the values $(x_1,f_1)$, $(x_2,f_2)$, $(x_3,f_3)$.

See section 5.3.3.1.
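As a sketch of the fit-and-minimise step (the closed-form vertex formula is standard successive parabolic interpolation; the function names here are illustrative, not from the original):

```python
def parabola_minimum(x1, f1, x2, f2, x3, f3):
    """Vertex of the quadratic through (x1,f1), (x2,f2), (x3,f3).

    Equivalent to fitting q(x) = A x^2 + B x + C to the three points
    and solving q'(x) = 0; the standard closed form avoids computing
    A, B, C explicitly.
    """
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# For f(x) = (x - 2)^2 the fitted quadratic is f itself,
# so the vertex is the true minimum x = 2.
f = lambda x: (x - 2.0)**2
xm = parabola_minimum(1.0, f(1.0), 1.5, f(1.5), 3.0, f(3.0))  # -> 2.0
```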

Connection with bounded secant

Use of quadratic approximation is analogous to a bounded secant search when solving 1 NLAE: the Regula-Falsi method.

In fact it is Regula-Falsi on f'(x) = 0, where f'(x) is estimated from the function values using central differences:

\begin{displaymath}f^{'}( (x_1 + x_2)/2 ) \approx \frac{f_2 - f_1}{x_2 - x_1} \equiv s_{21} \end{displaymath}

etc.
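Spelling the connection out (a restatement, with $s_{32} \equiv (f_3 - f_2)/(x_3 - x_2)$ defined by analogy): each slope estimate is placed at the midpoint of its sub-interval, and Regula-Falsi on these two samples of $f'$ gives

\begin{displaymath}x_m = m_{21} - s_{21}\,\frac{m_{32} - m_{21}}{s_{32} - s_{21}}, \qquad m_{21} = \frac{x_1 + x_2}{2}, \quad m_{32} = \frac{x_2 + x_3}{2} \end{displaymath}

Because the derivative of a quadratic is linear, and the secant slopes of the fitted quadratic agree with $s_{21}$ and $s_{32}$, this root is exactly the vertex $x_m$ of the fitted quadratic.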

When the minimum point xm of the quadratic is found, evaluate f(xm).

Note: $f(x_m) \neq q(x_m)$, the value of the quadratic.
The quadratic approximation is used only to find the position (in x) for the next function evaluation.

As long as the interior point of the search range has the lowest function value, xm lies within the search range (indeed between the midpoints of the two sub-intervals).

For next iteration, keep the two sub-intervals adjacent to the minimum.
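The whole iteration can be sketched as follows (a minimal sketch, assuming a bracket x1 < x2 < x3 with the interior point lowest; names are illustrative):

```python
def quad_search(f, x1, x2, x3, tol=1e-6, max_iter=100):
    """Successive parabolic interpolation on a bracketed minimum."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    for _ in range(max_iter):
        if x3 - x1 < tol:
            break
        den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
        if den == 0.0:           # degenerate (e.g. collinear) points
            break
        num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
        xm = x2 - 0.5 * num / den
        fm = f(xm)               # evaluate f, not the quadratic, at xm
        # keep the two sub-intervals adjacent to the best point
        if fm < f2:
            if xm < x2:
                x3, f3 = x2, f2
            else:
                x1, f1 = x2, f2
            x2, f2 = xm, fm
        else:
            if xm < x2:
                x1, f1 = xm, fm
            else:
                x3, f3 = xm, fm
    return x2
```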

However, if the minimum is constrained to lie at a boundary of the feasible region this may not hold, and the minimum of the quadratic may lie outside the search interval. In this case, do not use quadratic approximation but revert to golden section and check whether the minimum is indeed precisely at the boundary.

The method can get stuck

Pipe diameter optimisation: starting points D = 0.5, 1.1 and 1.7 m. 25 iterations to converge!

Much slower than golden section, just as Regula-Falsi can be very slow on a highly non-linear f(x) = 0.

There is a case for combining quadratic fitting with golden section, to keep the rate of interval reduction satisfactory.
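One way to sketch such a safeguard (this is the idea behind Brent's method; the margin choice here is an illustrative assumption, not from the original):

```python
GOLDEN = 0.381966  # 2 minus the golden ratio

def safeguarded_step(x1, f1, x2, f2, x3, f3):
    """One trial point for a bracket x1 < x2 < x3: take the quadratic
    step if it falls strictly inside the bracket, away from the ends;
    otherwise fall back to a golden-section step."""
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    if den != 0.0:
        num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
        xm = x2 - 0.5 * num / den
        margin = 0.01 * (x3 - x1)    # reject steps too close to the ends
        if x1 + margin < xm < x3 - margin:
            return xm
    # golden-section fallback: probe the larger sub-interval
    if x2 - x1 > x3 - x2:
        return x2 - GOLDEN * (x2 - x1)
    return x2 + GOLDEN * (x3 - x2)
```

This keeps the fast quadratic steps when they are well behaved, while guaranteeing the golden-section rate of interval reduction otherwise.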

Line search in multivariate optimisation

Line search (= univariate optimisation or improvement) is a subproblem in most practical multivariate optimisation methods.
Line search along direction ${\bf d}$ starting from point ${\bf x_0}$:

\begin{displaymath}{\bf x} = {\bf x_0} + \alpha {\bf d} \end{displaymath}

Univariate search in $\alpha$.
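Concretely, the substitution turns the multivariate objective into a one-variable function of $\alpha$ (a minimal sketch with plain tuples; names are illustrative):

```python
def make_phi(f, x0, d):
    """phi(alpha) = f(x0 + alpha*d): the 1-D slice of f along d from x0."""
    return lambda a: f(tuple(xi + a * di for xi, di in zip(x0, d)))

f = lambda x: sum(xi * xi for xi in x)   # f(x) = ||x||^2
phi = make_phi(f, (1.0, 1.0), (-1.0, -1.0))
# phi(1.0) == 0.0: alpha = 1 reaches the minimum along this line
```

Any of the univariate methods above (golden section, quadratic approximation) can then be applied to phi.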

But most methods use an inexact line search rather than an exact one: `sufficient decrease' rather than rigorous minimisation.



Last modified: Wed Aug 26 10:13:43 BST