Singular perturbation problems
A singular perturbation problem (as opposed to a regular perturbation problem) is a problem containing a small parameter whose solution cannot be approximated uniformly by simply setting the small parameter to zero. That is, the solution cannot be uniformly approximated by a regular asymptotic expansion in powers of the small parameter.
For algebraic equations, this often manifests itself as the loss of roots: a regular perturbation method finds fewer roots than the degree of the equation leads us to expect.
Huh? Where the heck did my missing root go?
We’ve seen in lectures that trying a regular perturbation technique to approximate solutions of singularly perturbed problems causes roots to be lost. But why?
As an example, in the lectures we considered a singularly perturbed quadratic equation whose quadratic term is \(\varepsilon u^2\).
By taking our usual approach (regular perturbation), we neglect the term \(\varepsilon u^2\) when taking the limit \(\varepsilon\to 0\). This is wrong, because for one of the roots \(u^2\) gets mahoosive (not a technical term!) as \(\varepsilon\to 0\), so despite \(\varepsilon\) being small, \(\varepsilon u^2\) is not.
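To make the loss concrete, here is a minimal worked illustration using a representative equation of the same type (the coefficients are chosen purely for illustration and need not match the lecture example):
\[
\varepsilon u^2 + u - 1 = 0, \qquad 0 < \varepsilon \ll 1 .
\]
Setting \(\varepsilon = 0\) leaves \(u - 1 = 0\), which has a single root, even though the quadratic has two. The exact roots are
\[
u = \frac{-1 \pm \sqrt{1 + 4\varepsilon}}{2\varepsilon},
\]
and as \(\varepsilon \to 0\) the \(+\) root tends to \(1\) (the root the regular expansion finds), while the \(-\) root behaves like \(-1/\varepsilon - 1\) and so diverges. It is precisely this divergent root that the regular ansatz cannot capture.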
What can be done to find the missing roots?
As the regular perturbation ansatz (i.e. approximating roots \(x\) as \(x_0+\varepsilon x_1+\varepsilon^2 x_2+\ldots\)) did not do the job, we need a new one. Whatever we choose as our ansatz (i.e. the assumed form in which we seek to approximate roots) should mimic the bad behaviour of the roots. That is, it should blow up (diverge) as \(\varepsilon\to 0\). Consequently we use the singular perturbation ansatz
\[
x = \varepsilon^{\alpha} z(\varepsilon),
\]
where \(\alpha\) is a constant we must choose, depending on the problem; for the ansatz to blow up as \(\varepsilon\to 0\) we need \(\alpha<0\).
The method for choosing \(\alpha\) is called the method of dominant balance. The idea is to choose \(\alpha\) so that two (or more) of the terms in the equation are of the same order in \(\varepsilon\), while the remaining terms are of higher (less negative) order and are therefore negligible in comparison. We have seen many examples in the lecture notes of how to do this: we first tried trial and error before arriving at a more systematic approach, which is to construct a Kruskal–Newton graph.
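As a sketch of the trial-and-error version, take the representative equation \(\varepsilon u^2 + u - 1 = 0\) introduced above and substitute \(u = \varepsilon^{\alpha} z\). The three terms are then of sizes
\[
\varepsilon^{1+2\alpha} z^2, \qquad \varepsilon^{\alpha} z, \qquad 1 .
\]
Balancing the first two terms requires \(1 + 2\alpha = \alpha\), i.e. \(\alpha = -1\); both balanced terms are then of order \(\varepsilon^{-1}\), which dominates the remaining term of order \(1\), so this balance is consistent (and yields the missing root). Balancing the last two terms gives \(\alpha = 0\), the regular roots. Balancing the first and last terms gives \(\alpha = -1/2\), but then the neglected middle term is of order \(\varepsilon^{-1/2}\), larger than the balanced terms of order \(1\), so this balance is inconsistent and must be discarded.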
Kruskal–Newton graphs and using their results
These are used to determine which value or values of \(\alpha\) should be used to find the singularly perturbed roots (i.e. those roots that a regular perturbation approach misses). The process is to identify the values \(p,q\) for which the coefficients \(C_{p,q}\) are non-zero when the algebraic equation is written in the form
\[
\sum_{p,q} C_{p,q}\,\varepsilon^{q} x^{p} = 0 .
\]
Plot these points \((p,q)\) and then find all possible placements of a line going through two or more points with all other points lying above the line.
The slopes of the resulting lines give the values of \(-\alpha\) (note the minus sign) to use in the singular perturbation ansatz \(x=\varepsilon^\alpha z(\varepsilon)\); that is, a line of slope \(s\) corresponds to \(\alpha=-s\). You should always check that the power to which \(\varepsilon\) is raised is negative in any singular perturbation ansatz: only a negative power produces roots that blow up as \(\varepsilon\to 0\).
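For the representative equation \(\varepsilon u^2 + u - 1 = 0\) used above, written as \(C_{2,1}\varepsilon u^2 + C_{1,0} u + C_{0,0} = 0\), the non-zero coefficients sit at the points \((p,q) = (2,1), (1,0), (0,0)\). Two lines pass through two of the points with the remaining point above them: the line through \((0,0)\) and \((1,0)\) has slope \(0\), giving \(\alpha = 0\) (the regular root), and the line through \((1,0)\) and \((2,1)\) has slope \(1\), giving \(\alpha = -1\) and hence the singular ansatz \(u = \varepsilon^{-1} z(\varepsilon)\). The line through \((0,0)\) and \((2,1)\), of slope \(1/2\), is excluded because the point \((1,0)\) lies below it; this is the graphical counterpart of the inconsistent balance found by trial and error.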
Finally, substitute the ansatz or ansatzes, one at a time, into the original equation and then use regular perturbation methods on \(z\). That is, approximate \(z\) in the form \(z(\varepsilon)=z_0+\varepsilon z_1+\varepsilon^2 z_2+\ldots\) and determine \(z_0, z_1, \ldots\) order by order.
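Carrying this through for the representative example: with \(\alpha = -1\), substituting \(u = \varepsilon^{-1} z\) into \(\varepsilon u^2 + u - 1 = 0\) and multiplying through by \(\varepsilon\) gives
\[
z^2 + z - \varepsilon = 0 .
\]
Writing \(z = z_0 + \varepsilon z_1 + \ldots\), the \(O(1)\) terms give \(z_0^2 + z_0 = 0\), so \(z_0 = -1\) or \(z_0 = 0\), and the \(O(\varepsilon)\) terms give \(2 z_0 z_1 + z_1 - 1 = 0\), i.e. \(z_1 = 1/(2 z_0 + 1)\). The choice \(z_0 = -1\) gives \(z \approx -1 - \varepsilon\) and hence the previously missing root \(u \approx -1/\varepsilon - 1\), while \(z_0 = 0\) gives \(z \approx \varepsilon\), i.e. \(u \approx 1\), recovering the regular root.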