The archetypal chaotic dynamical system is the Lorenz system, defined by the differential equation system

$$\begin{aligned}
\frac{dx}{dt} &= \sigma (y - x), \\
\frac{dy}{dt} &= x (\rho - z) - y, \\
\frac{dz}{dt} &= x y - \beta z,
\end{aligned}$$

where $\sigma$, $\rho$ and $\beta$ are real constants. On the face of it, this looks like a fairly innocuous set of differential equations. However, as is well known, for a wide range of choices of $\sigma$, $\rho$ and $\beta$, this system of equations has chaotic orbits, with sensitive dependence on initial conditions and an attractor of fractal dimension.
Determining whether a given dynamical system will exhibit chaotic behaviour is, in general, difficult¹. The orbit structure of chaotic systems can be very complex, and it can be difficult to distinguish true chaos from a number of related phenomena.
One approach to characterising the orbit structure of dynamical systems is via Lyapunov exponents, a spectrum of values that measure the stretching and squashing of orbits. “Normal” chaotic systems typically have at least one positive Lyapunov exponent and are dissipative, which means that the sum of all of their Lyapunov exponents is negative. While it is relatively straightforward to define Lyapunov exponents, calculating them in practice is tricky. We’ll see how to do this in some practical cases in this and the next couple of articles, mostly following the approach of Rangarajan et al. (1998).
The motivation for this series of articles is that I have recently been wondering how practical it would be to use Haskell for more “traditional” mathematics: there is a large body of work and code relating to category theory and type theory in Haskell, but I’ve not seen so much about matrix algebra, differential equations, dynamical systems and so on. I implemented the Rangarajan method in Mathematica some years ago and thought it might make a good test case. I’m going to gloss over a lot of technical details about the computation of Lyapunov exponents, since my main interest is in the Haskell implementation issues.
What are Lyapunov exponents?
Consider a continuous dynamical system

$$\frac{dx}{dt} = F(x)$$

with $x \in \mathbb{R}^n$ and with reasonable continuity assumptions for the vector field $F$. Consider a particular trajectory of the system $\tilde{x}(t)$, which we will call the reference trajectory. We can investigate the orbit structure of the system near $\tilde{x}(t)$ by considering the deviations of nearby orbits. Writing $\delta x(t) = x(t) - \tilde{x}(t)$ and linearising about the reference trajectory, we find that

$$\frac{d\, \delta x}{dt} = DF(\tilde{x}(t))\, \delta x,$$

where $DF$ is the Jacobian matrix of $F$.
We can integrate this linearised equation along the reference trajectory to get the fundamental solution matrix $M(t)$, which evolves an initial separation $\delta x(0)$ into the separation at time $t$, i.e. $\delta x(t) = M(t)\, \delta x(0)$. The Lyapunov exponents of the system² are then the logarithms of the eigenvalues of the matrix

$$\Lambda = \lim_{t \to \infty} \left[ M(t)\, M(t)^T \right]^{1/2t}.$$
The Lyapunov exponents measure the rate of divergence (or convergence) of nearby solutions to the original system of differential equations.
In order to calculate the Lyapunov exponents for a system, the approach that we will follow is to find an explicit representation for the matrix $M(t)$ and its time evolution.
Decomposition of the fundamental solution matrix
The columns of the matrix $M(t)$ are solutions to the linearisation of the original differential equation system, meaning that

$$\frac{dM}{dt} = DF\, M, \qquad M(0) = I.$$
The key to the approach of Rangarajan et al. is to develop a suitable decomposition of the matrix $M$. We use a QR decomposition, so that we write $M = QR$, with $Q$ an orthogonal matrix and $R$ an upper triangular matrix with positive diagonal entries. The evolution equation for $M$ then becomes

$$\dot{Q} R + Q \dot{R} = DF\, Q R.$$

Multiplying on the left by $Q^T$ and on the right by $R^{-1}$, we get

$$Q^T \dot{Q} + \dot{R} R^{-1} = Q^T (DF)\, Q.$$

The two terms on the left hand side have a special structure that will be helpful: $Q^T \dot{Q}$ is skew-symmetric³, while $\dot{R} R^{-1}$ is still upper triangular⁴. We’ll call this our “basic equation”, since it’s what we’re going to use to develop a system we can solve for the Lyapunov exponents.
Representation of orthogonal matrices
We now choose an explicit representation for the orthogonal matrix $Q$. In $n$ dimensions, a rotation matrix is characterised by $n(n-1)/2$ parameters, which we can choose to be angles $\theta_{ij}$ representing simple rotations in the planes spanned by coordinates $i$ and $j$ (with $i < j$). A simple rotation in the $(i, j)$ plane is then represented by the matrix $O^{(ij)}(\theta_{ij})$: the identity matrix modified so that its $(i, i)$ and $(j, j)$ entries are $\cos\theta_{ij}$, its $(i, j)$ entry is $-\sin\theta_{ij}$, and its $(j, i)$ entry is $\sin\theta_{ij}$.

The full orthogonal matrix is then

$$Q = \prod_{i < j} O^{(ij)}(\theta_{ij}).$$

Below, we will also label the angles as $\theta_1, \dots, \theta_{n(n-1)/2}$, in the order the corresponding matrices appear in the definition of $Q$.
We can use this explicit representation of $Q$ in terms of the $\theta_k$ to calculate $Q^T \dot{Q}$ on the left hand side of the evolution equation for $M$, and $Q^T (DF)\, Q$ on the right hand side. We’ll write $Q^T (DF)\, Q$ as $S$.

In general, the entries in the matrix $Q^T \dot{Q}$ consist of terms involving products of sines and cosines of the $\theta_k$, each multiplied by a single derivative factor $\dot{\theta}_k$. Because $\dot{R} R^{-1}$ is upper triangular, the strictly lower triangular entries of $Q^T \dot{Q}$ must equal the corresponding entries of $S$. We can therefore extract the lower triangular entries of $Q^T \dot{Q}$, evaluate the sines and cosines to give linear expressions involving the $\dot{\theta}_k$, then solve for the $\dot{\theta}_k$ using the corresponding lower triangular entries from the right hand side matrix $S$.
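As a concrete illustration of this representation, here is a small Haskell sketch that builds $Q$ as a product of simple plane rotations. It uses plain list-of-lists matrices for clarity; the function names (`planeRotation`, `orthogonal`) are mine, and nothing here comes from a library:

```haskell
import Data.List (transpose)

type Mat = [[Double]]

identity :: Int -> Mat
identity n = [ [ if i == j then 1 else 0 | j <- [0 .. n - 1] ] | i <- [0 .. n - 1] ]

matMul :: Mat -> Mat -> Mat
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

-- Simple rotation by theta in the (i, j) coordinate plane (0-based indices):
-- the identity, with a 2x2 rotation block in rows/columns i and j.
planeRotation :: Int -> Int -> Int -> Double -> Mat
planeRotation n i j theta =
  [ [ entry r c | c <- [0 .. n - 1] ] | r <- [0 .. n - 1] ]
  where
    entry r c
      | r == i && c == i =  cos theta
      | r == j && c == j =  cos theta
      | r == i && c == j = -sin theta
      | r == j && c == i =  sin theta
      | r == c           =  1
      | otherwise        =  0

-- Q in n dimensions from the n(n-1)/2 angles, one per plane (i, j), i < j.
orthogonal :: Int -> [Double] -> Mat
orthogonal n thetas = foldl matMul (identity n) rotations
  where
    planes    = [ (i, j) | i <- [0 .. n - 1], j <- [i + 1 .. n - 1] ]
    rotations = zipWith (\(i, j) t -> planeRotation n i j t) planes thetas
```

Multiplying `orthogonal n thetas` by its transpose recovers the identity, which is a useful sanity check on the construction.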
Putting it all together
We also need a representation for the upper triangular matrix $R$. We’re actually only interested in the on-diagonal elements of this matrix, so we write

$$R_{ii} = e^{\lambda_i}.$$

The exponential form of the diagonal entries ensures that they are positive (as required), but also handles the issue of exponential scaling arising from the stretching and squashing of phase space volumes. The quantities we’re going to be interested in are actually the $\lambda_i$, which turn out to be closely related to the Lyapunov exponents: in fact, the $i$th Lyapunov exponent is $\lim_{t \to \infty} \lambda_i(t) / t$. The matrix $R$ appears in the evolution equation for the fundamental solution matrix in the form

$$\dot{R} R^{-1} =
\begin{pmatrix}
\dot{\lambda}_1 & * & \cdots & * \\
0 & \dot{\lambda}_2 & & \vdots \\
\vdots & & \ddots & * \\
0 & \cdots & 0 & \dot{\lambda}_n
\end{pmatrix},$$

where the $*$ entries are values that we’re not particularly interested in here. We then have what we need for the left hand side of our “basic equation”:

$$\left( Q^T \dot{Q} + \dot{R} R^{-1} \right)_{ij} =
\begin{cases}
\dot{\lambda}_i & i = j, \\
(Q^T \dot{Q})_{ij} & i > j, \\
* & i < j,
\end{cases}$$

where, again, the $*$ entries are values that we don’t make use of here.
To integrate a system of equations for the $\theta_k$ and $\lambda_i$, we proceed as follows:

1. For the current state $(x, \theta_k, \lambda_i)$, calculate: the right hand side of the original system, $F(x, t)$; then the matrix $S = Q^T (DF)\, Q$ (which depends on the system state and on the $\theta_k$); and the matrix $Q^T \dot{Q}$ (which depends on the $\theta_k$ and for which each entry is a linear function of the $\dot{\theta}_k$).
2. Equate the strictly lower triangular entries of $Q^T \dot{Q}$ to the strictly lower triangular entries of $S$ and solve the resulting linear system for the $\dot{\theta}_k$.
3. Equate the diagonal entries of $S$ with the derivatives $\dot{\lambda}_i$.
4. Step the current state forwards using the derivatives $(\dot{x}, \dot{\theta}_k, \dot{\lambda}_i)$.
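The stepping procedure above can be sketched as a generic fixed-step integrator. This is only a sketch (the article itself uses RKF45 from GSL via hmatrix, and `rk4Step` and `integrate` are illustrative names of my own), with the system-specific work of steps 1-3 hidden inside the supplied derivative function:

```haskell
import Data.List (zipWith5)

-- Combined state: system variables, angles and lambdas, flattened to a list.
type State = [Double]

-- One classical fourth-order Runge-Kutta step of size h at time t.
rk4Step :: (Double -> State -> State) -> Double -> Double -> State -> State
rk4Step deriv h t s = zipWith5 comb s k1 k2 k3 k4
  where
    comb x a b c d = x + h / 6 * (a + 2 * b + 2 * c + d)
    k1 = deriv t s
    k2 = deriv (t + h / 2) (zipWith (\x a -> x + h / 2 * a) s k1)
    k3 = deriv (t + h / 2) (zipWith (\x a -> x + h / 2 * a) s k2)
    k4 = deriv (t + h)     (zipWith (\x a -> x + h     * a) s k3)

-- Take n steps from (t0, s0), returning the whole trajectory.
integrate :: (Double -> State -> State)
          -> Double -> Double -> State -> Int -> [(Double, State)]
integrate deriv h t0 s0 n = take (n + 1) (iterate step (t0, s0))
  where step (t, s) = (t + h, rk4Step deriv h t s)
```

The derivative function passed in is where steps 1-3 live: it builds $S$ and $Q^T \dot{Q}$ from the current state and reads off the $\dot{\theta}_k$ and $\dot{\lambda}_i$.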
Let us calculate the Lyapunov exponents of the driven van der Pol oscillator, written as the first-order system

$$\dot{x} = y, \qquad \dot{y} = \mu (1 - x^2) y - x + A \cos \omega t,$$

where $\mu$, $A$ and $\omega$ are real constants. For the numerical calculations below, we fix values of $\mu$, $A$ and $\omega$ for which this (nonautonomous) system has chaotic orbits.
The gradient of the right hand side vector field for this system is

$$DF = \begin{pmatrix} 0 & 1 \\ -1 - 2 \mu x y & \mu (1 - x^2) \end{pmatrix}.$$
The system is two-dimensional, so the orthogonal matrix $Q$ is characterised by a single angle $\theta$:

$$Q = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$$

and, writing $J_{21} = -1 - 2 \mu x y$ and $J_{22} = \mu (1 - x^2)$ for the nontrivial entries of $DF$, the entries in the matrix $S = Q^T (DF)\, Q$ are

$$\begin{aligned}
S_{11} &= (1 + J_{21}) \sin\theta \cos\theta + J_{22} \sin^2\theta, \\
S_{12} &= \cos^2\theta - J_{21} \sin^2\theta + J_{22} \sin\theta \cos\theta, \\
S_{21} &= J_{21} \cos^2\theta - \sin^2\theta + J_{22} \sin\theta \cos\theta, \\
S_{22} &= -(1 + J_{21}) \sin\theta \cos\theta + J_{22} \cos^2\theta.
\end{aligned}$$
Inserting explicit expressions for the entries of $DF$, collecting the diagonal entries of $S$ as the time derivatives of the $\lambda_i$ and equating the lower triangular entry of $S$ to the corresponding entry of the matrix $Q^T \dot{Q}$ (which here is just $\dot{\theta}$) yields the full equations for the evaluation of the Lyapunov exponents as

$$\begin{aligned}
\dot{x} &= y, \\
\dot{y} &= \mu (1 - x^2) y - x + A \cos \omega t, \\
\dot{\theta} &= J_{21} \cos^2\theta - \sin^2\theta + J_{22} \sin\theta \cos\theta, \\
\dot{\lambda}_1 &= (1 + J_{21}) \sin\theta \cos\theta + J_{22} \sin^2\theta, \\
\dot{\lambda}_2 &= -(1 + J_{21}) \sin\theta \cos\theta + J_{22} \cos^2\theta,
\end{aligned}$$

where $J_{21} = -1 - 2 \mu x y$ and $J_{22} = \mu (1 - x^2)$.
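Assuming the first-order form and parameter names used above (with `mu`, `amp` and `omega` standing in for $\mu$, $A$ and $\omega$), the augmented system can be written as a single Haskell derivative function. This is a sketch of what gets handed to the ODE solver, not the article's actual code:

```haskell
-- State layout: [x, y, theta, lambda1, lambda2].
vdpAugmented :: Double -> Double -> Double -> Double -> [Double] -> [Double]
vdpAugmented mu amp omega t [x, y, theta, _l1, _l2] =
  [ y                                                  -- x' = y
  , mu * (1 - x * x) * y - x + amp * cos (omega * t)   -- y'
  , j21 * c * c - s * s + j22 * s * c                  -- theta'   = S_21
  , (1 + j21) * s * c + j22 * s * s                    -- lambda1' = S_11
  , negate ((1 + j21) * s * c) + j22 * c * c           -- lambda2' = S_22
  ]
  where
    c   = cos theta
    s   = sin theta
    j21 = -1 - 2 * mu * x * y        -- d(y')/dx
    j22 = mu * (1 - x * x)           -- d(y')/dy
vdpAugmented _ _ _ _ _ = error "state must be [x, y, theta, lambda1, lambda2]"
```

A useful sanity check: $\dot{\lambda}_1 + \dot{\lambda}_2 = J_{22}$, the trace of the Jacobian, independent of $\theta$.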
This ODE system can be integrated using more or less any numerical scheme: here, we use the RKF45 scheme from the GSL library (via the Numeric.GSL.ODE module in the hmatrix Haskell package).
Here are plots of estimates of the Lyapunov exponents of this system generated using this approach:
We see rapid convergence of the estimates: taking the values in the time range [500, 1000], we estimate that the first Lyapunov exponent lies in the range [0.086601, 0.104736] and the second Lyapunov exponent in the range [-6.886599, -6.804369].
We want to implement this scheme for calculating Lyapunov exponents in Haskell. Questions that immediately arise:
How do we represent the input ODE system so that we can easily compute the gradient $DF$? Answer: as Haskell functions – we use automatic differentiation to evaluate the gradient efficiently.
How can we manage the complexity of the matrix $Q^T \dot{Q}$ for larger systems? Answer: we use a semi-symbolic approach, where we evaluate and simplify the entries of $Q^T \dot{Q}$ once and use the result to build a matrix that allows us to extract the derivatives $\dot{\theta}_k$.
How do we represent matrices and perform matrix computations? Answer: we use the hmatrix package.
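To illustrate the automatic differentiation idea from the first question, here is a minimal forward-mode sketch using dual numbers. This is purely illustrative: the `Dual` type and `jacobianRow` are toy names of my own, not from any package (a real implementation would use something like the `ad` package). The point is that a vector field written polymorphically over `Num` can be evaluated at dual numbers to get derivatives for free:

```haskell
-- A dual number carries a value and a derivative along one direction.
data Dual = Dual Double Double

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  fromInteger n         = Dual (fromInteger n) 0
  abs (Dual a a')       = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0

-- Partial derivatives of a scalar function f at x: one forward pass per
-- coordinate direction, seeding the derivative slot with a unit vector.
jacobianRow :: ([Dual] -> Dual) -> [Double] -> [Double]
jacobianRow f x =
  [ d
  | i <- [0 .. length x - 1]
  , let Dual _ d = f [ Dual v (if i == j then 1 else 0)
                     | (j, v) <- zip [0 ..] x ]
  ]
```

Applying `jacobianRow` to each component of the vector field gives the rows of $DF$ evaluated at a point, with no symbolic differentiation and no finite-difference error.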
The next article in this series will present the code used to implement this scheme in Haskell, along with a slightly more complex example.
There is a slight subtlety here, in that Lyapunov exponents strictly are defined for the trajectory $\tilde{x}(t)$, not for the system as a whole. However, we are generally interested in systems with chaotic dynamics, i.e. systems where trajectories lie in a compact chaotic invariant set. Under these conditions, we can talk about the Lyapunov exponents of the chaotic attractor and, by extension, of the system as a whole.↩
As can be seen by differentiating $Q^T Q = I$, the definition of an orthogonal matrix.↩
Both $\dot{R}$ and $R^{-1}$ are upper triangular and thus so is their product.↩