# Non-diffusive atmospheric flow #12: dynamics warm-up

The analysis of preferred flow regimes in the previous article is all very well, and in its way quite illuminating, but it was an entirely static analysis: we didn’t make any use of the fact that the original $Z_{500}$ data was a time series, so we couldn’t gain any information about transitions between different states of atmospheric flow. We’ll attempt to remedy that situation now.

What sort of approach can we use to look at the dynamics of changes in patterns of $Z_{500}$? Our $(\theta, \phi)$ parameterisation of flow patterns seems like a good start, but we need some way to model transitions between different flow states, i.e. between different points on the $(\theta, \phi)$ sphere. Each of our original $Z_{500}$ maps corresponds to a point on this sphere, so we might hope that we can come up with a way of looking at trajectories of points in $(\theta, \phi)$ space that will give us some insight into the dynamics of atmospheric flow.

Since atmospheric flow clearly has some stochastic element to it, a natural approach to take is to try to use some sort of Markov process to model transitions between flow states. Let me give a very quick overview of how we’re going to do this before getting into the details. In brief, we partition our $(\theta, \phi)$ phase space into $P$ components, assign each $Z_{500}$ pattern in our time series to a component of the partition, then count transitions between partition components. In this way, we can construct a matrix $M$ with

$$M_{ij} = \frac{N_{i \to j}}{N_{\mathrm{tot}}}$$

where $N_{i \to j}$ is the number of transitions from partition component $i$ to partition component $j$ and $N_{\mathrm{tot}}$ is the total number of transitions. We can then use this Markov matrix to answer some questions about the type of dynamics present in our data: splitting the Markov matrix into its symmetric and antisymmetric components allows us to look at, respectively, diffusive (or irreversible) and non-diffusive (or conservative) dynamics.
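As a small sketch of this bookkeeping (the partition count and the toy state sequence below are invented for illustration; the real calculation works on the $(\theta, \phi)$ partition, and uses matrices rather than plain lists), the transition counting and the symmetric/antisymmetric split might look like this:

```haskell
import Data.List (transpose)

type Mat = [[Double]]

-- Estimate the Markov matrix with M_ij = N_(i->j) / N_tot from a
-- sequence of partition component labels drawn from 0 .. p-1.
markov :: Int -> [Int] -> Mat
markov p states =
  let trans = zip states (tail states)
      ntot  = fromIntegral (length trans)
      count i j = fromIntegral . length $ filter (== (i, j)) trans
  in [ [ count i j / ntot | j <- [0 .. p-1] ] | i <- [0 .. p-1] ]

-- Symmetric part (M + M^T)/2: the diffusive (irreversible) component.
symPart :: Mat -> Mat
symPart m = zipWith (zipWith (\a b -> (a + b) / 2)) m (transpose m)

-- Antisymmetric part (M - M^T)/2: the non-diffusive (conservative) component.
asymPart :: Mat -> Mat
asymPart m = zipWith (zipWith (\a b -> (a - b) / 2)) m (transpose m)
```

Note that $M$ is normalised by the total number of transitions, not row by row, so its entries are joint transition frequencies; the symmetric and antisymmetric parts then sum back to $M$ exactly.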

Before trying to apply these ideas to our $Z_{500}$ data, we’ll look (in the next article) at a very simple Markov matrix calculation done by hand, to get some understanding of what these concepts really mean. Before that, though, we need to take a look at the temporal structure of the $Z_{500}$ data: in particular, if we’re going to model transitions between flow states by a Markov process, we really want uncorrelated samples from the flow, and our daily $Z_{500}$ data is clearly correlated, so we need to do something about that.
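One simple remedy (a sketch only: whether and how far to thin the data is something we can only decide after looking at the autocorrelation results, and the interval used below is an invented example) is to subsample the daily series, keeping every $n$-th day:

```haskell
-- Keep every n-th element of a series, starting from the first: a
-- crude way to turn correlated daily samples into approximately
-- independent ones, once a decorrelation time of n days is known.
every :: Int -> [a] -> [a]
every _ []       = []
every n (x : xs) = x : every n (drop (n - 1) xs)
```

For example, `every 7` applied to a winter of daily maps keeps one map per week, which is only a sensible choice if the autocorrelation has decayed to near zero by a lag of seven days.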

## Autocorrelation properties

Let’s look at the autocorrelation properties of the PCA projected component time series from our original $Z_{500}$ data. We use the `autocorrelation` function from the Haskell statistics package to calculate and save the autocorrelation for each of these projected time series. There is one slight wrinkle: because we have multiple winters of data, we want to calculate autocorrelation functions for each winter and average them. We do not want to treat all the data as a single continuous time series, because if we do we’ll be treating the jump from the end of one winter to the beginning of the next as “just another day”, which would be quite wrong. We’ll need to pay attention to this point when we calculate Markov transition matrices too. Here’s the code to calculate the autocorrelation:

```haskell
npcs, nday, nyear :: Int
npcs = 10
nday = 151
nyear = 66

main :: IO ()
main = do
  -- Open projected points data file for input.
  Right innc <- openFile $ workdir </> "z500-pca.nc"
  let Just ntime = ncDimLength <$> ncDim innc "time"
  let (Just projvar) = ncVar innc "proj"
  Right (HMatrix projsin) <-
    getA innc projvar [0, 0] [ntime, npcs] :: HMatrixRet CDouble

  -- Split projections into one-year segments.
  let projsconv = cmap realToFrac projsin :: Matrix Double
      lens = replicate nyear nday
      projs = map (takesV lens) $ toColumns projsconv

  -- Calculate autocorrelation for each one-year segment and average.
  let vsums :: [Vector Double] -> Vector Double
      vsums = foldl1 (SV.zipWith (+))
      fst3 (x, _, _) = x
      doone :: [Vector Double] -> Vector Double
      doone ps = SV.map (/ (fromIntegral nyear)) $
                 vsums $ map (fst3 . autocorrelation) ps
      autocorrs = fromColumns $ map doone projs

  -- Generate output file.
  let outpcdim = NcDim "pc" npcs False
      outpcvar = NcVar "pc" NcInt [outpcdim] M.empty
      outlagdim = NcDim "lag" (nday - 1) False
      outlagvar = NcVar "lag" NcInt [outlagdim] M.empty
      outautovar = NcVar "autocorr" NcDouble [outpcdim, outlagdim] M.empty
      outncinfo =
        emptyNcInfo (workdir </> "autocorrelation.nc") #
  flip (withCreateFile outncinfo) (putStrLn . ("ERROR: " ++) . show) $
    \outnc -> do
      -- Write coordinate variable values.
      put outnc outpcvar $
```