Recently I’ve been working on “hindcasting” weather events with a climate model. A hindcast is just a forecast, but you do it after the event has happened. Many climate models have issues in how they represent the relationship between convection and environmental moisture. It is difficult to compare the variability in a climate simulation against observed variability, because the long-term averages of the two differ slightly, in ways that affect the variability itself. So, the idea behind these hindcasts is to compare a climate model to observations in the short period after model initialization, while the two are still close to each other.
A common problem in this type of work is that the models quickly “drift” away from the initial state. This drift is sometimes referred to as “model error”. Up till now, I’ve been thinking about model error as a problem with the model that needs to be corrected. However, this new paper by Judd et al. (2008) has changed my thinking on this issue.
The crux of this paper is that instead of trying to “improve” the model and make it more like reality, it’s important to recognize that all weather and climate models are “imperfect” models (which is actually a technical term), and that each has its own attracting states. The real atmosphere has a different attracting state, so when we initialize a model from the real atmosphere’s state, the model is strongly pulled back toward its own attractor. I find this field of dynamical systems fascinating, which is why I did my undergraduate thesis on it!
This Judd et al. (2008) paper introduced me to the idea of a “shadow analysis”. The word “analysis” in this context refers to an approximation of the true state of the atmosphere. In other words, raw observations are used to create an “analysis” that can initialize a weather model. However, an analysis does not generally lie near an attractor of the weather model. A “shadow analysis” is thus a geometric projection of the analysis onto the attractor!
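To make the “projection onto the attractor” idea concrete, here’s a toy sketch in Python. This is my own illustration, not the paper’s algorithm: Judd et al. work with an actual weather model, whereas I use the classic Lorenz-63 system as a stand-in “model”, approximate its attractor with a long model run, and “project” an off-attractor state onto it with a simple nearest-neighbour lookup. The made-up `analysis` state and the nearest-neighbour projection are both assumptions for illustration only.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classic Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Sample the model's attractor: run long, discarding the initial transient.
s = np.array([1.0, 1.0, 1.0])
for _ in range(2000):          # spin-up so we start on the attractor
    s = rk4_step(s)
cloud = np.empty((50000, 3))
for i in range(50000):
    s = rk4_step(s)
    cloud[i] = s

# A made-up "analysis": a state estimate well off the model's attractor
# (z = 60 is above anything the Lorenz attractor ever reaches).
analysis = np.array([5.0, 5.0, 60.0])

# Crude "shadow analysis": the nearest sampled attractor state.
shadow = cloud[np.argmin(np.linalg.norm(cloud - analysis, axis=1))]

print("analysis:        ", analysis)
print("shadow analysis: ", shadow)
```

The nearest-neighbour step is only a cartoon of a geometric projection, but it captures the key point: the shadow analysis is a state the model itself is “willing” to occupy, while the raw analysis is not.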
The figure below nicely illustrates the difference between the “analysis” and the “shadow analysis”.
Notice that the shadow analysis lies on the “attracting manifold”, which is a high-dimensional geometric object containing the states that a system gravitates towards. There’s probably a better definition out there, but just think of the “wings” of the Lorenz butterfly:
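If you want to see that “gravitates towards” behaviour for yourself, here is a minimal sketch using the Lorenz-63 system (a toy model, nothing to do with the weather model in the paper): two wildly different initial conditions both end up tracing out the same bounded, butterfly-shaped set.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classic Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(s, n_steps, dt=0.01):
    """Integrate forward, returning the trajectory as an (n_steps, 3) array."""
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        s = rk4_step(s, dt)
        traj[i] = s
    return traj

# Two very different starting points...
traj_a = integrate(np.array([1.0, 1.0, 1.0]), 5000)
traj_b = integrate(np.array([-10.0, 15.0, 40.0]), 5000)

# ...both get drawn into the same bounded, butterfly-shaped region
# of state space: that region is the attractor.
```

Plotting `traj_a[:, 0]` against `traj_a[:, 2]` (and likewise for `traj_b`) shows both trajectories sweeping out the same two “wings”.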
Let’s take a step back… this stuff is very hard to think about. We’re talking about geometry in an n-dimensional space that will forever be impossible for our feeble minds to visualize. Honestly, I had a hard time with the lingo in this paper, and I’ll need to read it a few more times. However, the paper provides some very nice real-world examples to show how these ideas come into play!
The plot below is very difficult to explain, but I’m going to try my best. Both lines represent the vorticity of a forecast by a certain weather model. The darker line is a forecast initialized from the shadow analysis, while the lighter line is a forecast initialized from a typical analysis. As the forecast evolves we can see that the lighter line merges onto the path of the darker line. This shows that the model does indeed have an attracting manifold, and there is a large error in the initial part of the forecast due to the model being pulled toward the attractor.
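We can mimic that “pulled toward the attractor” behaviour with a toy Lorenz-63 sketch (again, my own cartoon, not the model or diagnostics from the paper): start a “forecast” from a state pushed well off the attractor and watch its distance to the attractor collapse, even while the trajectory itself remains chaotic.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classic Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Dense sample of the attractor (after discarding a spin-up transient).
s = np.array([1.0, 1.0, 1.0])
for _ in range(2000):
    s = rk4_step(s)
cloud = np.empty((50000, 3))
for i in range(50000):
    s = rk4_step(s)
    cloud[i] = s

def dist_to_attractor(p):
    """Distance from a state to the nearest sampled attractor point."""
    return np.linalg.norm(cloud - p, axis=1).min()

# A "forecast" started from an off-attractor initial state...
state = np.array([5.0, 5.0, 60.0])
d_start = dist_to_attractor(state)
for _ in range(500):            # 5 model time units
    state = rk4_step(state)
d_end = dist_to_attractor(state)

# ...collapses onto the attractor: d_end is a tiny fraction of d_start.
print(f"distance to attractor: start {d_start:.2f}, end {d_end:.3f}")
```

This is the same qualitative story as the vorticity plot: the early part of the forecast is dominated by the rapid fall onto the attractor, after which the off-attractor forecast shadows an on-attractor one.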
This work is dense, but I think these are valuable ideas that could help improve weather forecasting. If nothing else, it gives us a new vocabulary for understanding forecast errors.
As a side note, the first author, Kevin Judd, seems like an interesting guy. Of his many academic ventures, he developed an online tool for learning about calculus, statistics and linear algebra, called CalMæth.