The 1970s and 80s saw the birth of a new science, “chaos theory”, which transformed our understanding of the natural world. Chaos showed that seemingly random patterns in coastlines, mountains, and clouds display a hidden underlying regularity. The order behind the unpredictability was something that evaded the traditional way of doing physics and mathematics; new approaches had to be developed. As Predrag Cvitanović implored, “junk your old equations and look for guidance in clouds’ repeating patterns.”
Chaos faced an uphill battle as it upended patterns of thought that were established through centuries of wisdom. But it gradually won, eventually changing how the project of science was conceived across a number of fields from turbulence to ecology to human health.
So the story goes.
Much of the development of chaos theory was explicitly motivated by the fractal-like appearance of clouds. The discovery that kickstarted the field, by Edward Lorenz, was based on an attempt to simulate atmospheric turbulence. But I’d bet that today you could make it through a whole Ph.D. in atmospheric science without learning about the basic insights behind chaos theory at all. Rather than “junking” the old equations, the field of atmospheric science has spent the past four decades doubling down on the old equations, requiring ever-larger computer simulations in the process. I believe this is a mistake.
It’s not that the insights from chaos theory turned out to be wrong, or even that they were entirely ignored. Ensemble-based forecast systems, where weather forecasts are run repeatedly with slightly different initial conditions, are widespread today. They are necessary because of the “butterfly effect”: the fact that weather predictions can change drastically with even tiny changes to the initial conditions, caricatured by statements like “the flap of a butterfly’s wing in South America might cause a tornado in Kansas a week later”. This property of “sensitive dependence on initial conditions” is a general phenomenon across chaotic systems, and it’s often seen as the primary insight from chaos theory.
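To make sensitive dependence concrete, here is a minimal sketch (not any operational forecast system) that integrates Lorenz’s famous three-variable model from two initial conditions differing by one part in a million and prints how far apart the trajectories drift:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz (1963) system by one forward-Euler step."""
    x, y, z = state
    rates = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * rates

# Two initial conditions that differ by one part in a million.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```

The two runs track each other closely at first, but the separation grows roughly exponentially until it is as large as the attractor itself, which is exactly why forecasts are run as ensembles rather than as a single simulation.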
But I’d argue an even more important feature of chaotic systems is called “scale invariance”, and that this feature is not appreciated nearly enough, especially in atmospheric science. One reason, perhaps, is that scale invariance is most commonly seen as a geometrical property that doesn’t clearly relate to physical mechanisms or dynamics. Here, I want to describe why this is not true. First, though, let me describe what scale invariance is, and this is easiest through geometry; we will return to dynamics later.
[Figure: the Sierpinski triangle]
The basic property that makes a fractal a fractal is that there is some pattern that repeats at smaller and smaller scales. This is specifically a relationship between the structures at one scale and the structures at the next smallest scale: for the above Sierpinski triangle, the smaller triangles are direct copies of the larger triangles, scaled to 1/2 the size. The next set of triangles is again a direct copy, scaled down once more by the same factor of 1/2.
The key point is that the structures at each scale are defined by reference to the structures at another scale. And those structures, in turn, are defined in relation to yet another scale. What defines the fractal is not the overall size of a given triangle, but the relationship between triangles of different sizes. And it is the same relationship regardless of how large the starting triangle is—above, every set of triangles is precisely 1/2 as large as the next largest set of triangles. This factor of 1/2 applies independent of scale. This is scale invariance: structures of one size are only defined relative to structures of another size, and the relationship between sizes applies equally to all sizes.
An object is scale invariant when the relationship between the patterns at one scale and another scale is the same, regardless of the original scale chosen.
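As a toy illustration of what such a scale-relating rule looks like in practice, here is a short sketch that builds the Sierpinski triangle by applying one rule over and over: replace every triangle with three copies at half its size.

```python
import numpy as np

def sierpinski(vertices, depth):
    """Recursively apply one scale-invariant rule: replace each triangle
    with three half-sized copies, one in each corner."""
    if depth == 0:
        return [vertices]
    a, b, c = vertices
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2   # edge midpoints
    return (sierpinski(np.array([a, ab, ca]), depth - 1)
            + sierpinski(np.array([ab, b, bc]), depth - 1)
            + sierpinski(np.array([ca, bc, c]), depth - 1))

# Start from one large triangle; the rule does the rest at every scale.
top = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
triangles = sierpinski(top, depth=5)
print(len(triangles), "triangles after 5 applications of the same rule")  # 3**5 = 243
```

The rule never refers to any absolute size; it only relates each triangle to the triangles one level below it, which is exactly the sense in which the construction is scale invariant.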
Scale invariance is the basic property that makes a fractal appear visually striking. It also provides a framework to quantify fractals by specifying the relationship between sizes—in the example above, the factor of 1/2. In the atmosphere, geometrical scale invariance is widely observed, but this is not necessarily all we care about. Describing how or why a cloud forms is, in many ways, more important than simply describing its shape. We want to know what the physical mechanism is that is responsible for cloud formation.
If we observe that cloud shapes are scale invariant, what does that tell us about the mechanism that is responsible for cloud formation? In short, scale invariance in cloud shapes implies that the mechanism by which clouds are created is also scale invariant. Let me explain what this means by using a specific example: the sizes of different clouds as viewed from space. A typical satellite sees something like this:
[Figure: satellite image reduced to a cloud mask, with cloudy pixels in white and clear pixels in blue]
Here, the satellite image has been simplified to just identify cloudy pixels (white) and non-cloudy pixels (blue). From images like this, we can observe a geometrical property: how often clouds of a given size are observed, which is called the size distribution. From this geometrical property, our goal is to infer something about the dynamical mechanisms that cause clouds to form. For example, clouds are often caused by rising air (updrafts). What can we say about updrafts from what we see?
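As a rough sketch of how such a size distribution can be computed from an image like this (the published analyses behind the figures here surely differ in detail), one can label connected cloudy regions in a binary cloud mask and count their areas; the cloud mask below is a made-up stand-in for real satellite data:

```python
import numpy as np
from scipy import ndimage

def cloud_areas(cloud_mask, pixel_km=1.0):
    """Return the area (km^2) of every connected cloudy region in a
    2D boolean mask (True = cloudy pixel)."""
    labels, _ = ndimage.label(cloud_mask)           # group touching cloudy pixels into clouds
    pixel_counts = np.bincount(labels.ravel())[1:]  # pixels per cloud; label 0 is clear sky
    return pixel_counts * pixel_km ** 2

# Made-up stand-in for a real cloud mask: a thresholded random field.
rng = np.random.default_rng(0)
cloud_mask = rng.random((512, 512)) > 0.7

areas = cloud_areas(cloud_mask, pixel_km=1.0)
bins = np.logspace(0, np.log10(areas.max()) + 0.1, 20)   # logarithmically spaced size bins
counts, _ = np.histogram(areas, bins=bins)
print(f"{len(areas)} clouds; counts per size bin: {counts}")
```

Real analyses have to worry about sensor resolution, map projection, and clouds cut off at the image edge, but the basic quantity—how many clouds of each size there are—is this simple.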
Let’s consider the most intuitive case first: that large clouds are created by one mechanism, small clouds by a different mechanism, and so on. For example, shallow convective updrafts tend to be roughly 100 m wide, and so they tend to create clouds that are roughly 100 m wide. Thunderstorms typically span a much larger area, so they must be created by a different type of updraft, which we call deep convection. Still larger systems are created by synoptic-scale dynamical mechanisms called fronts. This intuitive approach makes an important assumption: a single mechanism can only operate over a narrow range of scales, and therefore can only produce clouds within a narrow range of sizes. This has been called the “scalebound” assumption.
It is often taken for granted that atmospheric dynamics are scalebound. If true, it would imply that the study of fronts is qualitatively different from the study of cumulus clouds; and indeed, scientists typically specialize in one or the other. The scalebound perspective is often taught in introductory meteorology by referring to figures that explicitly separate various phenomena by their size, such as this:
[Figure: diagram of atmospheric phenomena organized by spatial scale]
What would such a hierarchy of mechanisms imply for our satellite-observed cloud sizes? I think it would be something like the following. Since thunderstorm sizes are of order 10 km, the distribution of cloud sizes should display a “mode” where cloud widths of order 10 km are frequently observed. We should then see a different “mode” where we frequently observe clouds with widths of order 3000 km, corresponding to frontal systems. Dynamical features of other sizes, like convective systems, would produce modes at various other sizes.
Crucially, in the scalebound view each individual mechanism can only operate over some narrow range of scales. Deep convection, which produces thunderstorms, does not occur at scales of thousands of kilometers. We should see clear peaks in the distribution corresponding to each mechanism, likely with gaps in between. Perhaps something like this:
[Figure: hypothetical “scalebound” cloud size distribution, with separate modes for thunderstorms and fronts]
I’ve drawn this so that thunderstorms are more common than fronts, but there is not necessarily any specific relationship between the two if they are created by different mechanisms. The number of fronts need not depend on the number of thunderstorms. The true range of cloud sizes is also much wider than I’ve drawn, extending down to at least tens to hundreds of meters and arguably to centimeters. To explain all of this variation, we would require many more mechanisms than I drew above.
What do we actually observe? Here is a distribution of cloud widths derived from satellite data:1
[Figure: observed distribution of cloud widths from satellite data, plotted on logarithmic axes]
Rather than a hierarchy of different modes corresponding to different mechanisms, the distribution is much more continuous: in the log-transformed plot, it is nearly a straight line. A straight line on logarithmic axes corresponds to a power law, and a power law is scale invariant: the relative frequency of clouds of one size versus clouds of, say, twice that size is the same no matter which size you start from.
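A tiny numerical check of that last claim, using a made-up exponent purely for illustration:

```python
b = 1.7  # made-up power-law exponent, for illustration only

def n(s):
    """A power-law size distribution: relative number of clouds of size s."""
    return s ** (-b)

# The ratio between counts at size s and at size 2s never changes with s:
for s in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    print(f"s = {s:7.1f}   n(s) / n(2s) = {n(s) / n(2 * s):.4f}")   # always 2**b, about 3.25
```

A scalebound distribution, with separate bumps for thunderstorms and fronts, would fail this test: the ratio would swing wildly depending on where the bumps fall.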
Is it possible that this straight line could be caused by a hierarchy of distinct mechanisms? In principle, yes, but this is very unlikely. We would need to suppose a different mechanism for every possible scale. And somehow, the number of clouds produced by each mechanism would have to be “just so” such that the resulting distribution is a straight line across vastly different scales—across vastly different mechanisms. This last requirement is quite problematic. Why would the relative frequency of fronts to convective systems be exactly the same as convective systems to thunderstorms, if each was created by a different set of dynamics? And because we find straight lines over many orders of magnitude, a very large number of mechanisms, spanning very different spatial scales, must be “finely tuned” in this way.
There is a simpler explanation. What if we drop the scalebound assumption, that a given dynamical mechanism can only produce clouds of a single size? What if the mechanism by which clouds are produced acts at any scale, like the rule that produced the Sierpinski triangle above?
Specifically, what if there was a law like the following:
All clouds eventually split in half to produce two new clouds. The total area of the smaller new clouds is equal to the area of the original larger cloud.
If cloud dynamics looked like this, we would only need to explain how the largest clouds were created. Once we had large clouds, the law above would create a “cascade” where the large clouds would create smaller clouds, which would then create smaller clouds, and so on. All possible cloud sizes would eventually be produced, and their distribution would precisely match what is observed. We’ve gone from a large number of finely-tuned mechanisms to just two—a mechanism that creates large clouds and a break-up mechanism.
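Here is a toy version of that cascade (the split probability and the number of size classes are arbitrary choices, not physical claims): one large cloud is injected each step, every cloud has a fixed chance of splitting into two halves, and the smallest clouds evaporate.

```python
import numpy as np

rng = np.random.default_rng(0)
P_SPLIT = 0.2        # chance per step that any given cloud splits in half
N_CLASSES = 12       # size classes with areas 1, 1/2, 1/4, ..., 1/2**(N_CLASSES - 1)
counts = np.zeros(N_CLASSES, dtype=np.int64)

for step in range(5000):
    counts[0] += 1                            # a large-scale mechanism injects one big cloud
    splits = rng.binomial(counts, P_SPLIT)    # how many clouds split in each size class
    counts -= splits                          # each parent disappears...
    counts[1:] += 2 * splits[:-1]             # ...and leaves two half-sized children
    # children of the smallest class are dropped ("evaporate")

for area, n_clouds in zip(0.5 ** np.arange(N_CLASSES), counts):
    print(f"area = {area:10.6f}   count = {n_clouds:8d}")
```

At steady state the count in each size class settles to roughly twice the count in the class above it, so the distribution falls on a straight line in log-log space, even though only two mechanisms were specified: injection of the largest clouds and the break-up rule.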
Notice, in particular, that the break-up mechanism does not apply to any single scale. Instead, it is a relationship between scales: it is scale invariant. Thus, if we observe the property of scale invariance in cloud geometry, in all likelihood the mechanism by which clouds are created is itself scale invariant. The implication is that our categorization by size (thunderstorm, mesoscale convective system, front) is at best arbitrary or artificial and at worst a misrepresentation of the dynamics.
From my experience, I believe the scalebound assumption is widespread in atmospheric science but mostly implicit. I think many scientists would agree that interactions between scales are an important feature of our atmosphere. There are also various widely accepted theories of atmospheric dynamics that require interaction across at least some range of scales (though not all).
However, I don’t think the full implications of scale invariance are widely appreciated. The structure of the scientific field itself is scalebound. University courses are usually divided by scale—courses will be named “synoptic meteorology” or “mesoscale meteorology”. These terms are defined solely by the scale of the dynamics. Scientists usually specialize in dynamics occurring at a particular scale, and this “scale specialty” is the first thing people say when introducing their work. Even some job postings and entire research labs have a specific scale in their very name.
Perhaps the most important and widespread scalebound assumption today is in climate modeling. At present, climate models can only simulate motions that are larger than ~30 km due to computational limitations. There is a massive push to decrease this limit to ~3 km. It would not be an exaggeration to say this is the primary focus of the entire climate modeling community, and we are spending hundreds of millions to make it happen.
How is this expense justified?
There is the well-founded hope that this increase in resolution represents a quantum jump in climate modeling, as it enables replacing the parameterization of moist convection by an explicit treatment.2
This is explicitly claiming that moist convection, which produces thunderstorms, only operates at scales between 3 km and 30 km. Anything larger than this range already has an “explicit treatment” in our current models, and anything smaller than this range will still require “parameterization”, i.e. it will not be simulated by the new models.
The implicit claim is that there is some unique set of dynamics occurring at ~10 km that do not occur at ~100 km, and that simulating these will enable a “quantum jump”. This contradicts scale invariance, because a scale invariant mechanism does the same thing at 10 km that it does at 100 km. There is no “quantum jump” in the dynamics: increasing resolution will mean simulating more of the same dynamics.
To be clear, it is entirely possible that the new model generation will indeed perform better, even if the dynamics are truly scale invariant. This could happen because of other improvements to the models beyond finer resolution. It could also happen because the dynamics of the model have some scale-dependent bias, and finer resolution is required to compensate for this bias. Climate models are not the real atmosphere, and it is well known that they have other compensating biases that are similarly artificial.
But if the models do improve, our explanations of why the models improved would be wrong. The motivation for these models is that finer resolution is necessary to simulate clouds. We even call these models “global cloud-resolving models”, exposing our implicit assumption that cloud dynamics are scalebound. This is not supported by data.
As a scientific community, I believe we should take the lesson of cloud shape seriously: that both the observable atmosphere and its dynamics are scale invariant. We should interrogate our mental and computational models and see if we are making unjustified scalebound assumptions. And, the part I’m most interested in, we should ask how scale invariance might inform an entirely different way of modeling weather and climate.
1 This and other examples of scale invariance in cloud shape are quite robust findings; see my publications on the subject (link) (link) (link) or some of my other favorites (link) (link) (link)
2 (link)