There seems to be an unusual amount of uncertainty and inconsistency in the regional weather predictions for the next few days, particularly about the potential for snow in the Salish Sea basin this weekend. Even Cliff Mass has a (very good) current blog post on the subject, titled "Uncertainty". So, if Dr. Mass is publicly stating that we don't know whether it's going to snow or not, I'm sure not going to try to second-guess him. I thought, though, that this would be a great opportunity to look into why, with all of our sophisticated weather monitoring and modeling, we still can't deliver the quality of predictions that people would reasonably like.
Part of the answer is one that nobody really wants to hear: not only can we not reliably predict the weather much more than a week in advance on the best of days, we may never be able to.
Rita Mae Brown (not Albert Einstein, or Benjamin Franklin, or whoever) once stated that doing the same thing over and over again while expecting different results was a definition of insanity. Which of course is not true; if I flip a coin five times and get "heads" each time, then unless it happens to be a double-headed coin I can still reasonably expect a 50% chance of "heads" and a 50% chance of "tails" on the next flip, more or less. In meteorology the counterpart would be, "beginning with what look like identical initial conditions and ending up with wildly different results is a fundamental feature of chaos theory".
Meteorologist Edward Lorenz, using one of the first primitive digital computers to analyze atmospheric modeling data, was the first to recognize this. His initial discovery was accidental. Having failed to record the results of several days of data modeling, he simply re-ran the initial data to duplicate the lost results, and found that the results were now dramatically different. At first he thought that he had either entered the initial parameters incorrectly, or else there was an error in the program itself, but neither proved to be the case. The only anomalies were infinitesimally small rounding differences, yet these were enough to completely change the end results only a few days "downstream" of the initial parameters. Here is a graphic representation of Lorenz's original data streams, the "rerun" stream overlaid on top of the original stream:
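If you'd like to see this divergence for yourself, here is a little Python sketch of the same sort of experiment. It is not Lorenz's original program; the three equations are the ones from his 1963 paper, and the starting numbers are ones I made up. The "rerun" simply rounds the starting point to three decimal places, much as the numbers on Lorenz's printout were rounded:

```python
# Sketch of Lorenz's accidental experiment (not his original program).
# We integrate the three-equation system from his 1963 paper twice:
# once from a "full precision" starting point, and once from the same
# point rounded to three decimal places, like the figures on his printout.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz '63 system one small step (simple Euler integration)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

original = (1.000001, 1.000001, 1.000001)        # made-up starting point
rerun = tuple(round(v, 3) for v in original)     # same point, rounded

for step in range(1, 6001):
    original = lorenz_step(original)
    rerun = lorenz_step(rerun)
    if step % 1000 == 0:
        print(f"step {step:5d}: x = {original[0]:8.3f}  rerun x = {rerun[0]:8.3f}")
```

For the first stretch the two runs are indistinguishable; a little further on they bear no resemblance to each other at all, even though the only difference between them was a rounding error in the sixth decimal place.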
In 1963 Lorenz published his findings under the title "Deterministic Nonperiodic Flow", and came to refer to this divergence of results as the "Butterfly Effect", based on the idea that something as small as the flap of a butterfly's wings in Brazil could, in principle, set off a tornado in Texas. And thus was born the entire discipline of chaos theory.
Lorenz's early computer model tracked only a handful of variables, but the underlying physics remains the same. Infinitesimal changes in the atmosphere completely alter the outcome of atmospheric modeling only a few days into the future. Which means that the most sophisticated modeling, run on the best supercomputers that can be imagined, fed perfect data from radar and satellites and instruments that have not even been conceived yet, will never be able to accurately predict the weather more than about a week into the future. And in some cases, such as what we have in western Washington this week, rather less than that.
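To put some entirely made-up but plausible numbers on that: suppose small errors in our picture of the atmosphere double every day and a half or so (the doubling time and error sizes below are illustrations, not measurements). Then even heroic improvements in our observations buy surprisingly little extra forecast:

```python
import math

# Purely illustrative arithmetic, not real forecast statistics.
# Assumption: small errors in the initial state double every 1.5 days.
# The forecast stops being useful once the error reaches some threshold;
# the units are arbitrary, only the ratios matter.

doubling_days = 1.5        # assumed error-doubling time
useless_at = 1.0           # assumed error size at which forecast skill is gone

for initial_error in (0.1, 0.01, 0.001):
    doublings = math.log2(useless_at / initial_error)
    horizon_days = doublings * doubling_days
    print(f"initial error {initial_error:6.3f} -> useful forecast ~{horizon_days:4.1f} days")
```

Under these made-up numbers, each tenfold improvement in the starting data buys only about five more days, and in the real atmosphere the smallest-scale errors grow even faster than this. Since the initial error can never be exactly zero, the forecast horizon never goes away; it parks itself somewhere around a week or two no matter how good our instruments get.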
Which means that no matter how sophisticated our weather prediction becomes, we may always, occasionally, have to shovel 6" of partly cloudy off of our sidewalk.