Lessons Learned from
Successful Monthly Forecast for July 2004

Huug van den Dool

Outline

  1. Introduction

  2. The Question

  3. Discussions

  4. Caveats

  5. The Lessons

 
[Figure 1]

Introduction

The monthly forecast for July 2004, both at two-week lead and as revised (UPD) at the end of June, was highly successful, featuring scores that had not been seen in years [Fig. 1 and Fig. 2, including inset tables]. Especially striking was the contribution of below-normal temperatures to the overall skill. The two sets of maps show observed anomalies (numbers plotted at stations) and predicted anomalies (contoured probabilities). For the formulation of predictions, CPC uses the traditional terciles (named B, N and A), each of which occurred, by definition, one-third of the time over 1971-2000. But we have observed that the B class occurred only about 20% of the time nationwide over 1995-2003, far below expectation and a clear sign of recent trends toward a warmer climate. This has made forecasters reluctant to shift probabilities into the B class. Yet July 2004, and summer 2004 as a whole, was rather cold, which makes the success of this forecast all the more remarkable.
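To make the tercile bookkeeping concrete, here is a minimal sketch (Python with numpy; the station record is synthetic, since the actual CPC data are not reproduced here) of how the B/N/A boundaries are defined from the 1971-2000 base period and how the recent frequency of the B class can be checked:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical July-mean temperatures at one station, 1971-2003,
    # with a small warming trend built in.
    years = np.arange(1971, 2004)
    temps = 24.0 + rng.normal(0.0, 1.0, years.size) + 0.02 * (years - 1971)

    # Tercile boundaries from the 1971-2000 base period: by definition,
    # B (below), N (near) and A (above) each occur one-third of the time.
    base = temps[(years >= 1971) & (years <= 2000)]
    lo, hi = np.percentile(base, [100.0 / 3.0, 200.0 / 3.0])

    def tercile(t):
        return "B" if t < lo else ("A" if t > hi else "N")

    # Frequency of the B class over 1995-2003; with a warming trend it
    # falls below the nominal 1/3 (CPC observed roughly 20% nationwide).
    recent = temps[(years >= 1995) & (years <= 2003)]
    freq_b = np.mean([tercile(t) == "B" for t in recent])
    print(f"B/N edge {lo:.2f}, N/A edge {hi:.2f}, B frequency 1995-2003: {freq_b:.2f}")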


The Question

The question to be answered is whether we can learn lessons from this success that could be applied in the future, including lessons about where infusion of science and technology is needed.


Discussions

By mid-June 2004 a ‘pattern’ had emerged. Forecasters are always looking for a magical pattern that shows staying power; more precisely, a distribution of anomalies in the atmosphere that, by virtue of the right low-frequency teleconnections, is stable and has a good chance to persist. One cannot extrapolate patterns over a long time if they are transient. Forecasting is, perhaps surprisingly, often a matter of persistence; the trick is to know what to persist. There are no real recipes for doing this, especially not in real time, when a temporal filter centered at t=0 cannot be executed.

The pattern that had emerged by mid-June may be described as a warm West and a cool and wet central (and, to a lesser degree, eastern) US. This pattern had not been in place very long, i.e., persistence from mean May or mean MAM conditions would not have worked. Moreover, the week-1 and week-2 forecasts available in mid-June indicated more of the same, another element in believing we were in a pattern. Especially noteworthy was the prospect of reinforced wet soil in the central US just before July began, another reason to expect cold temperatures due to local evaporational effects (Fig. 3). Over the entire interior US, end-of-June soil moisture is correlated at the -0.3 to -0.45 level with temperature in July. These considerations were expressed in the PMD (prognostic map discussion).

Of the long-lead tools (which know nearly nothing explicitly about the pattern just alluded to), many indicated a warm West, and equally many a cold and wet central (and, to a lesser degree, eastern) US. The cold tools in parts of the eastern two-thirds of the US include the long-term trend tool OCN and the tools that use antecedent soil moisture, especially the constructed analogue on soil moisture (CAS). We dispel a myth here: the trend is not toward warmer conditions uniformly across the US. While this may be true by and large, it is least true east of the Rockies in late spring and early summer; see Fig. 4, where MJJ temperatures averaged over the last 10 years are shown to be lower than the official climatology over a large area of the north-central states. This tendency can be seen starting in FMA, peaking in MAM and MJJ, and disappearing by JJA. Perhaps 2004 held on to early-summer patterns a bit longer.
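The soil-moisture figure quoted above is, at each grid point, an ordinary lagged correlation across years. A minimal sketch (synthetic data, numpy assumed; not CPC's actual processing):

    import numpy as np

    rng = np.random.default_rng(1)
    n_years = 50
    # Synthetic end-of-June soil moisture anomalies and July temperature
    # anomalies at one interior-US grid point; wet soil cools July via
    # evaporation, so a negative dependence is built in.
    soil_jun = rng.normal(0.0, 1.0, n_years)
    temp_jul = -0.4 * soil_jun + rng.normal(0.0, 0.9, n_years)

    # Pearson correlation across years; over the interior US the observed
    # values run about -0.3 to -0.45.
    r = np.corrcoef(soil_jun, temp_jul)[0, 1]
    print(f"corr(end-of-June soil moisture, July temperature) = {r:.2f}")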

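OCN, the trend tool cited above, can be sketched just as briefly: compare a recent roughly 10-year mean with the official 1971-2000 normal. The data below are again synthetic, for illustration only:

    import numpy as np

    rng = np.random.default_rng(2)
    years = np.arange(1971, 2004)
    # Hypothetical MJJ-mean temperatures at one north-central US station;
    # a flat-to-cooling MJJ record would reproduce the Fig. 4 signal.
    mjj = 20.0 + rng.normal(0.0, 0.8, years.size) - 0.03 * (years - 1971)

    # Official climatology: the 1971-2000 mean.  OCN-style anomaly:
    # the mean of the last 10 years (1994-2003) minus that climatology.
    clim = mjj[(years >= 1971) & (years <= 2000)].mean()
    ocn = mjj[years >= 1994].mean() - clim
    print(f"OCN MJJ anomaly relative to the 1971-2000 normal: {ocn:+.2f}")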
There is a long tradition of wondering whether we might know ahead of time which will be the good and bad cases, i.e., the issue of forecasting forecast skill. This has now been subsumed in ensemble prediction and the probabilities derived from it. If we had been all that certain about July 2004, we should have issued much higher probabilities. The fact is we did not. So we were either uncertain, or we were fundamentally conservative and kept probability shifts in line with average skill. The latter may be better in the long run for certain scores, but it does prevent us from hitting an occasional home run.
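For reference, the simplest way probabilities are derived from an ensemble is by counting members per tercile. The sketch below uses a hypothetical 20-member ensemble and assumed tercile edges in anomaly units:

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical 20-member ensemble of July-mean temperature anomalies
    # at one point; the ensemble is shifted toward the cold side.
    members = rng.normal(-0.6, 0.8, 20)
    b_edge, a_edge = -0.43, 0.43   # assumed tercile edges (anomaly units)

    # Tercile probabilities = fraction of members in each class.
    p_b = np.mean(members < b_edge)
    p_a = np.mean(members > a_edge)
    p_n = 1.0 - p_b - p_a
    print(f"P(B)={p_b:.2f}  P(N)={p_n:.2f}  P(A)={p_a:.2f}")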


Caveats

We also offer two cautionary comments about ‘skill’.

First, if someone makes random forecasts, verification will show skill scores that fluctuate over time around a mean level of zero skill. Because the US has only a few degrees of freedom, especially for temperature, the variation of skill around the expected zero level is very large. On occasion our random forecaster gets very high scores. On those occasions one may think the forecast actually has skill, or that a long-awaited improvement or investment is finally paying off. The only way to establish skill is to accumulate enough cases to rule out that the mean skill differs from zero only by chance. It may take time to recognize random forecasts for what they are.
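This point lends itself to a Monte Carlo demonstration: score many purely random tercile forecasters over a domain with few effective degrees of freedom and inspect the spread of their annual-mean scores. A sketch, with the number of degrees of freedom an assumption:

    import numpy as np

    rng = np.random.default_rng(4)
    n_dof = 8        # assumed effective spatial degrees of freedom (US temperature)
    n_months = 12    # one year of monthly forecasts
    n_trials = 5000  # Monte Carlo sample of "random forecasters"

    # Random tercile forecasts verified against random tercile observations.
    fcst = rng.integers(0, 3, (n_trials, n_months, n_dof))
    obs = rng.integers(0, 3, (n_trials, n_months, n_dof))

    # Heidke-like score: 0 at the 1/3 hit rate expected by chance, 1 when
    # perfect; averaged over the year for each trial forecaster.
    hit = (fcst == obs).mean(axis=(1, 2))
    skill = (hit - 1.0 / 3.0) / (1.0 - 1.0 / 3.0)
    print(f"mean {skill.mean():+.3f}, std {skill.std():.3f}, "
          f"best of {n_trials}: {skill.max():+.3f}")

In this setup the luckiest of several thousand zero-skill forecasters posts an annual score on the order of +0.25, which is exactly why a single good year proves little.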

Second, CPC’s monthly forecast has modest skill on average, let's say 0.15 on a scale from zero (random forecasts) to 1.00 (perfect). The way the +0.15 average comes about is that there are a few very good forecasts, many mediocre forecasts and some bad forecasts (negative scores; the opposite anomalies would have scored better), with the very good outweighing the very bad. Some of the comments made about the random forecaster still apply: skill varies from case to case, and the range of variation depends on the degrees of freedom (a quantity not under our control). Celebrating instances of very high skill may therefore be premature, and mourning a very bad forecast is equally unnecessary; management should not despair.
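Whether an observed long-run mean of +0.15 is distinguishable from luck is then a routine significance question. A sketch with a hypothetical, suitably skewed score record (scipy assumed), treating monthly scores as independent:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Hypothetical record of 100 monthly scores: a few very good months,
    # many mediocre ones, some bad (negative) ones, mean near +0.15.
    scores = np.concatenate([
        rng.normal(0.70, 0.15, 10),    # a few very good forecasts
        rng.normal(0.15, 0.15, 70),    # many mediocre forecasts
        rng.normal(-0.15, 0.15, 20),   # some bad forecasts
    ])

    # Null hypothesis: the true mean skill is zero (a random forecaster).
    t, p = stats.ttest_1samp(scores, 0.0)
    print(f"mean skill {scores.mean():+.3f}, t = {t:.2f}, p = {p:.4f}")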


The Lessons

1) R&D on the idea of semi-stable anomaly patterns. How do we recognize them? Can recognition be formalized? (One candidate metric is sketched just after this list.)

2) Increased attention to soil-moisture-based tools (a constructed-analogue sketch also follows this list). While their skill is generally not at the ENSO level, it does not take teleconnections to make the local effects of soil moisture felt. Needs include real-time soil moisture estimates, validation over long (multi-decade) data sets, and development of prediction tools thereof…
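As one candidate formalization of lesson 1 (a suggestion, not an existing CPC tool): monitor the anomaly pattern correlation between successive periods and flag a possible semi-stable pattern when it stays high for several periods running. A sketch on synthetic maps:

    import numpy as np

    rng = np.random.default_rng(6)
    # Hypothetical sequence of 10-day-mean anomaly maps, each flattened to
    # a vector; a persistent regime recycles much of the same pattern.
    n_maps, n_points = 12, 200
    base = rng.normal(0.0, 1.0, n_points)
    maps = np.array([0.8 * base + 0.6 * rng.normal(0.0, 1.0, n_points)
                     for _ in range(n_maps)])

    def pattern_corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return a @ b / np.sqrt((a @ a) * (b @ b))

    # Lag-1 pattern correlations; persistently high values would be one
    # objective sign that a semi-stable pattern has emerged.
    lag1 = [pattern_corr(maps[i], maps[i + 1]) for i in range(n_maps - 1)]
    print(np.round(lag1, 2))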

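For lesson 2, a constructed analogue such as CAS can be sketched as regularized regression: find weights that combine past end-of-June soil moisture anomaly maps into this year's map, then apply the same weights to the July temperature anomalies that followed. The ridge damping and all numbers below are illustrative assumptions, not the operational implementation:

    import numpy as np

    rng = np.random.default_rng(7)
    n_years, n_points = 30, 150
    # Columns of A: historical end-of-June soil moisture anomaly maps.
    # Columns of T: the July temperature anomaly maps that followed them.
    A = rng.normal(0.0, 1.0, (n_points, n_years))
    T = rng.normal(0.0, 1.0, (n_points, n_years))
    b = rng.normal(0.0, 1.0, n_points)   # this year's soil moisture map

    # Constructed analogue: weights w minimizing |A w - b|^2 + eps |w|^2,
    # i.e. ridge-regularized least squares over the historical years.
    eps = 0.1 * np.trace(A.T @ A) / n_years
    w = np.linalg.solve(A.T @ A + eps * np.eye(n_years), A.T @ b)

    # Forecast: the same weighted combination of the following Julys.
    forecast = T @ w
    print(f"sum of weights {w.sum():+.2f}, forecast std {forecast.std():.2f}")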

 
