The chaotic nature of the atmospheric system is especially significant in seasonal forecasting, where a small deviation in the initial conditions may yield a completely different forecast. Such forecasts are still possible because large-scale variability can be studied in terms of atmospheric predictors, but this sensitivity to initial conditions forces meteorologists to resort to ensemble forecasts.

An ensemble forecast does not consist of a single prediction but of a set of N possible *realizations* (also named *ensemble members*) which, ideally, account for the forecast error. Of course, the goodness of these forecasts is no longer determined solely by their accuracy, and, since they are probabilistic, accuracy itself requires a different definition than for deterministic forecasts.

In this blog post, we will delve into two key attributes of ensemble forecasts: **reliability** and **uncertainty**.

*Reliability* is the average agreement between the forecast values and the observed values: if all forecasts are considered together, the *overall reliability* is the same as the *bias*. Thus, in a good ensemble forecast, the members should be gathered around the most probable, or the actually observed, state.

*Uncertainty*, which can be either epistemic or aleatoric, should be captured by the spread of the members: the higher the uncertainty (the more difficult the forecast), the more dispersed the members should be. Hence, for a good ensemble forecast, the ensemble spread should match, or properly represent, the forecast error.

To assess these attributes, we will explore the use of a powerful tool for visualization and verification: rank histograms. A **rank histogram**, also known as a verification rank histogram or Talagrand diagram, serves as an assessment tool to evaluate the reliability of an ensemble prediction system.

While the ensemble aims to represent the set of all possible outcomes (mimicking the distribution of observations), inherent system flaws often hinder this representation. Rank histograms play a crucial role in identifying such flaws. Let’s first see how a rank histogram is constructed; we will follow these steps:

- Collect the set of forecasts produced by the ensemble for some variable. This gives a list, *e.g.* [2.3, 5.0, 1.6] for a 3-member ensemble.
- Order the forecasts in ascending order. The list now becomes [1.6, 2.3, 5.0].
- Define N + 1 bins, where N is the number of ensemble members. The bins in our example would be:
- Bin 1 for values below 1.6.
- Bin 2 for values between 1.6 and 2.3.
- Bin 3 for values between 2.3 and 5.0.
- Bin 4 for values above 5.0.

- Place the observed value in the appropriate bin. If the observation is 4.0, it goes in bin 3.
- Repeat this process for a large number of (observation, forecast) pairs.
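The steps above can be sketched in a few lines of Python. The function below (a hypothetical helper, not from any particular library) counts, for each (observation, ensemble) pair, which of the N + 1 bins the observation falls into; counting the members below the observation is equivalent to sorting and binning:

```python
import numpy as np

def rank_histogram(observations, forecasts):
    """Build a rank histogram from M cases of an N-member ensemble.

    observations: array of shape (M,)
    forecasts:    array of shape (M, N)
    Returns an array of N + 1 bin counts.
    """
    observations = np.asarray(observations)
    forecasts = np.asarray(forecasts)
    n_members = forecasts.shape[1]
    # The rank is the number of members strictly below the observation,
    # ranging from 0 (bin 1) to N (bin N + 1).
    ranks = np.sum(forecasts < observations[:, None], axis=1)
    return np.bincount(ranks, minlength=n_members + 1)

# The 3-member example from the text: forecasts [2.3, 5.0, 1.6],
# observation 4.0 -> two members below it -> bin 3.
counts = rank_histogram([4.0], [[2.3, 5.0, 1.6]])
print(counts)  # [0 0 1 0]
```

In practice one would accumulate the counts over many (observation, forecast) pairs, as the last step of the recipe says.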

After this, we can analyze our rank histogram. We can expect four types of rank histograms: flat, U-shaped, dome-shaped and asymmetric; each has a different interpretation, as explained below.

A **flat histogram** indicates a **reliable forecast**, i.e. the observed distribution is well represented by the ensemble and all ensemble members represent equally likely scenarios.

If the ensemble distribution **underestimates** uncertainty (under-dispersive), we might see a **U-shaped histogram**, since many observations fall outside the predicted range, into the first and last bins.

On the other hand, if it **overestimates** uncertainty (over-dispersive), we’ll observe a **dome-shaped** histogram as observations concentrate around the middle ranks.

Finally, a **biased** ensemble distribution leads to an **asymmetric** histogram. In this case, the ensemble tends to predict systematically smaller or bigger values than the observed ones. A negative bias causes observations to cluster towards the higher ranks, while a positive bias does so towards the lower ranks.
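The four shapes can be reproduced with a small synthetic experiment (an illustrative setup of my own, not from the post): observations and members share a common signal, and we vary only the member spread and bias. With spread equal to the observation noise and zero bias, observation and members are exchangeable, so the histogram comes out flat:

```python
import numpy as np

rng = np.random.default_rng(42)
M, N = 10_000, 9  # number of cases, ensemble size

# Shared predictable signal; observations add unit-variance noise.
signal = rng.normal(0.0, 1.0, size=M)
obs = signal + rng.normal(0.0, 1.0, size=M)

cases = {  # label: (member bias, member spread)
    "reliable (flat)":            (0.0, 1.0),
    "under-dispersive (U)":       (0.0, 0.5),
    "over-dispersive (dome)":     (0.0, 2.0),
    "positive bias (asymmetric)": (1.0, 1.0),
}

hist = {}
for label, (bias, sigma) in cases.items():
    members = signal[:, None] + bias + rng.normal(0.0, sigma, size=(M, N))
    # Rank of each observation among its members, then bin counts.
    ranks = np.sum(members < obs[:, None], axis=1)
    hist[label] = np.bincount(ranks, minlength=N + 1)
    print(f"{label:28s} {hist[label]}")
```

The under-dispersive ensemble piles observations into the outer bins, the over-dispersive one into the middle bins, and the positively biased one into the lower ranks, exactly as described above.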

As one can see, this simple yet effective visualization allows us to verify the quality of our forecasts, checking whether the members equally represent all possible scenarios. Moreover, rank histograms allow us to verify forecasts grouped over periods or locations, since validation at individual coordinates and months does not make much sense in seasonal forecasting (unlike, perhaps, short-term forecasting).

As we continue to navigate the unpredictable nature of the atmosphere, tools like this will remain invaluable. By embracing the uncertainty and harnessing it, we can make more informed decisions, better prepare for the future, and continue to advance our understanding of the atmospheric system.
