
Can weather forecasts be improved with measurement error information?

Jemima M. Tabeart is an NCEO-funded PhD student at the Maths of Planet Earth Centre for Doctoral Training at the University of Reading. She describes her work, published last month in Numerical Linear Algebra with Applications.

How can measurement error information help us to improve weather forecasts?

Accurate knowledge of the current state of the atmosphere and ocean is vital for producing good weather forecasts. These initial conditions are propagated forward by a numerical model, which applies the known laws of physics to obtain a forecast for the next few days. But how do we obtain this initial state of the atmosphere?

We want to make the most of all the different types of information we have available: previous forecast information (also called the “background”), and more recent measurements of the world around us (observations). The process of combining these two very different types of information is known as “data assimilation”. An important part of this procedure is deciding how much weight to give each source of information. In practice, we weight the contributions of the observations and the background by their respective errors. This sounds simple, but calculating sensible values for these errors is an active area of research.
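As a concrete illustration, here is a minimal sketch in Python of how a single background value and a single observation might be combined, weighted by their error variances. The scalar setting and all the numbers are assumptions made for illustration; operational systems solve a vastly larger version of this problem.

```python
# A minimal sketch (not operational code) of combining one background
# value and one observation, weighted by their error variances.
# All numbers are illustrative.

sigma_b = 2.0   # assumed background (previous forecast) error std. dev.
sigma_o = 1.0   # assumed observation error std. dev.

x_b = 15.0      # background temperature estimate (degrees C)
y   = 12.0      # observed temperature (degrees C)

# The smaller an information source's error, the more weight it gets.
weight = sigma_b**2 / (sigma_b**2 + sigma_o**2)
x_a = x_b + weight * (y - x_b)

print(f"analysis = {x_a:.2f} C (weight on observation: {weight:.2f})")
```

Here the observation's error is half the background's, so the analysis (12.60 C) sits much closer to the observation. Getting those error values right is exactly what makes the weighting meaningful.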

As well as having accurate initial conditions, we need our calculations to be quick. Numerical weather prediction centres, such as the UK Met Office, produce new forecasts every 6 hours, so the data assimilation step has to be fast. In this paper we investigate a problem whose solution will help scientists understand how to balance the use of more accurate error information against the need to keep the computation fast.

In particular, we focus on measurements whose errors are related to the errors of other measurements. For certain types of measurement, in particular satellite observations, it has been shown that the errors of different individual measurements are likely to be related, or correlated. One example could be a set of rain gauges whose scale lines are printed 5mm too high: every measurement from this group of instruments will underestimate the amount of rain, so the initial conditions we calculate are likely to be incorrect unless we account for this shared error. Using correlation information in our prediction system may allow us to predict smaller-scale features such as fog, and in the future allow for higher-resolution forecasts at a local level. But it is considerably more expensive to use this information in our system, as the sketch below suggests.
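To make this concrete, the following sketch builds an observation error covariance matrix R whose off-diagonal entries encode correlated errors, next to the cheaper diagonal approximation that treats every error as independent. The measurement count, error standard deviation, and exponential correlation with its length scale are all assumptions for illustration; real satellite error covariances are estimated from data.

```python
# A minimal sketch (illustrative, not the operational system) of an
# observation error covariance matrix R with spatially correlated errors.
# Neighbouring measurements are assumed to have errors that decay
# exponentially with separation; the length scale L is made up.
import numpy as np

n = 5                      # number of measurements along a satellite track
sigma_o = 1.0              # assumed observation error standard deviation
L = 2.0                    # assumed correlation length scale (grid units)

dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
R_corr = sigma_o**2 * np.exp(-dist / L)   # correlated errors (dense matrix)
R_diag = sigma_o**2 * np.eye(n)           # common cheap approximation

print(np.round(R_corr, 2))
# Off-diagonal entries are non-zero: each measurement's error is related
# to its neighbours', unlike the diagonal (uncorrelated) approximation.
```

The extra expense comes from that density: the diagonal version can be inverted entry by entry, while the correlated version must be handled as a full matrix, and satellite instruments can supply millions of measurements per assimilation cycle.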

Our results show that a particular structural property of the observation error correlation information is important in determining how quickly our computation can be carried out. We showed this both mathematically and in numerical tests, and it is consistent with previous practical findings from the Met Office system. Our conclusions will help us develop ways to include most of the correlation information without making it too expensive to use in practice. In fact, such methods are already in use at weather prediction centres. The theory developed in this paper will allow us to understand these methods from a more rigorous standpoint, and to make better-informed choices about how they should be implemented for the best balance between computational speed and the amount of error information used.
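In the variational formulation, finding the initial conditions amounts to solving a large least-squares problem, and the speed of the iterative solver is governed by the condition number of the Hessian S = B^{-1} + H^T R^{-1} H, where B is the background error covariance, R the observation error covariance, and H the observation operator. The toy sketch below, which assumes an identity observation operator and simple made-up covariances rather than the configurations analysed in the paper, illustrates how strengthening the observation error correlations can change that condition number.

```python
# A toy sketch (assumptions: identity observation operator H, simple
# exponential covariances; not the setup studied in the paper) showing how
# the structure of the observation error covariance R affects the condition
# number of the variational least-squares Hessian
#   S = B^{-1} + H^T R^{-1} H,
# which governs how fast iterative solvers converge.
import numpy as np

n = 20
H = np.eye(n)                              # observe every state variable
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = np.exp(-dist / 3.0)                    # toy background error covariance

for L in (0.0, 1.0, 3.0):                  # observation correlation scales
    R = np.exp(-dist / L) if L > 0 else np.eye(n)   # L = 0: uncorrelated
    S = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
    print(f"corr. length {L:.1f}: cond(S) = {np.linalg.cond(S):.1e}")

# Stronger error correlations tend to shrink R's smallest eigenvalue,
# which tends to inflate cond(S) and slow the minimisation.
```

A larger condition number typically means more solver iterations per assimilation cycle, which is exactly the speed-versus-information trade-off described above.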

Image: ESA / AOES Medialab (MetOp-B)

Jemima M. Tabeart is an NCEO-funded PhD student at the University of Reading on the Maths of Planet Earth Centre for Doctoral Training. This work was published in Numerical Linear Algebra with Applications on 23 February 2018.

The journal article is available here: The conditioning of least-squares problems in variational data assimilation.