Anyone who has been caught in the rain knows that weather forecasting has its limits and is often wrong. Long-range weather forecasting is not simply a matter of looking at the upstream weather and waiting for it to arrive. Conditions change with time, altitude and terrain, so a prediction stays valid for only a limited time. Even so, the correctness of a prediction is judged with an error bar, by being ‘close enough.’ Predictions are reported with a probability, admitting the chance of being right or wrong. A favorite of TV meteorologists is the icon graphic showing sun, clouds and rain all on the same day.
The modern approach to weather forecasting is to solve the governing equations of fluid flow, temperature and concentration on a 3D spatial grid and a 1D time grid. This requires knowledge of conditions over a wide area and range of altitudes, and a supercomputer to crunch the equations over every patch of land on the grid. In the end, the quality of the model depends on how often measured conditions update the inputs, and on how information is interpolated between grid points. Better predictions are expected from a larger number of measurements and a denser 4D grid (the push toward denser grids). Competing models benefit from the same advances in computer hardware, but the methodologies built into the software differ, and that is where predictions vary and probabilities are reported. Improving the forecast means improving the methods, not just requesting a bigger supercomputer to refine an existing one.
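As a minimal illustration of the grid-based approach (not any particular forecast model), the Python sketch below steps a single 1D advection-diffusion equation for temperature forward in time with explicit finite differences. The wind speed, diffusivity and grid sizes are illustrative stand-ins; a real model solves coupled momentum, temperature and moisture fields on a full 3D grid.

```python
import numpy as np

# 1D advection-diffusion of temperature T on a periodic grid,
# stepped forward with explicit finite differences.
nx, dx, dt = 100, 1.0, 0.1             # grid points, spacing, time step
u, kappa = 1.0, 0.5                    # wind speed, diffusivity (illustrative)
x = np.arange(nx) * dx
T = np.exp(-((x - 20.0) ** 2) / 25.0)  # initial warm pocket

for step in range(200):
    dTdx = (np.roll(T, -1) - np.roll(T, 1)) / (2 * dx)          # advection
    d2Tdx2 = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2   # diffusion
    T = T + dt * (-u * dTdx + kappa * d2Tdx2)                   # one time step

print(T.max())  # the warm pocket has drifted downwind and spread out
```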
The problem with this is that the underlying governing equations are non-linear, so small differences in measured conditions can be amplified. This effect has been demonstrated on simplified atmospheric models, such as the Lorenz system (see the Chaos project). The Lorenz system is a set of three coupled atmospheric equations: one for the leading mode of air flow, one for the leading mode of temperature and one for a second temperature mode. The Chaos project showed how adopting duals arithmetic allows error to evolve alongside the solution, altering the expected pattern in a deterministic way. It also verified that the zero-error duals calculation collapses to the historical pattern, so the degree of error determines the pattern of the attractor.
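For concreteness, the sketch below integrates the Lorenz system with its standard parameters (sigma = 10, rho = 28, beta = 8/3) from two initial conditions that differ by one part in 10^8 in the flow mode; the step size and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system: x is the leading
    flow mode, y and z are the two temperature modes."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])     # tiny perturbation of the flow mode
for _ in range(20000):                 # integrate both trajectories
    a, b = lorenz_step(a, 1e-3), lorenz_step(b, 1e-3)
print(np.linalg.norm(a - b))           # the 1e-8 difference has been amplified
```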
The Weather Forecasting project was devised to implement duals arithmetic on larger-scale problems. The first implementation is to access an existing forecasting calculation through collaboration. Such a system is essentially software and hardware that manages known input conditions and the past history of conditions and predictions, followed by a brute-force calculation of the weather on the 4D grid. Postprocessing is also an important component, as answers are rendered and formatted for graphical reporting.
The alternative implementation in this project is to re-derive the governing equations that lie at the core of the calculations. This means fully documenting the simplifications each model adopts to convert the differential equations, such as Navier-Stokes, into a calculable form. There are many dovetailed steps that pass information sequentially, and occasionally a step of simultaneous solution is used. Once these theoretical schemes are known, the derivation can be redone in the duals arithmetic format. The simultaneous error channels are expected to generate a new set of equations, some as alterations of the familiar equations (center dynamics) and some as entirely new error equations (error dynamics). This effort has been completed for the simplified atmospheric model (no spatial grid, 1D time grid) of the Lorenz system, where each dual carries a 7D error vector, and for a Concentration-Lorenz system.
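As an illustration of what a duals re-derivation produces, the sketch below attaches an error vector to each value and lets the product rule on the non-linear terms generate the error dynamics automatically. The class name Dual and the 3D error vector are simplifications for illustration (the completed Lorenz effort uses a 7D error vector), not the project's actual format.

```python
import numpy as np

class Dual:
    """A center value plus an error vector. Products drop error-squared
    terms, so the arithmetic carries errors to first order."""
    def __init__(self, c, e):
        self.c, self.e = float(c), np.asarray(e, dtype=float)
    def __add__(self, o):
        return Dual(self.c + o.c, self.e + o.e)
    def __sub__(self, o):
        return Dual(self.c - o.c, self.e - o.e)
    def __mul__(self, o):
        # (a + ea)(b + eb) ~ ab + (a*eb + b*ea): the product rule on the
        # non-linear terms is what couples center and error dynamics.
        return Dual(self.c * o.c, self.c * o.e + o.c * self.e)
    def scale(self, k):                  # multiply by an ordinary constant
        return Dual(k * self.c, k * self.e)

# The Lorenz right-hand side written once; run on Duals it yields the
# familiar center equations and the new error equations simultaneously.
def rhs(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return ((y - x).scale(sigma),
            x.scale(rho) - x * z - y,
            x * y - z.scale(beta))

x, y, z = (Dual(1.0, [1e-3, 0, 0]),      # illustrative initial errors
           Dual(1.0, [0, 1e-3, 0]),
           Dual(1.0, [0, 0, 1e-3]))
dt = 1e-3
for _ in range(5000):
    dx, dy, dz = rhs(x, y, z)
    x, y, z = x + dx.scale(dt), y + dy.scale(dt), z + dz.scale(dt)
print(x.c, x.e)   # center trajectory and its evolved error channel
```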
The result is that each point in space has a more sophisticated representation and requires more memory and computation time. But this arrests the ‘push toward denser grids’ and the demand for ever more computer power, extracting better information from the existing conditions and the existing 4D grid. It also eliminates the need to re-run the computation with slightly varied inputs (re-runs by the Monte-Carlo method), because the duals representation has variations built in as ‘errors’ and the duals arithmetic faithfully calculates these variations simultaneously and in parallel.
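The sketch below illustrates that correspondence on the Lorenz system, using tangent-linear (Jacobian) propagation as a first-order stand-in for the duals error channel rather than the project's actual scheme: one run carrying an error vector reproduces, to first order, what a separate Monte-Carlo re-run with a perturbed input would show.

```python
import numpy as np

sigma, rho, beta, dt, n = 10.0, 28.0, 8.0 / 3.0, 1e-3, 3000

def f(s):
    """Lorenz right-hand side at state s = (x, y, z)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jac(s):
    """Jacobian of f, which drives the first-order error channel."""
    x, y, z = s
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

s0 = np.array([1.0, 1.0, 1.0])
d0 = 1e-4 * np.array([1.0, 0.0, 0.0])   # the built-in input variation

# Duals-style: one run, center state and error channel stepped together.
s, e = s0.copy(), d0.copy()
for _ in range(n):
    s, e = s + dt * f(s), e + dt * jac(s) @ e

# Monte-Carlo style: a second full run for the perturbed input.
p = s0 + d0
for _ in range(n):
    p = p + dt * f(p)

print(np.linalg.norm(e), np.linalg.norm(p - s))  # agree to first order
```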