JFF2009 report section
- Assessing whether resolution helps: some processes can only be represented with higher resolution, but others are not necessarily helped by it (e.g., Warren Washington's talk).
- For example, examine high-resolution model output. Are hurricanes, the diurnal cycle, or the distribution of precipitation better represented? If so, resolution may be the answer. If not, the model has other errors that should be understood and remedied. For example, Geoff Vallis's talk showed that precipitation over western India could be improved by higher resolution, but that higher resolution did not help in other areas.
- There are some processes for which high resolution is necessary but not sufficient; these lie between the weather and climate time and space scales. For example, the MJO: high resolution is needed to capture it at all, but additional experiments, or other ways of analyzing the high-resolution data, are needed to understand it and its sensitivities.
- Run one "benchmark" very-high-resolution Earth System Model (T2047, ~10 km, ideally finer) at the highest complexity possible, for at least one decade, deterministic, cloud-permitting, with CO2 forcing.
- This run can serve as a boundary condition for even higher-resolution experiments on phenomena that remain challenging.
- This provides a way to improve parameterizations: understand the processes at high resolution, then transfer that understanding to lower resolution.
- The climate community should take advantage of the wealth of short-range, higher-resolution runs being done in NWP.
- Use a clever mathematical combination of local analogs (e.g., from the various reanalyses) to estimate sub-grid-scale dynamics in the climate model (take the climate forecast state and project it onto local patches of reanalyses), inspired by Lorenz, van den Dool, and D'Andrea and Vautard (2000), echoing Prashant Sardeshmukh's sentiments.
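The local-analog idea above can be sketched in a few lines. This is a minimal illustration, not the method of any particular paper: it assumes a hypothetical precomputed library of reanalysis patch states and their associated tendencies, finds the k nearest analogs to the model's local state, and combines their tendencies with inverse-distance weights.

```python
import numpy as np

def analog_tendency(state_patch, library_states, library_tendencies, k=5):
    """Estimate a local sub-grid tendency by projecting a model state
    onto its k nearest analogs from a reanalysis library (illustrative
    sketch only; inputs are hypothetical).

    state_patch        : (d,) model state over a local patch
    library_states     : (n, d) archived reanalysis patch states
    library_tendencies : (n, d) tendency associated with each archived state
    """
    # Distance from the model state to every archived analog
    dists = np.linalg.norm(library_states - state_patch, axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights; epsilon guards against an exact match
    w = 1.0 / (dists[nearest] + 1e-12)
    w /= w.sum()
    # Weighted combination of the analogs' tendencies
    return w @ library_tendencies[nearest]
```

In a real application the "distance" would be defined more carefully (area weighting, variable scaling), and the library would need to be large enough to contain close analogs, which is exactly why local patches rather than global states are used.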
- Examine initial tendencies over many cases to reveal the differences between the analysis and the model. If the average over many cases is nonzero, there are model errors. This approach can be used to diagnose where (i.e., in which parameterization) these errors originate (Klinker and Sardeshmukh 1992; Rodwell and Palmer 2004; Danforth et al. 2007).
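The logic of the initial-tendency diagnostic is that random forecast errors average out over many cases while systematic model errors do not. A minimal sketch, assuming arrays of model and analysis initial tendencies are already in hand (names and layout here are hypothetical):

```python
import numpy as np

def mean_initial_tendency_error(model_tendencies, analysis_tendencies):
    """Case-mean of (model - analysis) initial tendencies, in the spirit
    of Klinker and Sardeshmukh (1992). Both inputs are hypothetical
    (n_cases, n_gridpoints) arrays.

    Returns the mean error field and a z-score per gridpoint: random
    errors shrink like 1/sqrt(n_cases), so a large |z| flags systematic
    model error at that point.
    """
    diff = model_tendencies - analysis_tendencies
    mean_err = diff.mean(axis=0)
    # Standard error of the mean across cases
    sem = diff.std(axis=0, ddof=1) / np.sqrt(diff.shape[0])
    z = mean_err / np.where(sem > 0, sem, np.inf)
    return mean_err, z
```

Attributing a flagged error to a particular parameterization would further require the tendency budget split by process, which NWP systems routinely output.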
- Use the data assimilation tools available in NWP to test models, and apply them to climate models as well.
- New ways of parameterizing (e.g., neural networks) might be computationally expensive compared with what is currently in use (Schmidt and Lipson 2009).
- Need for both high-resolution and ensemble approaches
- A high-resolution run might actually resolve extreme events (hurricanes, floods, etc.). BUT a single 100-year high-resolution run cannot yield valid statistics on how often a 100-year (or 500-year) event happens, or how its frequency might increase in a changing climate.
- We could potentially get this information from an ensemble
- Ensembles are also needed to assess uncertainty in projections. Ensembles can be used to assess model error as well (which is currently not considered).
- Can we define how large an ensemble is needed to accurately span the uncertainty range?
- At higher resolution, do we need even larger ensembles because the phase space is larger? This poses a major challenge for infrastructure, data management, etc. (refer to the section on scientific management).
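The arithmetic behind the ensemble argument above is simple: a single 100-year run supplies only 100 annual samples, while pooling, say, 50 members yields 5000 member-years, enough to put an empirical estimate on a 100- or even 500-year event. A minimal sketch of the pooled return-period estimate, with hypothetical input data:

```python
import numpy as np

def exceedance_return_period(annual_maxima, threshold):
    """Empirical return period of exceeding `threshold`, estimated by
    pooling annual maxima across ensemble members (sketch only; assumes
    members are exchangeable samples of the same climate).

    annual_maxima : (n_members, n_years) array
    Returns the return period in years (np.inf if never exceeded).
    """
    pooled = annual_maxima.ravel()
    p_exceed = (pooled > threshold).mean()
    return np.inf if p_exceed == 0 else 1.0 / p_exceed
```

The exchangeability assumption is the catch: in a changing climate the pooled sample is no longer stationary, so in practice the estimate would be computed within time windows or fit with a nonstationary extreme-value model.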
- Recommendation: High-resolution simulations, in and of themselves, need to be complemented with lower-resolution runs, simplified models, conceptual models, and theory.
- To improve parameterizations, compare high-res to low-res runs
- Need to compare with conceptual or simpler (i.e., a hierarchy of models) to develop fundamental understanding (see Lorenz quote below)
- Insights gained by studying simple/conceptual models need to be tested in more complex models (refer to "Alternatives - Idealised" section)
A quote from Edward Lorenz's paper "Maximum Simplification of the Dynamic Equations," Tellus, Vol. 12, No. 3, August 1960:
... In order to make the best attainable forecast of the future weather, it would be desirable to express the physical laws as exactly as possible, and determine the initial conditions as precisely as possible. Yet the ultimate achievement of producing perfect forecasts, by applying equations already known to be precise, if such a feat were possible, would not by itself increase our understanding of the atmosphere, no matter how important it might be from other considerations. For example, if we should observe a hurricane, we might ask ourselves, "Why did this hurricane form?" If we could determine the exact initial conditions at an earlier time, and if we should feed these conditions, together with a program for integrating the exact equations, into an electronic computer, we should in due time receive a forecast from the computer, which would show the presence of a hurricane. We then might still be justified in asking why the hurricane formed. The answer that the physical laws required a hurricane to form from the given antecedent conditions might not satisfy us, since we were aware of that fact even before integrating the equations.
It is only when we use systematically imperfect equations or initial conditions that we can begin to gain further understanding of the phenomena which we observe. For if we omit the terms representing specified physical processes, such as friction, from the equations, or if we fail to include certain observable features, such as cloudiness, in the initial conditions, we may, by comparing the mathematical solutions with reality, gain some insight concerning the relative importance of the retained and omitted features. Of course, in doing so we forgo the opportunity of simultaneously making the best attainable forecast.