Friday, February 15, 2013
Room 310 (Hynes Convention Center)
Leonard A. Smith, London School of Economics and Political Science, London, United Kingdom
Unlike many scientists, most decision makers are familiar with environments full of uncertainty, ambiguity, and conflicting values, all bathed in the voices of vested interests. When informing a policy debate, scientists must decide how much credibility to draw from the unreasonable effectiveness of the scientific enterprise while making clear the limited benefits today’s science can be expected to deliver. Science is most often unreasonably effective in applications that allow trial and error, experiment and evaluation. In applications like weather forecasting, we can learn from our mistakes, as the limits of our theories and our models are laid bare regularly by new observations, sometimes daily. The decision-relevant information provided by such “weather-like” applications differs significantly from that available in situations where little or no out-of-sample evaluation data will ever be available, due perhaps to the long time-scales involved, the uniqueness of each realization… Climate prediction, where the lifetime of a state-of-the-art model is much less than the lead time of a single forecast, is a prime example of these “climate-like” applications. Both the beauty and the benefits of “weather-like” science differ fundamentally from those of “climate-like” science. Noting this difference may play into the hands of those with vested interests, yet failing to do so risks undermining the trust essential for policy-relevant science to remain credible.
The communication of uncertainty and risk is a key difference here. In weather-like applications, the communication of uncertainty is well developed: probabilistic information is of proven value. This is not the case in climate-like applications, where there can be widespread agreement that the risk is grave, with little or no agreement either as to the details or as to how to characterise our uncertainty in those details.
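As a purely illustrative sketch (not part of the abstract), the routine out-of-sample evaluation available in weather-like applications can be made concrete with standard probabilistic verification scores. The forecast probabilities, outcomes, and function names below are invented for illustration; the Brier score and a log-based score (often called the ignorance score) are standard proper scoring rules for checking probability forecasts against what subsequently happened.

```python
# Illustrative sketch: out-of-sample scoring of probabilistic forecasts,
# the kind of routine evaluation that "weather-like" applications permit.
# All numbers here are invented for demonstration.
import numpy as np

def brier_score(p, o):
    """Mean squared difference between forecast probability p and
    binary outcome o (1 = event occurred, 0 = it did not)."""
    p, o = np.asarray(p, dtype=float), np.asarray(o, dtype=float)
    return np.mean((p - o) ** 2)

def ignorance_score(p, o):
    """Mean negative log2 probability placed on what actually happened."""
    p, o = np.asarray(p, dtype=float), np.asarray(o, dtype=float)
    prob_of_outcome = np.where(o == 1, p, 1.0 - p)
    return -np.mean(np.log2(prob_of_outcome))

# Hypothetical daily rain-probability forecasts and observed outcomes.
forecast_probs = [0.8, 0.1, 0.6, 0.9, 0.2, 0.4, 0.7]
observed       = [1,   0,   1,   1,   0,   1,   0  ]

print("Brier score:    ", round(brier_score(forecast_probs, observed), 3))
print("Ignorance score:", round(ignorance_score(forecast_probs, observed), 3))

# A climatological reference forecast (always quoting the base rate) gives a
# benchmark; forecasts that beat it out of sample demonstrate real skill.
base_rate = np.mean(observed)
reference = np.full(len(observed), base_rate)
print("Reference Brier:", round(brier_score(reference, observed), 3))
```

In climate-like applications the observations needed for this kind of scoring arrive too slowly, if at all, within the lifetime of a model, which is precisely the difference the abstract highlights.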
How is a climate scientist to communicate that we have both (deep) insights and (very) limited quantitative answers to a fundamental question of policy interest? Addressing this question touches on the reluctance of scientists to respond to direct requests from policy makers, and on ongoing attempts to design climate experiments that inform the questions of decision makers rather than the questions of climate modelers themselves. The importance of improving the communication of uncertainty, both for the continued credibility of science and for continued belief in the unreasonable effectiveness of the scientific enterprise, will be stressed.