Bjorn Lomborg is at it again. A few years back, he made waves with The Skeptical Environmentalist. In it, Lomborg evaluates climate models according to the standards of economic theory, with the punchline that the uncertainties in climate models can't be made to behave well enough to yield useful predictions. Now, with Cool It, Lomborg is suggesting that even if climate change is happening, the proposed counteractions are too expensive, particularly given the uncertainty about their efficacy.
One difficulty in economics is deciding how much to value the future. Here's a concrete example from Lenny Smith, a statistician at LSE. Last year, Lenny's favourite pub flooded. The pub's been around for centuries and it's flooded before, but this was a bad one. The kitchen was ruined. The owner, knowing Lenny has an interest in climate change, asked him a simple question: when I rebuild my kitchen, should I take the opportunity to upgrade it, or am I going to lose my investment in another flood next year? There's real money at stake, and a decision has to be made without delay. The key to making a good decision is this: what's the discount rate? How many years will be available over which to amortize the investment? Obviously, the time window on economic decisions is crucial. If you just care about the next four years, you get a different answer than if you're planning for the next twenty. In the next four years, it's unlikely that there will be another flood of the same magnitude. But in the next twenty, there might be, especially if the climate changes rapidly. On an even longer scale, say a century, there will almost certainly be another devastating flood at Lenny's pub whether the climate shifts or not. The uncertainty introduced by climate change is this: how long between disasters?
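To make the trade-off concrete, here's a minimal back-of-the-envelope sketch in Python. Every number in it is invented for illustration (the upgrade cost, the extra profit a better kitchen brings in, the discount rate, the chance of another ruinous flood in any given year), and the model is deliberately crude: a flood is assumed to wipe out the investment and end the payback stream. The point is just that the same investment can look bad over a four-year window and good over a twenty-year one.

    def expected_npv(upgrade_cost, annual_gain, discount_rate, flood_prob, horizon_years):
        """Expected net present value of the kitchen upgrade, assuming a flood
        destroys the new kitchen and ends the extra income."""
        npv = -upgrade_cost
        survival = 1.0  # probability the kitchen is still intact
        for year in range(1, horizon_years + 1):
            survival *= (1.0 - flood_prob)
            npv += survival * annual_gain / (1.0 + discount_rate) ** year
        return npv

    # Same investment, different planning windows (all figures hypothetical).
    for horizon in (4, 20):
        print(horizon, "years:", round(expected_npv(50_000, 8_000, 0.05, 0.05, horizon)))

With these made-up figures, the four-year horizon shows a loss of roughly 25,000 and the twenty-year horizon a gain of roughly 16,000. That flip is exactly the owner's dilemma: the discount rate and the time window do all the work.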
Discount rates are one tool economists use to constrain their models. This allows them to ignore very long-term, low-likelihood events in order to focus on the here and now. For the most part, this makes sense. When deciding next year's NASA budget, we heavily discount the possibility of an Armageddon-style comet strike. In the long run, it's almost a certainty that there will be a comet strike, but the likelihood of one happening in the next century is quite low. It makes sense to keep thinking about comet strikes, and what we might do about them, but it doesn't make sense to devote a large fraction of our resources to them.
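The arithmetic behind that discounting is worth seeing once: a loss L suffered t years from now is worth L / (1 + r)**t today, so at any conventional discount rate a remote catastrophe nearly vanishes from the books. The figures below (a trillion-dollar loss, a 5% annual rate) are arbitrary placeholders.

    # Present value of a far-future loss: L / (1 + r) ** t.
    loss, rate = 1e12, 0.05  # a trillion-dollar catastrophe, 5% per year (illustrative)
    for years in (10, 50, 100, 200):
        present = loss / (1 + rate) ** years
        print(years, "years out:", round(present / 1e9, 1), "billion today")

A century out, the trillion-dollar comet is worth well under ten billion in present terms, which is why it loses the budget fight to almost anything nearer at hand.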
Lomborg's reasoning about climate change is slightly more subtle than this. He's not saying it isn't happening or even that it won't have drastic effects. He's saying climate change will be disadvantageous to some and advantageous to others, and that given our limited resources, we should devote them to problems we know we can fix -- like malaria or cholera. The first point is debatable at best. It's surely true that life on earth isn't threatened by climate change. And some of Lomborg's particular claims may also be right: malaria might disappear from parts of the African continent. But it's important to realize why this might occur: the water will have evaporated, turning fertile lands into desert. Desertification isn't quite the cure most people are looking for. As to the second claim, that we should spend our resources on cost-effective measures, it's hard to argue with the premise, but (again, riffing on Lenny Smith) there's no point in curing AIDS if there aren't any people around to not have it. We have to balance the short and long term, and climate scientists are essentially saying we're discounting too much.
The trouble with Lomborg -- and many climate critics -- isn't that they're wrong about the uncertainties about the future, or the likely ineffectiveness of particular proposed measures. It's that the stakes are too high, and their arguments are being co-opted into a policy of non-action. The Bush administration refuses to sign onto Kyoto because its demands are ineffective. Very well, but the administration isn't proposing an alternative. In fact, while arguing that we need more information, it's cutting public funding to the National Science Foundation.
But Bush and Lomborg are right about one thing: weather and climate models are tough. It's hard to know what to think about them. Worse, it's hard to know how to think about them. That's where philosophy is supposed to help. It's commonplace that climate models have a lot of uncertainty in them. But what's uncertain? What goes in? What happens in the middle? Or what comes out?
The distinction is crucial. To take a simple example, if I put a glass of water in my freezer overnight, I don't need to know much about the initial conditions to predict that in the morning it'll be frozen (unless my freezer is broken, or I leave the door open, or someone takes the glass out of the freezer, or earth is destroyed by a meteor while I sleep). This is because all initial conditions converge on one outcome: frozen glass of water. The trouble is, climate predictions don't converge. In fact, the results (what comes out) diverge wildly depending on the initial conditions you set (what goes in), or on the parameter values you choose in the model you use (what happens in the middle).
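A toy nonlinear system makes the divergence easy to see. The logistic map below is of course not a climate model, just a stand-in that shares the relevant property: runs that start almost identically, or that use almost identical parameter values, end up in very different places.

    def run(x0, r, steps=50):
        """Iterate the logistic map x -> r * x * (1 - x) for a number of steps."""
        x = x0
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    # Tiny change in what goes in (the initial condition):
    print(run(0.500000, 3.9), run(0.500001, 3.9))
    # Tiny change in what happens in the middle (the parameter):
    print(run(0.5, 3.90), run(0.5, 3.91))

The freezer, by contrast, is a system whose dynamics squeeze every reasonable starting point onto the same end state.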
This is exactly what Wendy Parker, a philosopher at Ohio U, works on, and she has some nice things to say. She starts by noticing that models are often developed with a particular circumscribed purpose -- to identify processes, predict within a specified error bound, or simply to describe some phenomenon to a particular audience. Incompatibilities in assumptions or predictions do not necessarily rule out compatibility with respect to any of these purposes. A typical climate model (a physics-based one, rather than just a regression analysis) has initial conditions, which are the current climate variables (what goes in); a set of nonlinear equations with particular parameter values (what happens in the middle); and projected future climate variables (what comes out).
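To keep that anatomy in view, here's a deliberately tiny model with exactly those three parts. It's the standard zero-dimensional energy-balance toy, nothing like a full general circulation model, and the parameter values (albedo, effective emissivity, heat capacity) are rough textbook-style numbers, not anything tuned or authoritative.

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def step(T, solar=1361.0, albedo=0.30, emissivity=0.61, heat_capacity=1e8, dt=86400.0):
        """Advance global mean surface temperature T (kelvin) by one day."""
        absorbed = solar * (1.0 - albedo) / 4.0   # incoming energy
        emitted = emissivity * SIGMA * T ** 4     # nonlinear outgoing term
        return T + dt * (absorbed - emitted) / heat_capacity

    T = 288.0                  # initial condition: roughly today's global mean (what goes in)
    for _ in range(365 * 50):  # the equations and parameters (what happens in the middle)
        T = step(T)
    print(round(T, 2))         # projected temperature (what comes out)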
The trouble with atmospheric science, she says, is that the uncertainties in a given model are often so great that it's difficult to choose a model, even for a circumscribed purpose. The "epistemically responsible" choice, at this point, has to be to sample the candidates. This can allow us to find a range of uncertainties or perhaps even a lower bound -- a "non-discountable envelope", to go back to Lomborg. But can we know what's likely to occur?
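In code, the "sample the candidates" move is just an ensemble loop: run the model many times over the plausible range of an uncertain parameter and report the spread rather than a single number. The stand-in model and the parameter range below are placeholders, not anyone's published values.

    import random

    def toy_warming(climate_sensitivity, forcing=3.7):
        """Stand-in for a full model run: warming for a given radiative forcing,
        scaled so that the sensitivity is the warming for a CO2 doubling (3.7 W/m^2)."""
        return climate_sensitivity * forcing / 3.7

    random.seed(0)
    candidates = [random.uniform(1.5, 6.0) for _ in range(1000)]  # plausible sensitivities (placeholder range)
    ensemble = [toy_warming(s) for s in candidates]
    print("envelope:", round(min(ensemble), 2), "to", round(max(ensemble), 2), "degrees")

The envelope is honest about the spread, and its lower edge is the sort of non-discountable bound mentioned above; what it does not give, by itself, is a probability for any point inside it.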
Woodward has done some work on ensemble explanations, and he suggests criteria for robustness and completeness with respect to some predictive hypothesis H: an ensemble prediction is robust if every candidate predicts H, and it is complete if you know that one of the candidates is actually true. This makes sense, but unfortunately, it just doesn't apply in the case of climate models. In fact, it's more likely that we can show that every candidate is NOT true. Furthermore, the results very rarely converge on an interesting H. That is, they may all agree the temperature will rise in the next ten years, but the particular predictions may range from 0.1 to 10 degrees.
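The two criteria are easy to state as checks on an ensemble, which also makes it clear where they break down: robustness can be tested mechanically, while completeness (knowing that one candidate is actually true) is precisely what we can't verify. The predictions below are invented to mirror the 0.1-to-10-degree example.

    def robust(predictions, hypothesis):
        """Robustness as described above: every candidate in the ensemble predicts H."""
        return all(hypothesis(p) for p in predictions)

    warming = [0.1, 0.4, 1.2, 3.5, 10.0]  # invented ten-year warming predictions, in degrees

    # H1: "temperature will rise" -- robust, but not very informative.
    print(robust(warming, lambda t: t > 0.0))   # True
    # H2: "warming stays under 2 degrees" -- not robust.
    print(robust(warming, lambda t: t < 2.0))   # False
    # Completeness would require knowing that one of these candidates is actually
    # true, and there is nothing to compute for that.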
Again, how do we say what's likely? It's not at all clear that we can just take the probability values that come out of the equations at face value, Wendy says, because they map uncertainty about parameter values onto uncertainty about future climate variables. It's not clear "probability" is even the right category.
As Lenny says, our job doesn't end with running the numbers. We have to explain what they mean. And we just don't have the vocabulary to do that. For a philosopher, that's a call to arms.
2 comments:
Why then would we commit to a hugely expensive course of action before knowing what the probabilities are? Given that the worst results are a century out, a decade of scientific research seems reasonable to me...
My call to action was to philosophers, asking them to reconceptualize climate change predictions, which probably isn't hugely expensive.
But since you raise the issue of broader action, a decade of scientific research seems reasonable to me too. I'd like climate and energy research incentives on the scale of defense research. Or better: Apollo.
The market will demand greater commitment. The smart move is to stay in front. That's where government can help. But that's a post for another day.