Model Uncertainty in Climate Change Economics: A Review and Proposed Framework for Future Research

We review recent models of choice under uncertainty that have been proposed in the economic literature. In particular, we show how different concepts and methods of economic decision theory can be directly useful for problems in environmental economics. The framework we propose is general and can be applied in many different fields of environmental economics. To illustrate, we provide a simple application in the context of an optimal mitigation policy under climate change.

The notion of risk refers to situations in which the probabilities of events' occurrence are known, while the notion of uncertainty is broader and refers to situations in which this may not be the case. Most decisions indeed must be made in situations in which some events do not have an obvious, unanimously agreed-on, probability assignment. This might be because too little information is available or because different predictions exist, resulting from different models or datasets, or from different experts' opinions.
The evaluation of climate policy is generally performed using decision criteria that, like the standard expected utility theory criterion developed by von Neumann and Morgenstern (1947) and Savage (1954), do not distinguish between risk and uncertainty but actually reduce any kind of uncertainty to risk. As the treatment of uncertainty has recently received a great deal of attention in climate policy, an increasing number of concerns have been raised about the use of standard techniques, originally developed to deal with risk, in problems involving uncertainty. For example, according to the IPCC (2007, p. 134): "In most instances, objective probabilities are difficult to estimate. Furthermore, a number of climate change impacts involve health, biodiversity, and future generations, and the value of changes in these assets is difficult to capture fully in estimates of economic costs and benefits. Where we cannot measure risks and consequences precisely, we cannot simply maximize net benefits mechanically. This does not mean that we should abandon the usefulness of cost-benefit analysis, but it should be used as an input, among others, in climate change policy decisions. The literature on how to account for ambiguity in the total economic value is growing, even if there is no agreed standard."
As a result, calls have been made for using alternative tools and methods developed in other disciplines, such as economics and statistics, to deal with uncertainty. For example, Kunreuther et al. (2013, p. 447, 449) argue the following: "The selection of climate policies should be an exercise in risk management reflecting the many relevant sources of uncertainty. Studies of climate change and its impacts rarely yield consensus on the distribution of exposure, vulnerability or possible outcomes. Hence policy analysis cannot effectively evaluate alternatives using standard approaches, such as expected utility theory and benefit-cost analysis. […] For most issues relevant to policy choices, the solution is to use more robust approaches to risk management that do not require unambiguous probabilities. Risk management strategies designed to deal with the uncertainties that surround projections of climate change and their impacts can thus play an important role in supporting the development of sound policy options."
In a recent Science article by Burke et al. (2016), twenty-eight climate scientists outlined three areas in severe need of research progress on climate change economics. One involves refining the estimates of the so-called social cost of carbon (SCC) to improve the way they are used in policy. To achieve this objective, the authors highlighted different research directions, among which is the treatment of uncertainty:
"The treatment of uncertainty in integrated assessment models needs improvement, with research needed on the computational challenges of explicitly including decision-making under uncertainty" (p. 293).
Background While the treatment of uncertainty has typically not received particular attention in the environmental economic literature, the field is moving forward, and several attempts have been made in the past few years to answer the aforementioned calls. For example, Lange and Treich (2008) and Berger (2016) provide comparative statics results on the role of ambiguity in a simple two-period analytical model; Millner et al. (2013) and Lemoine and Traeger (2016) propose numerical climate-economic models under ambiguity aversion; Berger et al. (2017) consider explicitly the presence of uncertainty about catastrophic climate events in both an analytical model and a numerical application; Athanassoglou and Xepapadeas (2012), Xepapadeas and Yannacopoulos (2017), and Rudik (2020) use the robust control approach developed by Hansen and Sargent (2001, 2008) in either analytical control problems or integrated assessment contexts; Drouet et al. (2015) numerically disentangle model uncertainty and risks about mitigation costs, climate dynamics, and climate damages using the results of the most recent assessment of the IPCC; Chambers and Melkonyan (2017) compare three alternative decision criteria in climate change cost-benefit analysis in the presence of uncertainty; and Bradley et al. (2017) address the uncertainty as presented by the IPCC by applying Hill's (2013) model, in which confidence in the different models is not represented by a standard probability measure quantifying the uncertainty, but rather has a qualitative, ordinal structure assessing the confidence in the probability judgements.
This paper In this paper, we review recent decision criteria under uncertainty that have been proposed in the economic literature and apply them to a simple climate change decision problem. While the framework we propose is general and thus can be applied in many different fields of environmental economics, we provide a simple illustrative application in the context of an optimal mitigation policy. Our objective is to offer guidance to policy makers who face uncertainty when designing climate policies. A related study is that of Brock and Hansen (2018), who address, from a long-term uncertainty perspective, some important climate policy issues by considering recent decision-theoretic models.
Framework We consider decision problems in which uncertainty is addressed through models. In this case, uncertainty can be decomposed into distinct layers: (i) aleatory or physical uncertainty (risk), (ii) model uncertainty or model ambiguity, and (iii) model misspecification. Before elaborating on the key distinctions between these three layers, we more closely examine the notion of "model uncertainty" because it may have different meanings depending on the field of analysis. In its colloquial sense, a "model" is generally considered as a stylized representation of a phenomenon of interest that a natural or social scientist wants to study. Models serve as tools that provide a logically consistent way to organize thinking about the relationships among variables of interest and provide clarity on the implications of those relationships (Mäki 2011; Pindyck 2015; Beck and Krueger 2016). In environmental and climate change economics, a distinction is generally made between scientific models (climate and impact models), which explicate the consequences of increased greenhouse gas (GHG) concentrations and emissions on the climate system as well as the scale and nature of what might happen to lives and livelihoods, and economic models, which are used for cost-benefit analysis and policy assessments of alternative actions. A hybrid class of models, known as integrated assessment models, combines the key elements of both economic and scientific models. These models help calculate the SCC or evaluate fiscal and abatement policies. Policy makers then directly use these SCC estimates or evaluations (known as model runs) in cost-benefit analyses of climate change mitigation policies (Stern et al. 2016). There are many models in all the different categories. Each model has its own advantages and limits, its own complexity, and its own key relationships among variables of interest and parameter values.
Model uncertainty in the climate change literature therefore arises from the possibility that different models may provide different responses to the same external forcing (e.g., as a result of differences in physical and numerical formulations; see Deser et al. 2012).
The approach that we follow in this paper is, in part, different. We consider a general decision problem in which consequences depend on the states of the environment that are viewed as realizations of an underlying economic or physical generative mechanism (Marinacci 2015). A model is a probability distribution induced by such a mechanism. It describes states' variability by combining a structural component based on theoretical knowledge (e.g., economic or physical) and a random component coming from, for example, shocks representing minor omitted explanatory variables (Koopmans 1947; Marschak 1953). We assume that decision makers posit a collection of such models, based on their information, which might well include the economic and scientific modelling previously mentioned (as the next section will clarify). Model uncertainty therefore results from the uncertainty about the true underlying mechanism: within the posited collection, there is uncertainty about which model actually governs states' realizations. However, even after a model is specified, there is still the aleatory uncertainty about which specific state will actually obtain; this is the notion of risk typically considered in economics. Finally, a third layer of uncertainty, known as model misspecification, arises when the true model might not belong to the posited collection of models, reflecting the idea that all posited models have an inherent approximate nature. An important instance of a similar approach in climate change economics concerns the estimations of climate sensitivity (the temperature change in response to increased atmospheric CO2 concentration) presented in Millner et al. (2013) and Heal and Millner (2014). As these authors mention, while climate sensitivity is an important metric for the study of climate change, it is nonetheless difficult to estimate. Different complex scientific "models" attempt to predict its value but often do not agree with one another.
Each scientific model therefore delivers its own probability model for climate sensitivity (see Fig. 1 in Millner et al. 2013). Model uncertainty thus arises from the uncertainty about which scientific model is the most accurate or, in other words, about the true probability distribution for climate sensitivity.
Organization The remainder of this paper proceeds as follows: we first present a general decision problem under uncertainty framed in the context of climate change and discuss the different notions of uncertainty. We then present different decision criteria that can help find the optimal climate policy, before applying them to a concrete example. We conclude with a discussion on the status of non-expected utility models in assessing climate policy.

Setup
An important challenge for environmental policy makers is choosing a suitable mitigation strategy. Put simply, policy makers need to decide how much GHG emissions should be allowed so as to prevent the climate system from reaching damaging temperature levels. Reducing GHG emissions is costly but helps to limit the damages of global temperature increases. The cumulative level of GHG emissions that an economy can reach over a given period (e.g., the twenty-first century) is called the "carbon budget". It is a variable that is supposed to be directly under the control of the policy maker and is strictly related to global warming and climate targets (Meinshausen et al. 2009; Drouet et al. 2015). This decision (or control) variable, an action in decision theory terminology, represents a policy that the policy maker can perform.
At this point, it may be useful to better structure the decision problem under uncertainty. Formally, the problem that the decision maker (in particular, a policy maker) faces is to choose an action a among a set A of possible alternatives, whose material consequences c (within a consequence space C) depend on the realization of a state of the environment s (within a state space S), which is outside the decision maker's control. The relationship among consequences, actions and states is described by a consequence function ρ : A × S → C such that

c = ρ(a, s). (1)

In other words, this function details the consequence c of action a if the state that obtains is s. Decision makers have a (complete and transitive) preference relation ≿ over actions that describes how they rank the different alternative actions. The quintet (A, S, C, ρ, ≿) characterizes the decision problem under uncertainty. Before making a decision, ex ante, the decision maker knows the different elements of the quintet. After the decision, ex post, she observes the consequence ρ(a, s) that obtained. Her objective is to select the action â that is optimal according to her preference in the sense that â ≿ a for all actions a ∈ A.
Preferences are often assumed to admit a numerical representation via a decision criterion V : A → ℝ such that

a ≿ b if and only if V(a) ≥ V(b)

for all actions a, b ∈ A. Here V is a numerical representation of the underlying preference ≿ that enables formulating the decision problem as the optimization problem

max_{a ∈ A} V(a). (2)
In climate policy, for example, in principle the policy maker would desire to set a global temperature increase to a level that maximizes a given decision criterion. Global temperature, however, is not a decision variable under the control of the policy maker. In practice, what the policy maker controls is the level of emissions through an abatement policy that is put in place.
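The quintet (A, S, C, ρ, ≿) can be made concrete in a few lines. The sketch below is purely illustrative: the quadratic loss, the states, their probabilities, and the action grid are all hypothetical choices, not values from the paper. It shows how, once a criterion V represents the preference, selecting the optimal action â is an ordinary optimization over A:

```python
def rho(a, s):
    """Consequence function rho(a, s): a hypothetical quadratic loss
    that penalizes the mismatch between action a and state s."""
    return -(a - s) ** 2

def V(a, states, probs):
    """One possible criterion: expected consequence under a single
    probability model over states."""
    return sum(p * rho(a, s) for s, p in zip(states, probs))

states = [1.0, 1.5, 2.1]   # illustrative states of the environment
probs = [0.3, 0.4, 0.3]    # their probabilities under one posited model

candidates = [0.5 * k for k in range(7)]   # action set A (a grid of policies)
a_hat = max(candidates, key=lambda a: V(a, states, probs))
```

With these numbers the criterion is maximized by the grid point closest to the probability-weighted mean of the states, i.e. a_hat = 1.5.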

Uncertainty
Decisions about the climate change phenomenon are generally made in situations of uncertainty. Consider, for example, a policy maker who must choose the optimal emission pathway to be followed by an economy. It is reasonable to expect that the policy maker does not know, for example, how the climate system (in particular, global mean temperatures) will respond to the targeted level of emissions or how an increase in the global mean temperature will affect the socio-economic system. In that sense, the selection of the optimal action (level of emissions) is an exercise performed in a situation of uncertainty. The classical study of decision under uncertainty dates back to Savage (1954), while its modern study began with the behavioral paradoxes of Ellsberg (1961) and the theoretical analysis of Schmeidler (1989), with some key insights going back to Keynes (1921, 1936) and Knight (1921). Following the decomposition of climate change uncertainty according to its sources, proposed by Heal and Millner (2014), the decision faced by the policy maker must be made in the presence of both a scientific and a socio-economic component of uncertainty. The next two examples illustrate this.
Scientific uncertainty A first source of uncertainty comes from the science of climate. While most scientists agree that climate change is a reality and that humans are primarily responsible for the unprecedented changes in global temperature that have occurred for several decades (Hansen et al. 2006; IPCC 2013), the exact relationship between anthropogenic emissions of GHG into the atmosphere and climate change remains uncertain. Relying on the available observations and the current scientific state of knowledge, the scientific community has tried to construct precise climate models to predict and quantify the impact of human activity on global temperatures. Different metrics have been proposed to measure the global temperature response to increases in atmospheric emissions or concentrations. Knowledge about the physical laws governing the climate system makes it possible to narrow the scope of possible interactions between emissions and temperature increases. Yet a large degree of uncertainty still surrounds the estimates of these constructed climate metrics. For example, different scientific models typically provide different probabilistic assessments of the value of some key climate parameters (Meinshausen et al. 2009; IPCC 2013; Millner et al. 2013). As a consequence, it remains unknown, for example, exactly how much the global climate will respond to changes in future atmospheric conditions, and the precise timing at which this change will occur.
The carbon-climate response (CCR) is an intuitive metric that Matthews et al. (2009) recently proposed to provide policy-relevant information about the allowable level of emissions for a given temperature target. As Fig. 1 illustrates, this metric synthesizes the global temperature response to anthropogenic emissions. Formally, the CCR refers to the ratio of temperature change to cumulative carbon emissions. It aggregates the well-known concepts of climate sensitivity (the temperature change in response to increased atmospheric CO2) and 'carbon sensitivity' (the increase in atmospheric CO2 concentrations resulting from CO2 emissions as mediated by natural carbon sinks), while accounting for climate carbon cycle feedbacks. The CCR is claimed to be directly policy relevant, especially for climate change mitigation decisions. It combines the uncertainties related to climate sensitivity, carbon sinks and climate-carbon feedbacks into a single metric.
According to available historical data and observations, the CCR belongs to the interval [1.0, 2.1] °C per trillion tonnes of carbon (TtC) for the period 1990-1999, with a best-guess estimate of 1.5 °C per TtC (Matthews et al. 2009). Figure 2 illustrates the observational estimates of the CCR. As the figure shows, even when data about emissions and temperature changes are available, the exact relationship between the two cannot be established with certainty. We posit a stochastic linear relationship between emissions E and temperature increases T given by

T = θ_T E + ε_T. (3)

Here, θ_T is a structural CCR parameter and ε_T is a shock, a random component that, as previously mentioned, represents the unexplained variation caused by possibly many minor explanatory variables that decision makers are "unable and unwilling to specify" (Marschak 1953). The value of the CCR parameter is remarkably constant within a given climate model, though it may vary across models from differences in the way climate and carbon sensitivities are integrated. Figure 3 reports the results of the estimated CCR from 11 coupled climate-carbon cycle models participating in a model inter-comparison project (Friedlingstein et al. 2006). The figure presents the global temperature change as a function of cumulative carbon emissions. The relationship is remarkably linear for almost all the different models, thus justifying the linear form posited in (3). In this case, the CCR parameter θ_T for each model is nothing but the slope of the line, which represents the intrinsic value of temperature change per unit of carbon emitted. Table 1 presents the value of the CCR for each model. As the table shows, model-based estimates of the CCR range from 1.0 to 2.1 °C per TtC. Note, however, that the uncertainty about the correct linear model (3), or about the CCR parameter, is epistemic. Within each such specification, there is still a layer of risk via the random component ε_T.
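The linear specification (3) can be sketched numerically: each climate model contributes its own CCR slope θ_T, and the shock ε_T adds aleatory variability around it. In the sketch below the five slopes span the 1.0-2.1 °C/TtC range reported above, but the shock scale sigma and the sample size are hypothetical assumptions made only for illustration:

```python
import random

random.seed(0)

# Model-based CCR estimates (degrees C per TtC), spanning the reported range.
ccr_models = [1.0, 1.3, 1.5, 1.8, 2.1]

def mean_temperature(emissions_ttc, theta_T, sigma=0.1, n=2000):
    """Average of n simulated draws of T = theta_T * E + eps_T for one model.

    sigma is a hypothetical shock scale, not an estimated quantity.
    """
    draws = [theta_T * emissions_ttc + random.gauss(0.0, sigma)
             for _ in range(n)]
    return sum(draws) / n

# Warming projections for a 1 TtC cumulative carbon budget, model by model:
# the epistemic spread across models dwarfs the averaged-out aleatory noise.
projections = {theta: mean_temperature(1.0, theta) for theta in ccr_models}
```

Averaging over many shock draws recovers each model's structural slope, while the disagreement across the five slopes is exactly the model uncertainty discussed in the text.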
Socio-economic uncertainty The second source of uncertainty for policy makers in the context of climate change is the relationship between global temperature increases and economic impacts. In an ideal world, physical and economic sciences would provide a theoretical underpinning for such a relationship. In reality, the economic impact of global warming is complex and difficult to predict (Pindyck 2007). In the language of climate change economics, there is little information about the damage function d that represents the relationship between an increase in temperature T and the economic damage D or loss (usually expressed as a fraction of GDP; see Pindyck 2013a, 2015). In other words, there is no economic or physical theory to help assess the "correct" functional form of this relationship. Moreover, because climate change mainly pertains to events that have never been encountered before, little data or empirical information can be used to assess both the degree of steepness of the damage function and the point at which steepness begins. Traditionally, integrated models of climate have dealt with this problem by using arbitrary functions to describe how GDP decreases when temperature increases. These functions, which rely on strong assumptions and have been subject to substantial criticism (Pindyck 2013a; Howard et al. 2014; Howard 2014), constitute the best approximation that policy makers have at their disposal. A quadratic damage function used in the literature is

D = θ_D T² + ε_D. (4)

It appears, for example, in the DICE model. The standard approach to calibrate this function is to concentrate on the domain of relatively small increases in temperature, which is the only domain for which we have some information at our disposal. In the past few years, several studies have attempted to assess the impacts of global warming (or, equivalently, the benefits from reducing GHG emissions).
These studies have used different methods, including expert elicitation, enumeration, and statistical methods, and have assessed different warming scenarios (usually within the range of 1.0-3.0 °C warming). The results of these studies, summarized in Table SM10-1 of IPCC (2014a), currently represent the best information available on the potential impacts of climate change. The lack of theoretical or empirical foundations for the damage function and its "correct" functional form does not matter much when considering relatively small temperature increases, as there is wide consensus that damages will be low at these levels. Yet this is no longer the case for higher increases in temperature, which are associated with much stronger degrees of uncertainty. For example, scientists have almost no idea of what damage to expect if temperature increases reach +5 °C relative to preindustrial levels. Considering a temperature increase of T = 5 °C in (4) may therefore be misleading when analyzing climate policy, given that calibration has been realized with data limited to small fluctuations in temperature occurring over relatively short periods (Pindyck 2013a).
In a recent contribution, Drouet et al. (2015) summarize the information on total damages from global warming coming from the latest IPCC report. They use 20 estimates of the total economic effects of climate change to fit three different probabilistic damage functions. Figure 4 illustrates the results. By aggregating the specifications of Drouet et al. (2015), we obtain a damage function that nests linear, quadratic, sextic, and exponential components in T, with structural coefficients θ_1D, θ_2D, θ_3D, and θ_4D and a random shock ε_D. We consider three specifications of parameters. First, when θ_3D = θ_4D = 0, the damage function has a quadratic form (first column of the figure) analogous to the one used in DICE and in most integrated assessment models. Second, when θ_1D = θ_2D = θ_3D = 0 and θ_4D = 1 (second column of the figure), we obtain a probabilistic version of the exponential damage function proposed by Weitzman (2009). This functional form is clearly steeper. It excludes the possibility of potential benefits from climate change and allows for higher damages when temperature increases reach 4 °C and 5 °C. Third, when θ_1D = θ_4D = 0 (third column of the figure), the damage function has the sextic form proposed by Weitzman (2012). According to this specification, high temperature increases are disastrous. (Note: DICE stands for Dynamic Integrated Climate and Economy (Nordhaus 1993; Nordhaus and Sztorc 2013). The damage function presented in Eq. (4) is the one in the DICE code in Nordhaus and Sztorc (2013). It is a slight variant of the quadratic form presented in the theoretical description of DICE, in which climate damages are bounded to 100% (i.e., climate change is assumed to only reduce current income, but may not destroy pre-existing assets). At low temperature increases, the two versions are virtually identical. For an overview of the impact studies, see Pindyck (2013a) and Heal and Millner (2014).)
To illustrate the difference among the possible specifications of the damage function, consider Table 2. It presents the mean damages (and the 5th-95th percentiles) associated with global warming, expressed as percentages of world GDP and obtained under the previous three specifications of the damage function. The first row presents the results if temperature increases above the preindustrial level reach +2 °C. This is the threshold that 195 countries agreed to strive for at COP21. The second row presents the possible economic damages if temperature increases reach +3 °C. This level of warming roughly corresponds to the median 2100 temperature increase projection if nationally determined contributions (i.e., climate pledges that each country made to tackle the problem of climate change) are implemented as planned (see Bosetti et al. 2017). Finally, the last two rows pertain to the more extreme temperature increases of +4 °C and +5 °C. These levels of warming roughly determine the bounds of the temperature changes that could be expected under a business-as-usual situation (i.e., if no additional efforts are made to constrain emissions; see IPCC 2014b).
Importantly, these three damage functions do not have a clear theoretical underpinning. They simply fit the best data currently available on potential losses using different specifications. The random component ε_D accounts for shocks. There is, therefore, a high degree of structural uncertainty about the "correctness" of the functional form representing this relationship. This uncertainty is of an epistemic nature: the policy maker does not know which is the most accurate model to describe the relationship between global warming and GDP among the three potential models proposed by economists. The probability that may be attached to each model is therefore a representation of the policy maker's degree of belief. Yet a layer of risk is also present within each model via the term ε_D.
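The qualitative contrast between the three families can be sketched as follows. The coefficients below are hypothetical placeholders, not the fitted values of Drouet et al. (2015); only the shapes matter: the specifications roughly agree at low warming and diverge sharply at high warming, which is where the structural uncertainty bites.

```python
import math

# Illustrative stand-ins for the three damage families discussed in the text.
# All coefficients are invented for the sketch; damages are fractions of GDP.

def damage_quadratic(T, k=0.0023):
    return k * T ** 2                      # DICE-like quadratic family

def damage_exponential(T, k=0.002):
    return k * (math.exp(T) - 1.0)         # steeper, Weitzman (2009)-style

def damage_sextic(T, k2=0.0023, k6=5e-5):
    return k2 * T ** 2 + k6 * T ** 6       # Weitzman (2012)-style sextic

families = (damage_quadratic, damage_exponential, damage_sextic)
low = [f(2.0) for f in families]    # +2 degrees C: the families roughly agree
high = [f(5.0) for f in families]   # +5 degrees C: they diverge sharply
```

With these placeholder coefficients the ratio of largest to smallest damage estimate is modest at +2 °C but an order of magnitude larger at +5 °C, mirroring the discussion of Table 2.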

Decision Under Uncertainty
We now go back to the decision problem facing our policy maker, who must choose the action, represented by the level of emissions (i.e., a = E), knowing that it will affect global temperatures via the carbon-climate equation (3), which in turn will affect economic output through a damage function.
To simplify the computation, we assume that damages have the quadratic form (4). As we argued in the previous section, the relevant scientific and socio-economic relationships can be summarized by the following nonlinear system:

T = θ_T a + ε_T,
D = θ_D T² + ε_D. (6)

States have both random and structural components, so they have the form s = (ε, θ). In our decision problem, the vector ε = (ε_T, ε_D) represents the pair of random shocks affecting the climate and economic systems, while the vector θ = (θ_T, θ_D) specifies the structural coefficients, parametrizing through θ_T a model climate system and through θ_D a model economy. By substitution from the system (6), the damage function takes the form

D = θ_D (θ_T a + ε_T)² + ε_D. (7)

(Note: Models used for projections of future temperature increases are those whose results on transient climate response are reported in the IPCC fifth assessment report. The hypothesis that current nationally determined contributions are projected beyond 2030 is made here for these projections. See Bosetti et al. 2017 for more details.)
For each policy a, the consequence function ρ specifies the overall monetary outcome in terms of some economic variable of interest (e.g., consumption or GDP), given the random components and the monetary cost C(a) of the policy itself.
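The composed climate-economy mapping of system (6) can be simulated directly: one draw of the shocks gives one state, and substitution gives the realized damage. The parameter values and shock scales below are hypothetical assumptions chosen only to make the sketch run; the mid-range CCR of 1.5 °C/TtC is the best-guess estimate cited earlier.

```python
import random

random.seed(1)

def damage(a, theta_T, theta_D, sigma_T=0.1, sigma_D=0.005):
    """One draw of D = theta_D * (theta_T * a + eps_T)**2 + eps_D.

    a is cumulative emissions (TtC); sigma_T and sigma_D are hypothetical
    shock scales for the climate and economic layers respectively.
    """
    eps_T = random.gauss(0.0, sigma_T)
    eps_D = random.gauss(0.0, sigma_D)
    T = theta_T * a + eps_T            # climate layer: temperature response
    return theta_D * T ** 2 + eps_D    # economic layer: quadratic damages

# Monte Carlo average damage for a 1 TtC budget under one mid-range model.
draws = [damage(1.0, theta_T=1.5, theta_D=0.0023) for _ in range(5000)]
mean_damage = sum(draws) / len(draws)
```

Fixing (θ_T, θ_D) pins down one model; the remaining dispersion of the draws is the aleatory layer, while varying (θ_T, θ_D) across the posited collection would trace out the epistemic layer.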
We assume that scientific and socio-economic information enables the policy maker to posit a set of potential models M describing the likelihoods of the different states. This set of models is taken as a datum of the decision problem: the policy maker behaves as if she knows that states are generated by a probability model m that belongs to the collection M. We thus abstract from model misspecification, which magnifies the issues we discuss.
Shocks ε are distributed according to a parametric probability distribution, so that each model factorizes as m_σ,θ = q_σ ⊗ δ_θ, where q_σ(ε) is the probability of ε and δ_θ is the probability distribution concentrated on θ. Each model m_σ,θ thus corresponds to a shock distribution q_σ parametrized by σ and to a model climate system/economy parametrized by θ. Uncertainty on θ reflects a theoretical uncertainty about the correct specification of the consequence function. Uncertainty on σ is about the statistical adequacy of such an economic specification due to shocks. With this, we write the consequence function as ρ(a, ε, θ) to emphasize the structural component θ over the random one ε. Moreover, we index models as m_σ,θ and denote by M = {m_σ,θ} the set of models that the policy maker posits.
To address the uncertainty across models, the policy maker has a subjective prior probability distribution μ that quantifies her beliefs about the correct model. In particular, μ(m) is the policy maker's subjective belief that m is the correct model. Because of the factorization, this belief is actually over the values of σ and θ, so it has the form μ(σ, θ). This is the policy maker's subjective belief that θ parametrizes the correct model climate system/economy and that σ is the correct vector of the shocks' standard deviations. Now that we have introduced all the elements of the decision problem under uncertainty, we turn to the way they can be combined to make the best possible decision. For this purpose, we describe different decision criteria developed in economic theory that can be used for problems of decision under uncertainty.

Classical Subjective Expected Utility
We begin with the description of the decision criterion that, for several decades, has been the standard way to consider rational decision making under uncertainty. This criterion dates back to the seminal works of von Neumann and Morgenstern (1947), Wald (1950), Savage (1954) and Marschak and Radner (1972). It has recently been revisited by Cerreia-Vioglio et al. (2013) to accommodate explicitly the presence of model uncertainty.
We consider a classical decision problem (A, S, C, ρ, M, ≿), which is a decision problem, as defined in Sect. 2, enriched with a set M of models that the decision maker posits. Assume that a von Neumann-Morgenstern utility function u : C → ℝ represents ≿ and thus translates economic consequences, measured in monetary terms, into utility levels. As is well known, this function captures risk attitudes (i.e., attitudes toward aleatory uncertainty). For each model m_σ,θ, we can compute the expected reward of a given action:

R(a, σ, θ) = ∑_ε u(ρ(a, ε, θ)) q_σ(ε).

For example, under risk neutrality we have

R(a, σ, θ) = ∑_ε ρ(a, ε, θ) q_σ(ε).

Given that different models exist and that the policy maker does not know which is the correct one, she considers the expected payoff of each possible model and aggregates them through a weighted average according to her prior probability μ. The classical subjective expected utility decision criterion is

V_eu(a) = ∑_(σ,θ) R(a, σ, θ) μ(σ, θ). (8)

The optimal policy is therefore the action â that maximizes this criterion. Formally, this means solving the optimization problem (2), which here takes the form

max_{a ∈ A} ∑_(σ,θ) R(a, σ, θ) μ(σ, θ). (9)

(To ease matters, we restrict our attention to finite state and model spaces; integrals with respect to probability density functions would arise without this assumption.)
Optimal actions depend on the policy maker's preferences via the utility function u and the prior probability μ. For instance, if the prior distribution μ is uniform, criterion V_eu consists of an average of the expected rewards, where all the models are equally weighted. The optimal policy maximizes this average expected reward. Criterion (8) is a Bayesian two-stage criterion that describes both layers of uncertainty, risk and model uncertainty, through standard probability measures. Following Savage (1954), we can write it as a single-stage criterion:

V_eu(a) = ∑_s u(ρ(a, s)) m̄(s), (10)

where m̄ = ∑_(σ,θ) m_σ,θ μ(σ, θ) is the so-called predictive distribution on states. In our example, it would correspond to the model that features the mean CCR and the mean damage under a uniform prior. The equality between expressions (8) and (10) indicates the following: considering a collection of models M and aggregating them is equivalent to considering a unique average model. This is possible because the policy maker has the same attitude towards risk and model uncertainty (see Marinacci 2015). To show this, we rewrite criterion (8) as

V_eu(a) = ∑_(σ,θ) u(c(a, σ, θ)) μ(σ, θ).

The outer u represents the attitude toward model uncertainty, and the inner u, entering the monetary certainty equivalent c(a, σ, θ) = u⁻¹(R(a, σ, θ)), represents the risk attitude. In this sense, criterion (8) overshadows the policy maker's reaction to the variability that may exist across models. Considering the different CCRs equally or a single CCR θ_T = 1.6 on which everyone would agree, for example, leads to exactly the same optimal emission policy. Indeed, these different scenarios reduce to the same predictive model m̄. Criterion (8) therefore presupposes that the policy maker has the same attitude toward aleatory and epistemic uncertainty.
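The equivalence between the two-stage criterion (8) and the single-stage predictive form (10) is an exact algebraic identity, and a few lines verify it numerically. The states, models, prior, utility, and payoff below are all illustrative assumptions, not quantities from the paper:

```python
states = [0, 1, 2]                        # indices of possible states
models = {                                # m(s): state probabilities per model
    "low": [0.6, 0.3, 0.1],
    "high": [0.1, 0.3, 0.6],
}
prior = {"low": 0.5, "high": 0.5}         # mu(m): belief over models

def u(c):
    return c ** 0.5                       # concave utility (risk aversion)

def payoff(a, s):
    return 10.0 - a * s + a               # hypothetical consequence rho(a, s)

def V_two_stage(a):
    """Criterion (8): prior-weighted average of per-model expected utilities."""
    return sum(prior[m] * sum(p * u(payoff(a, s))
                              for s, p in zip(states, probs))
               for m, probs in models.items())

def V_predictive(a):
    """Criterion (10): expected utility under the predictive distribution."""
    m_bar = [sum(prior[m] * models[m][s] for m in models) for s in states]
    return sum(p * u(payoff(a, s)) for s, p in zip(states, m_bar))
```

For any action, the two evaluations coincide to machine precision, which is exactly the point made in the text: averaging over models is the same as facing the single predictive model, so the criterion cannot express a distinct attitude toward model uncertainty.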
Two special cases of criterion (8) are noteworthy. First, suppose that the policy maker considers (possibly wrongly) a single pair (θ̄, γ̄) to be the correct one; formally, μ(θ̄, γ̄) = 1. The two-stage criterion (8) then reduces to V_eu(a) = R(a, θ̄, γ̄). In this case, model uncertainty is still part of the decision problem, but the policy maker is dogmatic about a specific model being the correct one and therefore does not take into account any other model. Second, suppose that the collection Θ × Γ is a singleton, with a unique element (θ̄, γ̄); for example, shocks have a prespecified distribution and there is no scientific uncertainty about the value of the CCR parameter or economic uncertainty about the correct damage function. In this case, the policy maker knows that the pair (θ̄, γ̄) is correct. Epistemic uncertainty is no longer present in the decision problem, which is a decision problem under risk, as represented by model m_θ̄,γ̄. Criterion (8) reduces to V_eu(a) = R(a, θ̄, γ̄), interpreted as a von Neumann-Morgenstern risk criterion. This is typically what is implied by the rational expectations hypothesis often adopted in economics, which assumes that policy makers know the correct model. We now move beyond the classical subjective expected utility criterion (8) and discuss alternative decision criteria under uncertainty. 14 Before doing this, we close with an especially tractable version of criterion (8) in which, with an abuse of notation, beliefs have the separable form

μ(θ, γ) = μ(θ) μ(γ).   (11)

In this case, the criterion is easily seen to take the form

V_eu(a) = ∑_{θ∈Θ} R_q(a, θ) μ(θ),   (12)

where q(s) = ∑_{γ∈Γ} q_γ(s) μ(γ) and R_q(a, θ) = ∑_{s∈S} u(ρ(a, s, θ)) q(s). If q has a prespecified distribution, models are now indexed only by θ and we can write

V_eu(a) = ∑_{θ∈Θ} u(c(a, θ)) μ(θ),

where c(a, θ) = u⁻¹(R_q(a, θ)). A similar analysis can be carried out for the other criteria that we will present, which simplify accordingly. Yet, for brevity, we omit the details.
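The dogmatic special case can be illustrated in the same toy fashion; the model labels and numbers below are again invented:

```python
u = lambda c: c ** 0.5  # vNM utility (illustrative)
rho = {("low", 0): 4.0, ("low", 1): 9.0,
       ("high", 0): 1.0, ("high", 1): 16.0}
models = {"m1": [0.7, 0.3], "m2": [0.3, 0.7]}  # two candidate models

def R(a, k):
    return sum(p * u(rho[(a, s)]) for s, p in enumerate(models[k]))

def V_eu(a, mu):
    return sum(mu[k] * R(a, k) for k in models)

# A dogmatic prior puts all mass on one model: criterion (8) then collapses
# to the expected reward under that single model; every other model is ignored.
mu_dogmatic = {"m1": 1.0, "m2": 0.0}
for a in ("low", "high"):
    assert V_eu(a, mu_dogmatic) == R(a, "m1")
```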

Unanimity Preferences
One way to deal with uncertainty is to allow preferences to be incomplete. Because of the lack of knowledge about both the science of climate and the impact of climate change on the economy, the policy maker might not be able to rank some pairs of alternative actions. If this is the case, preference ≿ is no longer complete (as so far assumed). Assume, following the classical analysis of Bewley (2002), that the policy maker knows her tastes and is able to rank consequences through a utility function u, yet is unable to rank some pairs of actions because of insufficient information about them. Because of its incompleteness, preference ≿ cannot be represented by a numerical decision criterion V, but only by a nonnumerical unanimity rule:

a ≿ a′ ⟺ R(a, θ, γ) ≥ R(a′, θ, γ) for all (θ, γ) ∈ Θ × Γ.   (13)

In other words, action a is preferred to another action a′ if and only if, according to all the probability models m_θ,γ, the expected reward associated with action a is higher than that associated with action a′. 15 In our emission example, this would be the case if and only if policy a is better than policy a′ according to all the different climate/economy models.
A unanimity criterion is often unable to specify what the policy maker should do. This is the case, for example, if a policy (e.g., a low-emissions policy) performs better than another policy (e.g., a high-emissions policy) according to some models, but performs worse according to others. Nevertheless, if a decision must be made, a policy maker needs to "complete" the criterion when it remains silent. A possibility is to adopt
a default decision rule that relies on a status quo action that remains the default until it is replaced by an alternative that is unanimously better. In climate change economics, this status quo action has typically been the "wait-and-see" policy, and uncertainty has long been considered an excuse for inaction in climate policy. Other approaches suggest completing preferences with one of the criteria that we present subsequently (Gilboa et al. 2010; Cerreia-Vioglio 2016), which handles the burden of choice in a less ad hoc manner than the status quo. In summary, the unanimity criterion (13) may turn out to be useless in situations in which a choice must be made. In the next sections, we present alternative criteria that preserve completeness (and thus the numerical nature of the decision criterion). In particular, we relax the assumption of classical subjective expected utility theory that the same function u captures both risk and model uncertainty attitudes. Indeed, in principle there is no reason to expect these two attitudes to be equal: a policy maker might well be more willing to face risk, due to the intrinsic randomness of some events, than to face model uncertainty, due to a lack of, for example, scientific knowledge. Recent experimental evidence on both students and real-life policy makers shows that this is indeed the case (Berger and Bosetti 2020). Such a policy maker is more averse to model uncertainty than to risk and consequently exhibits uncertainty (or ambiguity) aversion. This behavioral characteristic, first highlighted by Ellsberg (1961), robustly describes the behavior of individuals in situations of uncertainty.
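The incompleteness of the unanimity rule (13) is easy to exhibit: with two hypothetical models and two policies, neither policy need dominate the other. The expected rewards below are purely illustrative:

```python
# Hypothetical expected rewards R(a, model) for two mitigation policies
# under two climate/economy models (illustrative numbers only).
R = {("low_emissions",  "m1"): 8.0, ("low_emissions",  "m2"): 3.0,
     ("high_emissions", "m1"): 5.0, ("high_emissions", "m2"): 6.0}
models = ["m1", "m2"]

def unanimously_better(a, b):
    # Rule (13): a is preferred to b iff its expected reward is higher
    # under every model that the policy maker posits.
    return all(R[(a, m)] > R[(b, m)] for m in models)

# Neither policy unanimously dominates the other, so the two policies
# are left unranked: the preference is incomplete.
assert not unanimously_better("low_emissions", "high_emissions")
assert not unanimously_better("high_emissions", "low_emissions")
```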

Classical Maxmin Analysis
The maxmin criterion of Wald (1950) is a first decision criterion that treats attitudes toward risk and model uncertainty differently. This criterion is extremely cautious because it makes the policy maker consider only the model providing the lowest expected reward.
In our examples, this means that only the "worst" of all possible climate/economy models is considered when choosing the optimal climate policy. Prior probabilities do not play any role here, so we are in a classical statistics setting. Formally, the maxmin decision criterion is

V_w(a) = min_{(θ,γ)∈Θ×Γ} R(a, θ, γ).   (14)

Choosing the optimal policy under this criterion corresponds to finding the value of a that maximizes the minimal expected reward obtained over the set of possible probability models (this is why this criterion is called "maxmin"). Recently, Rezai and van der Ploeg (2017) used this criterion to examine the impact of scientific uncertainty about the "correct" climate model to be used in the context of integrated assessment models.
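A toy computation (with invented rewards) shows how criterion (14) selects the cautious policy even when averaging across models would favor a more aggressive one:

```python
# Wald's maxmin criterion (14) on invented expected rewards R(a, model):
# only the worst model matters, and priors play no role.
R = {("low_emissions", "m1"): 6.0, ("low_emissions", "m2"): 5.0,
     ("mid_emissions", "m1"): 8.0, ("mid_emissions", "m2"): 4.0,
     ("high_emissions", "m1"): 9.0, ("high_emissions", "m2"): 1.0}
actions = ["low_emissions", "mid_emissions", "high_emissions"]
models = ["m1", "m2"]

def V_wald(a):
    # Value of action a = its expected reward under the worst model for a
    return min(R[(a, m)] for m in models)

best = max(actions, key=V_wald)
assert best == "low_emissions"  # cautious choice despite m1 favoring more emissions
```

Under a uniform prior, the SEU criterion (8) would instead pick mid_emissions (average reward 6.0 versus 5.5 and 5.0), illustrating how maxmin discards all probabilistic information about models.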

Bayesian Analysis
Another way to distinguish attitudes toward model uncertainty and risk is to adopt the smooth ambiguity decision criterion developed by Klibanoff et al. (2005),

V_sm(a) = ∑_{(θ,γ)∈Θ×Γ} φ(R(a, θ, γ)) μ(θ, γ),   (15)

where φ ≡ v ∘ u⁻¹ represents the attitude toward uncertainty that results from the combination of the attitudes toward model uncertainty v and risk u. Concavity of φ reflects uncertainty aversion, which in this setup amounts to a stronger aversion to model uncertainty than to risk (i.e., v is more concave than u; see Marinacci 2015).
Similar to (8), the smooth ambiguity criterion is also a two-stage Bayesian criterion in which both layers of uncertainty are described by standard probability measures. It can be written as

V_sm(a) = ∑_{(θ,γ)∈Θ×Γ} v(c(a, θ, γ)) μ(θ, γ)

and interpreted as follows: in the first stage, the policy maker evaluates the expected reward of policy a for each possible model m_θ,γ and expresses it in monetary terms through a certainty equivalent

c(a, θ, γ) = u⁻¹(R(a, θ, γ)).

These certainty equivalents represent the amount of the economic variable of interest, like GDP or consumption, that would make the policy maker indifferent between obtaining that amount for sure and facing the risk that model m_θ,γ involves. A certainty equivalent can be computed for each model. It depends on risk attitude via the function u: the more risk averse the policy maker is, the lower the certainty equivalent. In the second stage, the policy maker addresses model uncertainty, the decision-theoretic scope of which is described by the collection of certainty equivalents. The policy maker summarizes the value of policy a by computing an overall expected reward across the certainty equivalents, depending on her attitude toward model uncertainty v and prior belief μ. This is exactly what the two-stage decision criterion (15) represents.
As before, if the prior distribution is uniform, the certainty equivalents are given the same weight in computing the overall expected welfare. If model uncertainty in the second stage is evaluated using risk attitude u, so that u = v, we are back to the classical subjective expected utility criterion (8), which corresponds to a situation of ambiguity neutrality (the function φ is linear in this case). Note that the maxmin criterion (14) is a limit case of the smooth decision criterion (15) as aversion to model uncertainty tends to infinity. For example, if φ(x) = −e^{−λx}, we have 16

−(1/λ) log ∑_{(θ,γ)∈Θ×Γ} e^{−λR(a,θ,γ)} μ(θ, γ) → min_{(θ,γ)∈supp μ} R(a, θ, γ) as λ → ∞,
which reduces to (14) when μ has full support. As uncertainty aversion here results from a stronger aversion to model uncertainty than to risk, it should be clear that the classical maxmin criterion corresponds to an extreme aversion to model uncertainty relative to risk. In a recent contribution, Berger et al. (2017) explicitly made the distinction between attitudes toward risk and model uncertainty while using the smooth criterion to examine the impact of scientific uncertainty about the possibility of a particular climate catastrophe on the optimal level of GHG emissions. Another application of this criterion in climate change economics is presented in Millner et al. (2013).
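Both limit behaviors of the exponential specification can be checked numerically. In the sketch below, the certainty equivalents are invented, and λ (written lam in the code) plays the role of the coefficient of aversion to model uncertainty:

```python
import math

# Smooth ambiguity criterion (15) with phi(x) = -exp(-lam * x), expressed
# here as a certainty equivalent: -(1/lam) * log E_mu[exp(-lam * c)].
# Certainty equivalents c for one fixed action under three models
# (numbers invented for illustration); mu is a uniform prior.
ce = {"m1": 2.0, "m2": 5.0, "m3": 9.0}
mu = {m: 1.0 / 3.0 for m in ce}

def V_smooth(lam):
    return -math.log(sum(mu[m] * math.exp(-lam * ce[m]) for m in ce)) / lam

seu = sum(mu[m] * ce[m] for m in ce)          # ambiguity-neutral benchmark
assert abs(V_smooth(1e-6) - seu) < 1e-3       # lam -> 0: back to the SEU average
assert abs(V_smooth(200.0) - min(ce.values())) < 1e-2  # lam -> inf: maxmin limit
assert V_smooth(1.0) < seu                    # aversion lowers the evaluation
```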

Non-Bayesian Analysis
We already performed non-Bayesian analysis when presenting Wald's maxmin criterion, in which priors play no role. Yet, a different departure from the Bayesian framework originates in the work of Gilboa and Schmeidler (1989).
Multiple priors The multiple priors approach of Gilboa and Schmeidler (1989) relaxes the assumption that the policy maker's information about model uncertainty is quantified through a single probability distribution μ. Instead, it allows for the possibility that it is quantified by a set C of priors, because the policy maker does not have sufficient information to specify a single prior over the different models. The multiple priors decision criterion is

V_mp(a) = min_{μ∈C} ∑_{(θ,γ)∈Θ×Γ} R(a, θ, γ) μ(θ, γ).   (16)

In contrast with Wald's extreme criterion, with which it is sometimes confused, the multiple priors criterion of Gilboa and Schmeidler (1989) considers the least favorable among all the classical subjective expected utilities determined by each prior in C. In our climate policy example, one prior may be the uniform distribution, which gives equal weight to all the possible models, while another prior may not consider some values of the CCR as plausible (in which case the corresponding pairs (θ, γ) receive probability 0). The classical subjective expected utilities are then computed for each prior distribution, and the optimal policy is the one that maximizes the expected reward obtained under the "worst" prior.
Criterion (16) has also often been called the maxmin criterion, but it is less extreme than it may appear at first glance. The set C of possible priors incorporates both an attitude toward uncertainty and an information component: a smaller set C may reflect better information and/or less uncertainty aversion. In any case, Ghirardato et al. (2004) axiomatize a more general α-version that can accommodate milder, even positive, attitudes toward uncertainty.
Two criteria that we have already encountered are special cases of the multiple priors model. First, the classical subjective expected utility criterion (8) is recovered when the set C is a singleton (i.e., it contains only one element). Second, we return to Wald's maxmin criterion (14) when the set C is maximal, in that it consists of the set Δ(Θ × Γ) of all possible prior probabilities. Indeed, we have

min_{μ∈Δ(Θ×Γ)} ∑_{(θ,γ)∈Θ×Γ} R(a, θ, γ) μ(θ, γ) = min_{(θ,γ)∈Θ×Γ} R(a, θ, γ).

So, Wald's maxmin can be interpreted as the extreme case of maximal "prior uncertainty".
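The following sketch (with invented rewards) evaluates criterion (16) for a small set C and then checks the claim above numerically: since the minimized functional is linear in the prior, taking C maximal reduces the minimum to a point mass on the worst model, i.e., to Wald's criterion:

```python
# Multiple priors criterion (16) on invented expected rewards, plus a check
# that C = all priors recovers Wald's maxmin: the minimum of a linear
# functional over the simplex is attained at a vertex (a point mass).
R = {"m1": 7.0, "m2": 4.0, "m3": 6.0}   # R(a, model) for one fixed action a
models = list(R)

def seu(prior):
    return sum(prior[m] * R[m] for m in models)

# A small set C of priors: uniform, and one that rules out model m1
C = [{"m1": 1 / 3, "m2": 1 / 3, "m3": 1 / 3},
     {"m1": 0.0, "m2": 0.5, "m3": 0.5}]
V_mp = min(seu(p) for p in C)
assert abs(V_mp - 5.0) < 1e-12          # the "worst" prior is the second one

# With C maximal, the worst prior is the point mass on the worst model:
point_masses = [{m: float(m == k) for m in models} for k in models]
assert min(seu(p) for p in point_masses) == min(R.values())
```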
Robustness A more general criterion, known as the variational decision criterion, has been axiomatized by Maccheroni et al. (2006). It is written as

V_vr(a) = min_{μ∈Δ(Θ×Γ)} [ ∑_{(θ,γ)∈Θ×Γ} R(a, θ, γ) μ(θ, γ) + c(μ) ].

Under this criterion, each prior μ is penalized by a convex cost function c. Importantly, if c is strictly convex, the criterion V_vr becomes differentiable.
This criterion has a penalization form familiar from robust control theory. In particular, if c is dichotomous (i.e., it is 0 if μ belongs to some set C and +∞ otherwise), we are back to the multiple priors criterion (16). By contrast, if c has the relative entropy form λ⁻¹ H(μ ∥ μ̄) with respect to a reference prior μ̄ and a coefficient λ > 0, we have the multiplier decision criterion of Hansen and Sargent (2001, 2008). Because of a convex analysis equality (Dupuis and Ellis 1997, p. 27), the multiplier decision criterion can be equivalently written in the smooth ambiguity form

V_hs(a) = −(1/λ) log ∑_{(θ,γ)∈Θ×Γ} e^{−λR(a,θ,γ)} μ̄(θ, γ).

Indeed, as Hansen and Sargent (2007) and Cerreia-Vioglio et al. (2011) note, the multiplier decision criterion lies, essentially, in the intersection of the smooth ambiguity averse and variational classes of criteria. Examples of applications of the multiplier decision criterion in climate change economics are in Athanassoglou and Xepapadeas (2012), Xepapadeas and Yannacopoulos (2017), and Rudik (2020).
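The equality from Dupuis and Ellis (1997) can be verified numerically: the entropy-penalized minimum is attained at an exponentially tilted prior and coincides with the log-sum-exp form. All numbers below are invented for illustration:

```python
import math

# Numerical check of the convex-duality equality behind the multiplier
# criterion: -(1/lam) log E_mubar[exp(-lam R)] equals the entropy-penalized
# minimum, attained at a prior that exponentially tilts the reference prior
# mubar toward low-reward models.
R = {"m1": 2.0, "m2": 5.0, "m3": 9.0}   # expected rewards under each model
mubar = {m: 1.0 / 3.0 for m in R}       # reference prior
lam = 0.8

Z = sum(mubar[m] * math.exp(-lam * R[m]) for m in R)
V_multiplier = -math.log(Z) / lam       # smooth-ambiguity (log-sum-exp) form

def penalized(mu):
    """Expected reward under mu plus the relative-entropy penalty."""
    kl = sum(mu[m] * math.log(mu[m] / mubar[m]) for m in R if mu[m] > 0)
    return sum(mu[m] * R[m] for m in R) + kl / lam

mu_star = {m: mubar[m] * math.exp(-lam * R[m]) / Z for m in R}  # the minimizer
assert abs(penalized(mu_star) - V_multiplier) < 1e-9
assert penalized(mubar) >= V_multiplier  # any other prior does weakly worse
```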

Other Approaches
The approaches discussed so far have a normative motivation. They assume that policy makers must cope with uncertainty without expecting to reduce everything to risk, a pretension that tacitly presumes better information than they typically have. Making decisions under a fictitious, even delusional state of information seems hardly a rational way to proceed. That said, research has proposed other approaches with a descriptive motivation. These include, for example, prospect theory (see Wakker 2010). However, their descriptive motivation makes them less relevant for the climate policy problem that we consider.
Finally, another criterion known as minmax regret, due to Savage (1951), is also sometimes used in the environmental literature. Because it violates the independence of irrelevant alternatives, a basic rationality tenet, we do not discuss this criterion here and refer interested readers to Marinacci (2015).
In summary, Fig. 5 illustrates the numerical decision criteria that we discussed.

Application
To illustrate the different optimal climate policies prescribed by the decision criteria that we presented, again consider the example of our policy maker who wants to choose the optimal mitigation policy to put in place. For the sake of simplicity, we assume that the objective is to choose the level of GHG emissions that maximizes the net level of output of the economy. This level simply corresponds to the level of output net of the damages due to climate change and the costs necessary to reduce emissions (the so-called abatement costs). Because of the presence of scientific and socio-economic uncertainties, the net level of output is itself uncertain. 17 Given the best available scientific and economic information at her disposal, the policy maker knows that 33 climate/economy "models" may potentially describe the impact of climate change on the economy. These models come from the combination of the 11 CCR values describing the possible relationship between GHG emissions and temperatures and the three different relationships between temperature increases and economic damages (see Sect. 2.2). Each model contains an aleatory component. Assuming that beliefs have the separable form (11), we can index these 33 models by θ only. It is then possible to compute the expected reward R_q(a, θ) associated with any emission policy a. 18 We can then express these 33 reward functions in monetary terms by computing the certainty equivalents c(a, θ) = u⁻¹(R_q(a, θ)). Figure 6 plots these certainty equivalents.
The certainty equivalents represent, for each of the 33 models and each level of cumulative anthropogenic emissions, the certain amount of net output that a policy maker deems worth as much as the risky net output. In other words, each certainty equivalent represents a measure of net output that integrates the attitude toward the aleatory part of uncertainty. The policy maker, however, does not know which is the correct certainty equivalent. The certainty equivalent is, in that sense, itself uncertain because it depends on the values of the different structural parameters used. There are thus 33 potential certainty equivalents, depending on whether the damage function is quadratic (first column), exponential (second column) or sextic (third column) and also on the value of the CCR parameter (represented by different colors in Fig. 6). For each particular model representing the impact of climate policy on economic output, it is possible to determine the optimal action to put in place. To do so, the policy maker needs to find the level of cumulative emissions maximizing the certainty equivalents (represented by the dashed vertical lines). These optimal levels of cumulative GHG emissions since preindustrial levels range from 1.54 to 2.26 TtC, depending on the model considered. Unsurprisingly, lower CCR parameters induce higher optimal levels of emissions, while the use of a sextic damage function to characterize the impact of climate change tends to favor lower emission policies.
As the ranking of the certainty equivalents is the same as that of the expected utilities, 19 we can first analyze the results of Fig. 6 in the light of the unanimity criterion. As we show, up to the level of 1.54 TtC any given level of cumulative emissions is unanimously better than any other inferior level of emissions. To see this, consider, for example, the climate policy geared to 1.5 TtC of cumulative emissions. For any of the 33 models presented in Fig. 6, we find that this policy dominates (i.e., it leads to a higher certainty equivalent than) any other policy with a lower level of cumulative emissions (e.g., 1 TtC). Analogously, for emission levels above 2.26 TtC, a policy geared to a specific level of cumulative emissions is always unanimously dominated by any policy geared to a lower level. For levels of cumulative emissions between these thresholds, 1.54-2.26 TtC, however, it is impossible to find any policy satisfying the unanimity condition.

17 The gross level of output and the abatement costs are also potentially uncertain. However, because we focus on the types of uncertainty described previously, here we do not consider these additional sources of uncertainty.

18 In this example, the consequence function is simply the net output, computed as ρ(a, s, θ) = Y_gross(1 − C(a)) ∕ (1 + D(a, s, θ)), where Y_gross is the gross output, D(a, s, θ) represents the damages associated with climate change, and C(a) is the abatement cost. Both damages and costs depend on the action taken (a represents the level of GHG emissions). The abatement cost function is assumed to be nearly cubic, as in Nordhaus and Sztorc (2013). The von Neumann-Morgenstern utility function u used is a power function, with a constant relative risk aversion coefficient of 1.5.

19 A certainty equivalent is nothing but a monotonic transformation of an expected utility.
The incompleteness of unanimity preferences therefore prevents any decision to be taken in such a situation, so another decision criterion needs to be followed in order to make a decision.
If the policy maker decides to behave in a very pessimistic way by taking into account only the model providing the lowest expected reward, she only considers the combination CCR = 2.1/sextic damage and fixes the level of cumulative emissions to 1.54 TtC. This policy maker is extremely uncertainty averse and uses Wald's maxmin criterion (14), illustrated in black in Fig. 7. Alternatively, if the policy maker treats aleatory and model uncertainty in the same way, she aggregates the expected rewards by taking a weighted average over them, in which the weights represent the degree of belief in each specific model. In practice, this means that an overall certainty equivalent aggregating the different certainty equivalents associated with each specific model can be computed. This overall certainty equivalent incorporates the policy maker's attitude toward model uncertainty exactly in the same way as it incorporates her attitude toward risk. The overall certainty equivalent under a uniform prior over the possible models, μ(θ) = 1∕33 for all θ, is represented in blue in Fig. 7. The decision criterion in this case is the classical subjective expected utility (SEU) criterion (8). The optimal decision is a cumulative level of emissions of 1.85 TtC since preindustrial levels, which corresponds to the solution of problem (9). Instead, if the policy maker is averse to uncertainty, and thus dislikes epistemic uncertainty more than risk, but is not as pessimistic as the classical maxmin criterion presupposes, she may compute an overall certainty equivalent by means of a function v, more concave than the function u, representing her attitude toward model uncertainty. An example of such an overall certainty equivalent is represented in red in Fig. 7. 20 In this case, the decision criterion is the smooth one (15), and the optimal level of cumulative emissions is lower than under expected utility. It approximately corresponds to 1.81 TtC since preindustrial levels.
Finally, following the multiple priors approach (16), the policy maker considers different probability measures over the models, computes the expected utility for each of them, and retains only the one providing the lowest expected reward. An example of such an overall certainty equivalent is represented in purple in Fig. 7. It represents the minimum expected reward obtained over two distinct priors: the uniform prior, in which all 33 models are weighted equally, and a prior that considers the lower values of the CCR implausible (and therefore puts weight 0 on them and a uniform prior over the remaining models). The optimal level of cumulative emissions under the multiple priors model in this situation is lower than under expected utility. It corresponds to 1.77 TtC since preindustrial levels. Table 3 summarizes the optimal decisions for each of these criteria.
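While our actual application relies on the calibration of Sect. 2.2, the qualitative pattern of Table 3 (maxmin most cautious, SEU least, smooth and multiple priors in between) can be reproduced with a deliberately crude stand-in: a scalar emission level a, a reward a − θa² with uncertain damage slope θ, and a grid search over policies. Everything below is invented for illustration:

```python
import math

# A crude stand-in for the application: reward(a, theta) = a - theta * a**2,
# with uncertain damage slope theta and a grid of emission levels a.
thetas = [0.2, 0.3, 0.5]
mu = {t: 1.0 / 3.0 for t in thetas}          # uniform prior over the models
grid = [i / 100.0 for i in range(301)]       # emission levels 0.00 .. 3.00

def reward(a, t):
    return a - t * a * a

def v_seu(a):                                # criterion (8)
    return sum(mu[t] * reward(a, t) for t in thetas)

def v_wald(a):                               # criterion (14)
    return min(reward(a, t) for t in thetas)

def v_smooth(a, lam=1.0):                    # criterion (15), exponential phi
    return -math.log(sum(mu[t] * math.exp(-lam * reward(a, t))
                         for t in thetas)) / lam

def v_mp(a):                                 # criterion (16), two priors
    pess = {0.2: 0.0, 0.3: 0.5, 0.5: 0.5}    # prior ruling out low damages
    return min(v_seu(a), sum(pess[t] * reward(a, t) for t in thetas))

a_seu = max(grid, key=v_seu)
a_wald = max(grid, key=v_wald)
a_smooth = max(grid, key=v_smooth)
a_mp = max(grid, key=v_mp)

# More uncertainty aversion pushes optimal emissions down, as in the text:
assert a_wald < a_mp < a_seu
assert a_wald < a_smooth < a_seu
```

On this toy calibration the optima are 1.0 (maxmin), 1.25 (multiple priors), roughly 1.36 (smooth), and 1.5 (SEU), mirroring the ordering of the emission levels reported above.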

Discussion
While the application we presented in the previous section may be too simplistic to be actually used by policy makers to design climate policies, it illustrates the differences between alternative decision criteria that can be used in the presence of uncertainty.
When making a choice in the presence of uncertainty, which criterion should policy makers adopt? As previously argued, the standard line of reasoning that has traditionally been followed in climate policy is that, to make coherent choices, policy makers should use either von Neumann-Morgenstern expected utility if they know the true model or Savage's subjective expected utility with respect to their subjective probabilities over alternative models. In the context of our example, this approach implicitly assumes that if model uncertainty is present, policy makers treat it in the same way as they treat risk, so they will use the "predictive" model (10). For a long time, expected utility theory has been viewed as the only convincing approach to making rational choices under uncertainty. Following this idea, Broome (2012, p. 129) writes, in the context of policy decisions regarding global warming: "The lack of firm probabilities is not a reason to give up expected value theory. You might despair and adopt some other way of coping with uncertainty; you might adopt some version of the precautionary principle, say. That would be a mistake. Stick with expected value theory, since it is very well founded, and do your best with probabilities and values."
While the axiomatic foundations of the expected utility approach appear compelling at first glance, the claim that they constitute a necessary condition for rationality in decision making has, however, been challenged at least since Ellsberg (1961). For example, Gilboa et al. (2008, 2009, 2012) and Gilboa and Marinacci (2013) argue that behaving in accordance with Savage's axioms raises several difficulties and that relaxing the assumption that policy makers are expected utility maximizers might well be rational. This does not mean that policy makers are unable to think probabilistically or that they fail to compute probabilities correctly, but rather that they acknowledge that expected utility requires more information than they actually have, so its use would require arbitrary assumptions to supplement the limited information.
The decision frameworks we present herein are consistent with such an interpretation of rationality, so they are compatible with a normative assessment of optimal climate policies. In a context in which a variety of alternative models exist-each implying a different stochastic forecast-but in which information about the accuracy of each is limited, such alternative decision frameworks may prove to be desirable. Indeed, the decision adopted is robust in that the selected action does reasonably well across a range of models (Mukerji 2009). This property seems particularly valuable when the consequences of the actions taken have long-lasting and global impacts, as in the case of climate change.
When asked to take actions under uncertainty, policy makers might well believe that it is more rational to use one of these alternative criteria than to follow a standard expected utility approach. All too often, uncertainty has been used as an excuse for insufficient action in climate policy making. The most important, but potentially most difficult, thing to do is to acknowledge that there are things that are just not known, and then to act nevertheless.