California Institute of Technology, Pasadena, CA and École Polytechnique Fédérale de Lausanne, Switzerland
Human decision making under uncertainty is the outcome of complex neurophysiological processes involving, among others, constant re-evaluation of the statistics of the problem at hand, balancing of the various emotional aspects, and computation of the very value signals that are at the core of modern economic thinking. The evidence suggests that emotions play a crucial supporting role in the mathematical computations needed for reasoned choice, rather than interfering with it, even if emotions (and their mathematical counterparts) may not always be balanced appropriately. Decision neuroscience can be expected in the near future to provide a number of effective tools for improved financial decision making.

1. Introduction

Finance studies decision making under uncertainty. For a long time, this study has been normative, addressing the question: how should one choose when faced with uncertain outcomes? This has led to the development of utility theory, and more specifically, expected utility theory and mean-variance analysis. The latter has proven to be most helpful when multiple options can be chosen at one time (portfolio analysis). Normative decision analysis still dominates asset pricing theory, based on the idea that there is a marginal investor (or an aggregate investor) and that somehow she must be doing things right. From experiments, however, we know that this is far from a foregone conclusion. For instance, in situations of ambiguity (some of the outcome probabilities are unknown), the irrational investor may determine prices even if he is clearly not the marginal investor (Bossaerts et al 2008a).

At the individual level, normative analysis has long been known to make invalid predictions. Ever since (Allais 1953), choices have been consistently found to violate the two foundational axioms of expected utility, namely, the independence axiom and the sure-thing principle. This has led to the emergence of behavioral finance.
Behavioral finance also starts from choice, but rejects the idea that it reflects maximization of a rational (expected) utility function. Nevertheless, it has one important feature in common with normative finance: choices still are interpreted "as if" they maximize some utility. Even the choice of utility profile is suggestive of nostalgia toward expected utility: there are underlying states of nature, and utility is obtained as the summation, over these states, of state probabilities (perhaps weighted) and utilities of state-specific outcomes. Indeed, prospect theory (Kahneman & Tversky 1979) can be viewed as an attempt to characterize actual human choice [and recently also non-human primate choice (Chen et al 2006)] within the straightjacket of normative theory. A reference point needed to be introduced, probabilities needed to be biased in specific ways, and the idea of decreasing marginal utility had to be abandoned for the loss domain.

But is it really true that choice is the outcome of a utility maximization process? Take risk seeking for losses, for instance. What if this is a bias that comes about because of the way decisions are made, and not really the result of increasing marginal (hedonic) utility for outcomes below the reference point? Ostensibly innocuous consequences follow from practical implementation of the idea that choice under uncertainty reflects maximization of (prospect theory) utility. E.g., wealth managers should encourage their clients to buy "structured products" that implement short positions in put options.

But how could we know where choice really comes from? It is here that neuroscience comes in. Neuroscience allows us to study the entire process of decision making, from initial perception of a "stimulus" (which conveys new information and/or new investment options), to valuation and motivation, and the very act of choosing.
When looking for inspiration in neuroscience, it is important from the outset to realize that neuroscience is a sub-field of biology, and hence, that it uses an overarching principle to understand neurobiological phenomena (analogous to absence of arbitrage or the efficient markets hypothesis of neoclassical finance), namely, evolutionary fitness. Specifically, the brain is viewed as an organ that is optimized, through evolutionary pressure, to do things right in its natural environment. There is ample evidence for this in sensory processing in the brain. E.g., auditory signals are optimally mixed with visual signals to generate accurate guesses of the localization of the source of the signals (Shams et al 2005). "Optimal" means: in accordance with Bayes' law. When choice under uncertainty is concerned, however, the types of risks that the brain would be optimized for may be very different from risks routinely encountered in the financial sphere. Indeed, while many ecologically relevant risks (e.g., temperature changes) can readily be described as Gaussian and predictable, returns generated through organized financial markets are distinctly leptokurtic and unpredictable. So, it makes sense to ask whether there is a mismatch between the brain and financial markets. But we cannot tell
as long as we don't know how decisions come about in the brain. That is one of the tasks of an area of neuroscience called decision neuroscience.

Decision neuroscience has progressed dramatically over the last ten years, following the discovery of the role of dopamine neurons in prediction under uncertainty (Schultz et al 1997). The speed of discovery has been growing over the last five years, so that surveying the accomplishments becomes like describing a moving target. By the time this review is published, some claims made here will be outdated, and questions raised may have been answered. Still, on the matter of neuroeconomics, a number of reviews have recently appeared. All too often these reviews were not really surveys but "critical assessments" – from both proponents and opponents of the field – based on only a few articles with a distinct "economics" angle, effectively bypassing the vast and sophisticated literature in decision neuroscience. The pick-up in speed is in large part due to the introduction of concepts, theories and results from economic theory and experimental economics. Hence some would refer to decision neuroscience as "neuroeconomics," or, when it concerns risk and uncertainty, "neurofinance." I would like to refrain from using these terms here, primarily to emphasize that neuroscience had much to say about decision making even before the advent of "neuroeconomics." The goal in this review, in contrast, is not only to let the economist talk, but also the decision neuroscientist.

When exploring decision neuroscience, one is immediately confronted with an inspiring differentiation. In decision theory, all risks are the same – there is no distinction by source. Yet the brain evidently distinguishes between (at least) two types: uncertainty generated by an unintentional source ("nature," a random number generator, i.e., a financial market without insiders), and that generated by an intentional source (a strategic opponent, i.e., a financial market with insiders). The differentiation manifests itself in the engagement of different functional regions of the brain, the behavioral effects when lesions selectively affect these regions, and differences across humans in attitudes towards either source. As we shall see later, the separation is not extreme; common regions do get involved. We will mostly deal here with uncertainty from an unintentional source, leaving intentional uncertainty for a later section.

2. Experimental Finance

Virtually all empirical work in finance uses historical data from the field, i.e., data as they are generated by "naturally occurring (financial) markets." This is in contrast with decision neuroscience, where controlled experimentation is the norm. While this sometimes involves field experiments (Coates & Herbert 2008, Lo & Repin 2002), it mostly means laboratory experiments. If finance is to be enlightened by decision neuroscience, it needs to appreciate the merits of controlled experimentation. This is far from a foregone conclusion, so it is pertinent to elaborate.
The use of experiments is not only dictated by practical considerations (it would be hard to have a trader perform his or her day-to-day duties while in an fMRI scanner), but foremost because of the necessity to control parameters. To generate financial uncertainty caused by an intentional source, one may want to control the number of insiders (Bruguier et al 2008), yet the number of insiders is generally unknown in field markets, even if their presence may be suspected at times [and exploited: see (Bossaerts & Hillion 1991)]. We may want to study the basic principles underlying CAPM (Bossaerts & Plott 2004), yet nature may have decided to provide us with an environment (multi-period securities with skewed payoffs, delegated portfolio management, …) least propitious for CAPM to emerge. Indeed, nature rarely cares about our interests.

Lack of external validity is perhaps the most important reservation one can have about laboratory experimentation. In experimental finance, this mostly concerns the size of payments and risks, which are generally thought to be inadequate for the results to have any relevance for the "real world" (as if a laboratory is not real, but that is not the point). But the belief that decision making under uncertainty in the "real world" is fundamentally different requires proof that the brain regions that are activated in the laboratory become disengaged, while brain regions may become involved that are typically not activated in the laboratory. This is highly unlikely, because the regions that are active inside the laboratory happen to be powerful and pervasive, involving crucial neurotransmitters (such as dopamine) and triggering strong emotional reactions (arousal, fear, …) that would need to be offset or neutralized somehow for them to become irrelevant.

Interestingly, decision neuroscience gives an indication of how to think about this issue. Indeed, one of the key brain regions that tracks reward prediction errors has recently been demonstrated to encode rewards adaptively: when the high reward of a binary gamble is delivered, firing of dopamine neurons in the brainstem is independent of the size of the reward; activation remains the same even when increasing the reward tenfold. Conversely for delivery of the low reward of such gambles. From the point of view of the reward prediction system in the brain, working with small gambles or with large gambles is the same; everything is just scaled. While this was first demonstrated for the monkey brain (Tremblay & Schultz 1999), this has recently been confirmed for the human brain (Elliott et al 2008). See Fig. 1. Adaptive encoding of rewards can also be observed elsewhere, such as in the orbitofrontal cortex (the region right above the eyes), which has been shown to encode utility (more on that later).

There is of course a simple explanation for adaptive encoding (or scaling): there is a physical constraint on the amount of firing a given type of neuron can achieve. Since the range of stimuli that the neurons are possibly confronted with is far beyond that of their firing, the brain has opted for scaling. Adaptive encoding may not be sufficient to establish external validity of laboratory experiments, as we shall see: while adaptive encoding has been observed for the ranges of outcomes and risks in the laboratory, there is no guarantee that it extrapolates without limit, beyond a certain range.
Of course, one could always resort to "thought experiments," by asking subjects for hypothetical choice in imagined situations [as in the original experiments in (Kahneman & Tversky 1979)]. But the problem is that these thought experiments may not engage the same brain processes as when experiencing the outcomes. In fact, all the evidence points to the contrary. Decision making by professional traders has been shown to be highly emotional (Lo & Repin 2002), and it actuates hormones such as testosterone and cortisol (Coates & Herbert 2008) that have a direct influence on the very brain regions we see at work in the laboratory. Likewise, the alleged widespread use of pharmacological "enhancers" such as cocaine suggests attempts by professional traders to directly impact the dopamine neurons referred to above. Indeed, cocaine is known to affect the synaptic strength of dopamine neurons, causing projection neurons to become hypersensitive, specifically to positive reward prediction errors (Saal et al 2003). Emotions, as we shall see, are an integral part of reasoned decision making under uncertainty. If the hypothetical questions cannot induce the same emotional reaction as when the risks at hand are experienced "first hand," predicted effects may not emerge. This could explain why experimental economists have noticed that they obtain better results when they pay for performance. Field data also tell us that outcomes may need to be experienced before they significantly influence choice (Feng & Seasholes 2005).

Many financial decision problems, such as retirement portfolio allocation, have a significant time component. It is a serious challenge to emulate such problems in the lab. Nevertheless, clever choice of experimental design does allow experimenters to expose subjects to multi-year horizons, even if only indirectly. For instance, when subjects are faced with nutritious food and tasty food, choice effectively implies a substantial time dimension, because of consequences for health far into the future (Hare et al 2008a). The time horizon in recent experiments has been extended to several weeks (Kable & Glimcher 2007, McClure et al 2004), but not years.

3. Perception Of Risk And Reward

One of the most fascinating findings of the last two or three years is that the brain separately encodes the risk and the reward when confronted with a gamble. The suggestion is all the more credible because the metric of risk encoded in the brain appears to be variance. This suggests that the brain engages in "mean-variance analysis," whereby gambles are represented in terms of their statistical properties (moments) rather than as a random variable living on an abstract state space. (Ongoing work extends this to higher moments, such as skewness and kurtosis.)

How do we know this? From experiments like the following. Subjects are presented with a deck of ten randomly shuffled cards, numbered 1 to 10. Two cards will be drawn from the deck, one after the other, without replacement. Subjects are asked to bet whether the second card is going to be lower than the first. Once the subject posts her bet (for $1), the two cards are displayed. We are interested in brain activation when the cards are revealed, because that is when the brain should encode the corresponding changes in expected reward and risk.
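The arithmetic of the card game is easy to verify. The sketch below only computes the probability that the bet wins given the first card; the function name is mine, and payoffs are ignored:

```python
from fractions import Fraction

def prob_win(bet, first_card, n_cards=10):
    """Probability the bet wins, given the first card, when two cards
    are drawn without replacement from cards numbered 1..n_cards."""
    lower = Fraction(first_card - 1, n_cards - 1)
    return lower if bet == "lower" else 1 - lower

# After a 7 is revealed, a "lower" bet wins with probability 6/9.
assert prob_win("lower", 7) == Fraction(2, 3)
# Before any card is shown, either bet wins half the time on average.
avg = sum(prob_win("lower", c) for c in range(1, 11)) / 10
assert avg == Fraction(1, 2)
```

Revelation of card 1 thus moves the reward probability anywhere from 0 to 1, which is exactly the variation the imaging analysis exploits.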
In our card game, expected reward (mathematical expectation of payoff) and reward risk (measured as variance) change each in their own way as a function of probability of reward: expected reward increases linearly (Fig. 2A) and risk displays a symmetric, inverted U-shaped pattern, with peak at probability 0.5 (Fig. 2B). If expected reward and risk are encoded somewhere in the brain, activation ought to exhibit the same patterns.

Notice that this is a very simple game, involving cards, which most subjects should be intimately familiar with. Probabilities are not mentioned explicitly. This is deliberately done, to shortcut the "mathematical" region of the brain that would be summoned to translate the abstract numerical data into something the reward and risk processing regions can deal with. Indeed, explicit representation of probabilities would add a layer of complexity, which would be interesting, but which we may not be ready to venture into yet. Also, notice that subjects really do not have to make any decision at the points in time that interest us (revelation of the two cards), and we are at first interested in perception.

Fig. 3A displays a cross-section of the human brain highlighting areas that were identified because activation in a fMRI (functional magnetic resonance imaging) scanner increased linearly in reward probability. Reward probability was computed from the bet that the subject chose and the number on the first card. Activation is measured in the first second after the display of card 1 (modulo the hemodynamic response delay). Fig. 3B shows mean activations across 19 subjects and 95% confidence intervals, stratified by level of reward probability. This verifies that the activation patterns are indeed increasing and linear, which is required if the activations are to be associated with expected reward.

With the exception of a small activation in the upper left part of the brain, the regions highlighted in Fig. 3A are sub-cortical projection areas of the dopamine neurons in the brainstem, the ventral striatum in particular. Fig. 3C situates the reward-tracking dopamine neurons in the ventral tegmental area of the brainstem. There are also dopamine neurons in the substantia nigra, and they project to the ventral striatum, but these are primarily engaged in other tasks (and damage to these neurons leads to Parkinson's disease). We already discussed the importance of the dopamine neurons in tracking rewards. Traditionally, they were thought to cause arousal. The link between this emotional reaction and prediction errors is intriguing, and we shall elaborate later.

There is a long path in the brain from perception to choice. Does that mean that the resulting activations are irrelevant for choice? No: by now it is known that these activations predict choice (Christopoulos et al 2008, Kuhnen & Knutson 2005). The results have been replicated many times, both subcortical (O'Doherty et al 2004) and cortical (Hampton et al 2006). This is not to say that the perceptual activations determine choice; additional engagement is needed. In situations of free choice, however, activations in a purely perceptual task like ours have been found to better predict choice than (past) choice itself, especially when one is close to indifference (Berns et al 2008). That is, perceptual neural signals of valuation parameters provide better forecasts than preferences revealed through (past) choice.
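The two patterns just described, linear expected reward and inverted-U variance, can be checked directly under a simplifying payoff convention. The sketch below assumes the bet pays 1 when it wins (probability p) and 0 otherwise; only the qualitative shapes matter for the argument:

```python
# Bernoulli gamble paying 1 with probability p, 0 otherwise
# (illustrative payoff convention, not taken from the experiment).
def expected_reward(p):
    return p                     # linear in reward probability

def reward_variance(p):
    return p * (1 - p)           # symmetric inverted U, peak at p = 0.5

probs = [i / 8 for i in range(9)]
assert all(expected_reward(p) == p for p in probs)
assert max(probs, key=reward_variance) == 0.5
assert reward_variance(0.25) == reward_variance(0.75)
```

Because the two curves are functionally independent of each other, a regression of brain activation on both can separate regions tracking the mean from regions tracking the variance.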
The results of Fig. 3 have been replicated many times [e.g., (Dreher et al 2005)]. Interestingly, we now know that this activation reflects reward prediction errors, namely, the difference between what was expected before the display of the stimulus (card 1) and the updated expectation or outcome (McClure et al 2003, O'Doherty et al 2003). That the activations represent reward prediction errors has recently been confirmed in an analysis that required minimal assumptions about the prior expectation (Caplin et al 2008, Niv et al 2008). As such, the activation is backward-looking: it is phasic (meaning: of short duration) and time-locked to the stimulus display.

Fig. 4A shows a cross-section of the human brain with regions where activation changes quadratically with reward probability. Activation in at least two additional cortical regions correlates with risk (Huettel et al 2005), namely, inferior frontal gyrus, left and right insula, and the thalamus. These regions are involved in relaying information about "bodily state" – emotions – and sensory input (e.g., visual cues) to cortical areas. One of them, right insula, will be briefly mentioned later: its shallow location allows for direct electrical or magnetic manipulation, with interesting behavioral effects. Fig. 4B displays the pattern of activation in one of those regions, right insula: mean activation is lowest for zero and unit reward probability, highest at about 0.5, and symmetric around 0.5, as required if this activation is to reflect reward variance.

The results of Fig. 4 are significantly different from those in Fig. 3. The timing and modulation of the signals differ: the risk signals emerge later, are sustained, and seem to be time-locked to the display of card 2. This is consistent with the interpretation that they reflect expectation of risk that is to be resolved upon revelation of card 2. As such, the activation is forward-looking; it is anticipation, and therefore not a prediction error. The brain does also encode a corresponding prediction error, which in this case should be referred to as the risk prediction error, as we shall see.

In our card task, probabilities are "known," in that subjects should be familiar with basic features of card games. The contrast of activation when probabilities are known (which economists would refer to as "pure risk") against when these are not known ("ambiguity") shows that the dopamine projection areas such as ventral striatum are less activated under ambiguity (Hsu et al 2005). In fact, another activation emerges, in amygdala (located approximately where the number "3" is displayed in Fig. 4A, but more lateral). Amygdala appears to signal a need to start learning; it forms part of a circuit (which also includes the dopamine system) that has been associated with goal-directed learning (Balleine et al 2003). Econometricians would refer to this signal as parameter estimation uncertainty (or its inverse, precision), to be distinguished from the irreducible risk that remains in any stochastic environment even when probabilities are perfectly known. It is interesting to observe that the brain likewise differentiates between these two aspects of uncertainty. It confirms behavioral evidence that humans treat pure risk and ambiguity differently, in contrast with tradition (Ellsberg 1961).
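The reward prediction error logic can be sketched with a simple delta rule. This is a textbook Rescorla-Wagner-style update, not the exact model estimated in the cited studies; the learning rate and reward sequence are illustrative:

```python
# Delta rule: the prediction error is outcome minus forecast, and the
# forecast moves a fraction of the way towards the outcome.
def update(expectation, reward, rate=0.5):
    prediction_error = reward - expectation
    return expectation + rate * prediction_error, prediction_error

v = 0.0
v, pe = update(v, 1.0)    # unexpected reward -> large positive error
assert pe == 1.0 and v == 0.5
v, pe = update(v, 1.0)    # partially expected -> smaller error
assert pe == 0.5 and v == 0.75
v, pe = update(v, 0.0)    # expected reward omitted -> negative error
assert pe == -0.75
```

Note that the signal vanishes once outcomes are fully predicted, which is why phasic dopamine responses track surprises rather than rewards per se.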
Amygdala had always been viewed as the "fear center" of the brain (Adolphs et al 1995, LeDoux et al 1990, Morris et al 1996). Amygdala has been associated with learning only recently. The two views are not mutually exclusive. Fear may be the emotional expression of estimation uncertainty, just like arousal accompanies positive reward prediction errors (and the latter two are both related to activation of the dopamine system). But the link with fundamental concepts from econometrics suggests that the differentiation is not necessarily to be interpreted negatively. Indeed, a recent experiment demonstrates that humans manage to exploit the differentiation to optimize learning under ambiguity (Payzan & Bossaerts 2008).

As mentioned before, the dopamine system encodes a reward prediction error, which plays a crucial role in learning expected rewards. As a matter of fact, the brain does also encode the corresponding risk prediction error. Signals correlating with risk prediction error in our card game have been located in insula (Preuschoff et al 2008), overlapping with the areas where risk prediction had been found in prior studies. See Fig. 5. Risk prediction errors can also be observed in the other cortical structures where risk signals have been found (d'Acremont et al 2008). Mathematically, the risk prediction error is simply the difference between the squared reward prediction error and its anticipation, i.e., reward variance. Econometricians would recognize this as the driving term of an ARCH process (Engle 2002).

One of the striking aspects about brain activation in our card game is that signals of expected reward and risk (and corresponding errors) appear every time the game is played. That is, the brain insists on re-evaluation of basic statistical features rather than simply retrieving some valuation (utility) number from memory. Evidently, the brain is best viewed as a computer with incredible processing capacity but a small hard disk. If this is a general property, it should inspire new modeling of bounded rationality. But constant re-evaluation of the statistics of a gamble takes time. One therefore wonders whether this has an effect on choice when subjects are forced to make decisions before all neural computations can be completed (which would be within 2-3 seconds after stimulus display). Revealed preferences are indeed significantly different when subjects have to indicate their choice in the 2nd second, compared to choice in the 4th or 6th second (Bollard et al 2008). This obviously has important implications for design of continuous trading mechanisms, where profit opportunities can sometimes only be achieved if action is taken in a split second.

So far, we have discussed encoding of statistical features of a gamble in the human brain. In financial decision theory, these are merely the inputs to valuation, and therefore, to expected utility. Where, then, are the value signals in the brain? Perhaps the activation displayed in Fig. 3 actually does not reflect expected reward prediction (errors), but expected utility (errors). We don't see any nonlinearity, however (concavity for risk averse and convexity for risk seeking subjects). Then again, one should not over-interpret the evidence, as it is not entirely clear whether the fMRI signal is linear in firing of individual neurons.
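The risk prediction error defined above, the squared reward prediction error minus its anticipation, drives an ARCH-like recursion when fed back with a learning rate. The sketch below is illustrative; the numbers and rate are mine, and this is not the estimation procedure of the cited studies:

```python
# Variance forecast updated by the risk prediction error, the driving
# term of an ARCH-like process.
def risk_update(var_forecast, reward_pe, rate=0.25):
    risk_pe = reward_pe ** 2 - var_forecast   # risk prediction error
    return var_forecast + rate * risk_pe, risk_pe

var = 0.0
var, rpe = risk_update(var, reward_pe=1.0)  # surprise raises forecast risk
assert rpe == 1.0 and var == 0.25
var, rpe = risk_update(var, reward_pe=0.0)  # calm outcome lowers it
assert rpe == -0.25 and var == 0.1875
```

The recursion makes the econometric analogy concrete: large surprises raise the anticipated variance, quiet spells let it decay, exactly as squared innovations drive conditional variance in ARCH models.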
To complicate matters, even patterns in neuronal firing should be interpreted with care. Since signal precision decreases with firing rate, one could expect to observe a convex relationship between reward probability and firing, even if the neuron at hand merely encodes expected reward.

In our card game, we kept (reward) magnitude the same: bets were always for $1; only probabilities were varied. One recent study did manipulate magnitudes. This revealed strict concavity in the reward prediction error signals in ventral striatum. Significantly, the degree of concavity increased with risk aversion (Tom et al 2007). But if this means that the signals in ventral striatum reflect expected utility, what are the risk signals in insula or inferior frontal gyrus about? Do these reflect variance of utility? At this moment, we don't know. Preliminary evidence (Christopoulos et al 2008) suggests that the risk-induced activations in insula and inferior frontal gyrus correlate with risk aversion, and hence, do reflect subjective variance, and not expected reward. While the idea of separately encoding utility variance and expected utility may seem strange at first, it should be recalled that it was Allais who first suggested it as an explanation for the violations of expected utility axioms that he observed in human choice (Allais 1953).

Also, signals encoding probability of reward and magnitude of reward appear to live alongside signals that correlate with expected reward (Tobler et al 2007, Yacubian et al 2006). The probability signal in ventral striatum even shows biases that are reminiscent of probability weighting in prospect theory (Berns et al 2008, Hsu et al 2008b). Why has the brain chosen to encode all four features of a gamble: expected reward, reward variance, reward probability and reward magnitude? After all, (expected) utility can be computed either by summing magnitudes after multiplication with probabilities, or by combining expected reward and risk. Perhaps these two approaches are not equivalent, in contrast with received wisdom (Kroll et al 1984)? Potential fundamental differences have recently been explored both theoretically and empirically (d'Acremont & Bossaerts 2008).

Separate encoding of the key decision-theoretic features of a problem is not limited to gambles. The human brain likewise separately tracks equity and efficiency in problems of moral judgment (Hsu et al 2008a). Interestingly, equity correlates with activation in regions engaged in risk encoding, such as insula, while efficiency activates ventral striatum, involved in expected reward encoding. This finding is unlikely to be coincidental. It certainly lends support to models of moral decision making based on a trade-off of equity against efficiency, just like the evidence in Figs. 3 and 4 supports mean-variance analysis as a model of human financial valuation.

4. Emotions

The regions of the human brain where we see clear risk and reward signals include the dopamine system, insula, and amygdala. These regions have in common one important feature of the human condition, namely, emotions.
We already mentioned arousal in the context of dopamine (sometimes inflated artificially, through cocaine for instance). Amygdala has been associated with fear, and insula (or at least the anterior part, where we observe risk signals) is known to be a crucial relay of emotions to the cortex. At the same time, the evidence surveyed in the previous section suggests that activation in these same brain regions reflects encoding of "cool" mathematical quantities such as expected reward, risk (measured as variance), reward prediction errors, ARCH-like risk prediction errors, estimation uncertainty, etc. Has the brain decided to relate these mathematical quantities, usually associated with cool-headedness, to emotions?

Indeed, recent studies have shown that strong emotions such as disappointment, elation, regret, relief, envy, gloating are associated with mathematical error signals such as (in the same order): (negative and positive) reward prediction errors, fictive play prediction errors, and prediction errors relative to other people's rewards (Bault et al 2008, Coricelli et al 2005). All these can be found in the brain structures that we already discussed. In the same vein, we should mention a study where the ability to sense one's own heartbeat was found to correlate with the size of the insula (Critchley et al 2004). Together with the findings that (i) the heartbeat of professional traders correlates with risk (Lo & Repin 2002) and that (ii) insula encodes risk (Fig. 4), one may conjecture that the human body has chosen to employ a somatic marker (heartbeat) to encode risk.

Emotions are not bad per se. (Bechara et al 1997) show that subjects who fail to express emotional anticipation when taking risky decisions (because of specific brain lesions) consistently go for inferior financial choices. This led the authors to posit that emotional processes guide reasoned decision making, which they referred to as the Somatic Marker Hypothesis (Bechara & Damasio 2005). When that somatic marker is absent, the consequences may be harmful. Another way to put this is that reasoned choice is the result of the balancing of emotional inputs. Human decision making under uncertainty thus takes on features of the celebrated multi-attribute choice theory (Lancaster 1966), albeit that "characteristics" need to be replaced by the emotional aspects of a gamble. This of course would explain why (emotional) experience is important for rational choice (Feng & Seasholes 2005). Because of the mathematical accuracy of the signals encoded in the emotional regions of the human brain, an updated interpretation of the findings would be that emotions are somehow subsumed in the mathematics of optimal decision making. Biases such as "ambiguity aversion" and irrational "regret avoidance" are to be viewed as the result of maladapted weighing of the different emotional aspects of a gamble.

It is important to realize that emotion-based decision making may be entirely sub-conscious; its speed may preclude conscious brain processes from interfering. At times, such decision making may be in conflict with choice based on explicit reasoning.
It should be repeated that emotion-based decision making need not be bad or unsophisticated. The mathematical precision with which activation in insula correlates with an ARCH-like risk prediction error (Fig. 5) underscores this. As do the reward prediction errors that dopamine neurons encode: the latter reveal a learning algorithm that is exquisitely adapted to environments with multiple stimuli, action possibilities, and reward opportunities (Montague et al 1996, Schultz et al 1997). This very same algorithm, called temporal difference (TD) learning, is used in machine learning to solve complex dynamic programming problems.

Signs of both fast, emotion-based decision making as well as slow, logically reasoned choice have led many to propose dual-system theory (Evans 2003, Kahneman & Frederick 2002). According to this theory, humans are capable of opportunistically switching back and forth between these two ways of generating decisions, but the precise neurobiology remains poorly understood. Prefrontal and lateral orbitofrontal cortical regions, as well as the adjacent anterior cingulate cortex, are thought to be involved in explicit, conscious decision making. A recent study on framing (De Martino et al 2006) does illustrate how this may work. When prospects were framed in terms of losses, subjects appeared to take on more risk, and this behavior was associated with increased activation of amygdala (the "fear" center of the human brain, as mentioned before). However, subjects who showed least (or no) impact from framing also simultaneously displayed increased prefrontal activation, suggesting attempts of the conscious system to control quick emotional reactions (Bossaerts et al 2008b, Kahneman & Frederick 2007). EEG may provide further insight, because it allows for better time resolution. In a recent study on the impact of disappointment on subsequent decision making, a secondary EEG signal emerged in frontal regions after 375 milliseconds, but only in individuals who successfully overcame the detrimental effect of reflexive reaction to disappointment (Tzieropoulos et al 2008). While the arrival of this secondary signal is surprisingly quick, it is not too fast for conscious experience. How do we know that the secondary activation is really conscious? We don't (yet).

5. Value Signals

Standard economic thinking (including behavioral finance) starts from the premise that choice results from maximization of value. Partly under the influence of economists, one of the main issues that decision neuroscientists have been interested in is whether there are value signals in the brain. The first handbook in neuroeconomics is mostly devoted to this issue (Glimcher et al 2009). At one level, the existence of a value signal is likely to be confirmed: if acts follow the instructions of motor or pre-motor neurons that fire most intensely, this firing must be in accordance with maximization of some utility function if the resulting acts satisfy the axioms of revealed preference. Consistent with this conjecture, (Platt & Glimcher 1999) found that firing of neurons in the intraparietal sulcus (a fold in the lateral cortex beyond the brain midline) of the Macaque brain correlated with expected utility.
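As a rough, purely illustrative sketch of the temporal difference (TD) learning algorithm mentioned above (the setup and parameters below are our own, not from any cited study), consider a cue followed by a delay and then a probabilistic reward; prediction errors propagate value backward from the reward to the cue.

```python
# TD(0) learning on a tiny cue -> delay -> reward chain. Purely illustrative:
# setup and parameters are assumptions, not from any cited study.
import random

def td_chain(p_reward=0.7, alpha=0.05, episodes=4000, seed=0):
    """Prediction errors propagate value backward from reward to cue."""
    rng = random.Random(seed)
    v_cue, v_delay = 0.0, 0.0
    for _ in range(episodes):
        # cue -> delay: no reward yet, so the error bootstraps on v_delay
        v_cue += alpha * (v_delay - v_cue)
        # delay -> end: probabilistic reward is delivered
        r = 1.0 if rng.random() < p_reward else 0.0
        v_delay += alpha * (r - v_delay)
    return v_cue, v_delay

v_cue, v_delay = td_chain()   # both estimates approach the expected reward, 0.7
```

The bootstrapping step (updating the cue's value from the delay state's value rather than waiting for the final reward) is what lets the same machinery solve multi-step dynamic programming problems.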
But what about the executive regions in the front of the brain, far removed from the motor cortex? By now, the evidence in favor of signals that economists would recognize as "value" is well established. Valuation signals have indeed been discovered in the orbitofrontal cortex of the Macaque monkey (Padoa-Schioppa & Assad 2006, Tremblay & Schultz 1999). This has recently been confirmed for the human brain (Plassmann et al 2007) and extended (Hare et al 2008b): activation in medial prefrontal cortex correlates with willingness-to-pay, activation in ventral striatum represents value prediction errors (as already discussed), and more lateral activation (corresponding to the activation identified in the Macaque brain) correlates with economic surplus (i.e., utility net of cost). In the context of gambles, the signals in lateral orbito-frontal cortex exhibit the characteristics one would expect if they are to convey expected utility: they increase with expected reward, yet decrease with risk (variance) for risk-averse subjects, while they increase with risk for risk-seeking subjects (Fig. 6).

The presence of value signals is handy in many respects. For one thing, the evolution of these signals over time teaches us many things about learning. (Hampton et al 2006) have been able to show that value signals in a randomly reverting two-armed bandit task reveal learning that is distinctly Bayesian. It is true that the behavioral data already marginally favored Bayesian updating (over simple reinforcement learning), but the value signals show the right properties at the right time, so that even the most skeptical mind would be convinced of the hypothesis of Bayesian learning.

Value signals have also shed light on the endowment effect. This cognitive bias is not directly reflected in the above value signals, but may be the result of reference-based encoding in ventral striatum and insula (De Martino et al 2009). Specifically, activation in insula suggests that the endowment effect is the consequence of the level of risk that the subject perceives. The willingness to accept (WTA) is higher because the subject is now endowed with the gamble and is faced with exchanging this for another risky position; the willingness to pay (WTP) is lower because she perceives less risk in her starting position (cash endowment). The latter may seem strange, because the gamble is to be sold for cash (i.e., a risk-free position). But this ignores the fact that WTP is generally elicited using the Becker-DeGroot-Marschak (BDM) mechanism, which itself induces risk. The endowment effect in this case hence results from a fundamental misconception of the BDM mechanism (or a violation of the independence axiom), and it should be avoidable through appropriate training, which is precisely what has been shown recently (Plott & Zeiler 2005). The finding suggests novel hypotheses, such as the dependence of the difference between WTP and WTA on risk aversion.
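To see how BDM elicitation involves risk, recall the mechanism: a price is drawn at random, and the subject buys the gamble at that price whenever her stated bid is at least the price. The sketch below is our own illustration (the square-root utility and all parameters are assumptions, not from the cited work); it checks that an expected-utility maximizer's optimal bid equals her certainty equivalent, just below the gamble's expected value.

```python
# Illustrative BDM auction for buying a gamble; utility and parameters assumed.
import math

def eu_gamble(w, utility, gamble):
    """Expected utility of holding the gamble at wealth w."""
    return sum(p * utility(w + x) for p, x in gamble)

def expected_bdm_utility(bid, utility, gamble, wealth=10.0, n=201):
    """Expected utility of stating `bid` when the BDM price is uniform on
    [0, 1]: pay the price and receive the gamble whenever bid >= price,
    otherwise keep the cash."""
    total = 0.0
    for i in range(n):
        price = i / (n - 1)
        if bid >= price:
            total += eu_gamble(wealth - price, utility, gamble)
        else:
            total += utility(wealth)
    return total / n

utility = math.sqrt                     # an assumed risk-averse utility
gamble = [(0.5, 1.0), (0.5, 0.0)]       # 50/50 gamble paying 1 or 0
best_bid = max((i / 100 for i in range(101)),
               key=lambda b: expected_bdm_utility(b, utility, gamble))
```

Truthfully bidding the certainty equivalent (about 0.49 here, below the expected value of 0.5) is optimal; a subject who misunderstands the mechanism and shades her bid will report a WTP below her true certainty equivalent.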
6. Uncertainty Created By An Intentional Source

So far, we have considered uncertainty that is "objective," in the sense that it is generated by a source without goal-directedness or intention behind it. Opposed to this are situations where a strategic agent causes the uncertainty (an animal, a human, a sophisticated computer, or perhaps a market with insiders?). Game theory has long had to cope with this type of uncertainty, but has settled on analyses that do not distinguish it from "objective" uncertainty: valuation is performed using standard expected utility, and updating follows Bayesian principles. The resulting equilibrium concepts are straightforward Bayesian extensions of Nash equilibrium, and learning is modeled as simple reinforcement learning, fictitious play, or some mixture [experience-weighted attraction learning (Camerer & Ho 1999)].

But neuroscience has suggested that there are fundamental differences in the way humans approach uncertainty emanating from an intentional source. Psychology has long differentiated dealing with an intentional opponent: assessing the risk in such a situation has become known as "Theory of Mind" or "mentalizing," and specific regions of the brain seem to be dedicated to it, most importantly paracingulate cortex (the region of the frontal cortex above the orbitofrontal cortex), cyto-architecturally a relatively recent brain region (Gallagher & Frith 2003). Paracingulate cortex is indeed preferentially activated during play of strategic games with other humans, more so than with computers programmed with a simple rule (Gallagher et al 2002, McCabe et al 2001), and this finding has recently been confirmed (Coricelli & Nagel 2008). Likewise, activation in a simple game of imperfect information is more like that in situations of ambiguity, not pure risk (Hsu et al 2005).

(Hampton et al 2008) have explored the mathematical computations in paracingulate cortex during game play and discovered that it encoded an "influence update," namely, the error from predicting how one's opponent would shift strategy as a result of one's own moves. As such, this signal effectively pushes human game play beyond simple reinforcement learning, a possibility that has been raised only occasionally in the game theory learning literature (Camerer et al 2002, Stahl 2000). Most significantly, it demonstrates that humans recognize the intentionality of the target (opponent) they are learning about.

Exploiting the simple formal structure of the beauty contest, (Coricelli & Nagel 2008) show how skill in the beauty contest and mathematical proficiency were unrelated, despite the fact that the mathematics tests required the very calculations that are implicit in successful game play. Significantly, in neither study were regions traditionally associated with formal mathematical and logical thinking engaged. This discovery still obtains when the game is far more complex: in financial markets with insiders, skill in predicting future price movements does not correlate with mathematical proficiency (Bruguier et al 2008).
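The beauty contest's simple formal structure can be conveyed with a textbook level-k sketch (our illustration of the standard 2/3-of-the-average game, not the model estimated by Coricelli & Nagel 2008): a level-k player iterates the best response k times from a naive level-0 guess.

```python
# Illustrative level-k reasoning for the 2/3-of-the-average beauty contest.
# A textbook sketch; parameters are assumptions, not estimates from the paper.

def level_k_guess(k, factor=2/3, level0=50.0):
    """A level-k player iterates the best response k times from a naive guess."""
    guess = level0
    for _ in range(k):
        guess *= factor   # best response to opponents guessing `guess`
    return guess

# Iterating the best response drives guesses toward the Nash equilibrium of 0.
guesses = [level_k_guess(k) for k in range(4)]
```

Level-1 guesses 2/3 of 50, level-2 guesses 2/3 of that, and so on, converging to the unique Nash equilibrium of zero; depth of reasoning about others, rather than raw calculation, is what the game measures.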
7. Discussion

What should a finance scholar make of all this neurobiology? In the first place, that it provides plausibility and discipline to the modeling of human decision making. It teaches us that values are not merely retrieved from memory, but re-computed every time, with the risk of occasional mis-estimation; choice is the outcome of a process, from perception to action.

Likewise, it suggests where one should place emotions. For instance, it is well known that humans envy and gloat, and that these emotional reactions appear to affect utility, and hence future choice (Bault et al 2008). Why would envy and gloat affect utility? It can be conjectured that this leads the reward prediction error (the difference between experienced utility and expected utility) to react not only to what an individual did herself (as in traditional reinforcement learning) or what she failed to do [fictive learning, as in (Lohrenz et al 2007)], but also to what others chose to do. That is, even in this context, the reward prediction error automatically includes options chosen by others, and learning of optimal strategies is thereby enhanced dramatically. The alternative for the brain would have been to set up a costly parallel learning system that simulates the utility of others. Or it could have chosen a "mirror system" (Rizzolatti & Craighero 2004) that stores actions of others without recording their value, without reference to one's own utility; but the blindness of such a system renders its cognitive use questionable. This should matter especially in situations where the consequences of unchosen actions are hard to assess, so that fictive learning is ineffective. As such, it also suggests why humans can be good at, say, playing strategic games even if, when asked explicitly, they do not seem to be good at the mathematical computations implicit in their play.

Neurobiology does ask us to evaluate everything in terms of a single criterion, namely, evolutionary plausibility. Such a criterion should be familiar to finance scholars, who have been applying their own, namely, absence of arbitrage opportunities, or, although only for neoclassicists, the efficient markets hypothesis, to evaluate pricing patterns in financial markets. Alleged cognitive biases remain suspect until they make sense in an ecologically relevant setting: choice biases can be expected to arise when the brain faces a new adversary or an unknown environment. Financial markets may be a prime example of the brain charting new territory. Nevertheless, the success of the continuous double auction in generating efficient outcomes despite its strategic complexity (Friedman & Rust 1993), and the facility with which humans can detect and exploit insider information in it (Bruguier et al 2008), suggest that certain designs exploit the computational capabilities of the human brain better than others. But the development of the double auction was accidental, in the sense that it did not emerge from a thorough study of the human brain and its capacity. Novel conjectures will be the by-product of any insistence that theories of human decision making be validated neurobiologically. It is hoped that future discoveries in decision neuroscience will facilitate targeted development of new tools for improved financial decision making and exchange.
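The distinction between experienced and fictive prediction errors can be made concrete with a toy two-armed bandit in which the unchosen arm is also updated from its foregone payoff. This is an illustrative sketch in the spirit of fictive learning, not the task of Lohrenz et al (2007); all parameters are assumptions.

```python
# Toy two-armed bandit illustrating fictive (counterfactual) learning.
# Illustrative only; parameters and setup are assumptions, not from the paper.
import random

def bandit_values(fictive_rate, p=(0.8, 0.2), lr=0.2, trials=500, seed=1):
    """Value estimates for two arms: the chosen arm learns from its payoff,
    the unchosen arm learns from its foregone payoff at rate `fictive_rate`."""
    rng = random.Random(seed)
    values = [0.5, 0.5]
    for _ in range(trials):
        choice = 0 if values[0] >= values[1] else 1   # greedy choice
        payoffs = [1.0 if rng.random() < q else 0.0 for q in p]
        for arm in (0, 1):
            rate = lr if arm == choice else fictive_rate
            values[arm] += rate * (payoffs[arm] - values[arm])
    return values

with_fictive = bandit_values(fictive_rate=0.2)     # both arms tracked
without_fictive = bandit_values(fictive_rate=0.0)  # unchosen arm goes stale
```

With fictive updating, both arms' value estimates track their reward rates; with the fictive rate set to zero, the value of an unchosen arm simply stops moving, which is why fictive learning fails when the consequences of unchosen actions are hard to assess.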
Among such tools, one can think of timing restrictions, pharmacological remedies, etc. For instance, further study of how long it takes the brain to re-evaluate gambles each time they are presented should inspire timing restrictions on trading platforms, with the aim of avoiding choice biases induced by time pressure.

To the extent that decision making under uncertainty engages powerful emotional processes and pervasive neurotransmitter release, pharmacological remedy is also within reach [e.g., glucocorticoid receptor agonists block the synaptic effects of stress on dopamine neurons in the ventral tegmental area (Saal et al 2003)]. Direct control of risk attitudes is now possible through electrical (Fecteau et al 2007) or magnetic (Knoch et al 2006) stimulation of the inferior frontal gyrus, where, as we discussed before, a risk signal resides. One can of course also envisage simple affect control (Kuhnen & Knutson 2008). But such control raises ethical issues, and investors should be warned of the side-effects of medication [such as L-Dopa (Gschwandtner et al 2001)] that interferes with the brain systems involved in decision making under uncertainty.

One also wonders whether it is possible to teach humans to become better financial decision makers. This will be difficult, because what we are asking for is the equivalent of controlling a visual bias: a visual bias is harmless if one knows when it occurs, but it really does not go away, and the brain will be fooled any time it is inattentive. Still, we can teach tricks, such as looking for GARCH features to detect the presence of insiders (Bruguier et al 2008), just like autistic patients can now be instructed to look in the eyes of others to overcome their handicap in discerning the intention of others (Spezio et al 2007). Such tricks are useful, but their scope is limited.

The foregoing also answers a commonly raised question: should computers replace humans in financial markets? The answer is no. Humans are much better at certain tasks than computers, but humans have clear limitations when asked to solve problems for which their brains are maladapted. The goal of decision neuroscience is to identify the limitations and to introduce computers only when needed to overcome them, just as the horizon was invented as a tool to help human pilots (who incidentally have still not been replaced with computers). The horizon helps because orientation fails when flying through clouds. This failure should not be surprising, because the human brain and its evolutionary antecedents have never had to fly. Nor did they have to be concerned with finance until recently, and even then not without resorting to tools such as double auctions.
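For a concrete sense of what a "GARCH feature" is, the sketch below implements the standard GARCH(1,1) conditional-variance recursion (parameter values are illustrative, not those used by Bruguier et al 2008): a large squared return raises the next variance forecast, producing the volatility clustering such detectors look for.

```python
# GARCH(1,1) conditional-variance recursion; parameters are illustrative,
# not those of any cited study.

def garch_variance(returns, omega=0.05, alpha=0.1, beta=0.85):
    """One-step-ahead forecasts: sigma2' = omega + alpha*r**2 + beta*sigma2."""
    sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
    path = []
    for r in returns:
        path.append(sigma2)
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return path

path = garch_variance([2.0, 0.0, 0.0])   # one large shock, then quiet
```

The shock pushes the next variance forecast up, and the forecast then decays geometrically at rate beta once returns calm down; sustained runs of such jumps are the kind of statistical signature a trader can be taught to watch for.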
Figure Captions

Figure 1
Median firing of dopamine neurons in the brains of two Macaque monkeys (Animal A and B) after receiving no or little reward vs. various higher rewards (juice). Reward magnitudes changed from one trial to another, but were perfectly anticipated. Firing shows adaptation to the situation: the reaction to delivery of 0.15ml causes the same firing as when a low reward of 0ml is anticipated (Animal B), and delivery of a low reward of 0.05ml is no different from that when ten times the amount is delivered (0.50ml), the monkey having learned that the higher reward would be delivered with 50% chance in all cases. Vertical lines indicate 95% confidence intervals. Source: (Tobler et al 2005b).

Figure 2
Expected reward and reward variance as a function of probability of reward. Brain signals that encode these statistical features should show similar relationships. Source: (Preuschoff et al 2006).

Figure 3
Location of (some of the) activations that increase linearly in reward probability: Left and Right ventral striatum (vst); and of (some of the) activations that are quadratic in reward probability: Left and Right anterior insula (ins) and mediodorsal nucleus of the thalamus (md) (front of the brain is on top). A cross-section of the brain from front (Right) to back shows key projection areas of dopamine neurons in the midbrain: dopamine neurons encoding reward prediction errors can be found in the ventral tegmental area, and they project, among others, to the ventral striatum (region activated in A). Mean activation in Left ventral striatum (vst) across 19 subjects (diamonds), stratified by reward probability, and mean activation in Right anterior insula (ins) across 19 subjects (diamonds) as a function of reward probability show, respectively, the increasing and the inverted U-shaped patterns needed to encode expected reward and reward variance; line segments indicate 95% confidence intervals. Source: (Preuschoff et al 2006) and openlearn.ac.uk.

Figure 4
Time line of the card game. Of interest are brain activations resulting from display of card 1 (and card 2), when beliefs (expected reward, reward variance) are updated in function of the number on the card and the bet. Source: (Preuschoff et al 2006).

Figure 5
A. Location of activation that correlates with risk prediction error upon display of card 1: Left and Right anterior insula. B. Mean activation in Right anterior insula across 19 subjects (diamonds) as a function of risk prediction error upon display of card 1 (red) and card 2 (blue); the outlier at zero risk prediction error refers to trials when no risk remained after card 1. Line segments indicate 95% confidence intervals. Source: (Preuschoff et al 2008).

Figure 6
A. Location of activation correlating with expected utility of gambles: prefrontal cortex, slightly to the right of the medial line (eye ball is visible in bottom right corner). B. Activation in this location as a function of expected reward and reward variance, per subject, stratified by revealed risk attitude; line segments indicate 95% confidence intervals. Source: (Tobler et al 2007).
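The expected-reward and reward-variance relationships described in the captions above are simple to write down: for a gamble paying 1 with probability p and 0 otherwise, expected reward is linear in p, while reward variance p(1 - p) is quadratic, an inverted U peaking at p = 0.5. A minimal sketch:

```python
# Reward statistics of a binary gamble paying 1 with probability p, 0 otherwise.
# Expected reward is linear in p; reward variance p*(1-p) is an inverted U.

def expected_reward(p):
    return p

def reward_variance(p):
    return p * (1 - p)

# The variance peaks at p = 0.5, where the outcome is maximally uncertain.
peak = max((p / 100 for p in range(101)), key=reward_variance)
```

This is why a region encoding expected reward should show activation monotone in reward probability, while a region encoding risk should show the inverted U-shaped pattern, as reported for anterior insula.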
Summary Points
1. Choice is the outcome of a long neuro-physiological process, from perception to action; this process includes constant re-evaluation of the statistical features of the decision problem.
2. From the statistics of gambles, the brain constructs the very valuations (utilities) that are at the core of modern economic thinking. These valuation signals can be found in executive areas of the human brain that are remote from the motor cortex and that are not directly involved in the acts that implement a choice.
3. Emotions are an integral part of reasoned decision making: their role lies not in interfering with choice, but in the evaluation of the options at hand. Brain regions that encode the mathematical features of a decision problem are often also involved in the corresponding emotional reactions, such as disappointment, regret, relief, elation, envy and gloating, suggesting that emotions may largely be subsumed in the mathematics of decision models. Choice can be viewed, to a large extent, as the result of a delicate balancing of the emotional features of a decision problem.
4. While the theory of games of imperfect information borrows largely from financial decision theory, the human brain deals differently with uncertainty generated by an intentional source than with "objective" uncertainty.
5. Cognitive biases need to be understood as a mismatch between a novel environment and a neurobiological system that is optimized for a different, ecologically more relevant, problem.
6. Cognitive biases need to be evaluated against evolutionary fitness, just like financial market phenomena are judged against absence of arbitrage (or the efficient markets hypothesis).
7. The claim that laboratory experiments in finance do not have external validity can only be upheld if it is demonstrated that the brain circuitry engaged in the laboratory somehow becomes neutralized. This is highly unlikely, because that circuitry is in control of pervasive neurotransmitters and of strong emotional responses.
8. An improved understanding of the neurobiological foundations of human decision making under uncertainty should lead to the development of effective tools of assistance, such as appropriate emotional control, trading interfaces that exploit natural skills, and even pharmacological decision support.
Sidebar: Tracking Neuronal Firing
Firing of individual neurons can be tracked by inserting an electrode in the brain, a rather invasive procedure done in humans only if there is a medical need, and even then the location of the electrode is determined by medical concern and not scientific value. Recently, a number of imaging techniques have become available that allow one to non-invasively but indirectly measure firing of clusters of neurons or, more accurately, activation of projection areas triggered by firing of upstream neurons (Goense & Logothetis 2008). Functional magnetic resonance imaging (fMRI) is one of them. The "functional" in fMRI stands for the fact that a large part of the brain is scanned in its entirety about every 2 seconds, so that changes in activation can be picked up at different stages of a task. Basically, fMRI localizes the presence of oxygen-rich blood. While oxygen-rich blood will concentrate where there is activation, the effect comes with some delay (about 4s), referred to as the hemodynamic response delay. Pictures of brain activation like Fig. 3A are effectively maps of t statistics of the treatment effects, whereby only activations beyond a cut-off p level are retained (generally 0.001 or less).
Mini-Glossary
Neuronal Firing: Electrical impulses that are sent from the neuron cell body outward through its axon (an extension of varying length).
Neurotransmitter: Chemical released from the axon following firing that activates a dendrite of the projection neuron across the synaptic cleft.
Dopamine: One type of neurotransmitter (others include serotonin, acetylcholine, norepinephrine, glutamate).
EEG (Electroencephalogram): Recording on the scalp of electrical activity generated by the brain, with excellent time resolution but difficult source localization.
Primary Reinforcers: Physical rewards such as juice.
Secondary Reinforcers: Indirect rewards, such as money or social reputation, that engage valuation circuitry as if they were primary (Izuma et al 2008).
Keywords: Neuroeconomics, Experimental Finance, Risk, Emotions

Acknowledgment: The author thanks the Swiss Finance Institute and the U.S. National Science Foundation (Grant SES-0527491) for financial support.
Neural correlates of expected value. Dolan R. Review of Finance 8: 135‐69 Bossaerts P. In Neuroeconomics: Decision Making and the Brain. N. Le Comportement de l'Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l'Ecole Americaine. PloS one 3: e3477 Bechara A. Killcross AS. Lakshminarayanan V. Journal of Economic Theory 104: 137‐88 Caplin A. 1997. J. 2008. Interdependent Utilities: How Social Ranking Affects Choice Behavior. The somatic marker hypothesis: A neural theory of economic decision. Moore S. Hsu M. Ho T‐H. Schultz W. Science 275: 1293‐5 Berns GS. The Effect of Lesions of the Basolateral Amygdala on Instrumental Conditioning. Damasio AR. Basic Principles of Asset Pricing Theory: Evidence from Large‐Scale Experimental Financial Markets. Tranel D. Bossaerts P. Rutledge RB. 2008. risk. Ambiguity in asset markets: theory and experiment. How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior. Damasio AR. Capra M. Plott C. Bossaerts P. Risk and Reward Preferences Under Time Pressure. Liu R. 2008. Damasio H. Hillion P. Under Review Chen MK. 1995. Bossaerts P. Neurosci. 2008b. Santos LR.References Adolphs R. 2003. 2008. Econometrica 67: 827‐74 Camerer CF. 1991. Econometrica 21: 503‐46 Balleine BW. Dickinson A. E Fehr. Nonlinear Neurobiological Probability Weighting Functions For Aversive Outcomes. RA Poldrack. J. The neurobiological foundations of valuation in human decision‐making under uncertainty. 15: 5879‐91 Allais M. Under Review 21 . C. Games and Economic Behavior 52: 336‐72 Bechara A. Ho TH. and risk aversion contributing to decision making under risk. Chong J‐K. London: Academic Press Bruguier A. Sophisticated Experience‐Weighted Attraction Learning and Strategic Teaching in Repeated Games. 2008a. Under Review Camerer C. Exploring the Nature of “Trading Intuition". 23: 666‐ 75 Bault Ng. Under Review Bossaerts P. Damasio AR. Market Microstructure Effects of Government Intervention in the Foreign Exchange Market. 
Ghirardato P. 2004. Chappelow J. Guarnaschelli S. Rangel A. Under Review Bossaerts P. ed. Rustichini A. PW Glimcher. 1953. Tobler P. Zame WR. Quartz S. Journal of Political Economy 114: 517‐37 Christopoulos G. Dean M. 2005. Coricelli G. 2008. Damasio H. Tranel D. The Review of Financial Studies 4: 513‐41 Bossaerts P. 2002. Nursimulu AD. 1999. Neurosci. Experience‐weighted Attraction Learning in Normal Form Games. Glimcher PW. 2006. Deciding Advantageously Before Knowing the Advantageous Strategy. Preuschoff K. NeuroImage Bollard A. 2008. Fear and the human amygdala. Measuring beliefs and rewards: A neuroeconomic approach. CF Camerer.
Endogenous steroids and financial risk taking on a London trading floor. Medial orbitofrontal cortex codes relative rather than absolute value of financial rewards in humans. Functional imaging of 'theory of mind'. 2004. 1961. Boggio P. 2007. Sirigu A. & Behavioral Neuroscience 8: 363‐74 d'Acremont M. Frames. Knoch D. European Journal of Neuroscience 27: 2213‐8 Ellsberg D. Agnew Z. Dolan RJ. 2008. The Double Auction Market: Institutions. Neural systems supporting interoceptive awareness. 2008. Nat Neurosci 8: 1255‐62 Coricelli G. Ohman A. 2001. Pathologic Gambling in Patients with Parkinson's Disease. Herbert J. Frith C. Decision Making and The Brain. Aston J. J. Do Investor Sophistication and Trading Experience Eliminate Behavioral Biases in Financial Markets? Review of Finance 9: 305‐51 Friedman D. Dolan RJ. A neural basis of strategic thinking. Under Review Critchley HD. Li X. Bossaerts P. Ambiguity. 2003. The neurobiology of reference‐ dependent value computation. Cereb Cortex Elliott R. Affective. Cognitive. 2008. Frith CD. Imaging the intentional stance in a competitive game. and the Savage Axioms. Neural Coding of Distinct Statistical Properties of Reward Information in Humans. Pascual‐Leone A. Neurobiological studies of risk assessment: A comparison of expected utility and mean‐variance approaches. Regret and its avoidance: a neuroimaging study of choice behavior. and Rational Decision‐Making in the Human Brain. 2005. New frontiers for ARCH models. Lu Z‐L. 2009. Renaud S. Seasholes MS. Roepstorff A. Wiens S. Journal of Neuroscience In Print De Martino B. 2002. Kumaran D. Neural correlates of risk prediction errors during reinforcement learning in humans. Trends in Cognitive Sciences 7: 454‐9 Fecteau S. Clinical neuropharmacology 24: 170 22 . Beauty contest in the brain. NeuroImage 16: 814‐21 Glimcher PW. The Quarterly Journal of Economics 75: 643‐69 Engle RF. Fuhr P. Jack A. 2005. Gallagher HL. Neuroeconomics. Seymour B. Fehr E. Rotshtein P. Critchley HD. 
2002. Dolan RJ. Deakin JFW. 2003. 429 pp. 2008. Camerer CF. Theories. Dolan RJ. Van der Linden M. Fregni F. Berman KF. Trends in Cognitive Sciences 7: 77‐83 Gallagher HL. Joffily M. 2008. UK: Academic Press Gschwandtner U. and Evidence: Westview Press. Under Review De Martino B. Sultani N. Neurosci. Nagel R. O'Doherty JP. Risk. Diminishing Risk‐Taking Behavior by Modulating Activity in the Prefrontal Cortex: A Direct Current Stimulation Study. Journal of Applied Econometrics 17: 425‐46 Evans JSBT. Bechara A. 2009. 2005. Proceedings of the National Academy of Sciences 105: 6167‐72 Coricelli G. In two minds: dual‐process accounts of reasoning. eds. Nat Neurosci 7: 189‐95 d'Acremont M. London. Kohn P. 2006.Coates JM. Biases. 27: 12500‐5 Feng L. Holt B. Science 313: 684‐7 Dreher JC. Rust J. Kumaran D. eds. Poldrack RA. 1993.
Knutson B. Bossaerts P. Representativeness Revisited: Attribute Substitution in Intuitive Judgment. Neural systems responding to degrees of uncertainty in human decision making. pp. Schultz W. Regard M. The psychophysiology of real‐time financial risk processing. Self‐control in decision‐making involves modulation of the vmPFC valuation system. 2008b. DW Griffin. The Right and the Good: Distributive Justice and Neural Encoding of Equity and Efficiency. Camerer CF. Neurosci. Romanski LM. The neural basis of financial risk taking. In Heuristics and Biases: The Psychology of Intuitive Judgment. 2002. J Cogn Neurosci 14: 323‐39 23 . O'Doherty JP. 2006. ed. Frames and brains: elicitation and control of response tendencies. Science 310: 1680‐3 Hsu M. Frederick S. Neurosci. Gianotti LRR. Journal of Finance 39: 47‐61 Kuhnen CM. Trends in Cognitive Sciences 11: 45‐6 Kahneman D. Knutson B. 2006. 2007. Frederick S. Dissociating the Role of the Orbitofrontal Cortex and the Striatum in the Computation of Goal Values and Prediction Errors. et al. Nat Neurosci 10: 1625‐33 Kahneman D. The Journal of Political Economy 74: 132 LeDoux JE. Levy H. 2007. 2005. Disruption of Right Prefrontal Cortex by Low‐Frequency Repetitive Transcranial Magnetic Stimulation Induces Risk‐Taking Behavior. Prospect theory: An Analysis of Decision Under Risk. Repin DV. Neuron 47: 763‐70 Kuhnen CM. 2008a. O'Doherty J. Treyer V. Neural correlates of mentalizing‐ related computations during strategic interactions in humans. 2005. O'Doherty JP. Anen C. Bhatt M. Bossaerts P. The Influence of Affect on Beliefs. Decisions under Uncertainty: Probabilistic Context Influences Activation of Prefrontal and Parietal Cortices. Zhao C. Mean‐Variance versus Direct Utility Maximization. 1990. Rangel A. 49‐81: Cambridge University Press Kahneman D. 2002. Camerer CF. Neurosci. D Kahneman. Quartz SR. Under Review Hare TA. McCarthy G. 
The Role of the Ventromedial Prefrontal Cortex in Abstract State‐Based Inference during Decision Making in Humans. Neurosci. J. Neurosci. Rangel A. 1966. Proceedings of the National Academy of Sciences 105: 6741‐6 Hare TA. The lateral amygdaloid nucleus: sensory interface of the amygdala in fear conditioning. 2008a. Pascual‐Leone A. J. J. 2005. Krajbich I. J. A New Approach to Consumer Theory. Camerer CF. J. T Gilovich. Neural response to reward anticipation under risk is nonlinear in probabilities. 2008. Glimcher PW. 2008b. 25: 3304‐11 Kable JW. Science 320: 1092‐5 Hsu M. 26: 6469‐72 Kroll Y. Preferences and Financial Decisions: SSRN Lancaster KJ. Tranel D. 26: 8360‐7 Hampton AN. 10: 1062‐9 Lo AW. Econometrica 47: 263‐91 Knoch D. Adolphs R. Song A. Cicchetti P. 2008. Xagoraris A. The neural correlates of subjective value during intertemporal choice. Markowitz HM. Tversky A. 1979. Under Review Huettel S. 1984. 28: 5623‐30 Hsu M.Hampton AN. Camerer CF.
Dolan RJ. 1996. Human Insula Activation Reflects Risk Prediction Errors As Well As Risk. Schultz J. Neuron 38: 329 Padoa‐Schioppa C. Montague PR. Nature 400: 233‐8 Plott CR. Dayan P. 2001. 1997. Dayan P. 2003. Under Review Plassmann H. Laibson DI. J Neurosci 16: 1936‐47 Morris JS. Camerer CF. Edlund JA. 2007. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. Zeiler K. A Neural Substrate of Prediction and Reward. Cohen JD. 1999. Neurons in the orbitofrontal cortex encode economic value. Science 304: 452‐4 O'Doherty JP. Dayan P. 37: 577‐82 Schultz W. Berns GS. Neurosci. 2008. the "Endowment Effect. Rowland D. O'Doherty J.Lohrenz T." Subject Misconceptions. Bossaerts P. Trouard T. Perrett DI. 27: 9984‐8 Platt ML. Frith CD. Neural Differentiation of Expected Reward and Risk in Human Subcortical Structures. Houser D. Glimcher PW. 2008. Quartz SR. Craighero L. Neural signature of fictive learning signals in a sequential investment task. Temporal Difference Models and Reward‐Related Learning in the Human Brain. Young AW. Friston K. McCabe K. Assad JA. 28: 2745‐52 Rizzolatti G. Temporal prediction errors in a passive learning task activate human striatum. Proceedings of the National Academy of Sciences 98: 11832‐5 McClure SM. Smith V. Critchley H. Neurosci. The American Economic Review 95: 530‐45 Preuschoff K. Neural correlates of decision variables in parietal cortex. Ryan L. Friston K. 2004. Dissociable Roles of Ventral and Dorsal Striatum in Instrumental Conditioning. Loewenstein G. Neuron 38: 339‐46 McClure SM. Montague PR. A functional imaging study of cooperation in two‐person reciprocal exchange. Dayan P. Proceedings of the National Academy of Sciences 104: 9493‐8 McCabe K. Rangel A. 1996. Science 275: 1593‐9 24 . J. The Willingness to Pay‐Willingness to Accept Gap. Under Review O'Doherty J. 2005. O'Doherty JP. Separate Neural Systems Value Immediate and Delayed Monetary Rewards. 
and Experimental Procedures for Eliciting Valuations. J. Sejnowski TJ. 2004. 2006. Malenka RC. 2003. et al. Dayan P. Bossaerts P. 2006. Neuron 51: 381‐90 Preuschoff K. 2007. Nature 441: 223‐6 Payzan E. Dong Y. Dolan RJ. Science 306: 503‐7 Montague PR. 2003. Deichmann R. Learning and Choice under Nonstationary and Unknown Probabilities. Bonci A. Drugs of Abuse and Stress Trigger a Common Synaptic Adaptation in Dopamine Neurons. A differential neural response in the human amygdala to fearful and happy facial expressions. Annual Review of Neuroscience 27: 169‐92 Saal D. 2004. 2008. Montague PR. A bird in the hand versus two in the tree: Neural prediction errors reveal subjective risk sensitivity. The Mirror‐Neuron System. Quartz S. Orbitofrontal Cortex Encodes Willingness to Pay in Everyday Economic Transactions. Nature 383: 812‐5 Niv Y. Bossaerts P.
Beierholm U.Shams L. Adaptive Coding of Reward Value by Dopamine Neurons. Under review Yacubian J. Neurosci. Science 315: 515‐8 Tremblay L. 2005. Nature 398: 704‐8 Tzieropoulos H. Science 307: 1642‐5 Tobler PN. Dolan RJ. Rule Learning in Symmetric Normal‐Form Games: Theory and Evidence. Schultz W. rationality the exception. Adolphs R. J Neurophysiol 97: 1621‐32 Tom SM. Fiorillo CD. Reward value coding distinct from risk attitude‐related uncertainty coding in human reward systems. Glascher J. 26: 9530‐7 25 . In Man. Schultz W. 2007. Sommer T. Trepel C. Ma WJ. 2000. Fox CR. 2007. Relative reward preference in primate orbitofrontal cortex. Dissociable Systems for Gain‐ and Loss‐Related Value Predictions and Errors of Prediction in the Human Brain. Abnormal Use of Facial Information in High‐Functioning Autism. Adaptive coding of reward value by dopamine neurons. 1999. Hurley R. Games and Economic Behavior 32: 105‐38 Tobler PN. Schroeder K. Grave de Peralta R. NeuroReport 16: 1923 Spezio M. Braus DF. Sound‐induced flash illusion as an optimal percept. 2008. Gonzalez S. Piven J. The Neural Basis of Loss Aversion in Decision‐Making Under Risk. madness is the rule. 2005a. O'Doherty JP. Poldrack RA. 2007. J. Journal of Autism and Developmental Disorders 37: 929‐39 Stahl DO. Schultz W. Science 307: 1642‐5 Tobler PN. 2006. Fiorillo CD. 2005b. Schultz W. Bossaerts P. Buchel C.
Figure 1

Figure 2

Figure 3

Figure 4

Figure 5

Figure 6