I recently completed a finance paper on the implications of prospect theory for portfolio choice and asset pricing. I worked on this with Prof. Enrico De Giorgi during my postdoc at the Swiss Finance Institute. This post is meant as an introduction to this work; the full paper can be downloaded here.
Finance models, like all mathematical models, suffer from the following problem: if you don’t make the initial assumptions simple and easy to work with, the theoretical analysis that follows becomes too difficult to manage. In finance this usually translates into assuming that investors are fully informed, completely rational, and simply out to maximise their expected future utility. You also tend to assume that the returns on risky assets, such as stocks, follow geometric Brownian motion in continuous time, or are log-normally distributed when working in discrete time. These assumptions are somewhat close to reality, yet simple enough to permit theoretical analysis.
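To make the discrete-time assumption concrete, here is a minimal sketch (my own illustration, not code from the paper) that simulates log-normally distributed yearly returns with made-up parameter values:

```python
import numpy as np

# Minimal sketch of the standard modelling assumption: yearly log returns
# are normally distributed, so gross returns are log-normal. The mean and
# volatility below are illustrative values, not parameters from the paper.
rng = np.random.default_rng(0)
mu, sigma = 0.07, 0.20                                # assumed mean and volatility of log returns
gross = np.exp(rng.normal(mu, sigma, size=100_000))   # log-normal gross returns, always > 0

print("mean yearly return:", gross.mean() - 1)
print("sample skewness:", ((gross - gross.mean()) ** 3).mean() / gross.std() ** 3)
```

Even in this toy simulation the gross returns come out positively skewed, a detail that becomes important later in the story.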
So what does the analysis say? Among other things, it says that people should be investing a large proportion of their wealth in stocks. In reality, however, most people don’t own any stock, and most of those who do don’t have a particularly large proportion of their wealth in stocks unless they are very wealthy. Perhaps this is OK in a prescriptive sense, i.e. telling you that you really should consider owning more stock. However, as descriptive models of investors, i.e. models of what investors actually do and why, they seriously fail. Playing with the parameters does not save you: in order to get people holding so little stock you have to push their level of risk aversion far beyond the range of values that have been empirically estimated. Thus, if our theoretical analysis is correct and yet produces the wrong answers, it must be that our basic assumptions were wrong.
This isn’t really news; indeed, it’s well known that people are not rational expected utility maximisers. When we have to make decisions, all sorts of cognitive biases and distortions come into play. Seminal work in this area was done by Kahneman and Tversky. They produced a model of human decision making known as prospect theory, work for which Kahneman later won a Nobel prize (sadly Tversky died some years before the award). Due to some technical problems, this was later refined into cumulative prospect theory, which I will now very superficially describe.
Cumulative prospect theory consists of five main components:
1) narrow framing. What this means is that when you have to make a decision, say to invest in a stock or to take a gamble, you tend to act as though this decision were being taken in isolation. This makes sense given that making an optimal decision with respect to all the risks you face in your entire life, the strictly rational thing to do, is often too complex.
2) reference return. Imagine that the market went up 20% and you made a 10% return on your chosen investments. You probably wouldn’t be happy with that. On the other hand, if the market fell 10% and you made a 5% gain, you would be pretty pleased with yourself. What this shows is that the utility you get from an investment is not simply a function of the actual return, but also depends on how that return compares to some mental point of reference that you have.
3) loss aversion. For most people, the pain of losing $100 is about twice the magnitude of the pleasure of gaining $100. Clearly, this distorts people’s decision making. For example, people may pass up opportunities to make a gain in order to avoid a loss that is comparatively small.
4) probability weighting. People tend to distort probabilities when they make decisions. They act as though low probability events, say winning the lottery or getting a rare disease, are more likely than they really are. Conversely, they act as though quite likely events are slightly less likely than they really are.
5) curved value function. If you were offered a guaranteed gain of $100,000 or a very likely gain of $110,000, which would you take? Most would take the first option. If instead you faced a guaranteed loss of $100,000 or a likely, but not certain, loss of $110,000, which would you take? Most would take the second option. In other words, people are risk averse with respect to gains, but become much more willing to take risks when facing a potential loss. (A small sketch of these last three components follows below.)
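To make components 3 to 5 concrete, here is a small sketch using the standard Tversky and Kahneman functional forms; the parameter values are their published 1992 estimates, not necessarily the exact values used in our paper:

```python
import numpy as np

# Standard Tversky-Kahneman (1992) functional forms; the parameter values are
# their published estimates, not necessarily those used in our paper.
ALPHA, LAMBDA = 0.88, 2.25           # value curvature and loss aversion
GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69  # probability weighting for gains / losses

def value(x):
    """Curved, loss-averse value of a gain/loss x relative to the reference point."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, 1.0, -LAMBDA) * np.abs(x) ** ALPHA

def weight(p, gamma):
    """Inverse-S probability weighting: small p is overweighted, large p underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(value(100.0), value(-100.0))   # losing $100 hurts about 2.25x as much as gaining it
print(weight(0.01, GAMMA_GAIN))      # a 1% chance gets a decision weight of roughly 5.5%
```

The asymmetry in the value function is loss aversion (component 3), the weighting function is component 4, and the curvature exponent is component 5.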
Given all these deviations from being a simple rational expected utility maximiser, it’s perhaps no surprise that financial models that assume expected utility maximisation produce results that don’t match real investor behaviour. The problem, as I mentioned earlier, is that if you add a little more complexity to your initial assumptions you tend to end up with a model that is impossible to theoretically analyse.
In stepped Barberis and Huang. In a great paper these two recently produced, they managed to incorporate the first three aspects of prospect theory into an investor model and still come out with an analysis that is tractable for portfolio choice (what do investors do?) and asset pricing (what do markets made up of these investors do?). If you find this stuff interesting and can handle mathematical finance at a research level, I recommend that you check it out. That said, the Barberis and Huang paper does have some drawbacks. Firstly, it doesn’t include probability weighting. Secondly, it doesn’t include a curved value function. And thirdly, when you put in realistic stock returns, their model doesn’t help explain things like the lack of stock market participation I mentioned earlier; in fact, it actually makes it worse.
In stepped De Giorgi and I. Having recently completed my PhD thesis with Marcus Hutter, I looked at these investors and thought, “Hey, they’re just like reinforcement learning agents. No big deal. If I want to know what investors with probability weighting and a curved value function do, I can just brute-force compute their optimal policy by writing down their Bellman equation and using dynamic programming. Easy!” It was a mystery to me why, seemingly, nobody else was doing this. So off I went to build software to do just that, starting with a simple Merton model…
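To give a flavour of what I mean by brute force, here is a toy one-period version (an illustration of the general idea, not the actual simulation engine, and with made-up parameter values); the full problem repeats this kind of search backwards through time via the Bellman equation:

```python
import numpy as np

# Toy one-period version of the brute-force idea: score each candidate stock
# allocation by Monte Carlo and pick the best. This is only an illustration,
# not the actual simulation engine; all parameter values are made up.
rng = np.random.default_rng(1)
RF = 1.02                                        # assumed gross risk-free return
stock = np.exp(rng.normal(0.05, 0.20, 20_000))   # assumed gross stock returns

def expected_utility(theta, utility):
    """Expected utility of investing a fraction theta of wealth in the stock."""
    wealth = (1 - theta) * RF + theta * stock
    return utility(wealth).mean()

def crra(w, gamma=3.0):
    """Plain CRRA utility for comparison; gamma is the risk aversion."""
    return w ** (1 - gamma) / (1 - gamma)

grid = np.linspace(0.0, 1.0, 101)                # candidate stock fractions
best = grid[np.argmax([expected_utility(t, crra) for t in grid])]
print("optimal stock fraction:", best)
```

Swap the plain CRRA utility for the prospect theory value and weighting functions sketched earlier and you get a very crude version of the kind of investor the full engine handles over many periods.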
A month went by. Another month went by. I was having all kinds of accuracy and stability problems, and Enrico was starting to look a bit worried. More time went by. I too was starting to sweat. Eventually, thank goodness, I managed to understand my problems and worked out how to fix them. So I then attempted a more complex model. After some more weeks of struggle I managed to get that to work as well. I was starting to get the hang of this and to learn the key tricks needed to make it work. I then recomputed the Barberis and Huang results, added probability weighting, and finally also added a curved value function. I seriously had no idea how hard this was going to be when I started. Somehow my ignorance, combined with a stubborn refusal to fail, eventually produced success: a general purpose simulation engine that can tell me how just about any kind of consistent investor is going to behave, including ones that use a full version of cumulative prospect theory.
At this point De Giorgi made a simple but important observation: an investor who uses probability weighting inflates the importance of low probability events when making decisions, which is to say that they are more sensitive to the tails of the stock’s return distribution. Typically we assume that returns are log-normally distributed, for tractability reasons. However, a log-normal distribution has a positive skew; indeed, its left tail ends at 0. Real stock returns, on the other hand, have a negative skew: everybody knows that sudden falls in a stock’s price occur more often than equally large and sudden rises. Thus, if we are going to put probability weighting in, we really need to get the skew right, as the tails of the distribution are likely to be important.
What we did was to take S&P 500 data from the last 60 or so years and fit a skew-normal distribution to the observed returns. A skew-normal distribution is basically just a generalisation of the normal distribution with an extra parameter that lets you control the skew. As expected, when we fitted a skew-normal distribution to real data, it did indeed come back with a negative skew. When we fired up my simulator and gave this distribution to an investor with probability weighting, the investor took one look at that scary negative tail and didn’t want to invest in the stock. This is exactly what the model should predict. In short, we took realistic stock returns and presented them to an investor with a realistic decision making process, complete with a bunch of parameters that had been empirically estimated by others in previous work, and what we got out the other end was realistic investor behaviour!
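For the curious, the fitting step itself is straightforward; here is a sketch of how one might do it (my own illustration, not our actual estimation code, and the data file name is a hypothetical placeholder):

```python
import numpy as np
from scipy import stats

# Sketch of fitting a skew-normal to historical returns. The file name is a
# hypothetical placeholder; the actual data and fitted parameters are in the paper.
returns = np.loadtxt("sp500_yearly_returns.txt")

# The skew-normal adds a shape parameter `a` to the normal distribution;
# a < 0 corresponds to a longer left tail (negative skew).
a, loc, scale = stats.skewnorm.fit(returns)
print("shape parameter a:", a)
print("implied skewness:", stats.skewnorm.stats(a, loc, scale, moments="s"))
```

With sixty-odd yearly observations the fit is of course noisy, but the sign of the skew is the part that matters for the probability weighting story.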
Following this, I went back to look at the Barberis and Huang model and how they computed investor behaviour. Rather than my brute force approach, they had a much more elegant technique. It didn’t take me long to realise that their method could be extended to include probability weighting, giving me a fast way to compute the behaviour of these investors. Attempts to include a curved value function in their method failed; for that we had to continue using my brute force simulator. However, analysis of these investors, both theoretically and via simulation, revealed another interesting effect: wealthy investors act as if they narrow frame their investment decisions less than less wealthy investors do, so the effect above applies more to poorer investors than to wealthy ones. Which is to say that when we take a full cumulative prospect theory model of investors and realistic stock returns, what we see is that less wealthy people don’t hold much stock, while wealthy people tend to put a significant proportion of their wealth into stocks. Again, these results match reality, as various studies show that stock market participation increases rapidly as a function of an individual’s wealth. We then extended this analysis to a market consisting of investors with probability weighting and skewed asset returns and found it easy to obtain realistic risk-free rates and market equity premiums.
I guess the moral of this story is: if you want to get realistic answers out of models of investors, you probably need to account for the ways in which they deviate from a strictly rational, expected utility maximising agent. The hard part, with much help from the prior work of Barberis and Huang, was to come up with ways to make the resulting analysis theoretically and computationally tractable.
I’d like to thank Enrico De Giorgi for being a great supervisor and colleague during this work, and the Swiss Finance Institute in Lugano (via De Giorgi) and St. Gallen (thanks to Fabio Trojani) for my funding, and finally the people of Switzerland whose taxes ultimately paid for all this; I can only hope that you view our efforts as having been worthy of your generosity.
Shane, another great article. I am now motivated to go and read up on prospect theory!
Glad you liked it 🙂
Nice work! What does this refined model say about the equity premium?
Thanks.
The paper is mostly focused on the portfolio choice implications rather than the asset pricing implications. For the latter we see that it’s pretty easy to get a reasonable risk-free rate and equity premium (around 2% and 5% respectively) with standard parameters for everything in the model and the narrow framing parameter set to about 0.035 (see table 6, bottom of last two panels, rightmost two columns).
The narrow framing parameter controls how much the narrow framing utility contributes to the total utility. Nobody knows empirically what this parameter is, so it’s hard to say whether this is a reasonable value.
Interesting article. Thanks for posting.