
CHAPTER 12
UNCERTAINTY
Uncertainty is a fact of life. People face risks every time they take a shower,
walk across the street, or make an investment. But there are financial institutions
such as insurance markets and the stock market that can mitigate
at least some of these risks. We will study the functioning of these markets
in the next chapter, but first we must study individual behavior with
respect to choices involving uncertainty.
12.1 Contingent Consumption
Since we now know all about the standard theory of consumer choice, let’s
try to use what we know to understand choice under uncertainty. The first
question to ask is what is the basic “thing” that is being chosen?
The consumer is presumably concerned with the probability distribution
of getting different consumption bundles of goods. A probability
distribution consists of a list of different outcomes—in this case, consumption
bundles—and the probability associated with each outcome. When a
consumer decides how much automobile insurance to buy or how much to
invest in the stock market, he is in effect deciding on a pattern of probability
distribution across different amounts of consumption.
For example, suppose that you have $100 now and that you are contemplating
buying lottery ticket number 13. If number 13 is drawn in the
lottery, the holder will be paid $200. This ticket costs, say, $5. The two
outcomes that are of interest are the event that the ticket is drawn and the
event that it isn’t.
Your original endowment of wealth—the amount that you would have if
you did not purchase the lottery ticket—is $100 if 13 is drawn, and $100
if it isn’t drawn. But if you buy the lottery ticket for $5, you will have
a wealth distribution consisting of $295 if the ticket is a winner, and $95
if it is not a winner. The original endowment of probabilities of wealth
in different circumstances has been changed by the purchase of the lottery
ticket. Let us examine this point in more detail.
In this discussion we’ll restrict ourselves to examining monetary gambles
for convenience of exposition. Of course, it is not money alone that matters;
it is the consumption that money can buy that is the ultimate “good”
being chosen. The same principles apply to gambles over goods, but restricting
ourselves to monetary outcomes makes things simpler. Second,
we will restrict ourselves to very simple situations where there are only a
few possible outcomes. Again, this is only for reasons of simplicity.
Above we described the case of gambling in a lottery; here we’ll consider
the case of insurance. Suppose that an individual initially has $35,000
worth of assets, but there is a possibility that he may lose $10,000. For
example, his car may be stolen, or a storm may damage his house. Suppose
that the probability of this event happening is p = .01. Then the probability
distribution the person is facing is a 1 percent probability of having $25,000
of assets, and a 99 percent probability of having $35,000.
Insurance offers a way to change this probability distribution. Suppose
that there is an insurance contract that will pay the person $100 if the loss
occurs in exchange for a $1 premium. Of course the premium must be paid
whether or not the loss occurs. If the person decides to purchase $10,000
of insurance, it will cost him $100. In this case he will have a 1
percent chance of having $34,900 ($35,000 of other assets − $10,000 loss +
$10,000 payment from the insurance company − $100 insurance premium)
and a 99 percent chance of having $34,900 ($35,000 of assets − $100 insurance
premium). Thus the consumer ends up with the same wealth no
matter what happens. He is now fully insured against loss.
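The arithmetic of the insured position can be sketched in a few lines of Python (an illustrative sketch only; the function name is ours):

```python
# Contingent wealth from the chapter's example: $35,000 in assets, a possible
# $10,000 loss, and insurance at a $1 premium per $100 of coverage (rate 0.01).
def contingent_wealth(assets, loss, coverage, rate):
    """Return (wealth if the loss occurs, wealth if it doesn't)."""
    premium = rate * coverage
    wealth_loss = assets - loss + coverage - premium
    wealth_no_loss = assets - premium
    return wealth_loss, wealth_no_loss

# Fully insuring ($10,000 of coverage at a $100 premium) equalizes both states:
print(contingent_wealth(35_000, 10_000, 10_000, 0.01))  # (34900.0, 34900.0)
# With no coverage, the endowment is ($25,000, $35,000):
print(contingent_wealth(35_000, 10_000, 0, 0.01))       # (25000.0, 35000.0)
```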
In general, if this person purchases K dollars of insurance and has to pay
a premium γK, then he will face the gamble:1

probability .01 of getting $25,000 + K − γK

and

probability .99 of getting $35,000 − γK.

1 The Greek letter γ, gamma, is pronounced “gam-ma.”
What kind of insurance will this person choose? Well, that depends on
his preferences. He might be very conservative and choose to purchase a lot
of insurance, or he might like to take risks and not purchase any insurance
at all. People have different preferences over probability distributions in
the same way that they have different preferences over the consumption of
ordinary goods.
In fact, one very fruitful way to look at decision making under uncertainty
is just to think of the money available under different circumstances as
different goods. A thousand dollars after a large loss has occurred may
mean a very different thing from a thousand dollars when it hasn’t. Of
course, we don’t have to apply this idea just to money: an ice cream cone
if it happens to be hot and sunny tomorrow is a very different good from
an ice cream cone if it is rainy and cold. In general, consumption goods will
be of different value to a person depending upon the circumstances under
which they become available.
Let us think of the different outcomes of some random event as being
different states of nature. In the insurance example given above there
were two states of nature: the loss occurs or it doesn’t. But in general
there could be many different states of nature. We can then think of
a contingent consumption plan as being a specification of what will
be consumed in each different state of nature—each different outcome of
the random process. Contingent means depending on something not yet
certain, so a contingent consumption plan means a plan that depends on the
outcome of some event. In the case of insurance purchases, the contingent
consumption was described by the terms of the insurance contract: how
much money you would have if a loss occurred and how much you would
have if it didn’t. In the case of the rainy and sunny days, the contingent
consumption would just be the plan of what would be consumed given the
various outcomes of the weather.
People have preferences over different plans of consumption, just like
they have preferences over actual consumption. It certainly might make
you feel better now to know that you are fully insured. People make choices
that reflect their preferences over consumption in different circumstances,
and we can use the theory of choice that we have developed to analyze
those choices.
If we think about a contingent consumption plan as being just an ordinary
consumption bundle, we are right back in the framework described in
the previous chapters. We can think of preferences as being defined over
different consumption plans, with the “terms of trade” being given by the
budget constraint. We can then model the consumer as choosing the best
consumption plan he or she can afford, just as we have done all along.
Let’s describe the insurance purchase in terms of the indifference-curve
analysis we’ve been using. The two states of nature are the event that the
loss occurs and the event that it doesn’t. The contingent consumptions are
the values of how much money you would have in each circumstance. We
can plot this on a graph as in Figure 12.1.
Figure 12.1: Insurance. The budget line associated with the purchase of
insurance. The insurance premium γ allows us to give up some
consumption in the good outcome (Cg) in order to have more
consumption in the bad outcome (Cb). [The figure plots Cb against Cg,
with the endowment at Cb = $25,000, Cg = $35,000, the choice at
Cb = $25,000 + K − γK, Cg = $35,000 − γK, and a budget line of slope
−γ/(1 − γ) connecting them.]
Your endowment of contingent consumption is $25,000 in the “bad”
state—if the loss occurs—and $35,000 in the “good” state—if it doesn’t
occur. Insurance offers you a way to move away from this endowment
point. If you purchase K dollars’ worth of insurance, you give up γK dollars
of consumption possibilities in the good state in exchange for K −γK
dollars of consumption possibilities in the bad state. Thus the consumption
you lose in the good state, divided by the extra consumption you gain in
the bad state, is
ΔCg/ΔCb = −γK/(K − γK) = −γ/(1 − γ).
This is the slope of the budget line through your endowment. It is just
as if the price of consumption in the good state is 1 − γ and the price in
the bad state is γ.
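The slope computation can be checked directly; note that the amount of coverage K cancels out (a sketch; the function name is ours):

```python
def budget_slope(gamma, K):
    """Slope of the contingent-consumption budget line: consumption given up
    in the good state (gamma*K) per extra dollar gained in the bad state
    (K - gamma*K)."""
    return -(gamma * K) / (K - gamma * K)

# The slope is -gamma/(1 - gamma) regardless of how much insurance is bought:
print(budget_slope(0.01, 10_000))  # about -0.0101, i.e. -0.01/0.99
print(budget_slope(0.01, 500))     # same value
```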
We can draw in the indifference curves that a person might have for contingent
consumption. Here again it is very natural for indifference curves
to have a convex shape: this means that the person would rather have a
constant amount of consumption in each state than a large amount in one
state and a low amount in the other.
Given the indifference curves for consumption in each state of nature,
we can look at the choice of how much insurance to purchase. As usual,
this will be characterized by a tangency condition: the marginal rate of
substitution between consumption in each state of nature should be equal
to the price at which you can trade off consumption in those states.
Of course, once we have a model of optimal choice, we can apply all of
the machinery developed in early chapters to its analysis. We can examine
how the demand for insurance changes as the price of insurance changes,
as the wealth of the consumer changes, and so on. The theory of consumer
behavior is perfectly adequate to model behavior under uncertainty as well
as certainty.
EXAMPLE: Catastrophe Bonds
We have seen that insurance is a way to transfer wealth from good states
of nature to bad states of nature. Of course there are two sides to these
transactions: those who buy insurance and those who sell it. Here we focus
on the sell side of insurance.
The sell side of the insurance market is divided into a retail component,
which deals directly with end buyers, and a wholesale component, in which
insurers sell risks to other parties. The wholesale part of the market is
known as the reinsurance market.
Typically, the reinsurance market has relied on large investors such as
pension funds to provide financial backing for risks. However, some reinsurers
rely on large individual investors. Lloyd’s of London, one of the most
famous reinsurance consortia, generally uses private investors.
Recently, the reinsurance industry has been experimenting with catastrophe
bonds, which, according to some, are a more flexible way to provide
reinsurance. These bonds, generally sold to large institutions, have
typically been tied to natural disasters, like earthquakes or hurricanes.
A financial intermediary, such as a reinsurance company or an investment
bank, issues a bond tied to a particular insurable event, such as an
earthquake involving, say, at least $500 million in insurance claims. If
there is no earthquake, investors are paid a generous interest rate. But if
the earthquake occurs and the claims exceed the amount specified in the
bond, investors sacrifice their principal and interest.
Catastrophe bonds have some attractive features. They can spread risks
widely and can be subdivided indefinitely, allowing each investor to bear
only a small part of the risk. The money backing up the insurance is paid
in advance, so there is no default risk to the insured.
From the economist’s point of view, “cat bonds” are a form of state
contingent security, that is, a security that pays off if and only if some
particular event occurs. This concept was first introduced by Nobel laureate
Kenneth J. Arrow in a paper published in 1952 and was long thought
to be of only theoretical interest. But it turned out that all sorts of options
and other derivatives could be best understood using contingent securities.
Now Wall Street rocket scientists draw on this 60-year-old work when
creating exotic new derivatives such as catastrophe bonds.
12.2 Utility Functions and Probabilities
If the consumer has reasonable preferences about consumption in different
circumstances, then we will be able to use a utility function to describe these
preferences, just as we have done in other contexts. However, the fact that
we are considering choice under uncertainty does add a special structure
to the choice problem. In general, how a person values consumption in one
state as compared to another will depend on the probability that the state
in question will actually occur. In other words, the rate at which I am
willing to substitute consumption if it rains for consumption if it doesn’t
should have something to do with how likely I think it is to rain. The
preferences for consumption in different states of nature will depend on the
beliefs of the individual about how likely those states are.
For this reason, we will write the utility function as depending on the
probabilities as well as on the consumption levels. Suppose that we are
considering two mutually exclusive states such as rain and shine, loss or
no loss, or whatever. Let c1 and c2 represent consumption in states 1 and
2, and let π1 and π2 be the probabilities that state 1 or state 2 actually
occurs.
If the two states are mutually exclusive, so that only one of them can
happen, then π2 = 1− π1. But we’ll generally write out both probabilities
just to keep things looking symmetric.
Given this notation, we can write the utility function for consumption in
states 1 and 2 as u(c1, c2, π1, π2). This is the function that represents the
individual’s preference over consumption in each state.
EXAMPLE: Some Examples of Utility Functions
We can use nearly any of the examples of utility functions that we’ve seen
up until now in the context of choice under uncertainty. One nice example
is the case of perfect substitutes. Here it is natural to weight each
consumption by the probability that it will occur. This gives us a utility
function of the form
u(c1, c2, π1, π2) = π1c1 + π2c2.
In the context of uncertainty, this kind of expression is known as the expected
value. It is just the average level of consumption that you would
get.
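The expected-value computation is just a probability-weighted sum; a minimal sketch (function name ours):

```python
def expected_value(probs, outcomes):
    """Perfect-substitutes utility over contingent consumption:
    pi1*c1 + pi2*c2 + ... , the average level of consumption."""
    return sum(p * c for p, c in zip(probs, outcomes))

# The insurance example's endowment: 1% chance of $25,000, 99% of $35,000.
print(expected_value([0.01, 0.99], [25_000, 35_000]))  # about 34900
```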
Another example of a utility function that might be used to examine
choice under uncertainty is the Cobb–Douglas utility function:
u(c1, c2, π, 1 − π) = c1^π c2^(1−π).
Here the utility attached to any combination of consumption bundles depends
on the pattern of consumption in a nonlinear way.
As usual, we can take a monotonic transformation of utility and still
represent the same preferences. It turns out that the logarithm of the
Cobb-Douglas utility will be very convenient in what follows. This will
give us a utility function of the form
ln u(c1, c2, π1, π2) = π1 ln c1 + π2 ln c2.
12.3 Expected Utility
One particularly convenient form that the utility function might take is the
following:
u(c1, c2, π1, π2) = π1v(c1) + π2v(c2).
This says that utility can be written as a weighted sum of some function
of consumption in each state, v(c1) and v(c2), where the weights are given
by the probabilities π1 and π2.
Two examples of this were given above. The perfect substitutes, or
expected value utility function, had this form where v(c) = c. The Cobb-
Douglas didn’t have this form originally, but when we expressed it in terms
of logs, it had the linear form with v(c) = lnc.
If one of the states is certain, so that π1 = 1, say, then v(c1) is the utility
of certain consumption in state 1. Similarly, if π2 = 1, v(c2) is the utility
of consumption in state 2. Thus the expression
π1v(c1) + π2v(c2)
represents the average utility, or the expected utility, of the pattern of
consumption (c1, c2).
For this reason, we refer to a utility function with the particular form
described here as an expected utility function, or, sometimes, a von
Neumann-Morgenstern utility function.2
When we say that a consumer’s preferences can be represented by an
expected utility function, or that the consumer’s preferences have the expected
utility property, we mean that we can choose a utility function that
has the additive form described above. Of course we could also choose a different
form; any monotonic transformation of an expected utility function
is a utility function that describes the same preferences. But the additive
form representation turns out to be especially convenient. If the consumer’s
preferences are described by π1 ln c1 + π2 ln c2, they will also be described
by c1^π1 c2^π2. But the latter representation does not have the expected utility
property, while the former does.
On the other hand, the expected utility function can be subjected to
some kinds of monotonic transformation and still have the expected utility
property. We say that a function v(u) is a positive affine transformation
if it can be written in the form: v(u) = au + b where a > 0. A
positive affine transformation simply means multiplying by a positive number
and adding a constant. It turns out that if you subject an expected
utility function to a positive affine transformation, it not only represents
the same preferences (this is obvious since an affine transformation is just a
special kind of monotonic transformation) but it also still has the expected
utility property.
Economists say that an expected utility function is “unique up to an
affine transformation.” This just means that you can apply an affine transformation
to it and get another expected utility function that represents
the same preferences. But any other kind of transformation will destroy
the expected utility property.
12.4 Why Expected Utility Is Reasonable
The expected utility representation is a convenient one, but is it a reasonable
one? Why would we think that preferences over uncertain choices
would have the particular structure implied by the expected utility function?
As it turns out there are compelling reasons why expected utility is
a reasonable objective for choice problems in the face of uncertainty.
The fact that outcomes of the random choice are consumption goods
that will be consumed in different circumstances means that ultimately
only one of those outcomes is actually going to occur. Either your house
2 John von Neumann was one of the major figures in mathematics in the twentieth
century. He also contributed several important insights to physics, computer science,
and economic theory. Oskar Morgenstern was an economist at Princeton who, along
with von Neumann, helped to develop mathematical game theory.
will burn down or it won’t; either it will be a rainy day or a sunny day. The
way we have set up the choice problem means that only one of the many
possible outcomes is going to occur, and hence only one of the contingent
consumption plans will actually be realized.
This turns out to have a very interesting implication. Suppose you are
considering purchasing fire insurance on your house for the coming year. In
making this choice you will be concerned about wealth in three situations:
your wealth now (c0), your wealth if your house burns down (c1), and your
wealth if it doesn’t (c2). (Of course, what you really care about are your
consumption possibilities in each outcome, but we are simply using wealth
as a proxy for consumption here.) If π1 is the probability that your house
burns down and π2 is the probability that it doesn’t, then your preferences
over these three different consumptions can generally be represented by a
utility function u(π1, π2, c0, c1, c2).
Suppose that we are considering the tradeoff between wealth now and
one of the possible outcomes—say, how much money we would be willing
to sacrifice now to get a little more money if the house burns down. Then
this decision should be independent of how much consumption you will have
in the other state of nature—how much wealth you will have if the house
is not destroyed. For the house will either burn down or it won’t. If it
happens to burn down, then the value of extra wealth shouldn’t depend
on how much wealth you would have if it didn’t burn down. Bygones are
bygones—so what doesn’t happen shouldn’t affect the value of consumption
in the outcome that does happen.
Note that this is an assumption about an individual’s preferences. It may
be violated. When people are considering a choice between two things, the
amount of a third thing they have typically matters. The choice between
coffee and tea may well depend on how much cream you have. But this
is because you consume coffee together with cream. If you considered a
choice where you rolled a die and got either coffee, or tea, or cream, then
the amount of cream that you might get shouldn’t affect your preferences
between coffee and tea. Why? Because you are either getting one thing or
the other: if you end up with cream, the fact that you might have gotten
either coffee or tea is irrelevant.
Thus in choice under uncertainty there is a natural kind of “independence”
between the different outcomes because they must be consumed
separately—in different states of nature. The choices that people plan to
make in one state of nature should be independent from the choices that
they plan to make in other states of nature. This assumption is known as
the independence assumption. It turns out that this implies that the
utility function for contingent consumption will take a very special structure:
it has to be additive across the different contingent consumption
bundles.
That is, if c1, c2, and c3 are the consumptions in different states of nature,
and π1, π2, and π3 are the probabilities that these three different states of
nature materialize, then if the independence assumption alluded to above
is satisfied, the utility function must take the form
U(c1, c2, c3) = π1u(c1) + π2u(c2) + π3u(c3).
This is what we have called an expected utility function. Note that the
expected utility function does indeed satisfy the property that the marginal
rate of substitution between two goods is independent of how much there
is of the third good. The marginal rate of substitution between goods 1
and 2, say, takes the form
MRS12 = −[ΔU(c1, c2, c3)/Δc1] / [ΔU(c1, c2, c3)/Δc2]
      = −[π1 Δu(c1)/Δc1] / [π2 Δu(c2)/Δc2].
This MRS depends only on how much you have of goods 1 and 2, not
how much you have of good 3.
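This independence property is easy to verify numerically. In the sketch below (names ours) we take u(c) = ln(c), whose marginal utility is 1/c; consumption in the third state simply never enters the formula:

```python
def mrs12(c1, c2, pi1, pi2, u_prime):
    """MRS between state-1 and state-2 consumption for expected utility
    U = pi1*u(c1) + pi2*u(c2) + pi3*u(c3). Note c3 and pi3 drop out."""
    return -(pi1 * u_prime(c1)) / (pi2 * u_prime(c2))

u_prime = lambda c: 1.0 / c  # marginal utility of u(c) = ln(c)

# The MRS depends only on c1, c2 and their probabilities:
print(mrs12(10, 20, 0.3, 0.5, u_prime))  # -(0.3/10)/(0.5/20) = -1.2
```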
12.5 Risk Aversion
We claimed above that the expected utility function had some very convenient
properties for analyzing choice under uncertainty. In this section
we’ll give an example of this.
Let’s apply the expected utility framework to a simple choice problem.
Suppose that a consumer currently has $10 of wealth and is contemplating
a gamble that gives him a 50 percent probability of winning $5 and a
50 percent probability of losing $5. His wealth will therefore be random:
he has a 50 percent probability of ending up with $5 and a 50 percent
probability of ending up with $15. The expected value of his wealth is $10,
and the expected utility is
(1/2) u($15) + (1/2) u($5).
This is depicted in Figure 12.2. The expected utility of wealth is the
average of the two numbers u($15) and u($5), labeled .5u(5) + .5u(15) in
the graph. We have also depicted the utility of the expected value of wealth,
which is labeled u($10). Note that in this diagram the expected utility of
wealth is less than the utility of the expected wealth. That is,
u((1/2) · 15 + (1/2) · 5) = u(10) > (1/2) u(15) + (1/2) u(5).
Figure 12.2: Risk aversion. For a risk-averse consumer the utility of the
expected value of wealth, u(10), is greater than the expected
utility of wealth, .5u(5) + .5u(15). [The figure plots a concave
u(wealth) against wealth, marking u(5), .5u(5) + .5u(15), u(10), and
u(15) at wealth levels 5, 10, and 15.]
In this case we say that the consumer is risk averse since he prefers
to have the expected value of his wealth rather than face the gamble. Of
course, it could happen that the preferences of the consumer were such
that he prefers a random distribution of wealth to its expected value, in
which case we say that the consumer is a risk lover. An example is given
in Figure 12.3.
Note the difference between Figures 12.2 and 12.3. The risk-averse consumer
has a concave utility function—its slope gets flatter as wealth is increased.
The risk-loving consumer has a convex utility function—its slope
gets steeper as wealth increases. Thus the curvature of the utility function
measures the consumer’s attitude toward risk. In general, the more concave
the utility function, the more risk averse the consumer will be, and the
more convex the utility function, the more risk loving the consumer will be.
The intermediate case is that of a linear utility function. Here the consumer
is risk neutral: the expected utility of wealth is the utility of its
expected value. In this case the consumer doesn’t care about the riskiness
of his wealth at all—only about its expected value.
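The three attitudes toward risk can be checked against the chapter's 50–50 gamble over $5 and $15 (an illustrative sketch; the function name is ours):

```python
import math

def expected_utility(u, probs, outcomes):
    """Expected utility: sum of pi * u(c) over the states."""
    return sum(p * u(c) for p, c in zip(probs, outcomes))

probs, outcomes = [0.5, 0.5], [5, 15]
ev = sum(p * c for p, c in zip(probs, outcomes))  # expected wealth = 10

# Concave u (risk averse): utility of expected wealth beats expected utility.
assert math.sqrt(ev) > expected_utility(math.sqrt, probs, outcomes)
# Convex u (risk loving): the inequality reverses.
assert ev ** 2 < expected_utility(lambda c: c ** 2, probs, outcomes)
# Linear u (risk neutral): the two coincide.
assert ev == expected_utility(lambda c: c, probs, outcomes)
```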
EXAMPLE: The Demand for Insurance
Let’s apply the expected utility structure to the demand for insurance that
we considered earlier. Recall that in that example the person had a wealth
Figure 12.3: Risk loving. For a risk-loving consumer the expected utility
of wealth, .5u(5) + .5u(15), is greater than the utility of the
expected value of wealth, u(10). [The figure plots a convex u(wealth)
against wealth, marking u(5), u(10), .5u(5) + .5u(15), and u(15) at
wealth levels 5, 10, and 15.]
of $35,000 and that he might incur a loss of $10,000. The probability of the
loss was 1 percent, and it cost him γK to purchase K dollars of insurance.
By examining this choice problem using indifference curves we saw that
the optimal choice of insurance was determined by the condition that the
MRS between consumption in the two outcomes—loss or no loss—must be
equal to −γ/(1−γ). Let π be the probability that the loss will occur, and
1 − π be the probability that it won’t occur.
Let state 1 be the situation involving no loss, so that the person’s wealth
in that state is

c1 = $35,000 − γK,

and let state 2 be the loss situation with wealth

c2 = $35,000 − $10,000 + K − γK.
Then the consumer’s optimal choice of insurance is determined by the
condition that his MRS between consumption in the two outcomes be equal
to the price ratio:
MRS = −[π Δu(c2)/Δc2] / [(1 − π) Δu(c1)/Δc1] = −γ/(1 − γ).    (12.1)
Now let us look at the insurance contract from the viewpoint of the
insurance company. With probability π they must pay out K, and with
probability (1 − π) they pay out nothing. No matter what happens, they
collect the premium γK. Then the expected profit, P, of the insurance
company is
P = γK − πK − (1 − π) · 0 = γK − πK.
Let us suppose that on the average the insurance company just breaks
even on the contract. That is, they offer insurance at a “fair” rate, where
“fair” means that the expected value of the insurance is just equal to its
cost. Then we have
P = γK − πK = 0,
which implies that γ = π.
Inserting this into equation (12.1) we have
[π Δu(c2)/Δc2] / [(1 − π) Δu(c1)/Δc1] = π/(1 − π).
Canceling the π’s leaves us with the condition that the optimal amount of
insurance must satisfy
Δu(c1)/Δc1 = Δu(c2)/Δc2.    (12.2)
This equation says that the marginal utility of an extra dollar of income if
the loss occurs should be equal to the marginal utility of an extra dollar of
income if the loss doesn’t occur.
Let us suppose that the consumer is risk averse, so that his marginal
utility of money is declining as the amount of money he has increases.
Then if c1 > c2, the marginal utility at c1 would be less than the marginal
utility at c2, and vice versa. Furthermore, if the marginal utilities of income
are equal at c1 and c2, as they are in equation (12.2), then we must have
c1 = c2. Applying the formulas for c1 and c2, we find
35,000 − γK = 25,000 + K − γK,

which implies that K = $10,000. This means that when given a chance
to buy insurance at a “fair” premium, a risk-averse consumer will always
choose to fully insure.
This happens because the utility of wealth in each state depends only on
the total amount of wealth the consumer has in that state—and not what
he might have in some other state—so that if the total amounts of wealth
the consumer has in each state are equal, the marginal utilities of wealth
must be equal as well.
To sum up: if the consumer is a risk-averse, expected utility maximizer
and if he is offered fair insurance against a loss, then he will optimally
choose to fully insure.
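The full-insurance result can be confirmed numerically by searching over coverage levels for a risk-averse (here logarithmic, our choice for illustration) consumer facing a fair premium γ = π:

```python
import math

def exp_utility(K, assets=35_000, loss=10_000, pi=0.01, gamma=0.01):
    """Expected log utility with K dollars of coverage at the fair rate gamma = pi."""
    c_no_loss = assets - gamma * K
    c_loss = assets - loss + K - gamma * K
    return (1 - pi) * math.log(c_no_loss) + pi * math.log(c_loss)

# Grid-search coverage in $100 steps: full insurance (K = 10,000) is optimal.
best_K = max(range(0, 15_001, 100), key=exp_utility)
print(best_K)  # 10000
```

At K = 10,000 wealth is $34,900 in both states, so the marginal utilities in equation (12.2) are equal, exactly as the text argues.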
12.6 Diversification
Let us turn now to a different topic involving uncertainty—the benefits
of diversification. Suppose that you are considering investing $100 in two
different companies, one that makes sunglasses and one that makes raincoats.
The long-range weather forecasters have told you that next summer
is equally likely to be rainy or sunny. How should you invest your money?
Wouldn’t it make sense to hedge your bets and put some money in each?
By diversifying your holdings of the two investments, you can get a return
on your investment that is more certain, and therefore more desirable if
you are a risk-averse person.
Suppose, for example, that shares of the raincoat company and the sunglasses
company currently sell for $10 apiece. If it is a rainy summer, the
raincoat company will be worth $20 and the sunglasses company will be
worth $5. If it is a sunny summer, the payoffs are reversed: the sunglasses
company will be worth $20 and the raincoat company will be worth $5. If
you invest your entire $100 in the sunglasses company, you are taking a
gamble that has a 50 percent chance of giving you $200 and a 50 percent
chance of giving you $50. The same magnitude of payoffs results if you
invest all your money in the raincoat company: in either case you have
an expected payoff of $125.
But look what happens if you put half of your money in each. Then,
if it is sunny you get $100 from the sunglasses investment and $25 from
the raincoat investment. But if it is rainy, you get $100 from the raincoat
investment and $25 from the sunglasses investment. Either way, you end up
with $125 for sure. By diversifying your investment in the two companies,
you have managed to reduce the overall risk of your investment, while
keeping the expected payoff the same.
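The diversification arithmetic is easy to tabulate; in this sketch (names ours) shares cost $10, so $100 buys 10 shares in total:

```python
def payoffs(shares_sun, shares_rain):
    """Portfolio value in (sunny, rainy) summers: the winning company's
    shares go to $20 apiece, the losing company's to $5."""
    sunny = shares_sun * 20 + shares_rain * 5
    rainy = shares_sun * 5 + shares_rain * 20
    return sunny, rainy

# All $100 in one company: a risky (200, 50) gamble.
print(payoffs(10, 0))  # (200, 50)
# Split evenly: $125 in either state -- same expected payoff, zero risk.
print(payoffs(5, 5))   # (125, 125)
```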
Diversification was quite easy in this example: the two assets were perfectly
negatively correlated—when one went up, the other went down. Pairs
of assets like this can be extremely valuable because they can reduce risk
so dramatically. But, alas, they are also very hard to find. Most asset
values move together: when GM stock is high, so is Ford stock, and so
is Goodrich stock. But as long as asset price movements are not perfectly
positively correlated, there will be some gains from diversification.
12.7 Risk Spreading
Let us return now to the example of insurance. There we considered the
situation of an individual who had $35,000 and faced a .01 probability of
a $10,000 loss. Suppose that there were 1000 such individuals. Then, on
average, there would be 10 losses incurred, and thus $100,000 lost each year.
Each of the 1000 people would face an expected loss of .01 times $10,000, or
$100 a year. Let us suppose that the probability that any person incurs a
loss doesn’t affect the probability that any of the others incur losses. That
is, let us suppose that the risks are independent.
Then each individual will have an expected wealth of .99 × $35,000 +
.01 × $25,000 = $34,900. But each individual also bears a large amount of
risk: each person has a 1 percent probability of losing $10,000.
Suppose that each consumer decides to diversify the risk that he or she
faces. How can they do this? Answer: by selling some of their risk to
other individuals. Suppose that the 1000 consumers decide to insure one
another. If anybody incurs the $10,000 loss, each of the 1000 consumers
will contribute $10 to that person. This way, the poor person whose house
burns down is compensated for his loss, and the other consumers have the
peace of mind that they will be compensated if that poor soul happens
to be themselves! This is an example of risk spreading: each consumer
spreads his risk over all of the other consumers and thereby reduces the
amount of risk he bears.
Now on the average, 10 houses will burn down a year, so on the average,
each of the 1000 individuals will be paying out $100 a year. But this is just
on the average. Some years there might be 12 losses, and other years there
might be 8 losses. The probability is very small that an individual would
actually have to pay out more than $200, say, in any one year, but even so,
the risk is there.
But there is even a way to diversify this risk. Suppose that the homeowners
agree to pay $100 a year for certain, whether or not there are any
losses. Then they can build up a cash reserve fund that can be used in
those years when there are multiple fires. They are paying $100 a year
for certain, and on average that money will be sufficient to compensate
homeowners for fires.
As you can see, we now have something very much like a cooperative
insurance company. We could add a few more features: the insurance
company gets to invest its cash reserve fund and earn interest on its assets,
and so on, but the essence of the insurance company is clearly present.
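The arithmetic of this mutual-insurance scheme is easy to check with a short simulation. This is only an illustrative sketch: the pool size, loss probability, and loss amount are taken from the example above, and the random draws stand in for independent house fires.

```python
import random

random.seed(0)                 # make the sketch reproducible

N = 1000                       # homeowners in the pool
P_LOSS = 0.01                  # chance any one house burns down
LOSS = 10_000                  # dollar loss when it does

# The "fair premium" from the text: expected loss per person per year.
fair_premium = P_LOSS * LOSS   # = $100

# Simulate one year of independent fires. Under the mutual-insurance
# agreement, each member pays LOSS / N, i.e. $10, per fire that occurs.
fires = sum(1 for _ in range(N) if random.random() < P_LOSS)
per_person_payout = fires * LOSS / N

print(fair_premium, fires, per_person_payout)
```

On average there are about 10 fires and the per-person payout is about $100, but any single year can come in above or below that; this residual risk is exactly what the $100-for-certain reserve fund smooths out.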
12.8 Role of the Stock Market
The stock market plays a role similar to that of the insurance market in
that it allows for risk spreading. Recall from Chapter 11 that we argued
that the stock market allowed the original owners of firms to convert their
stream of returns over time to a lump sum. Well, the stock market also
allows them to convert their risky position of having all their wealth tied
up in one enterprise to a situation where they have a lump sum that they
can invest in a variety of assets. The original owners of the firm have an
incentive to issue shares in their company so that they can spread the risk
of that single company over a large number of shareholders.
Similarly, the later shareholders of a company can use the stock market
to reallocate their risks. If a company you hold shares in is adopting a
policy that is too risky for your taste—or too conservative—you can sell
those shares and purchase others.
In the case of insurance, an individual was able to reduce his risk to
zero by purchasing insurance. For a flat fee of $100, the individual could
purchase full insurance against the $10,000 loss. This was true because
there was basically no risk in the aggregate: if the probability of the loss
occurring was 1 percent, then on average 10 of the 1000 people would face
a loss—we just didn’t know which ones.
In the case of the stock market, there is risk in the aggregate. One year
the stock market as a whole might do well, and another year it might do
poorly. Somebody has to bear that kind of risk. The stock market offers a
way to transfer risky investments from people who don’t want to bear risk
to people who are willing to bear risk.
Of course, few people outside of Las Vegas like to bear risk: most people
are risk averse. Thus the stock market allows people to transfer risk from
people who don’t want to bear it to people who are willing to bear it if
they are sufficiently compensated for it. We’ll explore this idea further in
the next chapter.
Summary
1. Consumption in different states of nature can be viewed as consumption
goods, and all the analysis of previous chapters can be applied to choice
under uncertainty.
2. However, the utility function that summarizes choice behavior under
uncertainty may have a special structure. In particular, if the utility function
is linear in the probabilities, then the utility assigned to a gamble will
just be the expected utility of the various outcomes.
3. The curvature of the expected utility function describes the consumer’s
attitudes toward risk. If it is concave, the consumer is a risk averter; and
if it is convex, the consumer is a risk lover.
4. Financial institutions such as insurance markets and the stock market
provide ways for consumers to diversify and spread risks.
REVIEW QUESTIONS
1. How can one reach the consumption points to the left of the endowment
in Figure 12.1?
2. Which of the following utility functions have the expected utility property?
(a) u(c1, c2, π1, π2) = a(π1c1 + π2c2), (b) u(c1, c2, π1, π2) = π1c1 + π2c2²,
(c) u(c1, c2, π1, π2) = π1 ln c1 + π2 ln c2 + 17.
3. A risk-averse individual is offered a choice between a gamble that pays
$1000 with a probability of 25% and $100 with a probability of 75%, or a
payment of $325. Which would he choose?
4. What if the payment was $320?
5. Draw a utility function that exhibits risk-loving behavior for small gambles
and risk-averse behavior for larger gambles.
6. Why might a neighborhood group have a harder time self-insuring against
flood damage than against fire damage?
APPENDIX
Let us examine a simple problem to demonstrate the principles of expected utility
maximization. Suppose that the consumer has some wealth w and is considering
investing some amount x in a risky asset. This asset could earn a return of rg in
the “good” outcome, or it could earn a return of rb in the “bad” outcome. You
should think of rg as a positive return—the asset increases in value—and
rb as a negative return—a decrease in asset value.
Thus the consumer’s wealth in the good and bad outcomes will be
Wg = (w − x) + x(1 + rg) = w + xrg
Wb = (w − x) + x(1 + rb) = w + xrb.
Suppose that the good outcome occurs with probability π and the bad outcome
with probability (1 − π). Then the expected utility if the consumer decides to
invest x dollars is
EU(x) = πu(w + xrg) + (1 − π)u(w + xrb).
The consumer wants to choose x so as to maximize this expression.
Differentiating with respect to x, we find the way in which utility changes as
x changes:
EU′(x) = πu′(w + xrg)rg + (1 − π)u′(w + xrb)rb. (12.3)
The second derivative of utility with respect to x is
EU″(x) = πu″(w + xrg)rg² + (1 − π)u″(w + xrb)rb². (12.4)
If the consumer is risk averse his utility function will be concave, which implies
that u″(w) < 0 for every level of wealth. Thus the second derivative of expected
utility is unambiguously negative. Expected utility will be a concave function
of x.
Consider the change in expected utility for the first dollar invested in the risky
asset. This is just equation (12.3) with the derivative evaluated at x = 0:
EU′(0) = πu′(w)rg + (1 − π)u′(w)rb
= u′(w)[πrg + (1 − π)rb].
The expression inside the brackets is the expected return on the asset. If
the expected return on the asset is negative, then expected utility must decrease
when the first dollar is invested in the asset. But since the second derivative
of expected utility is negative due to concavity, then utility must continue to
decrease as additional dollars are invested.
Hence we have found that if the expected value of a gamble is negative, a risk
averter will have the highest expected utility at x* = 0: he will want no part of a
losing proposition.
On the other hand, if the expected return on the asset is positive, then increasing
x from zero will increase expected utility. Thus he will always want to
invest a little bit in the risky asset, no matter how risk averse he is.
Expected utility as a function of x is illustrated in Figure 12.4. In Figure 12.4A
the expected return is negative, and the optimal choice is x* = 0. In Figure 12.4B
the expected return is positive over some range, so the consumer wants to invest
some positive amount x* in the risky asset.
Figure 12.4: How much to invest in the risky asset. In panel A, the optimal
investment is zero, but in panel B the consumer wants to invest a positive
amount.
The optimal amount for the consumer to invest will be determined by the
condition that the derivative of expected utility with respect to x be equal to zero.
Since the second derivative of utility is automatically negative due to concavity,
this will be a global maximum.
Setting (12.3) equal to zero we have
EU′(x) = πu′(w + xrg)rg + (1 − π)u′(w + xrb)rb = 0. (12.5)
This equation determines the optimal choice of x for the consumer in question.
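To make the first-order condition concrete, here is a small numerical sketch. The utility function and all the numbers are assumptions chosen for illustration (log utility, w = 100, rg = 0.10, rb = −0.08, π = 0.45); the code simply solves EU′(x*) = 0 by bisection.

```python
# Hypothetical parameters, not from the text: u(w) = ln(w), so u'(w) = 1/w.
w, rg, rb, pi = 100.0, 0.10, -0.08, 0.45

def marginal_eu(x):
    # EU'(x) = pi * u'(w + x*rg) * rg + (1 - pi) * u'(w + x*rb) * rb
    return pi * rg / (w + x * rg) + (1 - pi) * rb / (w + x * rb)

# The expected return pi*rg + (1 - pi)*rb = 0.001 > 0, so EU'(0) > 0 and the
# optimum is interior. Concavity makes EU'(x) decreasing, so bisect on it.
lo, hi = 0.0, w
for _ in range(60):
    mid = (lo + hi) / 2
    if marginal_eu(mid) > 0:
        lo = mid
    else:
        hi = mid
x_star = (lo + hi) / 2
print(round(x_star, 4))  # → 12.5
```

With these numbers the consumer invests a strictly positive amount even though he is risk averse, exactly as the text argues for any gamble with positive expected return.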
EXAMPLE: The Effect of Taxation on Investment in Risky Assets
How does the level of investment in a risky asset behave when you tax its return?
If the individual pays taxes at rate t, then the after-tax returns will be (1 − t)rg
and (1−t)rb. Thus the first-order condition determining his optimal investment,
x, will be
EU′(x) = πu′(w + x(1 − t)rg)(1 − t)rg + (1 − π)u′(w + x(1 − t)rb)(1 − t)rb = 0.
Canceling the (1 − t) terms, we have
EU′(x) = πu′(w + x(1 − t)rg)rg + (1 − π)u′(w + x(1 − t)rb)rb = 0. (12.6)
Let us denote the solution to the maximization problem without taxes—when
t = 0—by x* and denote the solution to the maximization problem with taxes
by x̂. What is the relationship between x* and x̂?
Your first impulse is probably to think that x* > x̂—that taxation of a risky
asset will tend to discourage investment in it. But that turns out to be exactly
wrong! Taxing a risky asset in the way we described will actually encourage
investment in it!
In fact, there is an exact relation between x* and x̂. It must be the case that
x̂ = x*/(1 − t).
The proof is simply to note that this value of x̂ satisfies the first-order condition
for the optimal choice in the presence of the tax. Substituting this choice into
equation (12.6) we have
EU′(x̂) = πu′(w + (x*/(1 − t))(1 − t)rg)rg + (1 − π)u′(w + (x*/(1 − t))(1 − t)rb)rb
= πu′(w + x*rg)rg + (1 − π)u′(w + x*rb)rb = 0,
where the last equality follows from the fact that x* is the optimal solution when
there is no tax.
What is going on here? How can imposing a tax increase the amount of
investment in the risky asset? Here is what is happening. When the tax is
imposed, the individual will have less of a gain in the good state, but he will
also have less of a loss in the bad state. By scaling his original investment up
by 1/(1 − t) the consumer can reproduce the same after-tax returns that he had
before the tax was put in place. The tax reduces his expected return, but it also
reduces his risk: by increasing his investment the consumer can get exactly the
same pattern of returns he had before and thus completely offset the effect of the
tax. A tax on a risky investment represents a tax on the gain when the return is
positive—but it represents a subsidy on the loss when the return is negative.
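The scaling argument can be verified numerically. Continuing the same hypothetical log-utility setup (w = 100, rg = 0.10, rb = −0.08, π = 0.45, for which the no-tax optimum works out to x* = 12.5), a tax of t = 0.25 should push the optimal investment up to x*/(1 − t):

```python
# Same hypothetical setup as the earlier sketch: log utility, so u'(w) = 1/w.
w, rg, rb, pi = 100.0, 0.10, -0.08, 0.45
x_star = 12.5        # optimal investment with no tax (t = 0)
t = 0.25             # tax rate on the asset's return

def marginal_eu_taxed(x):
    # First-order condition (12.6), with the (1 - t) factors canceled.
    after_g = w + x * (1 - t) * rg
    after_b = w + x * (1 - t) * rb
    return pi * rg / after_g + (1 - pi) * rb / after_b

x_hat = x_star / (1 - t)              # the claimed optimum with the tax
print(x_hat)                          # 16.66..., larger than x* = 12.5
print(abs(marginal_eu_taxed(x_hat)))  # ~0: x_hat satisfies the taxed FOC
```

Because x̂(1 − t) = x*, the after-tax wealth in each state is identical to the no-tax case, so the first-order condition holds exactly: the consumer scales up his investment and undoes the tax.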
CHAPTER 13
RISKY ASSETS
In the last chapter we examined a model of individual behavior under
uncertainty and the role of two economic institutions for dealing with uncertainty:
insurance markets and stock markets. In this chapter we will
further explore how stock markets serve to allocate risk. In order to do
this, it is convenient to consider a simplified model of behavior under uncertainty.
13.1 Mean-Variance Utility
In the last chapter we examined the expected utility model of choice under
uncertainty. Another approach to choice under uncertainty is to describe
the probability distributions that are the objects of choice by a few parameters
and think of the utility function as being defined over those parameters.
The most popular example of this approach is the mean-variance
model. Instead of thinking that a consumer’s preferences depend on the
entire probability distribution of his wealth over every possible outcome,
we suppose that his preferences can be well described by considering just
a few summary statistics about the probability distribution of his wealth.
Let us suppose that a random variable w takes on the values ws for
s = 1, . . . , S with probability πs. The mean of a probability distribution
is simply its average value:
μw = ∑_{s=1}^{S} πsws.
This is the formula for an average: take each outcome ws, weight it by the
probability that it occurs, and sum it up over all outcomes.1
The variance of a probability distribution is the average value of (w − μw)²:
σw² = ∑_{s=1}^{S} πs(ws − μw)².
The variance measures the “spread” of the distribution and is a reasonable
measure of the riskiness involved. A closely related measure is the standard
deviation, denoted by σw, which is the square root of the variance:
σw = √(σw²).
The mean of a probability distribution measures its average value—what
the distribution is centered around. The variance of the distribution measures
the “spread” of the distribution—how spread out it is around the
mean. See Figure 13.1 for a graphical depiction of probability distributions
with different means and variances.
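These two summary statistics are easy to compute for any discrete distribution. The wealth outcomes and probabilities below are made up for illustration:

```python
# Hypothetical wealth outcomes w_s and their probabilities pi_s.
outcomes = [30_000.0, 35_000.0, 40_000.0]
probs = [0.25, 0.50, 0.25]

# mu_w: probability-weighted average of the outcomes.
mean = sum(p * w for p, w in zip(probs, outcomes))

# sigma_w^2: probability-weighted average of squared deviations from the mean.
variance = sum(p * (w - mean) ** 2 for p, w in zip(probs, outcomes))

# sigma_w: the square root of the variance.
std_dev = variance ** 0.5

print(mean, variance, round(std_dev, 2))  # 35000.0 12500000.0 3535.53
```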
The mean-variance model assumes that the utility of a probability distribution
that gives the investor wealth ws with probability πs can
be expressed as a function of the mean and variance of that distribution,
u(μw, σw²). Or, if it is more convenient, the utility can be expressed as a
function of the mean and standard deviation, u(μw, σw). Since both variance
and standard deviation are measures of the riskiness of the wealth
distribution, we can think of utility as depending on either one.
This model can be thought of as a simplification of the expected utility
model described in the preceding chapter. If the choices that are being
made can be completely characterized in terms of their mean and variance,
then a utility function for mean and variance will be able to rank
choices in the same way that an expected utility function will rank them.
Furthermore, even if the probability distributions cannot be completely
characterized by their means and variances, the mean-variance model may
well serve as a reasonable approximation to the expected utility model.
We will make the natural assumption that a higher expected return is
good, other things being equal, and that a higher variance is bad. This
is simply another way to state the assumption that people are typically
averse to risk.
1 The Greek letter μ, mu, is pronounced “mew.” The Greek letter σ, sigma, is pronounced
“sig-ma.”
Figure 13.1: Mean and variance. The probability distribution depicted in
panel A has a positive mean, while that depicted in panel B has a negative
mean. The distribution in panel A is more “spread out” than the one in
panel B, which means that it has a larger variance.
Let us use the mean-variance model to analyze a simple portfolio problem.
Suppose that you can invest in two different assets. One of them,
the risk-free asset, always pays a fixed rate of return, rf. This would be
something like a Treasury bill that pays a fixed rate of interest regardless
of what happens.
The other asset is a risky asset. Think of this asset as being an investment
in a large mutual fund that buys stocks. If the stock market does
well, then your investment will do well. If the stock market does poorly,
your investment will do poorly. Let ms be the return on this asset if state
s occurs, and let πs be the probability that state s will occur. We’ll use
rm to denote the expected return of the risky asset and σm to denote the
standard deviation of its return.
Of course you don’t have to choose one or the other of these assets;
typically you’ll be able to divide your wealth between the two. If you hold
a fraction of your wealth x in the risky asset, and a fraction (1 − x) in the
risk-free asset, the expected return on your portfolio will be given by
rx = ∑_{s=1}^{S} (xms + (1 − x)rf)πs
= x ∑_{s=1}^{S} msπs + (1 − x)rf ∑_{s=1}^{S} πs.
Since ∑_{s=1}^{S} πs = 1, we have
rx = xrm + (1 − x)rf.
Figure 13.2: Risk and return. The budget line measures the cost of achieving
a larger expected return in terms of the increased standard deviation of the
return; its slope is (rm − rf)/σm. At the optimal choice the indifference
curve must be tangent to this budget line.
Thus the expected return on the portfolio is a weighted average of the two
expected returns.
The variance of your portfolio return will be given by
σx² = ∑_{s=1}^{S} (xms + (1 − x)rf − rx)²πs.
Substituting for rx, this becomes
σx² = ∑_{s=1}^{S} (xms − xrm)²πs
= ∑_{s=1}^{S} x²(ms − rm)²πs
= x²σm².
Thus the standard deviation of the portfolio return is given by
σx = √(x²σm²) = xσm.
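Both formulas can be checked directly against the state-by-state definitions. The two-state returns and probabilities here are invented for illustration:

```python
# Hypothetical risky-asset returns m_s with probabilities pi_s, plus r_f and x.
ms = [0.30, -0.10]      # risky return in each state
ps = [0.60, 0.40]       # state probabilities
rf = 0.05               # risk-free return
x = 0.5                 # fraction of wealth in the risky asset

rm = sum(p * m for p, m in zip(ps, ms))                      # expected risky return
sm = sum(p * (m - rm) ** 2 for p, m in zip(ps, ms)) ** 0.5   # its std deviation

# Portfolio return state by state, then its mean and standard deviation.
port = [x * m + (1 - x) * rf for m in ms]
rx = sum(p * r for p, r in zip(ps, port))
sx = sum(p * (r - rx) ** 2 for p, r in zip(ps, port)) ** 0.5

# Compare with the closed forms: rx = x*rm + (1-x)*rf and sigma_x = x*sigma_m.
print(abs(rx - (x * rm + (1 - x) * rf)))  # ~0
print(abs(sx - x * sm))                   # ~0
```

The agreement is exact (up to floating point): because the risk-free return is constant across states, all of the portfolio's variability comes from the risky share x.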
It is natural to assume that rm > rf, since a risk-averse investor would
never hold the risky asset if it had a lower expected return than the
risk-free asset. It follows that if you choose to devote a higher fraction of your
wealth to the risky asset, you will get a higher expected return, but you
will also incur higher risk. This is depicted in Figure 13.2.
If you set x = 1 you will put all of your money in the risky asset and you
will have an expected return and standard deviation of (rm, σm). If you
set x = 0 you will put all of your wealth in the sure asset and you have an
expected return and standard deviation of (rf , 0). If you set x somewhere
between 0 and 1, you will end up somewhere in the middle of the line
connecting these two points. This line gives us a budget line describing the
market tradeoff between risk and return.
Since we are assuming that people’s preferences depend only on the mean
and variance of their wealth, we can draw indifference curves that illustrate
an individual’s preferences for risk and return. If people are risk averse,
then a higher expected return makes them better off and a higher standard
deviation makes them worse off. This means that standard deviation is a
“bad.” It follows that the indifference curves will have a positive slope, as
shown in Figure 13.2.
At the optimal choice of risk and return the slope of the indifference
curve has to equal the slope of the budget line in Figure 13.2. We might
call this slope the price of risk since it measures how risk and return can
be traded off in making portfolio choices. From inspection of Figure 13.2
the price of risk is given by
p = (rm − rf)/σm. (13.1)
So our optimal portfolio choice between the sure and the risky asset could
be characterized by saying that the marginal rate of substitution between
risk and return must be equal to the price of risk:
MRS = −(ΔU/Δσ)/(ΔU/Δμ) = (rm − rf)/σm. (13.2)
Now suppose that there are many individuals who are choosing between
these two assets. Each one of them has to have his marginal rate of substitution
equal to the price of risk. Thus in equilibrium all of the individuals’
MRSs will be equal: when people are given sufficient opportunities to trade
risks, the equilibrium price of risk will be equal across individuals. Risk is
like any other good in this respect.
We can use the ideas that we have developed in earlier chapters to examine
how choices change as the parameters of the problem change. All
of the framework of normal goods, inferior goods, revealed preference, and
so on can be brought to bear on this model. For example, suppose that an
individual is offered a choice of a new risky asset y that has a mean return
of ry, say, and a standard deviation of σy, as illustrated in Figure 13.3.
If offered the choice between investing in x and investing in y, which will
the consumer choose? The original budget set and the new budget set are
both depicted in Figure 13.3. Note that every choice of risk and return
that was possible in the original budget set is possible with the new budget
set since the new budget set contains the old one. Thus investing in the
asset y and the risk-free asset is definitely better than investing in x and
the risk-free asset, since the consumer can choose a better final portfolio.
Figure 13.3: Preferences between risk and return. The asset with risk-return
combination y is preferred to the one with combination x.
The fact that the consumer can choose how much of the risky asset he
wants to hold is very important for this argument. If this were an “all
or nothing” choice where the consumer was compelled to invest all of his
money in either x or y, we would get a very different outcome. In the
example depicted in Figure 13.3, the consumer would prefer investing all
of his money in x to investing all of his money in y, since x lies on a
higher indifference curve than y. But if he can mix the risky asset with the
risk-free asset, he would always prefer to mix with y rather than to mix
with x.
13.2 Measuring Risk
We have a model above that describes the price of risk . . . but how do we
measure the amount of risk in an asset? The first thing that you would
probably think of is the standard deviation of an asset’s return. After all,
we are assuming that utility depends on the mean and variance of wealth,
aren’t we?
In the above example, where there is only one risky asset, that is exactly
right: the amount of risk in the risky asset is its standard deviation. But if
there are many risky assets, the standard deviation is not an appropriate
measure for the amount of risk in an asset.
This is because a consumer’s utility depends on the mean and variance of
total wealth—not the mean and variance of any single asset that he might
hold. What matters is how the returns of the various assets a consumer
holds interact to create a mean and variance of his wealth. As in the rest
of economics, it is the marginal impact of a given asset on total utility
that determines its value, not the value of that asset held alone. Just as
the value of an extra cup of coffee may depend on how much cream is
available, the amount that someone would be willing to pay for an extra
share of a risky asset will depend on how it interacts with other assets in
his portfolio.
Suppose, for example, that you are considering purchasing two assets,
and you know that there are only two possible outcomes that can happen.
Asset A will be worth either $10 or −$5, and asset B will be worth either
−$5 or $10. But when asset A is worth $10, asset B will be worth −$5 and
vice versa. In other words the values of the two assets will be negatively
correlated: when one has a large value, the other will have a small value.
Suppose that the two outcomes are equally likely, so that the average
value of each asset will be $2.50. Then if you don’t care about risk at all
and you must hold one asset or the other, the most that you would be
willing to pay for either one would be $2.50—the expected value of each
asset. If you are averse to risk, you would be willing to pay even less than
$2.50.
But what if you can hold both assets? Then if you hold one share of
each asset, you will get $5 whichever outcome arises. Whenever one asset
is worth $10, the other is worth −$5. Thus, if you can hold both assets,
the amount that you would be willing to pay to purchase both assets would
be $5.
This example shows in a vivid way that the value of an asset will depend
in general on how it is correlated with other assets. Assets that move in
opposite directions—that are negatively correlated with each other—are
very valuable because they reduce overall risk. In general the value of an
asset tends to depend much more on the correlation of its return with other
assets than with its own variation. Thus the amount of risk in an asset
depends on its correlation with other assets.
It is convenient to measure the risk in an asset relative to the risk in the
stock market as a whole. We call the riskiness of a stock relative to the
risk of the market the beta of a stock, and denote it by the Greek letter
β. Thus, if i represents some particular stock, we write βi for its riskiness
relative to the market as a whole. Roughly speaking:
βi = (how risky asset i is) / (how risky the stock market is).
If a stock has a beta of 1, then it is just as risky as the market as a whole;
when the market moves up by 10 percent, this stock will, on the average,
move up by 10 percent. If a stock has a beta of less than 1, then when
the market moves up by 10 percent, the stock will move up by less than
10 percent. The beta of a stock can be estimated by statistical methods
to determine how sensitive the movements of one variable are relative to
another, and there are many investment advisory services that can provide
you with estimates of the beta of a stock.2
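The statistical estimate compares how a stock's return moves with the market's return: beta is the covariance of the two divided by the variance of the market return (the formula in the footnote). The paired return series below is invented for illustration; in practice one would use far more observations.

```python
# Made-up paired observations of stock and market returns.
r_stock = [0.04, -0.02, 0.07, 0.01, -0.03]
r_market = [0.03, -0.01, 0.05, 0.02, -0.04]

n = len(r_market)
mean_s = sum(r_stock) / n
mean_m = sum(r_market) / n

# beta_i = cov(r_i, r_m) / var(r_m)
cov = sum((a - mean_s) * (b - mean_m)
          for a, b in zip(r_stock, r_market)) / n
var_m = sum((b - mean_m) ** 2 for b in r_market) / n

beta = cov / var_m
print(round(beta, 2))  # → 1.12
```

A beta above 1 means this stock tends to amplify market movements: when the market moves up by 10 percent, the stock moves up, on average, by about 11.2 percent.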
13.3 Counterparty Risk
Financial institutions loan money not just to individuals but to each other.
There is always the chance that one party to a loan may fail to repay the
loan, a risk known as counterparty risk.
To see how this works, imagine three banks, A, B, and C. Bank A owes B a
billion dollars, Bank B owes C a billion dollars, and Bank C owes Bank A a
billion dollars. Now suppose that Bank A runs out of money and defaults
on its loan. Bank B is now out a billion dollars and may not be able to
pay C. Bank C, in turn, can’t pay A, pushing A even further in the hole.
This sort of effect is known as financial contagion or systemic risk. It
is a very simplified version of what happened to U.S. financial institutions
in the Fall of 2008.
What’s the solution? One way to deal with this sort of problem is to
have a “lender of last resort,” which is typically a central bank, such as
the U.S. Federal Reserve System. Bank A can go to the Federal Reserve
and request an emergency loan of a billion dollars. It now pays off its loan
from Bank B, which in turn pays Bank C, which in turn pays back Bank
A. Bank A now has sufficient assets to pay back the loan from the central
bank.
This is, of course, an overly simplified example. Initially, there was no net
debt among the three banks. If they had gotten together to compare assets
and liabilities, they would have certainly discovered that fact. However,
when assets and liabilities span thousands of financial institutions, it may
be difficult to determine net positions, which is why lenders of last resort
may be necessary.
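The netting point can be made concrete in a few lines: summing what each bank owes against what it is owed shows that every net position in the example is zero, even though a billion dollars of gross debt sits on each bank's books.

```python
# Gross obligations from the example: (debtor, creditor) -> billions owed.
owes = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "A"): 1.0}

banks = ["A", "B", "C"]
net = {}
for b in banks:
    owed_out = sum(v for (debtor, _), v in owes.items() if debtor == b)
    owed_in = sum(v for (_, creditor), v in owes.items() if creditor == b)
    net[b] = owed_out - owed_in   # positive means a net debtor

print(net)  # → {'A': 0.0, 'B': 0.0, 'C': 0.0}
```

With thousands of institutions and contracts, no single bank can compute this table from its own books, which is the informational problem a lender of last resort steps around.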
13.4 Equilibrium in a Market for Risky Assets
We are now in a position to state the equilibrium condition for a market
with risky assets. Recall that in a market with only certain returns, we
2 The Greek letter β, beta, is pronounced “bait-uh.” For those of you who know some
statistics, the beta of a stock is defined to be βi = cov(r̃i, r̃m)/var(r̃m). That is, βi
is the covariance of the return on the stock with the market return divided by the
variance of the market return.
saw that all assets had to earn the same rate of return. Here we have a
similar principle: all assets, after adjusting for risk, have to earn the same
rate of return.
The catch is about adjusting for risk. How do we do that? The answer
comes from the analysis of optimal choice given earlier. Recall that we
considered the choice of an optimal portfolio that contained a riskless asset
and a risky asset. The risky asset was interpreted as being a mutual fund—
a diversified portfolio including many risky assets. In this section we’ll
suppose that this portfolio consists of all risky assets.
Then we can identify the expected return on this market portfolio of
risky assets with the market expected return, rm, and identify the standard
deviation of the market return with the market risk, σm. The return on
the safe asset is rf , the risk-free return.
We saw in equation (13.1) that the price of risk, p, is given by
p = (rm − rf)/σm.
We said above that the amount of risk in a given asset i relative to the
total risk in the market is denoted by βi. This means that to measure the
total amount of risk in asset i, we have to multiply by the market risk, σm.
Thus the total risk in asset i is given by βiσm.
What is the cost of this risk? Just multiply the total amount of risk,
βiσm, by the price of risk. This gives us the risk adjustment:
risk adjustment = βiσmp
= βiσm
rm − rf
σm
= βi(rm − rf ).
Now we can state the equilibrium condition in markets for risky assets:
in equilibrium all assets should have the same risk-adjusted rate of return.
The logic is just like the logic used in Chapter 12: if one asset had a
higher risk-adjusted rate of return than another, everyone would want to
hold the asset with the higher risk-adjusted rate. Thus in equilibrium the
risk-adjusted rates of return must be equalized.
If there are two assets i and j that have expected returns ri and rj
and betas of βi and βj, we must have the following equation satisfied in
equilibrium:
ri − βi(rm − rf) = rj − βj(rm − rf).
This equation says that in equilibrium the risk-adjusted returns on the two
assets must be the same—where the risk adjustment comes from multiplying
the total risk of the asset by the price of risk.
Another way to express this condition is to note the following. The risk-free
asset, by definition, must have βf = 0. This is because it has zero risk,
Figure 13.4: The market line. The market line depicts the combinations
of expected return and beta for assets held in equilibrium; its slope is
rm − rf.
and β measures the amount of risk in an asset. Thus for any asset i we
must have
ri − βi(rm − rf) = rf − βf(rm − rf) = rf.
Rearranging, this equation says
ri = rf + βi(rm − rf)
or that the expected return on any asset must be the risk-free return plus
the risk adjustment. This latter term reflects the extra return that people
demand in order to bear the risk that the asset embodies. This equation is
the main result of the Capital Asset Pricing Model (CAPM), which
has many uses in the study of financial markets.
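As a quick worked example of the CAPM relation (the numbers are made up for illustration): with a risk-free rate of 3%, an expected market return of 10%, and a stock whose beta is 1.5, the equilibrium expected return is

```python
# CAPM: r_i = r_f + beta_i * (r_m - r_f), with illustrative numbers.
rf = 0.03      # risk-free return
rm = 0.10      # expected market return
beta = 1.5     # riskiness of the stock relative to the market

ri = rf + beta * (rm - rf)
print(round(ri, 3))  # → 0.135: 3% risk-free plus a 10.5% risk adjustment
```

A stock that amplifies market risk (beta above 1) must offer more than the market's own risk premium to be held in equilibrium.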
13.5 How Returns Adjust
In studying asset markets under certainty, we showed how prices of assets
adjust to equalize returns. Let’s look at the same adjustment process here.
According to the model sketched out above, the expected return on any
asset should be the risk-free return plus the risk premium:
ri = rf + βi(rm − rf).
In Figure 13.4 we have illustrated this line in a graph with the different
values of beta plotted along the horizontal axis and different expected returns
on the vertical axis. According to our model, all assets that are held
in equilibrium have to lie along this line. This line is called the market
line.
What if some asset’s expected return and beta didn’t lie on the market
line? What would happen?
The expected return on the asset is the expected change in its price
divided by its current price:
ri = expected value of (p1 − p0)/p0.
This is just like the definition we had before, with the addition of the word
“expected.” We have to include “expected” now since the price of the asset
tomorrow is uncertain.
Suppose that you found an asset whose expected return, adjusted for
risk, was higher than the risk-free rate:
ri − βi(rm − rf) > rf.
Then this asset is a very good deal. It is giving a higher risk-adjusted
return than the risk-free rate.
When people discover that this asset exists, they will want to buy it.
They might want to keep it for themselves, or they might want to buy it
and sell it to others, but since it is offering a better tradeoff between risk
and return than existing assets, there is certainly a market for it.
But as people attempt to buy this asset they will bid up today’s price:
p0 will rise. This means that the expected return ri = (p1 − p0)/p0 will
fall. How far will it fall? Just enough to lower the expected rate of return
back down to the market line.
Thus it is a good deal to buy an asset that lies above the market line.
For when people discover that it has a higher return given its risk than
assets they currently hold, they will bid up the price of that asset.
This is all dependent on the hypothesis that people agree about the
amount of risk in various assets. If they disagree about the expected returns
or the betas of different assets, the model becomes much more complicated.
EXAMPLE: Value at Risk
It is sometimes of interest to determine the risk of a certain set of assets.
For example, suppose that a bank holds a particular portfolio of stocks. It
may want to estimate the probability that the portfolio will fall by more
than a million dollars on a given day. If this probability is 5% then we
say that the portfolio has a “one-day 5% value at risk of $1 million.”
Typically value at risk is computed for 1-day or 2-week periods, using loss
probabilities of 1% or 5%.
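A minimal sketch of how such a number is estimated: simulate (or collect) a history of daily portfolio gains and losses, sort it, and read off the loss that is exceeded 5% of the time. The normal distribution and the dollar scale here are assumptions chosen purely for illustration.

```python
import random

random.seed(42)   # reproducible sketch

# Simulated history of daily portfolio profit/loss in dollars. A normal
# distribution is assumed only for illustration; actual returns typically
# have fatter tails, which makes VaR estimates like this understate risk.
pnl = [random.gauss(0, 600_000) for _ in range(10_000)]

pnl.sort()
cutoff = int(0.05 * len(pnl))   # index of the 5th-percentile outcome
var_5pct = -pnl[cutoff]         # loss exceeded on about 5% of days

print(round(var_5pct))  # roughly $1 million for these parameters
```

The estimate is only as good as the history behind it, which is the point of the discussion that follows.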
The theoretical idea of VaR is attractive. All the challenges lie in figuring
out ways to estimate it. But, as financial analyst Philippe Jorion has put
it, “[T]he greatest benefit of VaR lies in the imposition of a structured
methodology for critically thinking about risk. Institutions that go through
the process of computing their VaR are forced to confront their exposure
to financial risks and to set up a proper risk management function. Thus
the process of getting to VaR may be as important as the number itself.”
The VaR is determined entirely by the probability distribution of the
value of the portfolio, and this depends on the correlation of the assets in
the portfolio. Typically, assets are positively correlated, so they all move
up or down at once. Even worse, the distribution of asset prices tends to
have “fat tails” so that there may be a relatively high probability of an
extreme price movement. Ideally, one would estimate VaR using a long
history of price movements. In practice, this is difficult to do, particularly
for new and exotic assets.
In the Fall of 2008 many financial institutions discovered that their VaR
estimates were severely flawed since asset prices dropped much more than
was anticipated. In part this was due to the fact that statistical estimates
were based on very small samples that were gathered during a stable period
of economic activity. The estimated values at risk understated the true risk
of the assets in question.
EXAMPLE: Ranking Mutual Funds
The Capital Asset Pricing Model can be used to compare different investments
with respect to their risk and their return. One popular kind of
investment is a mutual fund. These are large organizations that accept
money from individual investors and use this money to buy and sell stocks
of companies. The profits made by such investments are then paid out to
the individual investors.
The advantage of a mutual fund is that you have professionals managing
your money. The disadvantage is that they charge you for managing it. These
fees are usually not terribly large, however, and most small investors are
probably well advised to use a mutual fund.
But how do you choose a mutual fund in which to invest? You want one
with a high expected return of course, but you also probably want one with
a minimum amount of risk. The question is, how much risk are you willing
to tolerate to get that high expected return?
One thing that you might do is to look at the historical performance
of various mutual funds and calculate the average yearly return and the
beta—the amount of risk—of each mutual fund you are considering. Since
we haven’t discussed the precise definition of beta, you might find it hard
to calculate. But there are books where you can look up the historical
betas of mutual funds.
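The historical calculation described above can be sketched directly: a fund's beta is the covariance of its returns with the market's returns, divided by the variance of the market's returns. The return histories below are invented for illustration.

```python
# Sketch: estimating a fund's beta from historical returns as
# cov(fund, market) / var(market). All return figures are made up.
from statistics import mean

market = [0.04, -0.02, 0.07, 0.01, -0.03, 0.05]   # hypothetical yearly market returns
fund   = [0.05, -0.01, 0.09, 0.02, -0.05, 0.06]   # hypothetical fund returns

mm, mf = mean(market), mean(fund)
cov = mean((m - mm) * (f - mf) for m, f in zip(market, fund))
var = mean((m - mm) ** 2 for m in market)

beta = cov / var          # the amount of risk relative to the market
avg_return = mf           # average yearly return
print(round(beta, 2), round(avg_return, 4))
```

Here the fund moves a bit more than the market in both directions, so its estimated beta comes out above 1.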
If you plotted the expected returns versus the betas, you would get a
diagram similar to that depicted in Figure 13.5.[3] Note that the mutual
funds with high expected returns will generally have high risk. The high
expected returns are there to compensate people for bearing risk.
One interesting thing you can do with the mutual fund diagram is to
compare investing with professional managers to a very simple strategy
like investing part of your money in an index fund. There are several
indices of stock market activity like the Dow-Jones Industrial Average, or
the Standard and Poor’s Index, and so on. The indices are typically the
average returns on a given day of a certain group of stocks. The Standard
and Poor’s Index, for example, is based on the average performance of 500
large stocks in the United States.
Figure 13.5: Mutual funds. Comparing the returns on mutual fund investment
to the market line. (Axes: expected return versus beta. The figure shows
the market line starting from the risk-free rate rf, together with the
expected return and β of an index fund and the expected return and β of a
typical mutual fund.)
An index fund is a mutual fund that holds the stocks that make up such
an index. This means that you are guaranteed to get the average performance
of the stocks in the index, virtually by definition. Since holding the
average is not a very difficult thing to do—at least compared to trying to
beat the average—index funds typically have low management fees. Since
an index fund holds a very broad base of risky assets, it will have a beta
[3] See Michael Jensen, “The Performance of Mutual Funds in the Period 1945–1964,”
Journal of Finance, 23 (May 1968), 389–416, for a more detailed discussion of how
to examine mutual fund performance using the tools we have sketched out in this
chapter. Mark Grinblatt and Sheridan Titman have examined more recent data
in “Mutual Fund Performance: An Analysis of Quarterly Portfolio Holdings,” The
Journal of Business, 62 (July 1989), 393–416.
that is very close to 1—it will be just as risky as the market as a whole,
since it holds nearly all the stocks in the market.
How does an index fund do as compared to the typical mutual fund?
Remember the comparison has to be made with respect to both risk and
return of the investment. One way to do this is to plot the expected return
and beta of a Standard and Poor’s Index fund, and draw the line connecting
it to the risk-free rate, as in Figure 13.5. You can get any combination of
risk and return on this line that you want just by deciding how much money
you want to invest in the risk-free asset and how much you want to invest
in the index fund.
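This mixing can be written out directly. Investing a fraction x of your money in the index fund and 1 − x at the risk-free rate gives an expected return of x·rindex + (1 − x)·rf and a beta of x·βindex; the rates below are assumed for illustration.

```python
# Sketch: risk-return combinations available by splitting money between
# a risk-free asset and an index fund (illustrative numbers).
rf = 0.05          # risk-free rate (assumed)
r_index = 0.10     # expected return of the index fund (assumed)
beta_index = 1.0   # an index fund's beta is close to 1

for x in (0.0, 0.25, 0.5, 0.75, 1.0):    # fraction invested in the index fund
    r_p = x * r_index + (1 - x) * rf      # portfolio expected return
    beta_p = x * beta_index               # portfolio beta
    print(f"x = {x:.2f}: expected return {r_p:.3f}, beta {beta_p:.2f}")
```

Varying x from 0 to 1 traces out exactly the straight line in Figure 13.5 connecting the risk-free rate to the index fund.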
Now let’s count the number of mutual funds that plot below this line.
These are mutual funds that offer risk and return combinations that are
dominated by those available from index fund/risk-free asset combinations.
When this is done, it turns out that the vast majority of the risk-return
combinations offered by mutual funds are below the line. The number
of funds that plot above the line is no more than could be expected by
chance alone.
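The counting exercise can be sketched as follows, with invented fund data. Assuming the index fund's beta is approximately 1, the line through the risk-free rate and the index fund is r = rf + β(rindex − rf); a fund plotting below that line is dominated by some index fund/risk-free mix with the same beta.

```python
# Sketch: classifying funds as above or below the line through the
# risk-free rate and an index fund (all fund data is hypothetical).
rf = 0.05          # risk-free rate (assumed)
r_index = 0.10     # expected return of the index fund (assumed)

funds = {          # fund name: (expected return, beta) -- made-up numbers
    "Fund A": (0.09, 1.2),
    "Fund B": (0.12, 1.1),
    "Fund C": (0.07, 0.5),
}

for name, (r, beta) in funds.items():
    line = rf + beta * (r_index - rf)    # return on the line at this beta
    verdict = "above" if r > line else "below"
    print(f"{name}: {verdict} the line ({r:.2%} vs {line:.2%})")
```

In this toy example only Fund B plots above the line; the empirical finding discussed in the text is that, for real funds, above-the-line performers are no more common than chance would predict.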
But seen another way, this finding might not be too surprising. The stock
market is an incredibly competitive environment. People are always trying
to find undervalued stocks in order to purchase them. This means that on
average, stocks are usually trading for what they’re really worth. If that is
the case, then betting the averages is a pretty reasonable strategy—since
beating the averages is almost impossible.
Summary
1. We can use the budget set and indifference curve apparatus developed
earlier to examine the choice of how much money to invest in risky and
riskless assets.
2. The marginal rate of substitution between risk and return will have to
equal the slope of the budget line. This slope is known as the price of risk.
3. The amount of risk present in an asset depends to a large extent on its
correlation with other assets. An asset that moves opposite the direction
of other assets helps to reduce the overall risk of your portfolio.
4. The amount of risk in an asset relative to that of the market as a whole
is called the beta of the asset.
5. The fundamental equilibrium condition in asset markets is that risk-adjusted
returns have to be the same.
6. Counterparty risk, which is the risk that the other side of a transaction
will not pay, can also be an important risk factor.
250 RISKY ASSETS (Ch. 13)
REVIEW QUESTIONS
1. If the risk-free rate of return is 6%, and if a risky asset is available with
a return of 9% and a standard deviation of 3%, what is the maximum rate
of return you can achieve if you are willing to accept a standard deviation
of 2%? What percentage of your wealth would have to be invested in the
risky asset?
2. What is the price of risk in the above exercise?
3. If a stock has a β of 1.5, the return on the market is 10%, and the risk-free
rate of return is 5%, what expected rate of return should this stock
offer according to the Capital Asset Pricing Model? If the expected value
of the stock is $100, what price should the stock be selling for today?
