This article reviews the history of behavioral economics and where the field is heading. The author, Justin Fox, is a best-selling financial writer and the author of The Myth of the Rational Market (see [Related Reading]).
[Related Reading]
- Predictably Irrational [Chinese-English bilingual edition]
- Daniel Kahneman. Thinking, Fast and Slow [思考,快与慢]
- Nudge: Improving Decisions About Health, Wealth, and Happiness
- 6 valuable lessons I took from Daniel Kahneman's Thinking, Fast and Slow
- Blink [original English edition; recommended]
- Judgment in Managerial Decision Making
- The Myth of the Rational Market
- Keeping It Simple_Financial Literacy and Rules of Thumb.pdf (230.38 KB)
- Kahneman_Conditions for Intuitive Expertise.pdf (337.48 KB)
- Optimal Versus Naive Diversification_How Inefficient is the 1_N Portfolio Strategy.zip (243.84 KB; the archive contains Optimal Versus Naive Diversification_How Inefficient is the 1_N Portfolio Strategy.pdf)
From “Economic Man” to Behavioral Economics
Justin Fox, Harvard Business Review, May 2015 Issue
When we make decisions, we make mistakes. We all know this from personal experience, of course. But just in case we didn’t, a seemingly unending stream of experimental evidence in recent years has documented the human penchant for error. This line of research—dubbed heuristics and biases, although you may be more familiar with its offshoot, behavioral economics—has become the dominant academic approach to understanding decisions. Its practitioners have had a major influence on business, government, and financial markets. Their books—Predictably Irrational; Thinking, Fast and Slow; and Nudge (see [Related Reading]), to name three of the most important—have suffused popular culture.
So far, so good. This research has been enormously informative and valuable. Our world, and our understanding of decision making, would be much poorer without it.
It is not, however, the only useful way to think about making decisions. Even if you restrict your view to the academic discussion, there are three distinct schools of thought. Although heuristics and biases is currently dominant, for the past half century it has interacted with and sometimes battled with the other two, one of which has a formal name—decision analysis—and the other of which can perhaps best be characterized as demonstrating that we humans aren’t as dumb as we look.
Adherents of the three schools have engaged in fierce debates, and although things have settled down lately, major differences persist. This isn’t like David Lodge’s aphorism about academic politics being so vicious because the stakes are so small. Decision making is important, and decision scholars have had real influence.
This article briefly tells the story of where the different streams arose and how they have interacted, beginning with the explosion of interest in the field during and after World War II (for a longer view, see “A Brief History of Decision Making,” by Leigh Buchanan and Andrew O’Connell, HBR, January 2006). The goal is to make you a more informed consumer of decision advice—which just might make you a better decision maker.
The Rational Revolution
During World War II statisticians and others who knew their way around probabilities (mathematicians, physicists, economists) played an unprecedented and crucial role in the Allied effort. They used analytical means—known as operational research in the UK and operations research in the United States—to improve quality control in manufacturing, route ships more safely across the ocean, figure out how many pieces antiaircraft shells should break into when they exploded, and crack the Germans’ codes.
After the war hopes were high that this logical, statistical approach would transform other fields. One famous product of this ambition was the nuclear doctrine of mutual assured destruction. Another was decision analysis, which in its simplest form amounts to (1) formulating a problem, (2) listing the possible courses of action, and (3) systematically assessing each option. Historical precedents existed—Benjamin Franklin had written in the 1770s of using a “Moral or Prudential Algebra” to compare options and make choices. But by the 1950s there was tremendous interest in developing a standard approach to weighing options in an uncertain future.
The mathematician John von Neumann, who coined the term mutual assured destruction, helped jump-start research into decision making with his notion of “expected utility.” As outlined in the first chapter of his landmark 1944 book Theory of Games and Economic Behavior, written with the economist Oskar Morgenstern, expected utility is what results from combining imagined events with probabilities. Multiply the probability of each possible outcome by the payoff it would bring, sum the results across outcomes, and you get a single number, the expected utility, to guide your decisions.
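To make the arithmetic concrete, here is a minimal sketch with made-up numbers, treating utility as simply equal to the dollar payoff (the full theory allows utility to be any suitable function of the payoff):

```python
# Expected utility as a probability-weighted sum of payoffs.
# Hypothetical gamble: win $100 with probability 0.25, lose $20 otherwise.
outcomes = [(0.25, 100), (0.75, -20)]  # (probability, payoff) pairs
expected_utility = sum(p * payoff for p, payoff in outcomes)
print(expected_utility)  # 0.25 * 100 + 0.75 * (-20) = 10.0
```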
It’s seldom that simple, of course. Von Neumann built his analysis around the game of poker, in which potential gains are easily quantifiable. In lots of life decisions, it’s much harder. And then there are the probabilities: If you’re uncertain, how are you supposed to know what those are?
The winning answer was that there is no one right answer—everybody has to wager a guess—but there is one correct way to revise probabilities as new information comes in. That is what has become known as Bayesian statistics, a revival and advancement of long-dormant ideas (most of them the work not of the English reverend Thomas Bayes but of the French mathematical genius Pierre-Simon Laplace) by a succession of scholars starting in the 1930s. For the purposes of storytelling simplicity I’ll mention just one: Leonard Jimmie Savage, a statistics professor whose 1954 book The Foundations of Statistics laid out the rules for changing one’s probability beliefs in the face of new information.
One early and still-influential product of this way of thinking is the theory of portfolio selection, outlined in 1952 by Savage’s University of Chicago student Harry Markowitz, which advised stock pickers to estimate both the expected return on a stock and the likelihood that their estimate was wrong. Markowitz won a Nobel prize for this in 1990.
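The numbers below are purely illustrative, but they sketch the core of Markowitz’s insight: a portfolio’s expected return is just the weighted average of its holdings’ returns, while its risk depends on how the holdings move together, so imperfectly correlated assets diversify away some risk.

```python
# Mean-variance arithmetic for two hypothetical assets (all numbers assumed).
import numpy as np

mu = np.array([0.08, 0.12])        # expected annual returns
cov = np.array([[0.04, 0.01],      # covariance matrix of returns;
                [0.01, 0.09]])     # diagonal entries are the variances
w = np.array([0.5, 0.5])           # equal portfolio weights

port_return = w @ mu               # 0.10, the weighted-average return
port_vol = np.sqrt(w @ cov @ w)    # ~0.194, below the 0.25 average of the
print(port_return, port_vol)       # individual volatilities (0.20 and 0.30)
```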
The broader field of decision analysis began to come together in 1957, when the mathematician Howard Raiffa arrived at Harvard with a joint appointment in the Business School and the department of statistics. He soon found himself teaching a statistics course for business students with Robert Schlaifer, a classics scholar and fast learner who in the postwar years taught pretty much whatever needed teaching at HBS. The two concluded that the standard statistics fare of regressions and P values wasn’t all that useful to future business leaders, so they adopted a Bayesian approach. Before long what they were teaching was more decision making than statistics. Raiffa’s decision trees, with which students calculated the expected value of the different paths available to them, became a staple at HBS and the other business schools that emulated this approach.
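A minimal sketch of how such a tree is evaluated, using hypothetical payoffs and probabilities: each chance node collapses to its expected value, and the decision maker takes the branch with the highest one.

```python
# Raiffa-style decision tree, reduced by expected value (hypothetical numbers).
def expected_value(branches):
    """Expected value of a chance node given (probability, payoff) branches."""
    return sum(p * payoff for p, payoff in branches)

# Decision: launch a product (60% chance of +$500k, 40% chance of -$200k),
# or hold off (payoff 0).
launch = expected_value([(0.6, 500_000), (0.4, -200_000)])  # 220000.0
hold = 0.0
best = max([("launch", launch), ("hold", hold)], key=lambda kv: kv[1])
print(best)  # ('launch', 220000.0), the higher expected-value path
```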
The actual term “decision analysis,” though, was coined by Ronald Howard, an MIT electrical engineer and an expert in statistical processes who had studied with some of the leading figures in wartime operations research at MIT and crossed paths with Raiffa in Cambridge. While visiting Stanford for the 1964–1965 academic year, Howard was asked to apply the new decision-making theories to a nuclear power plant being contemplated at General Electric’s nuclear headquarters, then located in San Jose. He combined expected utility and Bayesian statistics with computer modeling and engineering techniques into what he dubbed decision analysis (some of his followers call it West Coast decision analysis, to distinguish it from Raiffa’s approach). Howard and Raiffa were honored as the two founding fathers of the field at its 50th-anniversary celebration last year.
Irrationality’s Revenge
Almost as soon as von Neumann and Morgenstern outlined their theory of expected utility, economists began adopting it not just as a model of rational behavior but as a description of how people actually make decisions. “Economic man” was supposed to be a rational creature; since rationality now included assessing probabilities in a consistent way, economic man could be expected to do that, too. For those who found this a bit unrealistic, Savage and the economist Milton Friedman wrote in 1948, the proper analogy was to an expert billiards player who didn’t know the mathematical formulas governing how one ball would carom off another but “made his shots as if he knew the formulas.”
Somewhat amazingly, that’s where economists left things for more than 30 years. It wasn’t that they thought everybody made perfect probability calculations; they simply believed that in free markets, rational behavior would usually prevail.
The question of whether people actually make decisions in the ways outlined by von Neumann and Savage was thus left to the psychologists. Ward Edwards was the pioneer, learning about expected utility and Bayesian methods from his Harvard statistics professor and writing a seminal 1954 article titled “The Theory of Decision Making” for a psychology journal. This interest was not immediately embraced by his colleagues—Edwards was dismissed from his first job, at Johns Hopkins, for focusing too much on decision research. But after a stint at an Air Force personnel research center, he landed at the University of Michigan, a burgeoning center of mathematical psychology. Before long he lured Jimmie Savage to Ann Arbor and began designing experiments to measure how well people’s probability judgments followed Savage’s axioms.
A typical Edwards experiment went like this: Subjects were shown two bags of poker chips—one containing 700 red chips and 300 blue chips, and the other the opposite. Subjects took a few chips out of a random bag and then estimated the likelihood that they had the mostly blue bag or the mostly red one.
Say you got eight red chips and four blue ones. What’s the likelihood that you had the predominantly red bag? Most people gave an answer between 70% and 80%. According to Bayes’ Theorem, the likelihood is actually 97%. Still, the changes in subjects’ probability assessments were “orderly” and in the correct direction, so Edwards concluded in 1968 that people were “conservative information processors”—not perfectly rational according to the rules of decision analysis, but close enough for most purposes.
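The 97% figure falls directly out of Bayes’ theorem. Here is a minimal sketch of the calculation, assuming chips are drawn with replacement and the two bags are equally likely a priori (the binomial coefficient is the same under both hypotheses, so it cancels):

```python
# Bayesian update for Edwards's poker-chip experiment.
p_red_R = 0.7  # chance of drawing red from the mostly-red bag (700/1000)
p_red_B = 0.3  # chance of drawing red from the mostly-blue bag (300/1000)
reds, blues = 8, 4  # the observed sample

# Likelihood of the sample under each hypothesis (with replacement).
like_R = p_red_R**reds * (1 - p_red_R)**blues
like_B = p_red_B**reds * (1 - p_red_B)**blues

# Bayes' theorem with a 50/50 prior: posterior is proportional to likelihood.
posterior_R = like_R / (like_R + like_B)
print(f"P(mostly-red bag | 8 red, 4 blue) = {posterior_R:.3f}")  # 0.967
```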
In 1969 Daniel Kahneman, of the Hebrew University of Jerusalem, invited a colleague who had studied with Edwards at the University of Michigan, Amos Tversky, to address his graduate seminar on the practical applications of psychological research. Tversky told the class about Edwards’s experiments and conclusions. Kahneman, who had not previously focused on decision research, thought Edwards was far too generous in his assessment of people’s information-processing skills, and before long he persuaded Tversky to undertake a joint research project. Starting with a quiz administered to their fellow mathematical psychologists at a conference, the pair conducted experiment after experiment showing that people assessed probabilities and made decisions in ways systematically different from what the decision analysts advised.
“In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction,” they wrote in 1973. “They rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors.”
Heuristics are rules of thumb—decision-making shortcuts. Kahneman and Tversky didn’t think relying on them was always a bad idea, but they focused their work on heuristics that led people astray. Over the years they and their adherents assembled a long list of these decision-making flaws—the availability heuristic, the endowment effect, and so on.
As an academic movement, this was brilliantly successful. Kahneman and Tversky not only attracted a legion of followers in psychology but also inspired a young economist, Richard Thaler, and with help from him and others came to have a bigger impact on the field than any outsider since von Neumann. Kahneman won an economics Nobel in 2002—Tversky had died in 1996 and thus couldn’t share the prize—and the heuristics-and-biases insights relating to money became known as behavioral economics. The search for ways in which humans violate the rules of rationality remains a rich vein of research for scholars in multiple fields.
