People often update their beliefs by too much or too little in response to new information. They tend to overreact when predicting what comes next, and to underreact when judging underlying quality. Both biases stem from the same habit: anchoring on a dataset’s most visible features while underweighting those that are harder to process.
Why do people sometimes overreact to new information, and other times underreact? Ask someone to forecast next quarter’s earnings using a company’s profit history, for example, and they will likely update their estimate too much in response to the most recent data point. Show the same person the same profit history and ask them to judge whether the company is fundamentally profitable, and they will likely update their judgment too little in response to the same data.
Previous attempts to explain these patterns have treated over- and underreaction as distinct phenomena, each requiring its own model. In this paper, the authors propose a unified approach in which both biases in belief updating stem from a single cognitive mechanism. They argue that people approach new information with a default perception of the world shaped by their past experiences. People adjust their expectations only partially, and the less attention they pay to a given feature of the new situation, the less they adjust for it.
For example, someone who assumes from experience that profits tend to be consistent over time will treat the most recent profit figure as highly informative about future profits and therefore weigh it heavily in a forecast. By contrast, when the same person judges the company’s underlying quality, that same assumption of consistency leads them to believe that each new profit observation adds little independent information about overall performance, so they underreact to the most recent data point.
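The contrast above can be sketched numerically. This is an illustrative toy model, not the paper’s formal setup: persistence is represented by an assumed AR(1)-style coefficient `rho`, the forecast leans on the latest observation with weight `rho`, and "underlying quality" is proxied by the sample mean, which weights the latest observation by only `1/T`.

```python
import numpy as np

# Toy illustration (assumed parameters, not the authors' model):
# under a belief that profits are highly persistent, a forecast of the
# next profit puts heavy weight on the latest data point, while an
# estimate of long-run quality barely moves with it.

rho, T = 0.9, 30                       # assumed persistence; months of data
profits = np.random.default_rng(0).normal(5.0, 1.0, T)  # stand-in history

def forecast_next(series, rho):
    # Mean-reverting forecast: weight rho on the newest observation.
    return rho * series[-1] + (1 - rho) * series.mean()

def quality_estimate(series):
    # Long-run quality proxy: the sample mean (weight 1/T on the newest point).
    return series.mean()

# Bump the latest observation by one unit and see how each belief moves.
bumped = profits.copy()
bumped[-1] += 1.0

d_forecast = forecast_next(bumped, rho) - forecast_next(profits, rho)
d_quality = quality_estimate(bumped) - quality_estimate(profits)

print(f"forecast moves by {d_forecast:.3f}")          # rho + (1-rho)/T ≈ 0.903
print(f"quality estimate moves by {d_quality:.3f}")   # 1/T ≈ 0.033
```

The same surprise in the latest month thus shifts the forecast roughly thirty times more than the quality estimate, which is the asymmetry the mechanism predicts.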
The authors put this theory to the test in a series of controlled experiments. In their main study, participants are shown 30 months of profit data from a randomly selected company and asked either to forecast future profits or to judge the company’s underlying quality.
In subsequent experiments, the authors test whether directly shifting participants’ attention from one feature of the data to another (by asking them to write down examples of real-world data that stay consistent over time versus data that fluctuate independently from one period to the next) changes their answers. In each case, the authors measure how far participants’ responses deviate from those of a hypothetical, purely rational reader of the same data. Consistent with the pattern above, participants overreact when forecasting future profits and underreact when judging underlying quality.
These patterns are not confined to the laboratory. Among real-world professional forecasters, overreaction is largest and most erratic when predicting variables whose historical data departs most from the default assumption of high persistence. This is exactly what the proposed mechanism predicts, and it suggests the mechanism also operates when professionals forecast inflation, GDP, and financial returns.