Biases in Belief Updating Within and Across Domains

People often update their beliefs by too much or too little in response to new information. They are more likely to overreact when predicting what comes next, and more likely to underreact when judging an underlying quality. Both biases stem from the same habit: anchoring on a dataset’s most visible features while underweighting those that are harder to process.


Why do people sometimes overreact to new information, and other times underreact? Ask someone to forecast next quarter’s earnings using a company’s profit history, for example, and they will likely update their estimate too much in response to the most recent data point. Show the same person the same profit history and ask them to judge whether the company is fundamentally profitable or not, and they will likely update their judgment too little in response to the same data.

Previous attempts to explain these patterns have treated over- and underreaction as distinct phenomena, each requiring its own model. In this paper, the authors propose a unified approach in which biases in belief updating stem from a single cognitive mechanism. They argue that people approach new information with a default perception of the world that is shaped by their past experiences. People adjust their expectations only partially, and the less attention they pay to a given feature of the new situation, the less they adjust for it.

For example, someone who assumes from experience that profits tend to be consistent over time will treat the most recent profit figure as highly informative about future profits and therefore weigh it heavily in a forecast. By contrast, when the same person judges the company’s underlying quality, that same assumption leads them to believe that each new profit observation adds little independent information, and they therefore underreact to the most recent data point.
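The asymmetry above can be sketched with a simple persistent-process model. Assuming, for illustration only, that profits follow an AR(1) process (this is a stand-in, not necessarily the paper’s exact setup), a rational one-step forecast weights the latest reading by the persistence parameter, while the amount of independent information each new reading adds about the long-run mean shrinks as persistence rises:

```python
def forecast_weight(rho):
    # In an AR(1) model x_t = mu + rho * (x_{t-1} - mu) + noise,
    # the rational one-step-ahead forecast is mu + rho * (x_t - mu),
    # so the latest reading gets weight rho.
    return rho

def effective_sample_size(n, rho):
    # Large-n approximation: the variance of the sample mean of an
    # AR(1) series is inflated by (1 + rho) / (1 - rho), so n
    # correlated observations are worth roughly this many independent
    # ones when estimating the long-run mean mu ("underlying quality").
    return n * (1 - rho) / (1 + rho)

# Persistent profits: the latest reading matters a lot for a forecast...
print(forecast_weight(0.9))                        # 0.9
# ...but 30 months of persistent data pin down quality poorly.
print(round(effective_sample_size(30, 0.9), 1))    # 1.6

# Noisy, near-independent profits: the reverse.
print(forecast_weight(0.1))                        # 0.1
print(round(effective_sample_size(30, 0.1), 1))    # 24.5
```

Under this sketch, the same belief (high persistence) rationally implies a large reaction in the forecasting task and a small one in the quality-judgment task, which is the asymmetry the authors describe.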

The authors put this theory to the test using a series of controlled experiments. In their main study, participants are shown 30 months of profit data from a randomly selected company and asked either to forecast future profits or to judge the company’s underlying quality.

In subsequent experiments, the authors test whether directly shifting participants’ attention from one feature of the data to another (by asking participants to write down examples of real-world data that stay consistent over time versus data that fluctuate independently from one period to the next) changes their answers. In each case, the authors measure how far participants’ responses deviate from those of a hypothetical purely rational reader of the same data. They find the following:

  • Participants who are shown the same data tend to overreact when they are asked to forecast future profits and underreact when asked to judge the company’s underlying profitability. This gap is large, statistically significant, and mirrors patterns documented among professional forecasters.
  • This gap appears driven by a default assumption of high persistence. In trials where participants are presented with data that are more persistent, i.e., where each month’s profits closely track the last, participants asked to forecast underreact more to the most recent reading, while those asked to judge underlying quality overreact more. In trials where the data fluctuate more, this pattern reverses.
  • The authors design pairs of scenarios that combine a given level of data persistence with a corresponding size of the latest reading, such that a rational reader would update their beliefs by the same amount in both. Participants’ answers track the size of the latest reading but largely ignore differences in persistence. This confirms that persistence generates larger errors because people pay little attention to it, producing a pattern of overreacting to weak signals and underreacting to strong signals.
  • When participants are nudged to pay more attention to persistence in the follow-up experiment, errors related to persistence fall. Errors related to the size of the latest reading rise by a corresponding amount, however, suggesting that shifting attention to one feature moves mistakes to another.
  • In trials where the data are highly persistent and participants are shown a large new reading, the two factors interact in a way that reverses the expected bias. In a highly persistent series, a large reading carries little independent information, so a rational reader would discount it heavily. Participants instead largely ignore the persistence and take the large reading at face value, overreacting to it. Insensitivity to one feature does not produce an isolated error; it generates excess sensitivity to another.
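The matched-scenario logic above can be illustrated under the same illustrative AR(1) assumption (a sketch of the design’s logic, not the paper’s actual stimuli): a rational forecaster revises the next-period forecast by persistence times the surprise in the latest reading, so a pair of scenarios with equal products calls for equal revisions, while a reader who tracks only the surprise responds very differently within the pair.

```python
def rational_forecast_revision(rho, surprise):
    # Under an AR(1) view of profits, a surprise of size `surprise`
    # in the latest reading shifts the rational one-step-ahead
    # forecast by rho * surprise.
    return rho * surprise

# Two hypothetical scenarios engineered so the rational revisions match:
a = rational_forecast_revision(rho=0.9, surprise=2.0)  # persistent data, modest reading
b = rational_forecast_revision(rho=0.3, surprise=6.0)  # noisy data, large reading
print(round(a, 6), round(b, 6))  # 1.8 1.8

# A reader who ignores persistence and responds to the surprise alone
# would revise three times as much in the second scenario as in the
# first, even though a rational reader would revise identically --
# mirroring the finding that answers track the latest reading but
# largely ignore persistence.
```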

These patterns are not confined to the laboratory. Among real-world professional forecasters, overreaction is largest and most erratic for variables whose historical data depart most from the default assumption of high persistence. This is exactly what the mechanism predicts, and it suggests the same process is at work when professionals forecast inflation, GDP, and financial returns.