Overabundant Information and Learning Traps, joint with Xiaosheng Mu

Last updated: Oct. 23, 2017

Abstract. We study a model of sequential learning, where agents choose what kind of information to acquire from a large, fixed set of Gaussian signals with arbitrary correlation. In each period, a short-lived agent acquires a signal from this set of sources to maximize an individual objective. All signal realizations are public. We study the community's asymptotic speed of learning, and characterize the set of sources observed in the long run. A simple property of the correlation structure guarantees that the community learns as fast as possible, and moreover that a "best" set of sources is eventually observed. When the property fails, the community may get stuck in an inefficient set of sources and learn (arbitrarily) slowly. There is a specific, diverse set of possible final outcomes, which we characterize.

DOWNLOAD PDF

 

Dynamic Information Acquisition from Multiple Sources, joint with Xiaosheng Mu and Vasilis Syrgkanis

Last updated: Aug. 14, 2017

Abstract. Consider a decision-maker who dynamically acquires Gaussian signals that are related by a completely flexible correlation structure. Such a setting describes information acquisition from news sources with correlated biases, as well as aggregation of complementary information from specialized sources. We study the optimal sequence of information acquisitions. Generically, myopic signal acquisitions turn out to be optimal at sufficiently late periods, and in classes of informational environments that we describe, they are optimal from period 1. These results hold independently of the decision problem and its (endogenous or exogenous) timing. We apply these results to characterize dynamic information acquisition in games.

DOWNLOAD PDF  /  ONLINE APPENDIX

 

Games of Incomplete Information Played by Statisticians

Last updated: Aug. 10, 2016

Abstract. The common prior assumption is a convenient restriction on beliefs in games of incomplete information, but conflicts with evidence that players publicly disagree in many economic environments. This paper proposes a foundation for heterogeneous beliefs in games, in which disagreement arises not from different information, but from different interpretations of common information. A key assumption imposes that while players may interpret data in different ways, they have common certainty in the predictions of a class of interpretations. The main results characterize which rationalizable actions and Nash equilibria can be predicted when agents observe a finite quantity of data, and how much data is needed to predict different solutions. This quantity, which I refer to as the robustness of the solution, is shown to depend crucially on the degree of strictness of the solution and the "complexity" of inference from data.

DOWNLOAD PDF

 

Inference of Preference Heterogeneity from Choice Data (R&R at Journal of Economic Theory)

Last updated: Oct. 4, 2016

Abstract. Suppose that an analyst observes inconsistent choices from a decision maker. Can the analyst determine whether this inconsistency arises from choice error (imperfect maximization of a single preference) or from preference heterogeneity (deliberate maximization of multiple preferences)? I model choice data as generated from context-dependent preferences, where contexts vary across observations, and the decision maker errs with small probability in each observation. I show that (a) simultaneously minimizing the number of inferred preferences and the number of unexplained observations can exactly recover the correct number of preferences with high probability; (b) simultaneously minimizing the richness of the set of preferences and the number of unexplained observations can exactly recover the choice implications of the decision maker's true preferences with high probability. These results illustrate that selection of simple models, appropriately defined, is a useful approach for recovery of stable features of preference.

DOWNLOAD PDF

 

The Theory is Predictive, But Is It Complete? An Application to Human Perception of Randomness, joint with Jon Kleinberg and Sendhil Mullainathan

Last updated: July 15, 2017

Abstract. When testing a theory, we should ask not just whether its predictions match what we see in the data, but also about its "completeness": how much of the predictable variation in the data does the theory capture? Defining completeness is conceptually challenging, but we show how methods based on machine learning can provide tractable measures of completeness. We also identify a model domain -- the human perception and generation of randomness -- where measures of completeness can be feasibly analyzed; from these measures we discover that there is significant structure in the problem that existing theories have yet to capture.

DOWNLOAD PDF

 

Predicting and Understanding Initial Play, joint with Drew Fudenberg

Last updated: Nov. 22, 2017

Abstract. We take a machine learning approach to the problem of predicting initial play in strategic-form games. We predict play in data from previous laboratory experiments, as well as in a new data set of 200 games with randomly distributed payoffs played on Mechanical Turk. We consider two approaches, with the goals of uncovering new regularities in play and improving the predictions of existing theories. First, we use machine learning algorithms to train prediction rules based on a large set of game features. Examination of the games where our algorithm predicts play correctly but the existing models do not leads us to introduce a risk aversion parameter, which we find significantly improves predictive accuracy. Second, we augment existing empirical models by using play in a set of training games to predict how the models' parameters vary across new games. This modified approach generates better out-of-sample predictions, and provides insight into how and why the parameters vary. These methodologies are not special to the problem of predicting play in games, and may be useful in other contexts.

DOWNLOAD PDF