Optimal Learning From Multiple Information Sources (joint with Xiaosheng Mu and Vasilis Syrgkanis)

Last updated: March 8, 2017


Abstract. Decision-makers often learn by acquiring information from distinct sources that may provide complementary information. We consider a decision-maker who sequentially samples from a finite set of Gaussian signals and wants to predict a persistent multi-dimensional state at an unknown final period. Which signal should he choose to observe in each period? Related problems of optimal experimentation and dynamic learning tend to have solutions that can only be approximated or implicitly characterized. In contrast, we find that in our problem, the dynamically optimal path of signal acquisitions generically: (1) eventually coincides at every period with the myopic path of signal acquisitions, and (2) eventually achieves "total optimality," so that at every large period the decision-maker would not want to revise his previous signal acquisitions, even if given the opportunity. In special classes of environments that we describe, these properties hold not only eventually but from period 1. Finally, we characterize the asymptotic frequency with which each signal is chosen, and how this depends on the primitives of the informational environment.
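To fix ideas, here is a minimal illustrative sketch (not the paper's code) of the myopic rule referenced in the abstract: a decision-maker with a Gaussian prior over a multi-dimensional state chooses, each period, the linear-Gaussian signal that most reduces the trace of the posterior covariance. The dimensions, signal vectors, and noise variances below are invented for illustration only.

```python
# Illustrative sketch of myopic signal acquisition in a linear-Gaussian model.
# State theta ~ N(0, Sigma0); signal i reveals a_i . theta + Gaussian noise.
# The myopic rule picks, each period, the signal that minimizes the trace of
# the posterior covariance it would leave. All numbers here are made up.
import numpy as np

def posterior_cov(Sigma, a, noise_var):
    """Posterior covariance after observing a'theta + N(0, noise_var)."""
    a = a.reshape(-1, 1)
    # Rank-one Bayesian update: Sigma - Sigma a a' Sigma / (a' Sigma a + v).
    gain = Sigma @ a / (a.T @ Sigma @ a + noise_var)
    return Sigma - gain @ (a.T @ Sigma)

def myopic_path(Sigma0, signals, noise_vars, periods):
    """Return the sequence of myopically chosen signal indices."""
    Sigma, path = Sigma0.copy(), []
    for _ in range(periods):
        # Evaluate each signal by the posterior uncertainty it would leave.
        traces = [np.trace(posterior_cov(Sigma, a, v))
                  for a, v in zip(signals, noise_vars)]
        i = int(np.argmin(traces))
        path.append(i)
        Sigma = posterior_cov(Sigma, signals[i], noise_vars[i])
    return path

if __name__ == "__main__":
    Sigma0 = np.eye(2)                    # prior covariance of a 2-d state
    signals = [np.array([1.0, 0.0]),      # observes the first coordinate
               np.array([0.0, 1.0]),      # observes the second coordinate
               np.array([1.0, 1.0])]      # observes the sum (complementary info)
    noise_vars = [1.0, 1.0, 0.5]
    print(myopic_path(Sigma0, signals, noise_vars, periods=10))
```

The paper's results concern when the dynamically optimal path eventually coincides with a myopic path of this kind; the sketch only illustrates the sampling problem, not those results.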
