Algorithmic Fairness and Social Welfare
Abstract
Algorithms are increasingly used to guide high-stakes decisions about individuals (for example, which patients to treat, or what kind of loan to offer a prospective borrower). Consequently, substantial interest has developed around defining and measuring the "fairness" of these algorithms. These definitions of fair algorithms share two features: First, they prioritize the role of a pre-defined group identity (e.g., race or gender) by focusing on how the algorithm's impact differs systematically across groups. Second, they are statistical in nature; for example, comparing false positive rates, or assessing whether group identity is independent of the decision. These notions are facially distinct from a social welfare approach to fairness, in particular one based on "veil of ignorance" thought experiments in which individuals choose how to structure society prior to the realization of their social identity. In this paper, we seek to understand and organize the relationship between these different approaches to fairness. We show that these approaches are fundamentally different and conclude by proposing a framework that nests both approaches.
BibTeX
@article{liang2024fairness,
  author  = {Annie Liang and Jay Lu},
  title   = {Algorithmic Fairness and Social Welfare},
  journal = {AEA Papers and Proceedings},
  volume  = {114},
  pages   = {628--632},
  year    = {2024}
}