Empirical Frontiers in Discrimination Research

Paper Session

Saturday, Jan. 3, 2026 8:00 AM - 10:00 AM (EST)

Marriott Philadelphia Downtown
Hosted By: Econometric Society
  • Chair: Evan Rose, University of Chicago

What Do Names Reveal? Impacts of Blind Evaluations on Composition and Quality

Haruka Uchida, University of Chicago

Abstract

Concealing candidate identities during evaluations, or “blinding”, is often proposed as a tool for combating discrimination. I study how blinding impacts candidate selection and quality, and the forms of discrimination driving these effects. I conduct a natural field experiment at an academic conference, running each submitted paper through both blind and non-blind review. Four years after the experiment, I collect proxy measures of paper quality—citations and publication statuses—for each paper and link them to the experimental data. Blinding significantly reduces scores for traditionally high-scoring groups and consequently alters the composition of applicants accepted to the conference. Despite these compositional changes, blinding does not worsen the conference’s ability to select high-quality papers. I develop a model of evaluator discrimination that rationalizes these effects and decomposes non-blind disparities into two distinct forms of discrimination: accurate statistical discrimination and bias.
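The decomposition described above relies on observing both a blind and a non-blind score for the same paper, together with an ex post quality proxy. The sketch below is only one illustrative reading of that idea on synthetic data, not the paper’s estimator: the column names (score_blind, score_nonblind, group, quality) and the two-step split are assumptions, with the part of the non-blind gap predicted by quality standing in for accurate statistical discrimination and the residual standing in for bias.

    # Illustrative sketch on synthetic data; column names and the split are assumptions,
    # not the estimator used in the paper.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "group": rng.integers(0, 2, n),    # 1 = traditionally high-scoring group (toy label)
        "quality": rng.normal(size=n),     # ex post quality proxy, standardized (toy values)
    })
    # Toy data generation: the non-blind score loads on quality, group, and noise.
    df["score_blind"] = df["quality"] + rng.normal(scale=0.5, size=n)
    df["score_nonblind"] = df["quality"] + 0.4 * df["group"] + rng.normal(scale=0.5, size=n)

    # Raw non-blind score gap between groups.
    gap = (df.loc[df.group == 1, "score_nonblind"].mean()
           - df.loc[df.group == 0, "score_nonblind"].mean())

    # Part of the gap explained by the quality proxy (toy "statistical" component)...
    quality_gap = (df.loc[df.group == 1, "quality"].mean()
                   - df.loc[df.group == 0, "quality"].mean())
    slope = np.polyfit(df["quality"], df["score_nonblind"], 1)[0]
    explained = slope * quality_gap

    # ...and the unexplained remainder (toy stand-in for bias).
    print(f"total gap {gap:.2f} = explained {explained:.2f} + residual {gap - explained:.2f}")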

Discrimination Preferences

Nickolas Gagnon, Aarhus University
Daniele Nosenzo, Aarhus University

Abstract

We reconsider discrimination preferences through moral lenses and conduct experiments to systematically investigate these preferences using quota-based representative UK samples. Moving beyond the aggregate, we evaluate the frequency of individual preferences for and against taste-based and statistical discrimination across three domains—ethnicity, gender, and LGBTQ+ status. Using over 60,000 anonymous decisions affecting how workers are paid, made by more than 3,500 individuals, we document that most individuals prefer to engage in at least one type of discrimination, that there is substantial heterogeneity in preferences, and that the existence of multiple preferences changes our understanding of why individuals do or do not engage in discrimination. Among other things, we examine how preferences are linked to socio-demographic characteristics, politics, support for policies, and the gender wage gap; evaluate how preferences correlate across domains; study underlying redistributive principles and the effects of wage transparency; and complement our findings with a survey about workplaces.

Discriminatory Discretion: Theory and Evidence From Use of Pretrial Algorithms

Diag Davenport, Princeton University

Abstract

This article examines the biased usage of an algorithm, an understudied topic relative to the massive body of research that examines how algorithms may be biased. Using highly detailed administrative data, I study a large sample of high-stakes decision makers—New Jersey police and judicial officers—who are armed with a freely available algorithm. When officers consider requesting a warrant for a defendant’s detention, they have complete discretion over whether to consult an algorithmic risk score that predicts a defendant’s likelihood of failing to appear in court as well as the defendant’s likelihood of being rearrested if released. I find that officers frequently choose not to look at information that is free, simple, and non-binding. Moreover, the choice of whether to view the algorithm is far from random. Controlling for underlying risk, officers are less likely to consult the risk score for black defendants (relative to white defendants) accused of lesser crimes, but the relationship is reversed for severe crimes. Then, once the risk scores are seen, officers are more likely to issue warrants for black defendants, again controlling for risk. The black-white warrant gap is smallest for the most and least risky defendants, and grows for moderate-risk defendants. I organize these empirical facts in a novel taste-based discrimination framework in which agents are averse to certain groups, but also averse to appearing prejudiced. The key prediction of this avoidant animus is that agents will discriminate more in situations that are more ambiguous, in an effort to curate their preferred image. I conclude by discussing policy implications for prejudice reduction, automation, and the discretionary use of decision aids.
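The comparisons described above—gaps in score consultation and in warrant decisions “controlling for underlying risk”—are, in their simplest form, regressions of an outcome on a race indicator while conditioning on the risk measure. The snippet below is a generic, hedged illustration of that kind of specification on made-up data; the variable names (warrant, viewed_score, black, risk, severe) and functional forms are assumptions, not the article’s actual data or specification.

    # Generic illustration on synthetic data; not the article's data or specification.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 2000
    df = pd.DataFrame({
        "black": rng.integers(0, 2, n),     # defendant race indicator (assumed name)
        "risk": rng.uniform(0, 1, n),       # underlying risk measure (assumed name)
        "severe": rng.integers(0, 2, n),    # severe-crime indicator (assumed name)
    })
    # Toy outcomes: whether the officer viewed the score and whether a warrant was issued.
    df["viewed_score"] = (rng.uniform(size=n) < 0.5 + 0.1 * df["severe"]).astype(int)
    df["warrant"] = (rng.uniform(size=n) < 0.2 + 0.4 * df["risk"]).astype(int)

    # Consultation gap by race, allowed to differ with crime severity, conditioning on risk.
    view_fit = smf.ols("viewed_score ~ black * severe + risk", data=df).fit()
    # Warrant gap by race conditioning on risk, with a quadratic in risk so the gap
    # can shrink at the extremes and widen for moderate-risk defendants.
    warrant_fit = smf.ols("warrant ~ black * (risk + I(risk ** 2))", data=df).fit()
    print(view_fit.params, warrant_fit.params, sep="\n")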

Selecting for Diverse Talent: Theory and Evidence

Kadeem Noray, Massachusetts Institute of Technology

Abstract

We hypothesize that the complexity of selecting personnel in a way that jointly optimizes for talent and diversity impedes organizations from meeting their diversity goals. To formalize this, we prove that maximizing cohort diversity is computationally complex (i.e., NP-hard) and incorporate this complexity into a selection model by adding computational costs. To test the model’s predictions, we construct an algorithm to estimate the diversity-talent frontier, which we apply to data from a scholarship and talent investment program. We find that shortlisted cohorts could have been 13% more diverse without reducing talent, or 19.6% more talented without reducing diversity. We also show that the program selected a significantly more diverse and talented cohort after we provided it with a frontier estimate. We conclude by using program data to demonstrate how the frontier estimation procedure can be used to evaluate the efficacy of alternative screening approaches. This reveals that if the program had screened on IQ, it would have significantly reduced diversity and overlooked many of the most talented applicants.
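To make the diversity-talent frontier idea concrete, here is a minimal sketch that enumerates every cohort of a fixed size from a tiny synthetic applicant pool and keeps the Pareto-efficient ones by total talent and a simple diversity measure (number of distinct groups represented). This brute force is exactly what the NP-hardness result rules out at realistic scale, and both the diversity measure and the data are assumptions for illustration, not the authors’ estimation procedure or the program’s data.

    # Brute-force frontier on a toy pool; illustrative only, infeasible at realistic scale.
    from itertools import combinations

    applicants = [  # (talent score, demographic group) -- synthetic example values
        (0.9, "A"), (0.8, "A"), (0.7, "B"), (0.6, "B"), (0.5, "C"), (0.4, "C"),
    ]
    cohort_size = 3

    def talent(cohort):
        return sum(score for score, _ in cohort)

    def diversity(cohort):
        return len({group for _, group in cohort})  # crude proxy: distinct groups represented

    candidates = [tuple(c) for c in combinations(applicants, cohort_size)]

    def dominated(c):
        # c is dominated if another cohort is weakly better on both dimensions
        # and strictly better on at least one.
        return any(
            talent(o) >= talent(c) and diversity(o) >= diversity(c)
            and (talent(o) > talent(c) or diversity(o) > diversity(c))
            for o in candidates
        )

    frontier = [c for c in candidates if not dominated(c)]
    for c in sorted(frontier, key=talent, reverse=True):
        print(f"talent={talent(c):.1f}, diversity={diversity(c)}, cohort={c}")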
JEL Classifications
  • J2 - Demand and Supply of Labor
  • J7 - Labor Discrimination