
Research.

Publications.

Big Data and Discrimination. (2019), with JL Spiess - The University of Chicago Law Review.

  • Google Scholar

The ability to distinguish between people in setting the price of credit is often constrained by legal rules that aim to prevent discrimination. These legal requirements have developed focusing on human decision-making contexts, and so their effectiveness is challenged as pricing increasingly relies on intelligent algorithms that extract information from big data. In this Essay, we bring together existing legal requirements with the structure of machine-learning decision-making in order to identify tensions between old law and new methods and lay the ground for legal solutions. We argue that, while automated pricing rules provide increased transparency, their complexity also limits the application of existing law. Using a simulation exercise based on real-world mortgage data to illustrate our arguments, we note that restricting the characteristics that the algorithm is allowed to use can have a limited effect on disparity and can in fact increase pricing gaps. Furthermore, we argue that there are limits to interpreting the pricing rules set by machine learning that hinder the application of existing discrimination laws. We end by discussing a framework for testing discrimination that evaluates algorithmic pricing rules in a controlled environment. Unlike the human decision-making context, this framework allows for ex ante testing of price rules, facilitating comparisons between lenders.
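
To make the intuition concrete, here is a minimal, hypothetical sketch (not the Essay's actual simulation or data): it trains one pricing model that may use a protected characteristic and one that excludes it, then compares the resulting group-level price gap. Because the remaining inputs are correlated with group membership, excluding the characteristic need not shrink the gap. The data-generating process, variable names, and model choice are all assumptions made for illustration.

```python
# Hypothetical sketch: does excluding a protected characteristic from the
# pricing model shrink the group-level price gap? Data and model are simulated
# for illustration only; they are not the Essay's mortgage data or method.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                        # protected characteristic (0/1)
income = rng.normal(60 - 10 * group, 15, n)          # proxy correlated with group
credit_score = rng.normal(700 - 30 * group, 40, n)   # another correlated proxy
ltv = np.clip(rng.normal(0.80 + 0.05 * group, 0.10, n), 0.2, 1.0)

# "True" price depends only on the non-group fundamentals.
price = (3.0 + 2.0 * ltv - 0.004 * (credit_score - 700)
         - 0.01 * (income - 60) + rng.normal(0, 0.2, n))

X_full = np.column_stack([income, credit_score, ltv, group])
X_blind = X_full[:, :3]                              # protected characteristic excluded

for label, X in [("characteristic included", X_full),
                 ("characteristic excluded", X_blind)]:
    pred = GradientBoostingRegressor(random_state=0).fit(X, price).predict(X)
    gap = pred[group == 1].mean() - pred[group == 0].mean()
    print(f"{label}: mean predicted price gap = {gap:.3f}")
```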

The Input Fallacy. (2022) - Minnesota Law Review, forthcoming.

  • Google Scholar

Algorithmic credit pricing threatens to discriminate against protected groups. Traditionally, fair lending law has addressed such threats by scrutinizing inputs. But input scrutiny has become a fallacy in the world of algorithms. Using a rich dataset of mortgages, I simulate algorithmic credit pricing and demonstrate that input scrutiny fails to address discrimination concerns and threatens to create an algorithmic myth of colorblindness. The ubiquity of correlations in big data combined with the flexibility and complexity of machine learning means that one cannot rule out the consideration of protected characteristics, such as race, even when one formally excludes them. Moreover, using inputs that include protected characteristics can in fact reduce disparate outcomes. Nevertheless, the leading approaches to discrimination law in the algorithmic age continue to commit the input fallacy. These approaches suggest that we exclude protected characteristics and their proxies and limit algorithms to pre-approved inputs. Using my simulation exercise, I refute these approaches with new analysis. I demonstrate that they fail on their own terms, are unfeasible, and overlook the benefits of accurate prediction. These failures are particularly harmful to marginalized groups and individuals because they threaten to perpetuate their historical exclusion from credit and, thus, from a central avenue to greater prosperity and equality. I argue that fair lending law must shift to outcome-focused analysis. When it is no longer possible to scrutinize inputs, outcome analysis provides the only way to evaluate whether a pricing method leads to impermissible disparities. This is true not only under the legal doctrine of disparate impact, which has always cared about outcomes, but also under the doctrine of disparate treatment, which has historically avoided examining disparate outcomes. Now, disparate treatment too can no longer rely on input scrutiny and must be considered through the lens of outcomes. I propose a new framework that regulatory agencies, such as the Consumer Financial Protection Bureau, can adopt to measure disparities and fight discrimination. This proposal charts an empirical course for antidiscrimination law in fair lending and also carries promise for other algorithmic contexts, such as criminal justice and employment.
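
As a rough illustration of what outcome-focused analysis could look like in practice, the hedged sketch below computes a raw group-level price gap and a gap adjusted for observable risk controls. It is an assumed toy metric, not the Article's proposed regulatory framework, and all names and data are hypothetical.

```python
# Hedged sketch of outcome-focused disparity measurement (not the Article's
# proposed framework): given a lender's quoted prices, report the raw group gap
# and a gap adjusted for observable risk via least squares.
import numpy as np

def outcome_disparity(prices, group, risk_controls):
    """Raw and risk-adjusted mean price gap between group==1 and group==0."""
    raw_gap = prices[group == 1].mean() - prices[group == 0].mean()
    # Adjusted gap: coefficient on group in a regression of price on
    # [constant, risk_controls, group].
    X = np.column_stack([np.ones_like(prices), risk_controls, group])
    coefs, *_ = np.linalg.lstsq(X, prices, rcond=None)
    return raw_gap, coefs[-1]

# Example with simulated quotes (illustrative numbers only).
rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)
risk = rng.normal(0.5 + 0.1 * group, 0.2, n)         # observable risk score
prices = 4.0 + 1.5 * risk + 0.1 * group + rng.normal(0, 0.1, n)

raw, adjusted = outcome_disparity(prices, group, risk.reshape(-1, 1))
print(f"raw gap: {raw:.3f}, risk-adjusted gap: {adjusted:.3f}")
```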

  • Google Scholar

Financial disclosures no longer enjoy the immunity from criticism they once had. While disclosures remain the hallmark of numerous areas of regulation, there is increasing skepticism as to whether disclosures are understood by consumers and do in fact improve consumer welfare. Debates on the virtues of disclosures overlook the process by which regulators continue to mandate disclosures. This article fills this gap by analyzing the testing of proposed disclosures, which is an increasingly popular way for regulators to establish the benefits of disclosure. If the testing methodology is misguided then the premise on which disclosures are adopted is flawed, leaving consumers unprotected. This article focuses on two recent major testing efforts: the European Union’s testing of fund disclosure and the Consumer Financial Protection Bureau’s testing of the integrated mortgage disclosures, which will go into effect on August 1, 2015. Despite the substantial resources invested in these quantitative studies, regulation based on study results is unlikely to benefit consumers since the testing lacks both external and internal validity. The generalizability of the testing is called into question since the isolated conditions of testing overlook the reality of financial transactions. Moreover, the testing method mistakenly assumes a direct link between comprehension and improved decisions, and so erroneously uses comprehension tests. As disclosure becomes more central to people’s daily lives, from medical decision aids to nutritional labels, greater attention should be given to the testing policies that justify their implementation. This article proposes several ways to improve the content and design of quantitative studies as we enter the era of testing.

Explanation < Justification: GDPR and the Perils of Privacy. (2019), with J Simons - Journal of Law & Innovation.

  • Google Scholar

The European Union’s General Data Protection Regulation (GDPR) is the most comprehensive legislation yet enacted to govern algorithmic decision-making. Its reception has been dominated by a debate about whether it contains an individual right to an explanation of algorithmic decision-making. We argue that this debate is misguided in both the concepts it invokes and in its broader vision of accountability in modern democracies. It is justification that should guide approaches to governing algorithmic decision-making, not simply explanation. The form of justification – who is justifying what to whom – should determine the appropriate form of explanation. This suggests a sharper focus on systemic accountability, rather than technical explanations of models to isolated, rights-bearing individuals. We argue that the debate about the governance of algorithmic decision-making is hampered by its excessive focus on privacy. Moving beyond the privacy frame allows us to focus on institutions rather than individuals and on decision-making systems rather than the inner workings of algorithms. Future regulatory provisions should develop mechanisms within modern democracies to secure systemic accountability over time in the governance of algorithmic decision-making systems.

  • Google Scholar

The pricing of credit is changing. Traditionally, lenders priced consumer credit by using a small set of borrower and loan characteristics, sometimes with the assistance of loan officers. Today, lenders increasingly use big data and advanced prediction technologies, such as machine-learning, to set the terms of credit. These modern underwriting practices could increase prices for protected groups, potentially giving rise to violations of anti-discrimination laws. What is not new, however, is the concern that personalized credit pricing relies on characteristics or inputs that reflect preexisting discrimination or disparities. Fair lending law has traditionally addressed this concern through input scrutiny, either by limiting the consideration of protected characteristics or by attempting to isolate inputs that cause disparities.

Fiduciary Law in Financial Regulation. (2018), with HE Jackson.

  • Google Scholar

This chapter explores the application of fiduciary duties to regulated financial firms and financial services. At first blush, the need for such a chapter might strike some as surprising in that fiduciary duties and systems of financial regulation can be conceptualized as governing distinctive and non-overlapping spheres: Fiduciary duties police private activity through open-ended, judicially defined standards imposed on an ex post basis, whereas financial regulations set largely mandatory, ex ante obligations for regulated entities under supervisory systems established in legislation and implemented through expert administrative agencies. Yet, as we document in this chapter, fiduciary duties often do overlap with systems of financial regulation. In many regulatory contexts, fiduciary duties arise as a complement to, or sometimes substitute for, other mechanisms of financial regulation. Moreover, the interactions between fiduciary duties and systems of financial regulation generate a host of recurring and challenging interpretative issues. Our motivation in writing this chapter is to explore the reasons why fiduciary duties arise so frequently in the field of financial regulation, and then to provide a structured account of how the principles of fiduciary duties interact with the more rule-based legal requirements that characterize financial regulation. As grist for this undertaking we focus on a set of roughly two dozen judicial decisions and administrative rulings to illustrate our claims.

Working Papers.
  • Google Scholar

When machine-learning algorithms are used in high-stakes decisions, we want to ensure that their deployment leads to fair and equitable outcomes. This concern has motivated a fast-growing literature that focuses on diagnosing and addressing disparities in machine predictions. However, many machine predictions are deployed to assist in decisions where a human decision-maker retains the ultimate decision authority. In this article, we therefore consider in a formal model and in a lab experiment how properties of machine predictions affect the resulting human decisions. In our formal model of statistical decision-making, we show that the inclusion of a biased human decision-maker can revert common relationships between the structure of the algorithm and the qualities of resulting decisions. Specifically, we document that excluding information about protected groups from the prediction may fail to reduce, and may even increase, ultimate disparities. In the lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions. While our concrete theoretical results rely on specific assumptions about the data, algorithm, and decision-maker, and the experiment focuses on a particular prediction task, our findings show more broadly that any study of critical properties of complex decision systems, such as the fairness of machine-assisted human decisions, should go beyond focusing on the underlying algorithmic predictions in isolation.
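
The sketch below is an assumed toy version of this pipeline, not the paper's formal model or experimental design: a machine prediction, with or without access to the group variable, is passed to a human who shades it down for one group before deciding, so the disparity in final decisions can move differently from the disparity in predictions. Every parameter is an illustrative assumption.

```python
# Toy, assumed setup: the machine sees a noisy signal that systematically
# understates quality for group 1; a group-aware prediction can correct that
# understatement, a group-blind one cannot. A biased human then shades the
# prediction down for group 1 before approving. Compare approval-rate gaps.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
group = rng.integers(0, 2, n)
quality = rng.normal(0, 1, n)                            # same latent quality in both groups
signal = quality - 0.3 * group + rng.normal(0, 0.5, n)   # signal understates group 1

def machine_prediction(use_group: bool) -> np.ndarray:
    pred = signal.copy()
    if use_group:
        pred += 0.3 * group                              # corrects the known understatement
    return pred

def human_decision(pred: np.ndarray, bias: float = 0.4) -> np.ndarray:
    return (pred - bias * group) > 0                     # biased human shades group 1 down

for use_group in (False, True):
    approve = human_decision(machine_prediction(use_group))
    gap = approve[group == 0].mean() - approve[group == 1].mean()
    print(f"group-aware prediction = {use_group}: approval-rate gap = {gap:.3f}")
```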

Incomplete Contracts and Future Data Usage. with J Frankenreiter and D Svirsky.

  • Google Scholar

Most major jurisdictions require websites to provide customers with privacy policies. For consumers, a privacy policy's most important function is to provide them with a description of the online service provider's current privacy practices. We argue that these policies also serve a second, often-overlooked function: they allocate residual data usage rights to online services or consumers, including the power to decide whether a service can modify its privacy practices and use consumer data in novel ways. We further argue that a central feature of the E.U.'s General Data Protection Regulation (GDPR), one of the most comprehensive and far-reaching privacy regulatory regimes, is to restrict privacy policies from allocating broad rights for future data usage to service providers. We offer a theoretical explanation for this type of regulatory intervention by adapting standard models of incomplete contracts to privacy policies. We then use the model to consider how U.S. firms reacted to the GDPR. We show that U.S. websites with E.U. exposure were more likely to change their U.S. privacy policies to drop any mention of a policy modification procedure. Among websites that do not have E.U. exposure, we see the opposite trend and discuss how to understand these changes in the context of an incomplete contracts model.
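
For a sense of how the binary outcome discussed here might be coded, the sketch below flags whether a policy's text mentions a modification procedure using a few keyword patterns. The patterns, function name, and example text are purely illustrative assumptions; this is not the paper's actual classification procedure.

```python
# Hedged sketch (not the paper's coding procedure): flag whether a privacy
# policy's text mentions a policy-modification procedure. Keyword patterns
# below are assumptions chosen for illustration.
import re

MODIFICATION_PATTERNS = [
    r"changes? to (this|our) (privacy )?policy",
    r"(modify|update) (this|our) (privacy )?policy",
    r"reserve the right to (change|modify|update)",
]

def mentions_modification(policy_text: str) -> bool:
    text = policy_text.lower()
    return any(re.search(pattern, text) for pattern in MODIFICATION_PATTERNS)

example = "We reserve the right to change this Privacy Policy at any time."
print(mentions_modification(example))  # True
```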

Work in Progress.

Sex and Startups. with J Frankenreiter and E Talley.

Consumption Responses to Mortgage Payments. with J Beshears and K Vira.
