Debates about statutory interpretation typically proceed on the assumption that statutes have linguistic meanings that we can identify in the same way that we identify the meaning of utterances in ordinary conversation. But that premise is false. We identify the meaning of conversational utterances largely based on inferences about what the speaker intended to communicate. With legislatures, as is now widely recognized, there is no unitary speaker with the sort of communicative intentions that speakers in ordinary conversation possess. One might expect this recognition to trigger abandonment of the model of conversational interpretation as a framework for interpreting statutes. Instead, interpreters invent legislative intentions—purportedly “objective” ones for textualists—or purposes. With those inventions in place, judges and theorists then carry on talking about what statutes mean, or would mean to a reasonable person, as if there were a linguistic fact of the matter even in intelligibly disputed cases. But this is a deep and systematic error. Mainstream thinking about statutory interpretation needs a major reorientation. Contrary to widespread impressions, debates about statutory interpretation are not about what statutes mean as a matter of linguistic fact, but about which grounds for the attribution of an invented meaning would best promote judicial and governmental legitimacy. Having recognized that the model of conversational interpretation cannot ground claims about statutes’ meanings in disputed cases, we also need to rethink the role of legislatures and courts in a political democracy. There are limits to what legislatures can reasonably be expected to accomplish. Courts need to play the role of helpmates to the legislature, not just faithful agents.
In the interpretation of statutes, linguistic intuitions should matter, but primarily for normative reasons, involving justice and fairness in the coercive application of law, and not because they reveal the legislature’s linguistically clear dictates.
This Article draws on Black American intellectual history to offer an approach to fundamental questions of constitutional theory from the standpoint of the politically excluded. Democratic constitutional theory is vexed by a series of well-known challenges rooted in the inability to justify law without democracy (“the countermajoritarian difficulty”) and the inability to justify any particular composition of the popular demos without law (“the problem of constituent power”). Under conditions of genuine egalitarian political inclusion, a constitutional conception of popular sovereignty derived primarily from the civic republican constitutional patriotism associated with Jürgen Habermas and others can resolve these challenges by providing a conceptual basis for understanding the constitutional demos as a corporate body extending across time and capable of ongoing political legitimation. Unfortunately, the constitutional conception cannot justify states, such as the United States, characterized by the persistent exclusion of some legitimate members of the demos from political institutions. The resolution to this problem can be found in an important tradition in Black American constitutional thought, beginning with Frederick Douglass, which represents American constitutional institutions as conditionally worthy of attachment in virtue of their latent normative potential. The correct conception of constitutional legitimacy for the United States combines Douglass’s insights, and those of his intellectual heirs, with those of theorists working in the tradition that Habermas represents.
The ability of social media companies to precisely target advertisements to individual users based on those users’ characteristics is changing how job opportunities are advertised. Companies like Facebook use machine learning to place their ads, and machine learning systems present risks of discrimination that current legal doctrines are not designed to address. This Note will explain why it is difficult to ensure such systems do not learn discriminatory functions and why it is hard to discern what they have learned as long as they appear to be performing well on their assigned task. This Note then shows how litigation might adapt to these new systems to provide a remedy to individual plaintiffs but explains why deterrence is ill suited in this context to preventing this discrimination from occurring in the first place. Preventing machine learning systems from learning to discriminate requires training those systems on broad, representative datasets that include protected characteristics—data that the corporations training these systems may not have. This Note proposes a proactive solution, in which a third party would safeguard a rich, large, nationally representative dataset of real people’s information. This third party could allow corporations like Facebook to train their machine learning systems on a representative dataset, while keeping the private data themselves out of those corporations’ hands.
The commercial speech doctrine has long weathered accusations that it is simply an attempt to reinvigorate the laissez-faire protections provided by Lochner v. New York. The modern interpretation of Lochner is generally condemnatory, arguing that its “right to contract” is a symbol of the Supreme Court’s unprincipled decision to impose its own economic preferences upon the nation. Even though Lochnerism itself has been dead for nearly 100 years, some scholars believe that the First Amendment’s commercial speech doctrine is on its way to replicating the defenses provided by the right to contract. The argument goes that because speech pervades essentially all human conduct, including market transactions, the constitutional protection of free speech could serve to invalidate any attempts at regulating the commercial sphere, just like the right to contract did. But these scholars miss a crucial point: unlike the right to contract, the First Amendment’s ambit is necessarily restricted to pure speech. Accordingly, the commercial speech doctrine simply lacks the tools to serve the same role as the right to contract. In truth, Lochner is only a boogeyman when it comes to commercial speech; although there are certainly important discussions to be had about commercial speech, they must be centered on First Amendment principles, not the ominous ghost of Lochnerism. This Note seeks to draw that line once and for all.
On June 19, 2019, the SEC released a report examining, in part, the adequacy of the accredited investor definition contained within Regulation D of the Securities Act of 1933 and soliciting public comment on potential changes to that definition. This Note argues that the current accredited investor definition, which determines who may invest in a private offering, does not adequately protect retail investors. Implemented in 1982 with fixed wealth requirements for qualification, the accredited investor definition has never been significantly revised, despite four decades of inflation that dramatically increased the percentage of households that meet the qualifications of an “accredited investor.” Market developments have also increased the risk of investing in private offerings. These risks heighten the need for the accredited investor definition to accurately identify a group of investors who can evaluate the merits of a private offering and sustain any potential losses. To ensure that the accredited investor definition performs that job adequately, the SEC must revise the definition to meet the needs of the modern investing landscape. Specifically, this Note proposes that the accredited investor definition should require higher income and net worth thresholds that increase with the rate of inflation and that exclude retirement accounts from their calculation.
This Essay is based on a lecture delivered on October 10, 2018 at Northwestern Pritzker School of Law as the third annual Abraham Lincoln Lecture on Constitutional Law.