Recent developments at the U.S. Supreme Court have rekindled debates over textualism. Missing from the conversation, however, are the courts that decide the vast majority of statutory interpretation cases in the United States: state courts. This Article uses supervised machine learning to conduct the first empirical study of the statutory interpretation methods used by state supreme courts. In total, the study analyzes over 44,000 opinions from all fifty states issued between 1980 and 2019.
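To make the methodology concrete, the sketch below illustrates the kind of supervised classification pipeline such a study could employ: train a classifier on a hand-coded sample of opinions, then score the full corpus. The file names, column labels, and the TF-IDF-plus-logistic-regression model are illustrative assumptions, not the Article's actual implementation.

# Minimal sketch under the assumptions stated above.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled sample: one row per opinion, with the opinion text
# and a 1/0 label for whether it invokes the plain meaning rule.
labeled = pd.read_csv("labeled_opinions.csv")

# A common baseline for legal-text classification: unigram and bigram
# bag-of-words features feeding a regularized logistic regression.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LogisticRegression(max_iter=1000),
)

# Cross-validated performance on the labeled sample.
scores = cross_val_score(clf, labeled["text"], labeled["uses_plain_meaning"], cv=5)
print(f"Mean cross-validation accuracy: {scores.mean():.3f}")

# Fit on all labeled data, then predict plain-meaning usage across the full
# (hypothetical) corpus of unlabeled state supreme court opinions.
clf.fit(labeled["text"], labeled["uses_plain_meaning"])
corpus = pd.read_csv("all_opinions.csv")
corpus["plain_meaning_predicted"] = clf.predict(corpus["text"])

The same approach extends to the other interpretive tools the Article tracks, with one labeled outcome per tool.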
This Article establishes several key descriptive findings. First, since the 1980s, textualism has risen rapidly in state supreme court opinions. Second, this rise is primarily attributable to increased reliance on the statute’s plain meaning. Third, state supreme courts use tools of statutory interpretation often associated with textualism—plain meaning, dictionaries, and linguistic canons—much more often than legislative history or consequences. And fourth, there is dramatic variation in textualism use across states.
This Article also conducts several exploratory analyses investigating whether ideology and judicial selection are associated with the use of textualist tools. I find that conservative justices invoke textualist reasoning slightly more often. And, while the estimates are noisy, the findings also indicate that this ideological gap is primarily explained by conservatives' heightened tendency to invoke the plain meaning rule. As for judicial selection, cross-state evidence suggests that justices appointed by governors and legislatures use textualism more frequently than those selected via election or merit commission.
These findings add empirical discipline to ongoing debates about ideology and textualism. They also reframe priorities for future research on the plain meaning rule, textualism in general, and judicial selection's relationship to statutory interpretation. More broadly, they illustrate how natural language processing methods can help statutory interpretation scholarship expand its focus to the study of state courts.