Using only a series of images of a person’s face and publicly available software, it is now possible to insert the person’s likeness into a video and show them saying or doing almost anything. This “deepfake” technology has enabled an explosion of political satire and, especially, fake pornography. Several states have already passed laws regulating deepfakes, and more are poised to do so. This Article presents three novel empirical studies that assess public attitudes toward this new technology. In our main study, a representative sample of the U.S. adult population perceived nonconsensually created pornographic deepfake videos as extremely harmful and overwhelmingly wanted to impose criminal sanctions on those who create them. Labeling pornographic deepfakes as fictional did not mitigate their perceived wrongfulness. In contrast, participants considered nonpornographic deepfakes substantially less wrongful when the videos were labeled as fictional or did not depict inherently defamatory conduct, such as illegal drug use. A follow-up study showed that people sought to impose both civil and criminal liability for deepfake creation. A second follow-up showed that people judge the creation and dissemination of deepfake pornography to be as harmful as the dissemination of traditional nonconsensual pornography, otherwise known as revenge pornography, and to be slightly more morally blameworthy.
Based on the types of harms perceived in these studies, we argue that prohibitions on pornographic deepfake videos should receive the same treatment under the First Amendment as prohibitions on traditional nonconsensual pornography, rather than being relegated to the less protective law of defamation. Nonpornographic deepfakes, in contrast, can likely be regulated only through defamation law. Still, there may be reason to allow enhanced penalties or other regulations based on the greater harm people perceive from a defamatory deepfake than from a defamatory written story.