Accelerating AI

McGinnis, John O. | April 19, 2010

Recently, artificial intelligence (AI) has become a subject of major media interest. Last May, for instance, the New York Times devoted an article to the prospect of the moment at which AI equals and then surpasses human intelligence, speculating on the dangers that such an event and the resulting “strong AI” might bring. Then in July, the Times discussed computer-driven warfare. Various experts expressed concern about the growing power of computers, particularly as they become the basis for new weapons, such as the Predator drones the United States now uses to kill terrorists.

These articles encapsulate the twin fears about AI that may impel regulation in this area: the existential dread of machines that become uncontrollable by humans, and the political anxiety about machines’ destructive power on a revolutionized battlefield. Both fears are overblown. The existential fear rests on the mistaken notion that strong artificial intelligence will necessarily reflect human malevolence; the military fear rests on the mistaken notion that computer-driven weaponry will necessarily worsen, rather than temper, human malevolence. In any event, given how central increases in computer power are to military technology, it would be impossible to regulate AI research without empowering the worst nations on earth.