AI Is Moving Fast Enough to Break Things. Sound Familiar?

Everyone is freaking out about artificial intelligence, but the risk of disinformation, for one, is far more worrying than apocalyptic scenarios.


In January 2015, the newly formed—and grandly named—Future of Life Institute (FLI) invited experts in artificial intelligence to spend a long weekend in San Juan, Puerto Rico. The result was a group photo, a written set of research priorities for the field and an open letter about how to tailor AI research for maximum human benefit. The tone of these documents was predominantly upbeat. Among the potential challenges FLI anticipated was a scenario in which autonomous vehicles reduced the 40,000 annual US automobile fatalities by half, generating not “20,000 thank-you notes, but 20,000 lawsuits.” The letter acknowledged it was hard to predict what AI’s exact impact on human civilization would be—it laid out some potentially disruptive consequences—but also noted that “the eradication of disease and poverty are not unfathomable.”

The open letter FLI published on March 29 was, well, different. The group warned that AI labs were engaging in “an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” It called for an immediate pause on the most advanced AI research and attracted thousands of signatures—including those of many prominent figures in the field—as well as a round of mainstream press coverage.