Dave Lee, Columnist

Anthropic Isn’t Exaggerating About an AI Panopticon

Sounding the alarm.

Photographer: Joel Saget/AFP/Getty Images

In the debate about the military’s use of artificial intelligence, prompted by Anthropic’s dispute with the Pentagon that’s now headed to the courts, much has been said about concerns over autonomous killing. Less examined has been the AI company’s second point of contention: how AI might be used to conduct mass surveillance on Americans. A recent study offers a glimpse at the root of the company’s well-justified trepidation.

Challenging the Pentagon’s designation of the company as a “supply chain risk,” Anthropic argued in a filing that “AI tools like Claude enable collection and analysis of information at speeds and scales not previously contemplated, posing unique risks for civil liberties given the potential for errors and misuse.”