David Frattare, Ohio’s Internet Crimes Against Children Task Force commander.
A display in the ICAC Task Force’s office shows the criminals apprehended so far in calendar year 2026.

Investigators who fight child sex crimes are facing an avalanche of abuse reports made exponentially worse by AI.

The deluge of AI-generated content can make it difficult to distinguish when a real child is in danger. Plus, new AI detection tools have created an explosion of dead-end tips.

Underfunded law enforcement can’t keep up.

Photographers: Madeleine Hordinski/Bloomberg; Jeremy M. Lange/Bloomberg (2)

Technology | The Big Take

The Insurmountable Flood

Investigators say child predators are now only limited by their imagination.

William Michael Haslach was a lunch monitor and traffic guard at a suburban Minnesota elementary school for years — a familiar face to young children who struck up conversations during recess and gave him hand-drawn cards on Valentine’s Day.

On the job, Haslach sometimes took photos of the kids. When he went home, he allegedly used artificial intelligence tools to digitally undress them in those images, staging them in sexual acts.

The photos and videos he created were “obscene” and “graphic,” according to a 2025 indictment. They depicted the elementary students naked with adult genitalia, some in eroticized positions on the playground. Federal agents have identified more than 90 victims, according to court filings, which also say Haslach had almost 800 AI-generated images depicting the sexual abuse of children on his devices. Haslach, who is still awaiting trial, faces up to life in prison. An attorney for Haslach did not respond to requests for comment.

Child pornography has always been a major scourge on the internet, but the emergence of free, easy-to-use AI tools has created a novel threat for law enforcement squads that specialize in child sexual abuse cases. These groups, known as Internet Crimes Against Children Task Forces, or ICACs, are struggling to keep up at a time when it’s easier than ever for pedophiles to create illegal content — which is taking on new forms in the AI era.

Today, law enforcement must parse through abusive material ranging from conversations with AI chatbots like OpenAI’s ChatGPT — where offenders fantasize about sexual acts with kids, or seek advice about how to groom them — to photos and videos generated by AI tools like Stability AI’s Stable Diffusion, which can create graphic content from a simple text request. Often, investigators can’t easily tell whether the children in pornographic images are real kids in imminent danger, AI adaptations of regular child photos, or outright fakes. That distinction is important, as it can determine which cases demand urgent attention.

Confiscated evidence at the Internet Crimes Against Children Task Force office in Cleveland, Ohio. Photographer: Madeleine Hordinski/Bloomberg

The tools used to create and share these images can be mainstream; three minors from Tennessee sued Elon Musk’s xAI earlier this year, alleging the company’s Grok image generator was used to digitally remove their clothing or put them in sexually explicit poses. A spokesperson for xAI did not respond to a request for comment. Increasingly, social media sites like Meta Platforms Inc.’s Facebook and Instagram have also become a starting point for this content, with offenders pulling innocuous pictures of children from social feeds or profiles and using AI tools to warp them in unthinkable ways.

This explosion in AI-altered pornographic images and videos is upending the child safety ecosystem, straining budgets and making it harder to find pedophiles, according to interviews and data collected from nearly two dozen of the country’s 61 child safety task forces. The complexity of AI-related reports is adding to investigators’ workloads, which are so unusually high that there simply isn’t enough time to properly investigate most tips. The problem has been compounded by tech companies’ reliance on AI to detect and report offensive images, which investigators said has led to a flood of useless leads.

Meanwhile, US funding has failed to keep pace with the influx, leaving these teams understaffed and under-resourced, including when it comes to the mental health support that officers — many of whom are parents themselves — need to cope.

The rise of AI-created child pornography is making it harder and more time-consuming for law enforcement to prioritize its investigations, said special agent Bobbi Jo Pazdernik, who is in charge of predatory crimes at the Minnesota Bureau of Criminal Apprehension in St. Paul, a short drive from the elementary school where Haslach worked.

Sometimes, “there’s multiple of us standing around a computer with our noses literally up to the computer trying to determine: Is this real or is this AI-generated?” Pazdernik said. Every hour that investigators spend trying to find a child that doesn’t exist means less time to save a real one in harm’s way.

Similar scenes are playing out in police departments, district attorney’s offices and state bureaus of investigation across the US, as more people get their hands on tools that make it easy for virtually anyone to create a photorealistic image. “It is sort of only up to the imagination of the offender as to the severity of those and how violent they might be,” said Steven Grocki, chief of the Justice Department’s Child Exploitation and Obscenity Section.


In Ohio, a man pleaded guilty after using the faces of boys in his community to generate AI content of them having sex with their mothers or grandmothers. In Wisconsin, a man was convicted of using a generative AI text-to-image tool, Stable Diffusion, to produce and fine-tune sadistic images of babies and toddlers. In Florida, a man was charged after allegedly using AI to alter an image of a prepubescent girl to appear to have adult breasts and then uploading that material to OnlyFans. In North Carolina, a pastor was sentenced to a decade in prison after spending years collecting child pornography on the church computer and using AI to make even more of it. And in Alaska, an army soldier is awaiting trial for allegedly using AI to sexualize images of kids he knew — all while stationed on a military base.

These are just a handful of the thousands of complex cases that US law enforcement officials are processing and pursuing.

“How do we get through this volume? We can’t take anymore,” said Pazdernik, the Minnesota-based special agent. “We’re doing this massive volume with the same amount of resources that we’ve had consistently for the last 10 years.”

“We don’t want to miss a child that is being sexually abused,” she said.

The National Center for Missing & Exploited Children (NCMEC) headquarters in Alexandria, Virginia. Photographer: Kent Nishimura/Bloomberg

While child predators have for decades relied on photo editing software to alter images, AI tools make it possible to create or transform any photo or video almost instantly. This troubling content is often called child pornography, but experts and law enforcement also refer to it as “child sexual abuse material,” or CSAM, because it’s evidence of a non-consensual crime.

When US-based technology and social media companies discover this problematic content, they are required to report it to the National Center for Missing & Exploited Children (NCMEC), a Congressionally mandated nonprofit that analyzes the tips for clues about the potential location of the incident, including IP addresses and users’ account information. It then passes on relevant details to state and local law enforcement, where agents like Pazdernik investigate them. When the tips reveal complicated cross-border activity or potential criminal networks, they get referred to federal authorities.

While AI-related reports were still a small percentage of the 21.3 million shared with NCMEC in 2025, Fallon McNulty, executive director of its Exploited Children Division, is seeing a dramatic rise in the use of AI to generate, manipulate or advance child sexual abuse. Last year, the clearinghouse received 1.5 million reports of suspected CSAM with ties to AI tools. A year prior, it received 67,000 reports, and in 2023, just 4,700.

Of those 1.5 million, 7,000 included instances of people successfully generating or possessing AI-generated exploitive material, and another 30,000 detailed attempts to generate such material. NCMEC also received 145,000 reports of people turning to AI to manipulate existing child sexual abuse files, and 3,000 reports involving people asking chatbots or other text-based AI tools to help with grooming, exploitation or role-play.

Listen: Levittown Podcast, a real-life horror story for the AI generation

Across Silicon Valley, companies that are training AI models on vast troves of data scraped from the internet are also wrestling with how to ensure their tools don’t contain or spit out this explicit material. Last year, Amazon.com Inc. submitted 1.1 million reports of suspected child sexual abuse images that it identified in its AI training data — making it an outlier among its peers — though it said it caught and removed that material before its models were actually trained on it.

Stability AI, creator of Stable Diffusion, said a different company operated the model involved in the Wisconsin case, and that newer tools under its control have features aimed at preventing the misuse of its AI. An OpenAI spokesperson said the company “strictly prohibits any use of our models to create or distribute content that exploits or harms children, including attempts to groom or manipulate them.” She added that the company’s systems are designed to refuse these requests and take actions when violations occur.

Even as NCMEC collects this wide range of data, McNulty said it’s difficult to fully capture and quantify the ways that AI is changing the illicit world of child pornography. Companies aren’t required to go out of their way to look for CSAM, or say whether it came from an AI tool. Often, they fail to provide information that distinguishes AI-generated material from images depicting real abuse, she said.

Between 2023 and 2026, the nonprofit, through its own review, identified more than 10 times as many files containing AI-generated CSAM as companies themselves reported, McNulty said.

Fallon McNulty, executive director of the Exploited Children Division at the National Center for Missing & Exploited Children (NCMEC). Photographer: Kent Nishimura/Bloomberg

The process of determining whether an actual child might be in danger could take hours – and even then may be a false positive. Ambiguous cases are still passed further downstream to local law enforcement offices, which may spin up investigations or send agents into the field, all before realizing the apparent crime was fabricated by AI.

“That could really take a lot of time away — days, weeks — of trying to find a child that doesn’t exist,” McNulty said.

Lifelike AI-generated videos are also on the rise. The UK-based Internet Watch Foundation identified a striking increase in realistic AI videos of child sexual abuse. In 2025, the nonprofit discovered 3,443 such videos; a year prior, it found just 13. Most of the videos identified last year were of the most extreme type, depicting things like sexual torture, penetration and sex with animals.

“We have seen just this massive increase – even from within the last few years to where we are now — that I think has us all a little worried about what 2026 and 2027 are ultimately going to bring,” David Frattare, who has served as commander of Ohio’s Task Force for over a decade, said of the surge in tips overall. “I’ve had a lot of sleepless nights.” He said he sometimes sits at work after everyone else is gone, wondering: “Did I make the right call? Am I working the right cases?”

David Frattare, Ohio’s Internet Crimes Against Children Task Force commander, at the ICAC office in Cleveland, Ohio. Photographer: Madeleine Hordinski/Bloomberg

“Everybody feels like we’re probably missing something as technology speeds along,” he added.

Joe O’Barr, a detective at the sheriff’s office in Florida’s Flagler County, said it’s critical that investigators give equal attention to cases involving AI. The new technology enables pedophiles to push further, allowing them to virtually live out disturbed fantasies of children they know in the real world.

“People are skimming images from Facebook and Instagram – things that parents are just freely posting to the Internet – and they’re taking those images and then entering prompts into numerous AI platforms,” said O’Barr, who serves as the county’s sole child-exploitation investigator, handling what he describes as an “insurmountable” flood of reports each year.

For him, the fact that the children in many AI-manipulated images are often personally known to the perpetrator raises the stakes.

Last year, after parsing through a backlog of tips, O’Barr began investigating a case involving a man who took photos of his partner’s 6-year-old daughter and used AI tools to manipulate them. The man removed clothing and added sexualized features before storing and transmitting the images through his OnlyFans account, where he shared them with other users. Because the man was considered a content creator by the platform, the images carried his watermark, making them easily traceable.

OnlyFans identified and removed the material, deactivated the man’s account and reported the case to NCMEC, which then routed the tip to Florida authorities and O’Barr.

The fact that the perpetrator knew the victim in question “made my hair stand up,” O’Barr said. It meant the girl could be in real, imminent danger.

“Ideation is the biggest predictor of hands-on offense,” he said.

Detective Joseph “Joe” O’Barr of the North Florida INTERCEPT Task Force, in Jacksonville, Florida. Photographer: Malcolm Jackson/Bloomberg

In order to find the cases with that level of urgency, investigators rely on companies like OnlyFans to send tips. But because reviewing this content can be traumatic to workers, as well as resource-intensive and costly to do at scale, many companies have tried to take humans out of the process when reporting it. Today, companies like Meta’s Instagram and Google’s YouTube use AI to find, remove or flag all kinds of problematic material, including posts depicting the sexual abuse of kids, before passing it on to federal and state authorities.

But the AI doesn’t always get it right, and has ended up overloading authorities with false positives. Kevin Roughton, commander of North Carolina’s Task Force, said his office used to receive more “actionable” reports. But with AI now replacing many human content moderators, and companies sweeping up even more posts thanks to new reporting laws, “we’re drowning in tips,” he said.

From 2019 to 2026, North Carolina saw an 11-fold increase in tips. Last year alone, the volume nearly doubled to more than 52,000 reports. Much of this spike is thanks to AI and companies’ automated handling of reports of explicit content, he said.

Roughton’s team now spends much of its time slogging through cases that any discerning human reviewer would “immediately tell is not a crime.” One common example he shared: Teenage boys playing a video game while talking smack. When one says to the other, “Suck my d*ck,” it gets reported as a child being solicited for a sex act, Roughton said, even though this kind of crude banter isn’t illegal.

“It becomes triage,” he said. “The more quantity we have, the harder it becomes to deal with each individual case.”

Kevin Roughton, a special agent in charge of North Carolina’s Internet Crimes Against Children Task Force, in Raleigh, North Carolina. Photographer: Jeremy M. Lange/Bloomberg

Other investigators shared similar frustrations. “We get a lot of tips from Meta that are just kind of junk,” Benjamin Zwiebel, a special agent with New Mexico’s Task Force, testified in February during a trial launched by the state against the social media giant. New Mexico Attorney General Raúl Torrez blamed tech companies’ reliance on AI for what he described as a “significant decline” in the quality of tips. “It actually makes our job a lot more difficult and a lot more complicated,” Torrez said.

A spokesperson for Meta, which submits more reports of suspected child abuse material to NCMEC than any other company, said it uses AI to proactively identify new exploitive material, which is then reviewed by a human. Ravi Sinha, Meta’s head of child safety policy, acknowledged during testimony in New Mexico that “we haven’t been perfect” and there is “some noise built into the system.” But Sinha also pointed to broad, federal reporting laws that Meta must follow, which sometimes require the company to flag things that aren’t illegal in every state.

Junk tips have incensed Congress. Earlier this month, Republican Sen. Chuck Grassley, chairman of the Senate Judiciary Committee, opened a probe into Meta, xAI, Amazon and four other tech and social media companies, citing broad “reporting deficiencies.”

Investigations had been stymied by the companies reporting content totally unrelated to child exploitation, while failing to address false positives, the committee said. The companies had also submitted “millions of reports that lack basic information” about perpetrators, such as location data, added Grassley, who sent letters to each of the companies’ CEOs requesting additional information.

The Block Box allows members of the ICAC Task Force to power on and secure data from seized devices, such as phones and hard drives, without having to worry about remote access. The Box blocks all outside signals from reaching the device. Photographer: Jeremy M. Lange/Bloomberg

Grassley acknowledged that Meta and xAI have made some improvements to their reporting process. Meta said in a statement to Bloomberg News that it will continue to make refinements, while xAI didn’t respond to requests for comment.

Grassley specifically called out Amazon for the reports about suspected CSAM found in its AI training data, none of which were “actionable.” Amazon previously told Bloomberg, which first reported on the issue in January, that it uses an “over-inclusive threshold” for scanning AI training data.

An Amazon spokesperson declined to comment on Grassley’s probe, but told Bloomberg in April that it has enhanced its detection capabilities that scan AI training data, reducing its false positive rate. That rate had been high, according to the company, topping 99% for the 1.1 million reports it submitted last year. The spokesperson added that Amazon was also making changes to provide more actionable information in its reports.

NCMEC’s McNulty said that, so far, reports from Amazon’s AI services “continue to contain no actionable information.”

Across the US, teams that handle this grueling work to protect children say they are drastically under-resourced.

These teams are funded primarily by the Department of Justice, through an agency known as the Office of Justice Programs, which in a typical year spends a little over $30 million on grants for all 61 task forces, plus several million more for training programs and tools that support them. That’s a tiny sliver of the broader DOJ budget, and roughly equivalent to what the FBI spends in a single day.

Despite the avalanche of reports of child sexual abuse material made over the last five years, funding for task forces has remained largely stagnant.

In the fiscal year that ended last September, the total Task Force budget was $40.7 million, a 2% increase from the year prior, but a 4% decrease from 2023, according to federal data. Even with the slight uptick, 13 task forces saw their budgets shrink. The DOJ determines the awards using a formula that considers population, caseload and other factors.

For the fiscal year 2026, already underway, Congress has appropriated $105 million for Missing and Exploited Children programs, a shared pool that includes the ICACs, but also money for NCMEC and other programs like AMBER Alert. That money is still being allocated, the agency said, meaning it’s unknown how much the ICAC program will get.

But earmarking money for the Task Force Program is different from distributing it. Historically, task forces and training programs associated with them receive their annual funds around the fall, but several told Bloomberg that payments have arrived later and later in recent years. While funds don’t need to be spent during the fiscal year they are assigned, delays make it challenging to map out priorities.

Abuse Tips Overwhelm Law Enforcement

In just five years, tips about child sexual abuse material have quadrupled in some places

In early April, three task forces were still waiting on funding they were expected to receive about six months earlier. Roughton’s North Carolina Task Force, along with more than half a dozen others, received its funding just weeks earlier, in March.

“We’ve been backed into a corner,” said Roughton, who turned to state resources and other grants to afford things like key forensic tools before federal funding arrived.

Funding for task force training and mental health programs – which typically amounts to several million dollars per year – had also not been distributed as of mid-April. The delay led to layoffs and reduced course offerings at Fox Valley Technical College, where investigators are trained in undercover tactics, digital forensics and how to engage traumatized victims, according to people familiar with the matter, who asked not to be identified as they were not authorized to speak publicly. A spokesperson for the Office of Justice Programs said that funding applications for training and mental health services are still under review. Fox Valley did not respond to a request for comment.

While some task forces receive additional state funding, others aren’t so lucky. The resource gap can have a tangible impact on their ability to hire, retain, train, deploy and equip investigators with the needed technology.

In Delaware, most of the personnel on its task force are now paid for by the state, as are travel costs, equipment and training. Similarly, in Oregon, the team added 14 full-time employees in 2025, almost entirely funded with state grants. The South Dakota Task Force said federal funding has not kept pace with the cost of digital forensic technology necessary to conduct today’s complex investigations; its federal grant was $70,000 less last year than the year before. And in Virginia’s Bedford County, staffing shortages exacerbated by funding constraints mean investigators or examiners must also take on other assignments unrelated to child exploitation.

Flat Funding

Money awarded to the Internet Crimes Against Children Task Force Program

In a series of emails, an OJP spokesperson said that it aims to provide funding to grant recipients as quickly as practicable, and that keeping children safe from sexual abuse is a top priority.

The department’s funding of child exploitation work is ultimately set by Congress, meaning that any substantial increase in resources would require lawmakers to up the ante. For years, they haven’t.

Rep. Debbie Wasserman Schultz, a Florida Democrat who has long pushed legislation aimed at fighting online child exploitation, said Congress knows that investigators’ “major obstacle” has always been resources. She helped pass a law last December that reauthorized support for the Task Force Program, but remains skeptical that funding will increase any time soon, blaming Republican opposition to budget increases.

Bloomberg reached out to more than a half-dozen Republican lawmakers involved in child safety legislation, but some did not respond, while others touted bipartisan support for the program and the resources it needs to pursue predators. Iowa’s Grassley said he is also “working to ensure that the brave men and women tasked with combating online child exploitation have the resources they need to carry out their important work.”

Tom Kerle, who helped establish the Massachusetts Task Force, has been at the front lines of the child safety fight for nearly 30 years. After retiring from the force in 2011, Kerle became a founding board member at Raven, a nonprofit organization that lobbies on behalf of the task forces in Washington. “It’s a national kind of embarrassment that we’re in the situation that we are,” he said of the budgets.

The lack of funding is particularly troubling at a time when AI has emboldened child predators, he added. “Without AI, an offender might have been able to reach two or three kids,” Kerle said. “With AI, an offender might be able to reach 40 kids, or 50 kids. It’s basically fishing with dynamite.”

Ohio’s Internet Crimes Against Children training room in Cleveland, Ohio. Photographer: Madeleine Hordinski/Bloomberg