Surely due diligence would dictate proactive steps to prevent the creation of such groups, backed up by quick action to remove any that get through once they are flagged and reported. I would have thought so. Until I stumbled into these groups and began, with rising disbelief, to find it impossible to get them taken down.

Children are sharing personal images and contact information in a sexualized digital space, and being induced to join private groups or chats where further images and actions will be solicited and exchanged.

Even as debate over Congress’ EARN IT Act calls attention to the use of digital channels to distribute sexually explicit materials, we are failing to grapple with a seismic shift in the ways child sexual abuse materials are generated. Forty-five percent of US children aged 9 to 12 report using Facebook every day. (That fact alone makes a mockery of Facebook’s claim that it works actively to keep children under 13 off the platform.) According to recent research, over a quarter of 9- to 12-year-olds report having experienced sexual solicitation online. One in eight report having been asked to send a nude photo or video; one in ten report having been asked to join a sexually explicit livestream. Smartphones, internet access, and Facebook together now reach into children’s hands and homes and create new spaces for active predation. At scale.

Of course I reported the group I had accidentally uncovered. I used Facebook’s on-platform system, tagging it as containing “nudity or sexual activity” which (next menu) “involves a child.” An automated response came back days later. The group had been reviewed and did not violate any “specific community standards.” If I continued to encounter content “offensive or distasteful to you”—was my taste the problem here?—I should report that specific content, not the group as a whole.

“Buscando novi@ de 9,10,11,12,13 años” (“Looking for a boyfriend or girlfriend aged 9, 10, 11, 12, 13”) had 7,900 members when I reported it. By the time Facebook replied that it did not violate community standards, it had 9,000.

So I tweeted at Facebook and the Facebook newsroom. I DMed people I didn’t know but thought might have access to people inside Facebook. I tagged journalists. And I reported a dozen more groups through the platform’s protocol, some with thousands of users: groups I found not through sexually explicit search terms but simply by typing “11 12 13” into the Groups search bar.

What became ever clearer as I struggled to get action was that technology’s limits were not the problem. The full power of AI-driven algorithms was on display, but it was working to expand, not reduce, child endangerment. Because even as reply after reply hit my inbox denying grounds for action, new child sexualization groups began getting recommended to me as “Groups You May Like.”

Each new group recommended to me had the same mix of cartoon-filled come-ons, emotional grooming, and gamified invites to share sexual materials as the groups I had reported. Some were in Spanish, some in English, others in Tagalog. When I searched for a translation of “hanap jowa” (roughly, Tagalog slang for “looking for a partner”), the name of a series of groups, the search led me to an article from the Philippines reporting on efforts by Reddit users to get child-endangering Facebook groups removed there.