Over 70 Million Warnings Issued to Users Attempting to Access Child Abuse Material Online

Over 70 million warnings have been sent to individuals attempting to access child sexual abuse material online, directing them to support services. Project Intercept operates globally with tech firms to disrupt harmful behaviour and encourage help-seeking.


Warnings Sent to Users Accessing Child Sexual Abuse Material

More than 70 million warning messages have been sent over the past two years to individuals attempting to access child sexual abuse material (CSAM) online, according to the Lucy Faithfull Foundation.

These messages are part of Project Intercept, a collaboration between the child protection charity and technology companies including Google, TikTok, and Meta.

Instead of simply blocking access to illegal content, the warnings inform users about the illegality of viewing CSAM and direct them to support services designed to encourage behavioural change.

The foundation reported that nearly 700,000 people have accessed its Stop It Now resources, which provide confidential advice and self-help tools. However, some experts consider this engagement rate to be low given the volume of warnings issued.

"Given that 70 million warning messages have been sent, the fact that only 700,000 people click through to get support seems low. This is disappointing, given that the scale of the problem of child sexual abuse imagery online is growing fast," said Professor Sonia Livingstone, director of the Digital Futures for Children centre at London School of Economics.
"On the other hand, since four in five of those people who seek support do engage with the resources provided, that suggests the system is working for those who are really motivated to get help."
An example of the wording and tone used in a warning message on TikTok (image: Lucy Faithfull Foundation)

Disrupt and Divert

Project Intercept operates in 131 countries and covers various online environments, including end-to-end encrypted services—where only sender and recipient can view messages—and AI chatbot platforms.

The foundation did not disclose the number of individual users responsible for the searches but noted that engagement with support materials remains high, with an average of 28,000 users redirected monthly in 2024 and 2025.

More than 80% of these users continued to engage with the support resources, although the organisation has not published data on long-term behavioural outcomes.

Deborah Denis, chief executive of the Lucy Faithfull Foundation, said: "By placing warnings at the moment harmful behaviour is happening, we can disrupt it and divert people towards help to change," adding that the approach has potential for further expansion.


The NSPCC, a children's charity, commented that such interventions could be important in disrupting harmful behaviour but should be part of a broader strategy aimed at preventing the creation and sharing of illegal material.

The charity also urged technology companies to intensify efforts to combat the spread of such content.

"Too Easy to Share"

Emma Hardy, Communications Director at the Internet Watch Foundation, emphasised the need for "innovative solutions," particularly for parts of the internet that use end-to-end encryption.

"As it is, it is simply too easy to share and distribute child sexual abuse imagery online, and for children to become trapped in cycles of exploitation," she said.
"Safety by design needs to be a guiding principle and new products and platforms must be built to make sure there is nowhere for this sort of behaviour to hide."

Ofcom, the UK communications regulator, indicated that warning messages are part of its expectations under the Online Safety Act.

Almudena Lara, Child Protection Policy Director at Ofcom, noted that the data reflects both progress and the significant challenges that remain.

Technology firms involved in Project Intercept stated that the approach complements existing content moderation systems.

Griffin Hunt, a product manager at Google Search, reported that changes implemented in early 2025 have resulted in "greater engagement with therapeutic help services" and a reduction in follow-up searches for illegal material.

Mega, a company providing encrypted cloud storage, also participates in the project and highlighted that it challenges the notion that encrypted services cannot intervene early to address harmful behaviour.


This article was sourced from the BBC.
