
Meta Faces Trial Over Child Safety and Alleged Prioritization of Profit

Meta is on trial in New Mexico over allegations it prioritized profit over child safety on Instagram and Facebook. Evidence includes internal documents, delayed abuse reports, and concerns about addiction and mental health impacts on teens.


Meta on Trial Over Child Safety Practices

Meta is on trial in New Mexico over its child safety measures. Prosecutors allege that the company prioritized profit and user engagement over protecting children, particularly on Instagram and Facebook.

The trial has reached its fifth week; the state attorney general rested the case on March 5. Proceedings are expected to continue for another week as Meta presents its defense before the jury begins deliberations.

Key evidence includes internal company documents obtained during discovery, such as emails between Meta executives highlighting urgent concerns about exploitation on Instagram and Facebook.

“Data shows that Instagram and Facebook had become the leading two-sided marketplace for human trafficking,”

stated a 2019 email addressed to Adam Mosseri, head of Instagram, from a member of Meta’s product team, which was read in court.

Prosecutors presented evidence indicating delays and shortcomings in Meta’s ability to detect and report child harms on its platforms, including the distribution of child sexual abuse material (CSAM) and child trafficking.

Both the New Mexico trial and a separate case in Los Angeles scrutinize Facebook and Instagram features for their alleged negative impact on children’s mental health. Plaintiffs claim these networks are intentionally addictive and amplify content promoting self-harm, suicidal thoughts, and body dysmorphia.

Meta’s defense strongly rejects these allegations, describing them as “sensationalist, irrelevant and distracting arguments.” Company executives, including Mosseri and Meta CEO Mark Zuckerberg, have testified defending the company’s safety record and emphasizing the challenges of preventing all crimes on platforms with billions of users worldwide.

“We do our best to keep people safe, but we cannot guarantee it,”

said Mosseri, who appeared in Santa Fe as a defense witness after his video deposition was played earlier in the trial.
“Safety is incredibly important to us.”

The lawsuit follows a 2023 Guardian investigation revealing Meta’s difficulties in preventing its platforms from being used for child trafficking. The investigation is cited multiple times in the lawsuit filings.

The cases raise a fundamental question for Meta: can it effectively protect its next generation of users? To sustain and grow its social networks, Meta needs to attract younger users. The company argues its platforms offer safer environments than alternatives, while the New Mexico attorney general and plaintiffs in the Los Angeles trial contend that Meta fails to adequately protect teens and designs addictive products targeting young people. Child safety advocates testified that Messenger’s encryption and a large backlog in Meta’s child abuse reports have hindered investigations into exploitation.

Documents show Meta’s strong focus on young users. One internal note reads:

“Mark has decided that the top priority for the company in 2017 is teens,”
referring to Zuckerberg. The CEO denied targeting users under 13, the minimum age for creating an account, but acknowledged that enforcing age restrictions is challenging.

Meta faces global regulatory scrutiny amid these trials. Other countries are moving to follow Australia’s ban on social media use by those under 16, and platforms such as TikTok and Snapchat have committed to age gates. If the New Mexico and Los Angeles trials end in findings of liability for child sex trafficking and intentionally addictive design, they could prompt lawmakers to restrict Meta’s access to younger users.

Operation MetaPhile

A key element of New Mexico’s case is “Operation MetaPhile”, an investigation conducted by the attorney general’s office. Undercover agents posing as girls under 13 were contacted by three suspects who allegedly solicited sex after using Facebook and Instagram design features to locate minors. Two of the suspects arranged to meet an undercover agent at a motel in Gallup, New Mexico.

The agents did not initiate sexual conversations, according to court filings. One undercover account received hundreds of friend requests daily and amassed 7,000 followers within a month. Despite this activity, Meta did not disable the account and instead provided guidance on monetizing and growing the account, investigators said.

The state also alleged that Instagram’s algorithms help pedophiles connect with one another and find sellers of child sexual abuse material, a characterization Mosseri called “unfair”.

“I think what we see with these particularly bad actors is they really actively try to work around our systems by disguising things,”

Mosseri said.
“They try to find each other on our platform.”

Former Meta executives testified against the company. Brian Boland, former vice-president of partnerships who worked at Meta for 11 years before leaving in 2020, stated:

“I absolutely did not believe that safety was a priority, which is the primary reason that I left.”

Encrypted Messenger Blocks Access to Evidence

The court heard that Meta’s decision to implement end-to-end encryption on Facebook Messenger has obstructed access to critical evidence of crimes such as grooming and child abuse imagery exchange.

In December 2023, Meta introduced encryption for Messenger, converting messages into unreadable code viewable only by sender and recipient. This content is not stored on Meta’s servers and is inaccessible to law enforcement.

The National Center for Missing & Exploited Children (NCMEC), partially funded by Meta, described the encryption as a “devastating blow to child protection.” Representatives met with Meta multiple times to discourage the implementation, the court was told.

US social media companies are federally mandated to report CSAM, child sexual abuse trafficking, and coercion or enticement of minors to NCMEC, which forwards reports to law enforcement.

Encryption removes visibility into content and interactions, but abuse continues, testified Fallon McNulty, executive director of NCMEC’s exploited children division.

“Visibility into content or interactions that are occurring is taken away. That doesn’t mean that the abuse stops occurring.”

McNulty stated that Meta submitted 6.9 million fewer reports to NCMEC in 2024 following Messenger’s encryption compared to the previous year.

Meta defends encryption, arguing users can report inappropriate interactions and that encryption protects privacy against surveillance.

“We use sophisticated technology to proactively identify child exploitation content on our platform – and between July and September 2025 [we removed] over 10 million pieces of child exploitation content from Facebook and Instagram, over 98% of which we found proactively before it was reported,”

a Meta spokesperson said.
“We also provide in-app reporting tools, with dedicated options to let us know if content involves a child.”

McNulty emphasized that relying on children to self-report abuse is inadequate compared to scanning messages, especially since most children do not report threats or abuse.

Mosseri acknowledged that user reporting on Instagram is less effective than technological scanning, even as Meta points to reporting tools in defending Messenger’s encryption. Plans to encrypt Instagram direct messaging were abandoned because encryption would hinder child safety efforts.

“We find that using technology seems to be much more effective than user reports to find bad content.”

Reporting Backlogs and Errors Impacted Child Safety

The jury learned that between May 2017 and July 2021, Meta had a backlog of 247,000 cyber tip reports about potential harms and abuses, some delayed by weeks or months before being sent to NCMEC. Such delays may have cost chances to prevent crimes or identify perpetrators.

Thousands of other reports were misclassified as low priority without explanation, which NCMEC considered a “serious failing that affected child safety,” McNulty testified.

Law enforcement expressed frustration with insufficient detail in Meta’s reports, limiting their ability to investigate. Some officers described the reporting system as containing “junk” tips, while other platforms provided more actionable information.

In 2022, 31 of 61 Internet Crimes Against Children (ICAC) taskforces opted out of receiving some lower-priority reports from Meta due to poor quality.

McNulty noted that these quality issues had persisted for years and said she had expected them to be resolved sooner.

“Our image-matching system finds copies of known child exploitation at a scale that would be impossible to do manually, and we work to detect new child exploitation content through technology, reports from our community and investigations by our specialist child safety teams,”

a Meta spokesperson said.
“We also continue to support NCMEC and law enforcement in prioritizing reports, including by helping build NCMEC’s case management tool and labelling cyber tips so they know which are urgent.”

The Guardian previously reported that AI-generated tips lacking human review often cannot be opened by law enforcement because of Fourth Amendment protections, slowing investigations.

At trial, it was revealed that in 2022, over 14 million of Meta’s reports to NCMEC lacked human review, preventing law enforcement access without a warrant. Meta had been informed of this issue multiple times, McNulty testified.

Teens, Addiction, Filters, and Mental Health Concerns

In a video deposition played in court, Zuckerberg acknowledged that some users, including children, find Meta’s platforms addictive, a subject also addressed in the Los Angeles trial.

Internal Instagram documents revealed the company’s awareness of tween users despite its 13-and-over policy. A 2018 presentation stated:

“If we wanna win big with teens, we must bring them in as tweens.”

Another 2015 document estimated that about 30% of US children aged 10-12 used Instagram. Additional documents detailed goals to increase time spent on Instagram by 10-year-olds and compared login frequency of 11-year-olds with older users.

At the New Mexico trial, Ian Russell, whose daughter Molly died by suicide in 2017 after exposure to harmful Instagram content, testified about the platform’s potential mental health effects.

“That inescapable stream of harmful content, the cumulative effect that content would have had on a growing brain, a young person, a 14-year-old, turned Molly from that bright, hopeful young person into someone who unbelievably thought she was a burden and a problem and that the best thing for her to do would be to end her life.”

Evidence included internal communications about Instagram’s augmented-reality filters that alter appearance, such as enlarging lips or eyes. A former Meta employee warned Zuckerberg that these features increased risks of self-image and mental health issues among teens.

“As a parent of two teenage girls, one of whom has been hospitalized twice for body dysmorphia, I can tell you, the pressure on them and their peers coming through social media is intense with respect to body image,”

the former employee wrote.

Jurors heard that these filters were temporarily banned in October 2019 and reinstated by Zuckerberg in mid-2020.

“It has always felt paternalistic to me that we’ve limited people’s ability to present themselves in these ways, especially when there’s no data I’ve seen that suggests doing so is helpful or not doing so is harmful, and that there’s clearly demand for this type of expression,”

Zuckerberg said regarding his decision.

A company spokesperson added:

“Meta bans those [filters] that directly promote cosmetic surgery, changes in skin color or extreme weight loss.”

Other documents alleged that Zuckerberg approved letting minors interact with AI chatbot companions despite warnings from safety staff about potential sexual conversations. Prosecutors also claimed Meta profited from advertising by companies such as Walmart and Match Group placed alongside content sexualizing children.

Meta stated:

“Instagram has built-in protections [for teens], which limit who can contact them, and the type of content they see, defaulting them into private accounts and the strictest message settings, so they can only be messaged by people they follow or are already connected to. Teens under 18 are automatically placed into teen accounts, and teens under 16 will need a parent’s permission to make any of these settings less strict.”

Arturo Béjar, former Meta engineering director and whistleblower, testified that Instagram’s recommendation system effectively connects predators with minors.

“That’s when I first realized the executive team knows about the harm that’s happening on the product, and they’re choosing not to act on it,”

Béjar said.
“I don’t think we can trust Facebook and Meta with our kids.”

This article was sourced from the Guardian.
