Meta Fails to Address Online Child Safety Risks

The internet was not developed with children in mind.

There is no better example of this than Meta, which intended its platforms to be places where people could connect with friends and family. Instead, they have become a dangerous playground for children.

Meta is the world’s largest social media company, used by billions of children and teenagers. Its platforms—including Facebook, Instagram, Messenger, and WhatsApp—have been linked to numerous child safety problems, including a mental health crisis for young people, data privacy violations, age verification failures, cyberbullying, self-harm, and child sexual exploitation, grooming, and trafficking.

Recently unredacted court documents show Meta knew its platforms have harmful effects but disregarded the issue and chose “aggressive tactics” to addict kids to social media in order to promote growth. Court documents further state that CEO Mark Zuckerberg was personally warned that

“We are not on track to succeed for our core well-being topics (problematic use, bullying & harassment, connections, and SSI), and are at increased regulatory risk and external criticism. These affect everyone, especially Youth and Creators; if not addressed, these will follow us into the Metaverse.”

Facebook whistleblower Frances Haugen similarly claimed that Meta knew of Instagram’s negative impacts on teen self-image, increased rates of depression and anxiety, and links to suicidal thoughts. This led to a 2021 Congressional hearing on the company’s impact on children and mental health.

Instagram also facilitates widespread cyberbullying, with one study finding that “nearly 80% of teens are on Instagram and more than half of those users have been bullied on the platform.”

Meta’s platforms also hold the dubious distinction of hosting 92 percent of the nearly 29 million reported cases of online child sexual abuse material. Its plan to provide end-to-end encryption across its platforms has drawn a storm of criticism because it will effectively make many of these cases invisible, allowing predators to operate with impunity and preventing law enforcement from helping victims.

Even the most basic child safety precaution, age verification, has largely failed: both kids and adults regularly show up on sites where they are not allowed. Meta’s foray into the metaverse has already drawn sharp criticism because minors can easily access it.

Meta faces increasing regulatory, reputational, and legal risks that it has yet to mitigate. The United States, European Union, United Kingdom, Australia, and California, among others, have new laws or pending legislation addressing social media and child safety. U.S. lawsuits are challenging Section 230, the law that serves as social media’s liability “immunity” shield. Meta also faces mounting child safety litigation from a wide range of plaintiffs, including state governments, schools, individuals, and governments around the world.

Meta says it does not tolerate child exploitation or bullying and is developing new child safety features for selected products and age groups. Yet it still has not set any publicly available, company-wide child safety or harm reduction performance targets, so investors and stakeholders cannot assess the impact of the company’s efforts. Shareholders therefore want Meta to provide quantitative metrics that would allow them to evaluate the company’s global child safety performance and determine whether it is actually reducing well-known harms to children on its platforms.

Michael Passoff
CEO, Proxy Impact