Big Tech Lobbies Against Child Safety

The exponential growth of online child sexual exploitation, cyberbullying, and teen mental health issues is directly linked to the growth of social media. These harms to children and teens are driven primarily by algorithms designed to send streams of unsolicited material that entice children into online engagements with strangers, exploit their personal data, and keep them online for longer periods of time. At the same time, age-verification features remain insufficient, enabling adults and children to pretend to be different ages – increasing the rates of child sexual abuse.

Reports of online child sexual abuse material (CSAM) have exploded from fewer than one million in 2013 to nearly 36 million in 2023 – containing over 100 million images and videos. Meta was responsible for 85% of these reports. Many of them involve children 10 years old or younger.

Yet, in 2024, Meta began encrypting its platforms, potentially hiding tens of millions of reports, preventing law enforcement action, cloaking the actions of child predators, and making children more vulnerable. Alphabet and Apple do little to no scanning or reporting at all.

According to a Wall Street Journal investigation, Instagram’s algorithms “connect and promote” a vast pedophile network. Moreover, The New York Times reported that child abuse apps were easily found on Google and Apple app platforms.

Forty-two states and over 1,000 families have filed lawsuits against Meta (Facebook, Instagram, Messenger), Alphabet (Google, YouTube), and other tech companies for using advanced algorithms to intentionally target and addict young users, resulting in harmful and even fatal physical and mental health outcomes.

The UK, EU, and Australia have imposed new online child safety laws. Utah has passed the first app store age-verification law, and multiple states are attempting a similar approach. 

Big Tech has responded to growing demands for accountability by releasing a flurry of parental controls, privacy settings, and warning and reporting mechanisms. The new controls, while laudable, fall far short of the scale of the problem and even appear to shift responsibility from the companies' algorithms to parents. In response to shareholder questions, Apple and Meta both acknowledged that they do not collect data on the effectiveness of these controls.

Big Tech’s other response has been to dramatically increase lobbying efforts, investing millions of dollars to oppose child safety legislation in the U.S. and abroad. Alphabet and Meta reportedly took the lead in derailing the Kids Online Safety Act – one of the few issues in Congress that had almost unanimous bipartisan support. Apple has opposed app store age-verification legislation, and Alphabet has sought carve-outs from mandatory age-verification legislation. Alphabet and Meta invested millions to oppose the New York Stop Addictive Feeds Exploitation (SAFE) for Kids Act and the New York Child Data Protection Act. Alphabet has also lobbied against child safety-related legislation in Australia and the UK, and technology companies have been accused of illegal lobbying via "front groups" in the EU against legislation that would, among other things, curtail the spread of illegal content online and restrict targeted ads.

If there were profits to be made from protecting children online, these problems would likely be solved already. But there are none, so children continue to be collateral damage for increased advertising revenue and app sales.

Shareholder support for child safety resolutions has been strong, as evidenced by Proxy Impact's 2024 resolution at Meta earning 59% of the independent vote, representing over 925 million shares and more than $439 billion in stock value. This year, a similar child safety resolution will go to a vote at both Meta and Alphabet, along with a new resolution addressing lobbying and child safety at Alphabet.


Michael Passoff
CEO, Proxy Impact