Google, Meta, Snap and TikTok must face teen addiction claims, federal judge rules

By News Room

Major social media companies must face allegations that their services addicted teen users and caused other mental health harms after a federal judge on Tuesday denied a motion to dismiss the bellwether lawsuit filed by a wave of consumer plaintiffs.

The decision is a blow to tech giants Google, Meta, Snap and TikTok, which argued that the case should be tossed out on First Amendment grounds and because they are immune from liability under a hot-button legal shield known as Section 230 of the Communications Decency Act.

In her 52-page ruling, District Judge Yvonne Gonzalez Rogers held that while Section 230 and the First Amendment do shield the social media companies from some of the plaintiffs’ claims, including certain product defect allegations, others should be allowed to proceed.

For example, Gonzalez Rogers permitted several product liability claims to move ahead alleging that the companies failed to implement effective parental controls and did not do enough to verify the ages of young users. Another claim concerns the availability of so-called image filters that allow users to change their onscreen appearance, which critics say promote unhealthy body-image expectations.

In addition, Gonzalez Rogers allowed a claim to move ahead that alleges the companies negligently violated a signature US children’s privacy law by collecting the personal information of kids without getting a parent’s express consent.

The ruling paves the way for hundreds of plaintiffs to continue their case against the tech companies, and could indirectly lift the prospects for a bevy of similar suits filed by dozens of state attorneys general last month against Meta. Those suits claim Meta harmed the mental health of teens through features such as persistent mobile notifications that keep users hooked on its apps.

Meta and TikTok didn’t immediately respond to a request for comment. Snap declined to comment.

“Protecting kids across our platforms has always been core to our work,” José Castañeda, a Google spokesperson, said in a statement. “In collaboration with child development specialists, we have built age-appropriate experiences for kids and families on YouTube, and provide parents with robust controls. The allegations in these complaints are simply not true.”

Tuesday’s order is a “significant victory for the families that have been harmed by the dangers of social media,” Lexi Hazam, Previn Warren and Chris Seeger, the lead attorneys for the consumer plaintiffs, said in a joint statement. “The court’s ruling repudiates Big Tech’s overbroad and incorrect claim that Section 230 or the First Amendment should grant them blanket immunity for the harm they cause to their users. The mental health crisis among American youth is a direct result of these defendants’ intentional design of harmful product features. We will continue to fight for those who have been harmed by the misconduct of these social media platforms, and ensure they are held accountable for knowingly creating a vast mental health crisis that they still refuse to acknowledge and address.”

The decision also represents a rare finding about the limits of Section 230, a 1996 federal law that has been commonly invoked by websites to nip content moderation lawsuits in the bud.

A cornerstone of internet law, Section 230 broadly shields “interactive computer services” and their users from lawsuits arising from content posted by other users of those platforms. Defenders of Section 230 credit the law with allowing the early internet to flourish free from litigation that might have prevented the development of social media, email, forums and other online communications.

Though courts have historically interpreted it expansively, in recent years Section 230 has become a bipartisan punching bag that lawmakers and other critics say lets tech companies off the hook too easily for their content moderation choices.

Gonzalez Rogers said Tuesday that Section 230 does shield the tech platforms from claims that try to hold the companies accountable as publishers of other users’ speech.

For example, she said, the companies will not have to face claims they violated the law by implementing infinite news feeds or by using algorithms to increase user engagement.

But claims that don’t turn on how the platforms handle other users’ speech aren’t protected by Section 230 and can proceed, she added, including the federal children’s privacy claim and the claim about image filters (because the filters are features of the platforms themselves, not speech created by other users), among others.

