Meta Platforms CEO Mark Zuckerberg arrives at court in Los Angeles, California, on February 18, 2026, to testify at the trial of a landmark test case accusing Meta and Google's YouTube of harming children's mental health through their addictive platforms.
Mike Blake | Reuters
For the past three decades, internet giants have been able to avoid legal exposure for the content on their platforms thanks to laws that distinguish them from online publishers. But these safeguards appear to be weakening.
Meta and Google, the two companies that dominate the U.S. digital advertising market, are defendants in a number of lawsuits that collectively challenge the long-held notion that they enjoy legal protection for what appears on their sites, apps and services. Companies like TikTok and Snap are in the same predicament.
The common thread among the recent lawsuits is that they are designed to circumvent Section 230 of the Communications Decency Act, passed by Congress and signed into law by President Bill Clinton in 1996. Enacted in the early days of the internet, the law protects websites from being sued over content posted by users and allows them to act as moderators without becoming liable for the content that remains.
Last week, a New Mexico jury found Meta liable in a child safety case, while a Los Angeles jury found Facebook's parent company and Google's YouTube at fault in a personal injury trial. Those rulings came days after victims of notorious sex offender Jeffrey Epstein filed a class-action lawsuit against Google and the Trump administration over claims related to the improper disclosure of their personal information.
In their complaint, the plaintiffs allege that Google's AI Mode, which provides AI-generated summaries and links, is "not a neutral search index," an explicit effort to assert that Google is more than just a platform sitting between users and the information they seek.
"The plaintiffs' bar has been winning the fight against Section 230 through systematic and persistent litigation, and that has created splits and fissures in the courts' protections," Eric Goldman, a law professor at Santa Clara University School of Law, said in an interview.

As the technology sector moves beyond the era of traditional online search and social networking into a world defined by artificial intelligence, the stakes are enormous. In that world, models built by the largest platform owners generate conversational chat, images and videos that range from the controversial to the potentially illegal. While financial penalties have been modest so far, with damages totaling less than $400 million across the two rulings last week, the cases set a troubling precedent for tech giants betting their futures on AI.
"Technology companies have long used excuses to avoid meaningful action to protect users, especially children, from egregious harm, harassment and abuse, and fraud," Sen. Brian Schatz (D-Hawaii) said in March at a U.S. Senate Commerce Committee hearing tied to the 30th anniversary of Section 230. "It's not that they don't know what's going on or why it's happening. Doing anything about it hurts their bottom line, and as long as they have federal law as a shield, why bother?"
Meta declined to comment for this story. Google did not respond to a request for comment. Both companies said they planned to appeal last week's rulings.
"A complex question"
Politicians on both sides of the aisle have proposed all sorts of reforms to Section 230 over the years, and corporate executives have faced public scrutiny in Congressional hearings over alleged harm caused by their platforms.
During his first term, President Donald Trump supported tighter restrictions on social media companies, deeming them biased against him. And when Joe Biden was a presidential candidate in 2020, he told The New York Times editorial board that Section 230 "should be revoked" for tech platforms including Facebook, which he said was "propagating falsehoods they know to be false."
Nadine Farid Johnson, policy director at Columbia University's Knight First Amendment Institute, said of the legislative efforts: "It's a very complex issue, and none of these efforts has fully materialized."
But while those efforts stall in Washington, D.C., plaintiffs' lawyers are exploring other avenues to hold Big Tech companies accountable.
Meta Platforms CEO Mark Zuckerberg testifies before Los Angeles Superior Court Judge Carolyn Kuhl in a landmark test case accusing Meta and Google's YouTube of harming children's mental health through their addictive platforms, in this courtroom sketch from February 18, 2026, in Los Angeles, California.
Mona Edwards | Reuters
Last week's verdict against Meta and YouTube marks the first time a jury has found a social media platform liable for what plaintiffs' lawyers alleged was the intentional design of products to addict minors. The case targeted not only the content the platforms carried, but also how they were designed.
The plaintiffs alleged that a combination of features such as autoplay, recommendation algorithms, notifications and certain filters acted like a "digital casino" and caused serious mental health problems for the plaintiff, a girl who said she could not stop using the apps.
A class-action lawsuit against Google filed last week by a plaintiff using the pseudonym Jane Doe alleges that the company's AI Mode generated summaries and links that exposed the personally identifiable information (PII) of Epstein victims, including their names, phone numbers and email addresses.
Kevin Osborn, the plaintiff's attorney in the case, told CNBC that the lawsuit was filed after Google denied a request to remove the victim's contact information from AI Mode. Osborn said the case needed to move quickly because of the speed at which the information was spreading.
"We filed when we did because we needed to act as quickly as possible to get this information removed," said Osborn, a partner at Erickson Kramer Osborn in San Francisco. "She's getting phone calls and death threats from strangers. It's a nightmare."
Osborn added that the timing was "coincidental" given Meta's loss in court last week, but said there is overlap in that all of these cases involve plaintiffs' efforts to get around Section 230. In his case, Osborn said, "it's AI Mode coming up with its own content, and that's something that hasn't been tested very thoroughly in court."
Matthew Bergman, one of the plaintiffs’ attorneys in the Los Angeles case, testified before a Senate committee in March that the tech industry was relying on an overly broad interpretation of Section 230 to “avoid all possible liability simply because third-party content is found somewhere in the chain of causation of wrongdoing.”
Bergman said he drew on a 2021 appeals court decision concerning claims over the role of Snapchat's speed filter in fatal car crashes. The appeals court reversed a lower court's dismissal of the case under Section 230, accepting the plaintiffs' argument that Snap's negligent design encouraged young people to drive recklessly.
"I outlined a very narrow legal theory that could allow certain cases brought by parents to proceed despite Section 230," Bergman told lawmakers.
Evidence presented in Los Angeles bolstered the plaintiffs' claims that Meta and YouTube executives knew about the harmful design of their products and failed to adequately address it. "The best way to prove our case is through their own documents," Bergman said at a press conference about the lawsuit Monday.
In the Google AI Mode case, plaintiffs also pointed to design flaws related to the public display of personal information.
"Google knowingly provides PII in a manner designed, or at least substantially certain, to incite harassment and fear," the complaint states.
Osborn took that idea further.
"Google didn't just provide the client's email address," he said. "They created a link so that when you're reading the content in AI Mode, you can just click a button and it will send an email directly to the [Epstein] survivor."

This isn’t the first time Google has been sued over user interactions with its AI, and the issue has also caused legal problems for OpenAI, the creator of ChatGPT.
In early March, the father of Jonathan Gabaras filed a lawsuit against Google, claiming that the Gemini chatbot persuaded his son to carry out a series of acts, including causing a "catastrophic accident." The younger Gabaras then died by suicide at Gemini's direction, the suit alleges.
And in January, Google settled with a family that had filed a lawsuit alleging that its technology and Character.AI's caused harm to minors, including suicide. Last year, OpenAI was sued by a family that blamed ChatGPT for their teenage son's suicide.
Supreme Court?
Legal experts said an appeal in the latest cases could reach the Supreme Court, which could decide whether the companies are shielded by the law from such claims.
David Greene, senior counsel at the Electronic Frontier Foundation, called the ruling "a very preliminary decision" and said there is still no consensus on whether certain product features are protected by Section 230 or the First Amendment.
"Just labeling something a design feature doesn't mean anything," Greene said. "If it's speech, it's speech, and it's protected by the First Amendment and, in some cases, by Section 230."
Columbia's Johnson said she favors a more measured approach from Congress, one that would grant tech companies Section 230 protection as long as they meet certain conditions related to data privacy, platform transparency and other prerequisites.
“These questions will only become more difficult as platforms continue to up their algorithmic game and expand their use of generative artificial intelligence,” Johnson said. “Our concern is that this essentially becomes a game of whack-a-mole with each new iteration, with each new technological advancement that impacts the platform and the people who participate in it.”
If you are having suicidal thoughts or are in distress, please contact the Suicide & Crisis Lifeline (988) for support and assistance from a trained counselor.
WATCH: More lawsuits likely to follow Meta ruling, says Harvard Law professor.

