Court Weighs Expanding Product Liability Law to Algorithms

NEW YORK — Four of the world’s largest tech companies—Google, Meta, Amazon, and Reddit—are defending themselves in a New York appellate court against claims they helped incite a deadly mass shooting through their algorithm-driven content delivery systems.

The companies face consolidated lawsuits stemming from the May 14, 2022, massacre at a Tops Friendly Market in Buffalo, where 10 Black individuals were killed in what authorities labeled a racially motivated hate crime. Survivors and families of the victims argue that the companies’ algorithms, which promote content based on user engagement, contributed to the shooter’s radicalization.

The case presents a novel legal theory: that algorithms can be treated as products under New York's product liability law. If successful, this approach could reshape how courts across the country interpret liability in the digital age.

A Question of Products

The legal issue centers on whether intangible digital tools—specifically algorithms—can be considered "products" for the purpose of strict liability claims.

To date, New York courts have not applied product liability law to non-physical objects. Plaintiffs in the case argue that the companies’ algorithms are central to how harmful content is delivered to users, and therefore should be treated like any other defective product.

“The position plaintiffs are advancing in this case is far outside of any type of strict product liability claim that New York courts have recognized in the past,” said Thomas Kurland, a product liability attorney at Patterson Belknap Webb & Tyler LLP. “I would be surprised if the law swung so far in that direction in a single case.”

Section 230 and Platform Responsibility

The tech firms argue that they are shielded by Section 230 of the federal Communications Decency Act, which protects online platforms from being treated as publishers of third-party content.

However, plaintiffs counter that they are not suing over the content itself, but rather the automated systems that prioritize and deliver the content. This distinction, they argue, places the algorithms outside the protection of Section 230.

Kate Ruane, director of the Center for Democracy and Technology’s Free Expression Project, warned that separating algorithms from Section 230 protections could lead platforms to suppress large volumes of content. Such a ruling would “suddenly make many, many pieces of content open to liability,” she said.

Establishing Causation

Another legal hurdle for the plaintiffs is proving proximate cause—linking the platforms directly to the shooting.

The shooter, 18-year-old Payton Gendron, allegedly consumed racist content on platforms like Discord and Facebook before carrying out the attack. Plaintiffs claim these platforms are defective by design.

Paul Barrett, former deputy director at NYU’s Stern Center for Business and Human Rights, expressed skepticism. “There is an actor whose actions are more obviously the proximate cause of the terrible harm,” he said, referring to Gendron.

What Comes Next

The Appellate Division of the New York State Supreme Court, Fourth Department, could reject the plaintiffs’ attempt to broaden New York product liability law. However, legal experts suggest the court may instead allow further discovery before making a final determination.

“If New York State basically closes the courthouse door to these kinds of arguments, that sends a signal,” Barrett said.

Oral arguments for the cases, Salter v. Meta Platforms Inc. and Patterson v. Meta Platforms Inc., are scheduled for May 20, 2025.