The New York Times and Reuters contributed to this report.
The New York Times is once again taking an AI company to court — this time, the target is fast-growing startup Perplexity AI.
In a lawsuit filed Friday in federal court in New York, the Times accuses Perplexity of copying, distributing and displaying millions of its articles without permission to power the company’s AI search and chatbot tools.
The paper says it spent roughly 18 months contacting Perplexity, asking the startup to stop using its content or negotiate a licensing deal. Instead, the complaint says, Perplexity continued to ingest and reuse Times material.
“Perplexity provides commercial products to its own users that substitute for The Times, without permission or remuneration,” the lawsuit says.
Perplexity, founded in 2022 and led by CEO Aravind Srinivas, a veteran of OpenAI, did not immediately respond to a request for comment.
The Times isn’t just arguing basic copyright infringement. It’s also accusing Perplexity of trademark violations under the Lanham Act.
According to the suit, Perplexity’s system sometimes “hallucinates” — makes things up — and then falsely attributes those fabricated claims to The New York Times, even displaying them alongside the paper’s registered trademarks and logos. That, the Times says, misleads users and harms the credibility of its brand.
On the copyright front, the complaint says Perplexity:
- Scrapes and copies Times content, including paywalled stories;
- Uses that material to train and run its AI products;
- Sometimes reproduces large chunks or even entire articles;
- Then serves that information back to users in a way that directly competes with the Times’ own offerings.
The key argument: this isn’t “fair use” or some abstract training debate — it’s a commercial product that, in the Times’ telling, replaces reading the original journalism.
This is hardly an isolated fight. The Perplexity suit is just one more chapter in a rapidly growing wave of AI copyright cases.
- Over the last four years, more than 40 lawsuits have been filed by authors, publishers, and other copyright holders against AI companies.
- In October 2024, News Corp subsidiaries Dow Jones — publisher of The Wall Street Journal — and the New York Post sued Perplexity over similar claims.
- In October 2025, social media platform Reddit sued Perplexity and others, saying they unlawfully scraped its data to train AI tools.
The Times itself is already deep into another high-profile case. In December 2023, it sued OpenAI and Microsoft, arguing that those companies used millions of Times articles to train ChatGPT and related systems without paying for the privilege. OpenAI and Microsoft have disputed those claims.
And Perplexity isn’t alone on the defensive side either. AI rival Anthropic recently agreed to pay authors and publishers $1.5 billion after a judge found it had illegally downloaded and stored millions of copyrighted books while training its models.
Most of these cases are still winding through the courts, and the legal rules around AI training and “fair use” remain very much in flux.
At the core of the Times’ argument is a familiar critique of the current generative-AI boom: that many of these tools exist because they quietly piggyback on other people’s work.
The lawsuit says Perplexity’s business model depends on:
- Scraping news sites and other publishers, including paywalled content;
- Copying and storing that material in bulk;
- Using it to fuel AI systems that answer user questions directly, without sending readers back to the original sources.
That last point is key. If users get news summaries, analysis, and even full article equivalents straight from an AI assistant, there’s less incentive to click through to the original website — and less ad or subscription revenue for the publisher that created the reporting in the first place.
At the same time it’s suing some AI players, the Times is also cutting deals with others.
In May 2025, the paper signed its first big AI licensing agreement with Amazon. Under that multi-year deal:
- Amazon can use Times content — including material from NYTimes.com, The Athletic, and its NYT Cooking recipe site — in its own AI platforms.
- Times journalism can be used to train Amazon’s AI models and appear inside Amazon’s products.
No dollar figures were disclosed, but the move signaled how some publishers see a path to getting paid rather than simply scraped.
Other news organizations have already struck similar licensing deals with OpenAI, Microsoft and others, betting they can negotiate revenue instead of battling everything out in court.
Perplexity’s product sits right in the middle of this fight. The San Francisco startup bills itself as a next-generation AI search engine, powered by the same kind of large language models that made ChatGPT famous. It’s designed to answer questions with full, conversational responses instead of blue links.
But that pitch — “we’ll read the internet and summarize it for you” — is exactly what’s triggering these lawsuits.
If courts start siding with publishers in cases like this one, AI companies could be forced into:
- Costly licensing deals;
- Changes in how they train their models;
- Or even limits on how they present answers built on copyrighted work.
If courts lean the other way, it could cement a new reality where scraping public and semi-public content to train AI is broadly legal — leaving publishers scrambling to adapt or cut friendly deals while they still have leverage.
For now, The New York Times vs. Perplexity joins a crowded docket of AI-era copyright battles. And until judges start issuing clear, consistent rulings, every new case like this one is both a legal fight and a warning shot to the rest of the AI industry.