In recent months, Forbes has published a series of deeply reported stories about Eric Schmidt’s stealth drone project. They’re a fascinating window into the former Google CEO’s budding career in defense contracting — and you can read them here. Last week, the reporters who worked on those stories noticed something else: an AI-generated, article-length summary of their work. The post contained sections of text copied verbatim from the paywalled Forbes stories, as well as a lightly modified version of a graphic created by the Forbes design team. It had been created with Perplexity Pages, a tool recently introduced by the AI search engine of the same name — a buzzy and popular product from a company with a billion-dollar valuation. The post was featured on Perplexity’s Discover page, sent out via push notification to its users, incorporated into an AI-generated podcast, and released as a YouTube video. It had garnered more than 20,000 views, according to Forbes, but didn’t mention the publication by name, instead crediting the original stories alongside other aggregations of their material in a series of small, icon-size links.
Forbes publicly objected, reporting that Perplexity “appears to be plagiarizing journalists’ work,” including but not limited to its own. In a separate story, editor and chief content officer Randall Lane made his position clear: “Perplexity had taken our work, without our permission, and republished it across multiple platforms — web, video, mobile — as though it were itself a media outlet.”
In response, Perplexity CEO Aravind Srinivas told Forbes that the Pages product has “rough edges” and that “contributing sources should be highlighted more prominently.” In a later post on X, he took a more defensive position, suggesting that Perplexity was in fact a meaningful source of referral traffic for Forbes.
To anyone who has ever seen real internal publishing metrics, this was an obviously absurd claim. Forbes reporter Alexandra S. Levine confirmed, in response, that traffic from Perplexity in fact accounted for just 0.014 percent of visitors to Forbes — making it the “54th biggest referral traffic source in terms of users.” Perplexity is getting a lot from Forbes, and Forbes is getting basically nothing back — a significant downgrade from the already brutal arrangements of search, social media, and cross-publication human aggregation. With Pages, Perplexity wasn’t just offering to summarize sources into a Wikipedia-ish article for personal consumption. The Pages feature is intended to “turn your research into shareable articles, helping you connect with a global audience.” (The Forbes summary was in fact “curated” by Perplexity itself.) It’s an attempt at automated publishing, complete with an internal front page and view counts.
Pages isn’t Perplexity’s main product, but rather a new feature that extends its basic premise. Perplexity is, to most of its users, a Google alternative with a simple and appealing pitch: In response to queries, it provides “accessible, conversational, and verifiable” answers using AI. Ask it why the sky is blue, and it will give you a short summary of Rayleigh scattering with footnote links to a few reference websites. Ask it what just happened with Hunter Biden, and it will tell you the president’s son “has been found guilty of lying on a firearm application form and unlawfully possessing a gun while using drugs,” with footnoted links to the Washington Post (paywalled) and NBC News (not).
Perplexity occasionally makes things up or runs with a summary based on erroneous assumptions, which is the first problem with LLM-powered search-style tools: Where conventional search engines simply fail to find things for users or find the wrong things, AI search engines, or chatbots treated like search engines, will sometimes just fill the gaps with nonsense. On its own terms, though, the product works relatively well and has real fans. Its interface is clean, it isn’t loaded with ads, and it does a decent job of answering certain sorts of questions much of the time. Its answers feel more grounded than most chatbot responses because they often are. They’re not just approximations of plausible answers synthesized from the residue of training data. In many cases, they’re straightforward summaries of human content on the actual web.
For people and companies that publish things online for fun or profit, Perplexity’s basic pitch is also worrying. It’s scraping, summarizing, and republishing in a different context; to borrow a contested term of art from publishing, it’s engaging in automated “aggregation.” As Casey Newton of Platformer wrote after the company announced plans to incorporate ads of its own, Perplexity, which is marketed as an AI search engine, can also reasonably be described as a “plagiarism engine.” Automated publishing, in previous contexts, is better known by another name: spam.
Again, Perplexity is growing fast, raising money, and valued at more than a billion dollars. It has loyal users who are undeterred by the way it works, largely because it often does work — it’ll give you something close to what you’re asking for, quickly. That Perplexity’s search responses are short and presented individually leaves the company with a few plausible defenses: It’s not much different from Google Search blurbs; it could conceivably send visitors to original content; it’s sort of akin to a personalized front page for the web populated with enticing blurbs; it’s no different from Wikipedia, which sources its material from the work of others; it’s no different from low-value human aggregation, which many in the aggrieved media have been practicing for decades. These defenses were never terribly convincing. Perplexity encourages users to ask follow-up questions, which leads it to summarize more content until it’s basically written an entire article anyway — as with chatbots, the product encourages users to stay and chat more, not to leave. They’re also defenses that Perplexity itself, at least until recently, didn’t see the need to mount.
In February, when the company was breaking through into the mainstream, I asked Srinivas, a former OpenAI employee, where he thought Perplexity fit into the already collapsing ecosystem of the open web. His responses were candid and revealing. He described Perplexity as a different way not just of searching but of browsing the web, particularly on mobile devices. “I think on mobile, the app is likely to deprecate the browser,” he said. “It reads the websites on the fly — the human labor of browsing is being automated.” For most queries, he suggested, Perplexity would give users as much information as they needed on the first try.
I asked directly how publishers who either depend on or are motivated by visitors to their sites — whether to make money from them with ads or subscriptions or simply to build a consistent audience or community of their own — should think about Perplexity, and he suggested that such arrangements were basically obsolete. “Something like Perplexity ensures people read your content, but in a different way where you’re not getting actual visits, yet people get the relevant parts of your domain,” he said. “Even if we do take your content, the user knows that we took your content, and which part of the answer comes from which publisher.”
I suggested this was precisely the concern of people whose content Perplexity was relying on — that Perplexity’s unwitting content providers can’t survive on credit alone. Then Srinivas, who for much of the interview spoke thoughtfully and precisely about the state of AI and his company’s strategy for taking on Google, started thinking out loud, as if encountering an interesting new problem from a perspective he hadn’t previously needed to consider. “We need to think more clearly about how a publisher can monetize eyeballs on another platform without actually getting visits,” he said. In a world where readers encounter publications as citations on Perplexity or in Google’s AI answers, “you can argue brand value is being built even more. We should figure out a way to measure, like, actual dollar value that’s obtained from a citation in a citation-like interface, so that an advertiser on your domain can still figure out what to pay.”
There was no plan, in other words — as Perplexity sees it, this isn’t really its problem to solve, even if it’s helping to create it. In the AI-rendered future, publishing as it exists today makes no sense. Sure, this generation of AI tools is dependent in multiple ways on scrapable public content created under different cultural and commercial circumstances, but if the economy of the web collapses and services like Perplexity don’t have much material to summarize … well, they’ll cross that bridge when they come to it.
In the time since the interview, Perplexity has introduced Pages, suggested that it would get into advertising itself, and shifted its defense to talking about traffic that it doesn’t actually send. The company isn’t alone in this approach. Google’s AI overviews, which produce and cite content in a fashion similar to Perplexity’s, have been similarly criticized as plagiaristic and parasitic, not to mention sometimes glaringly wrong. In response, Google, which (mostly) successfully fended off related criticism for over-aggregation when it was relatively young, has claimed to an audience of publishers that has no reason to believe it, and very much doesn’t, that its users are actually very keen to tap those little citations on its synthetic summaries. On Wednesday, after the Forbes reports, Perplexity placed a story with Semafor claiming that, at the time of the controversy, it was “already working on revenue-sharing deals with high-quality publishers,” and that it would unveil details soon.
Perplexity’s about-face is at least some sort of acknowledgment that there is a problem here, one with consequences that could eventually undermine not just publishers but the AI firms themselves. It helps explain why OpenAI, which is both a much larger company and a bigger target for criticism and legal action than Perplexity but isn’t nearly as entrenched as Google, has been pursuing deals with media companies, including New York’s parent company, Vox Media, for both training on and displaying their content. It also sheds some light on why publishers, with some exceptions, have been so keen to accept its terms: because the future first envisioned by AI firms didn’t include them at all.