
Why AI Whistleblowers Keep Quitting — and What Their Letters Really Say

Dmitrii Melnikov / Alamy
Published February 17, 2026

Business Insider and the Guardian contributed to this report.

Resignation letters from top AI researchers have stopped being private HR paperwork and turned into a genre. The past two years have produced a steady stream of dramatic posts on social feeds and long-form exit essays that read part manifesto, part confessional — and together they give a pretty grim picture of how some inside the industry see its direction.

Take Mrinank Sharma. He recently posted a 778-word farewell on social media, mixing poetry (he quotes Rainer Maria Rilke and Mary Oliver), bleak warnings about “AI-assisted bioterrorism,” and a blunt line about values not always steering decisions. He even appended a full William Stafford poem. It wasn’t your average HR exit note — it read like somebody trying to make sense of a moral crisis while walking out the door.

Those public goodbyes aren’t limited to personal grievances. They map recurring fault lines: safety researchers vs. product teams, caution vs. speed, and ideals vs. profit. Many of the writers come from safety and alignment teams — the people tasked with making sure powerful models behave — and they often leave saying the company sidelines those efforts in favor of shipping features and chasing revenue.

That tension shows up across the scene. Big-name departures from labs like OpenAI — including the high-drama week when Sam Altman was briefly ousted — and talent turnover at outfits such as xAI have kept headlines busy. Even when people move between firms — for example, researchers who leave one lab then join another — their exit letters often read like moral audits of the companies they left.

Sometimes those letters are clear and urgent. Miles Brundage wrote that neither the frontier labs nor the world is ready for AGI, and quit; others like Dylan Scandinaro and Daniel Kokotajlo have issued similar alarms about “extreme” or “irrecoverable” harms if caution is abandoned. Then there are departures like Zoë Hitzig’s New York Times op-ed, warning that adding ads to chatbot interfaces risks turning intimate user exchanges into monetized fodder — a Facebook-style playbook for manipulation.

There’s a pattern: most public quits come from safety folks, not product teams. That’s telling. It suggests companies are prioritizing growth and product traction over slower, thornier safety work. And when that happens, people whose job is to worry about worst-case outcomes start to feel compromised — and eventually write about it.

The letters do other things, too. They’re part whistleblowing, part résumé, and part argument. They give readers a peek at how these researchers think about their work: a mix of responsibility, pride, and, sometimes, disillusionment. Some leave to join rival startups or new institutes (for instance, Brundage moving on to run AVERI). Others drift into policy, think tanks, or quieter academic lives. A few, like Sharma, even talk about stepping completely away.

Still, the letters have limits. They often focus on potential, existential threats — AGI, systemic manipulation, irreversible harms — and less on the everyday ways AI is reshaping life right now: surveillance, hiring algorithms, spam and misinformation, automation of labor, or tools that make harmful systems more efficient. In short, a lot of the public drama centers on “what might happen” rather than “what is happening,” which makes the warnings both potent and sometimes abstract for the wider public.

That partly explains the genre’s power and its weakness. These notes are dramatic and media-friendly, which helps surface real problems. But they can also serve as performance: a moral signal that might double as a networking move or a steppingstone. Not every resignation is a full-throated indictment; some are thoughtful critiques, some are anguished, and some are thinly veiled cover letters.

What should readers take away? First, these letters matter because they reveal recurring corporate choices: speed to market beats slow safety, and commercialization warps original ideals. Second, they underline the need for independent oversight — regulation, public accountability, and safety standards that don’t rely solely on companies’ goodwill. And finally, they remind us that the world shaped by AI will be decided by people — engineers, executives, investors and regulators — whose incentives often point in different directions.

If there’s one honest line that keeps popping up in these departures, it’s a complaint about trust: researchers need to trust leadership to put safety first if they’re going to keep working on tech with world-altering potential. When that trust breaks, they don’t just quit quietly — they write, they warn, and they make the industry look in the mirror. Whether those public letters spark reform, or just become another item in the tech press cycle, is something the rest of us should probably pay attention to.

Wyoming Star Staff

Wyoming Star publishes letters, opinions, and tips submissions as a public service. The content does not necessarily reflect the opinions of Wyoming Star or its employees. Letters to the editor and tips can be submitted via email at our Contact Us section.