BBC and Axios contributed to this report.
An AI safety researcher at Anthropic dropped a bombshell resignation this week, then announced he’s swapping models for meter. In a stark post on X, Mrinank Sharma said he is leaving the company where he helped build safeguards because “the world is in peril,” citing not just AI but bioweapons and a knot of other crises. Then, oddly, he added that he plans to study poetry, move back to the UK and “become invisible.”
Sharma ran a team focused on AI safeguards at Anthropic — the company behind the Claude chatbot that markets itself as more safety-minded than rivals. In his farewell he listed the kind of stuff you expect from someone who worries about the long tail of tech risk: why generative AIs flatter and pander to users, how assistants could make people “less human,” and the terrifying, fringe-yet-real risks of AI-assisted bioterrorism. He said he enjoyed the work, but “the time has come to move on.”
“I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote, adding that even at a safety-branded company like Anthropic there are pressures to “set aside what matters most.”
Cue the poetry. Rather than join another lab or become a full-time whistleblower, Sharma says he’ll study writing and poetry and retreat to Britain for a while. It’s a move that reads as equal parts burnout, protest and personal recalibration: a snapshot of how some inside the AI world are responding to rapid advances they view as existential.
Sharma’s departure comes amid a flurry of uneasy headlines in the AI world. This week an OpenAI researcher also resigned, publicly warning about the ethics of running adverts inside ChatGPT, a move that has stirred debate over commercialization versus safety. Anthropic itself has been running commercials calling out OpenAI’s ad decision, painting the rivalry as partly about principles.
Anthropic pitches itself as a “public benefit corporation” focused on reducing frontier risks: alignment, misuse, and power concentration. It has published reports flagging how its tools could be twisted for harm — from cyberattacks to other malicious uses. Still, the company hasn’t been immune to controversy: last year Anthropic agreed to a big settlement with authors who claimed their work was used to train models without permission.
The exits this week — and a handful of other recent high-profile departures across the sector — highlight a widening fault line. On one side are the technologists and executives pushing product, growth and monetization. On the other are safety-focused researchers who increasingly worry that corporate pressure, investor priorities and competitive dynamics are eroding the very guardrails the industry says matter.
Reality check: leaving these deep-pocketed startups doesn’t usually mean walking away poor. Top safety staffers typically depart with stock and compensation in hand. But public resignations with moral language pack a different punch: they spark debate, grab headlines and nudge regulators and investors to pay closer attention.
Sharma’s note echoed other recent warnings from inside the field — researchers publicly fretting about ad-driven engagement algorithms, about mission drift, and about whether today’s labs can actually keep their safety promises. Whether the industry can reconcile rapid product rollouts with careful, value-led governance is the question his resignation puts on the table.
And then there’s the human element: a senior scientist, worn down or alarmed enough to trade cutting-edge work for late-night stanzas.
“Become invisible,” he wrote.
Maybe that’s exhaustion. Maybe it’s a plea. Or maybe it’s a reminder that when the future feels too big and too dangerous, even those building it sometimes need to step back, breathe, and, apparently, write a poem.