Forbes, the National Institute of Standards and Technology, CNBC, Reuters and Politico contributed to this report.
The US government is stepping deeper into the AI race – this time with a front-row seat before new models even hit the public.
Under fresh agreements, the Center for AI Standards and Innovation (CAISI) will get early access to artificial intelligence systems from Google DeepMind, Microsoft and xAI. The idea is simple: test the tech before everyone else gets their hands on it – and keep testing after it’s released.
That means pre-launch evaluations, post-release checkups, and a steady stream of research aimed at figuring out just how powerful – or risky – these systems really are.
The move builds on earlier partnerships with OpenAI and Anthropic, but the scope is widening. Faster models, bigger stakes, more urgency.
Behind the scenes, the White House is weighing something bigger: a formal process to review AI models before they go public. Think of it as a checkpoint for cutting-edge tech. The plan could come via executive order and would bring together government officials and Silicon Valley heavyweights to hash out how oversight should work.
It wouldn’t be happening in a vacuum. The UK is already building a similar system, tasking regulators with stress-testing AI tools for safety and security risks.
Recent events seem to have lit a fire. A powerful model from Anthropic raised eyebrows in Washington after showing it could spot vulnerabilities across major software systems – potentially useful, but also a cybersecurity nightmare if misused. That sparked direct talks between company leaders and government officials, even as tensions flared over national security concerns.
Earlier this year, Donald Trump assembled an AI advisory group packed with tech elites, including Mark Zuckerberg, Larry Ellison, Jensen Huang and Michael Dell. The mission: help shape a national AI strategy that overrides the patchwork of state-level rules.
Trump hasn’t always been keen on heavy regulation. He’s previously argued that AI needs room to grow, warning against rules that might choke off innovation. But the tone has shifted. Concerns about cyberattacks, military use, and global competition are harder to ignore.
At the same time, the Pentagon is already pulling AI deeper into its operations, striking deals with companies like Amazon, Nvidia and SpaceX to integrate AI into classified systems.
Put it all together, and the direction is clear. Washington isn’t just watching the AI boom anymore – it wants a say in what gets built, how it behaves, and when it’s ready for the world.