
EXCLUSIVE: How the US Government Is Shaping the Future of AI Development

President Donald Trump gives remarks in the Roosevelt Room of the White House in Washington, D.C., on Tuesday, January 21, 2025, where he announced a new AI initiative as SoftBank CEO Masayoshi Son, Oracle CEO Larry Ellison, and OpenAI CEO Sam Altman look on. (Aaron Schwartz / Sipa USA)

In 2025, the United States continues to deepen its strategic commitment to artificial intelligence (AI), positioning the technology as a core pillar of national security, economic growth, and global competitiveness.

Under the leadership of President Donald J. Trump, the federal government has launched a series of initiatives to accelerate domestic AI development, remove regulatory barriers, and secure international partnerships that support US interests in the increasingly contested digital domain.

Artificial intelligence has long been identified as a transformative technology with implications across sectors—ranging from defense and healthcare to finance and manufacturing. The Trump administration has made clear that AI is not only an economic asset but also a national security imperative. The administration’s 2025 strategy builds on foundations laid by prior administrations, such as the 2019 Executive Order on “Maintaining American Leadership in Artificial Intelligence,” now updated to reflect new global dynamics and technological advances.

In January 2025, President Trump signed an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” which directs federal agencies to reduce regulatory obstacles for private sector AI research and deployment. The administration argues this approach will accelerate innovation and maintain US leadership in the face of growing technological competition from China and other global actors.

The White House’s AI agenda is built around two key pillars: deregulation and education. The administration has moved to repeal certain “diffusion” export rules that previously limited the sale and deployment of advanced AI tools abroad, particularly those with dual-use (civilian and military) applications. Critics warn these moves could create national security risks, but supporters argue they enhance US firms’ global competitiveness.

Parallel to deregulation, the administration has emphasized domestic capacity-building. Recent executive actions have expanded AI education programs in K-12 schools and technical institutions. A series of initiatives aims to prepare American youth and mid-career workers for an AI-integrated economy, focusing on STEM education, apprenticeships, and partnerships with tech companies.

A major development in 2025 has been the administration’s push to integrate AI diplomacy into broader geopolitical strategy. President Trump recently secured a $600 billion investment commitment with Saudi Arabia, which includes collaboration on AI research and infrastructure. These agreements, also involving the UAE, reflect a growing US interest in shaping AI development abroad, particularly in regions where Chinese influence is expanding.

Jesse Kirkpatrick, Associate Research Professor, Co-director of the Mason Autonomy and Robotics Center (George Mason University)

However, this international AI diplomacy is not without controversy. National security experts have expressed concern over the technology transfer risks inherent in deals with Gulf states, especially as both Saudi Arabia and the UAE maintain technology ties with China. The administration argues that such partnerships are vital to counterbalancing Beijing’s global AI ambitions and creating new markets for US technology.

Domestically, AI policy is increasingly intersecting with politics. President Trump has emphasized AI as a tool for “restoring American greatness” while defending deregulation as a safeguard against what he calls “bureaucratic strangulation.” In a recent statement, he pledged to “ban big, burdensome AI regulations that kill American jobs,” reflecting concerns about automation-induced job displacement and government overreach.

This stance has drawn criticism from labor advocates and technologists who caution that unchecked AI expansion could exacerbate inequality and disrupt traditional employment sectors. Despite these tensions, the administration remains firm in promoting a light-touch regulatory environment, paired with targeted workforce investments.

To dive deeper into the political side of AI development, Wyoming Star spoke with Jesse Kirkpatrick, an associate research professor and co-director of the Mason Autonomy and Robotics Center at George Mason University.

Wyoming Star: From your perspective, what are the core strategic interests driving US government investment in AI today? What spheres of AI use are considered the most promising?

Jesse Kirkpatrick: The US government sees AI as a linchpin of economic competitiveness, national security, and scientific leadership. Strategic interests span defense modernization, securing supply chains, bolstering critical infrastructure, and maintaining a technological edge over adversaries.

Promising spheres include military decision support, biomedical research, logistics optimization, cybersecurity, and climate modeling—but increasingly also public-sector applications like benefits processing and disaster response.

Wyoming Star: How does the US government’s approach to AI compare to historical investments in technologies like nuclear energy or space exploration? Is it mostly a question of reputation and prestige?

Jesse Kirkpatrick: There are clear parallels. Like past technologies, AI investment blends economic interest, strategic deterrence, and prestige. But unlike nuclear energy or space tech, AI is decentralized—it can be deployed by corporations or hostile actors just as easily as governments. So the stakes aren’t just reputational; they’re systemic. It’s not just about who leads—it’s about ensuring the rules of the road are democratic and safe.

Donald Trump with Mohammed bin Salman in Riyadh during his trip to the Gulf (Brian Snyder / Reuters)

Wyoming Star: In light of recent reports, what should we make of the growing US collaboration with the UAE on AI infrastructure? Is this a matter of tech diplomacy, or are there deeper geopolitical stakes at play? Does this partnership risk legitimizing techno-authoritarian governance models, or is it a pragmatic move to counterbalance China’s influence?

Jesse Kirkpatrick: It’s both tech diplomacy and geopolitics. The partnership reflects a strategic calculus: strengthening ties in the Gulf to counterbalance China’s digital Belt and Road. But it also poses risks. These collaborations can inadvertently signal acceptance of governance models that lack transparency or safeguards for human rights and American values. The challenge is walking the line between pragmatism and principled engagement—ensuring our AI partnerships reflect democratic values, not just strategic interests.

Wyoming Star: You’ve written about the risks of AI hype. How does that hype intersect with the political narrative around US-China technological rivalry?

Jesse Kirkpatrick: Hype fuels urgency, and urgency can distort priorities. In the US-China tech race, AI is often framed as a zero-sum game, which incentivizes speed over safety and dominance over deliberation. This can lead to overpromising, underregulating, and sidelining sensible safeguards in favor of outcompeting adversaries. It’s crucial that policy decisions be grounded in realistic assessments—not in inflated expectations or fear-based narratives.

Wyoming Star: How might AI reshape alliances and soft power in the Global South, especially as countries weigh offers from US tech firms versus Chinese alternatives?

Jesse Kirkpatrick: AI offers a new axis of influence. Nations choosing between US and Chinese technologies are weighing not just price or performance, but values and alignment. The US has an opportunity to build trust by promoting open standards, capacity building, and smart AI governance. But if we treat AI as just another export market, we risk alienating partners.

Responsible AI deployment can be a powerful soft power tool.

Wyoming Star: Democrat-led calls for increased competition in Pentagon AI contracts suggest concerns about vendor lock-in. How do you see the role of democratic governance and market pluralism in national AI policy? How might democratic scrutiny or audit of AI systems used in public policy or national security realistically look?

Jesse Kirkpatrick: Democracy thrives on checks and balances. That should extend to AI. Calls for competition in Pentagon AI contracts reflect a healthy concern about monopolistic control over critical infrastructure.

Real democratic oversight would mean public AI audits, stronger procurement transparency, red-teaming for government-deployed systems, and mechanisms for community feedback—especially where AI affects civil liberties or frontline services.


Wyoming Star: Given your focus on the ethical dimensions of emerging technologies, what are the dangers of subordinating ethical oversight to geopolitical urgency in AI policy?

Jesse Kirkpatrick: The danger isn’t just ethical—it’s strategic. If the US rushes AI deployment without adequate oversight, we risk undermining the trust and stability our institutions rely on, both at home and abroad. Responsible innovation strengthens American competitiveness by ensuring technologies are secure, reliable, and aligned with democratic values. Ethical oversight isn’t a constraint—it’s a force multiplier. It signals to allies, partners, and citizens that US leadership in AI is built on integrity, not just speed.

Wyoming Star: Is there a meaningful way to create international norms or ethical frameworks for AI that aren’t just reflections of great-power interests?

Jesse Kirkpatrick: Yes—and the US is well-positioned to lead this effort. Our longstanding commitment to the rule of law, transparency, and multilateralism allows us to promote global norms that are fair, inclusive, and enforceable. Initiatives like the OECD AI Principles and the G7 Hiroshima Process reflect broad consensus, not just American priorities. By engaging partners across sectors and regions, we can shape a digital future that reflects shared values—not just spheres of influence.

Norm-setting isn’t a soft power add-on—it’s core to securing a rules-based global AI ecosystem.

Wyoming Star: Looking ahead, what risks do you think policymakers are currently underestimating in the AI space? What overdue legislation could regulate this technology without stifling innovation?

Jesse Kirkpatrick: A key blind spot is systemic risk—how AI deployed across sectors might compound vulnerabilities in infrastructure, finance, or national defense. Policymakers also need to better understand the trade-offs between AI capability and control. What’s overdue is sector-specific legislation: clear rules for AI in healthcare, education, critical infrastructure, and government use.

We need enforceable standards for transparency, safety testing, and redress.

Regulatory clarity can accelerate responsible innovation by setting the guardrails developers and agencies need.

Wyoming Star: Do you think the US is prepared—legally, culturally, and economically—for the eventual integration of AI into most spheres of our lives—from healthcare to defense?

Jesse Kirkpatrick: The US is making real strides, but we’re not fully prepared yet.

Legally, we need frameworks that clarify accountability and protect rights in an AI-driven economy. Culturally, the public still needs stronger digital literacy to engage with AI systems confidently and critically. Economically, we must invest in workforce transitions, especially for industries where automation will be disruptive.

That said, the US has unique strengths—world-leading universities, vibrant innovation ecosystems, and a deep bench of civil society institutions. With the right investments and policies, we can lead the AI transition in a way that’s competitive, resilient, and aligned with American values.


Joe Yans

Joe Yans is a 25-year-old journalist and interviewer based in Cheyenne, Wyoming. As a local news correspondent and an opinion section interviewer for Wyoming Star, Joe has covered a wide range of critical topics, including the Israel-Palestine war, the Russia-Ukraine conflict, the 2024 U.S. presidential election, and the 2025 LA wildfires. Beyond reporting, Joe has conducted in-depth interviews with prominent scholars from top US and international universities, bringing expert perspectives to complex global and domestic issues.