
EXCLUSIVE: Ethical, Economic, and Political Dimensions of Artificial Intelligence

Andy / Getty Images
Published May 31, 2025

As artificial intelligence (AI) advances at breakneck speed, it is not only transforming industries—it is reshaping the social contract between governments, businesses, and citizens.

While AI promises significant gains in productivity, efficiency, and innovation, it also poses urgent questions about ethics, equity, and governance. To better understand the technology, Wyoming Star spoke with researchers working on these questions. A growing chorus of experts, institutions, and policymakers is emphasizing the need for AI development that is not only technologically advanced but also equitable, transparent, and aligned with the public good.

AI is rapidly reconfiguring the global economy. From streamlining supply chains to enabling predictive maintenance and optimizing healthcare diagnostics, automation is enhancing productivity in manufacturing, logistics, agriculture, and beyond. According to the National Association of Manufacturers (NAM), AI’s deployment can alleviate skilled labor shortages and bolster operational efficiency.

sorbetto / Getty Images

However, these benefits are not evenly distributed. As AI automates increasingly complex tasks—including those once considered immune to digital disruption—concerns around job displacement, economic inequality, and workforce preparedness intensify.

“We are seeing real productivity boosts,” NAM notes, “but unless we pair that with workforce investment, the gains could come at the expense of working families.”

These disparities have drawn the attention of policymakers. Legislative hearings in Wyoming are exploring how AI may reshape public-sector employment and governmental accountability. Proposals such as expanding “right to repair” laws to include algorithms and digital systems aim to prevent monopolistic control of critical technologies and promote greater transparency.

At the core of the AI debate lies a profound ethical challenge: Can we ensure that AI systems make decisions consistent with human values?

UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence provides an important global framework, emphasizing human rights, non-discrimination, and the need for human oversight. Yet, implementation across countries and sectors remains inconsistent.

Dr. Cynthia Rudin, a professor of computer science at Duke University, underscores the importance of transparency in high-stakes domains like healthcare and criminal justice. In a statement to the Wyoming Star, she cautions:

AI can have a positive impact … but it really needs to be implemented carefully. For instance, for a lot of applications, it is essential that predictive models in those fields are interpretable, meaning that a person can understand exactly the formula that is used to make decisions, and what data is involved in them; in other words, these cannot be black box AI tools that do not explain their decisions faithfully.

There are also issues with data access — right now, it is hard for academics and smaller companies to access medical data to build datasets to test and build models. A lot of companies claim their models are really accurate – should we just believe them? There have been so many disasters with these models, so it would be better if datasets were available to validate those models…

It is also important that AI is used for good and not just to exacerbate inequalities. There was a situation a while ago where an insurance company used healthcare costs to project healthcare needs, but that wasn’t a good idea because healthcare costs reflected the wealth of the patient and not their needs… Just being able to program a computer does not mean that one knows how to handle data to make causal conclusions! So there really is a lot to do, but we need to be careful how we do it.
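
To make her distinction concrete, here is a minimal sketch, in Python, of the kind of interpretable model the quote describes: a point-based risk score in which every factor, weight, and cutoff is visible, so a person can read the exact formula behind each decision. The factor names, weights, and threshold below are invented for illustration and do not come from any real clinical system.

    # Hypothetical point-based risk score: the entire "model" is this table.
    WEIGHTS = {
        "age_over_65": 2,        # illustrative weights, not clinical values
        "prior_admissions": 3,
        "abnormal_lab_result": 4,
    }
    THRESHOLD = 5  # assumed cutoff: flag patients scoring 5 points or more

    def risk_score(patient: dict) -> int:
        """Add up the points for each factor the patient exhibits."""
        return sum(w for factor, w in WEIGHTS.items() if patient.get(factor))

    def explain(patient: dict) -> str:
        """Show exactly which factors produced the decision."""
        parts = [f"{factor} (+{w})" for factor, w in WEIGHTS.items()
                 if patient.get(factor)]
        score = risk_score(patient)
        decision = "flag for follow-up" if score >= THRESHOLD else "no flag"
        return f"{' + '.join(parts) or 'no factors'} = {score} -> {decision}"

    print(explain({"age_over_65": True, "prior_admissions": True}))
    # age_over_65 (+2) + prior_admissions (+3) = 5 -> flag for follow-up

Because the formula is explicit, a mistaken weight or a biased input (for example, healthcare costs standing in for healthcare needs) can be spotted and challenged; with a black-box model, that audit is far harder.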

Effective AI oversight requires both access to high-quality data and robust accountability mechanisms. Yet researchers outside major corporations often struggle to validate AI models, further eroding public trust and contributing to regulatory hesitation.

OpenAI CEO Sam Altman during a news conference with Donald Trump in the Roosevelt Room of the White House on January 21 in Washington, DC (Andrew Harnik / Getty Images)

Government agencies in the US—including the Department of Commerce and Department of Homeland Security—are exploring how generative AI can improve public services while safeguarding privacy and national security. However, the path forward remains contested.

The Mountain States Policy Center, for example, has called for a temporary moratorium on AI regulation, warning that premature or overbearing rules could stifle innovation. Others, including Harvard Business Review, caution that uncritical adoption—especially in sectors like healthcare—may lead to false promises unless paired with institutional reform and ethical scrutiny.

On the international front, companies such as OpenAI are working with governments to shape policy frameworks that are culturally sensitive and innovation-friendly. OpenAI’s Global Affairs initiative advocates for inclusive policymaking that balances rapid technological progress with the public interest.

While AI excels at analyzing data, it lacks the capacity for moral reasoning. Dr. Jeffrey Alan Lockwood, Professor of Natural Sciences and Humanities at the University of Wyoming, notes:

As with any technology, it is not inherently good or evil. Rather, the ethical judgment depends on the context, intentions, and outcomes of its use.

He frames the ethical use of AI through three classical lenses:

According to utilitarianism, the use of AI would be ethical if the consequences are good—meaning the greatest good for the greatest number. Of course, what constitutes the “good” is crucial, but at the very least it would seem to entail an overall reduction in human suffering or increase in human happiness.

According to deontology, the use of AI would be ethical if doing so accorded with our duties—in particular the rational obligation to not use others as a means to our ends (treating people with dignity or, in a very oversimplified version, to do unto others as you would have done unto you via the Golden Rule).

According to virtue ethics, the use of AI would be ethical if doing so accorded with the virtues, which variously include courage, prudence, justice, fortitude, and temperance (all of which include an element of moderation in our behavior).

Lockwood emphasizes that AI, like the printing press or the internet, is a tool—it can help or harm depending on its application:

I’ve not seen any version, formulation or conceptualization of this technology that would be capable of making ethical judgments. So, even if we use AI to produce some solution, information, or strategy, it does not follow that we ought to employ these results. That consideration is, as far as I can tell, far beyond the ability of AI itself and seems likely to remain on the shoulders of humanity.

Geographic equity is another dimension of responsible AI development. Lars Kotthoff, Associate Professor of Computer Science at the University of Wyoming, points to Wyoming’s unique potential:

Modern AI is very resource intensive, and Wyoming, as an energy state, has the resources to enable it. In addition, the University of Wyoming increasingly focuses on developing and deploying AI systems, and educating the next generation of AI experts. It is important to ensure that rural populations, such as in Wyoming, are not left behind by current AI developments.

Dr. Gabrielle Allen, Director of the School of Computing at the University of Wyoming, highlights AI’s impact in sectors that draw fewer headlines:

While generative AI dominates headlines, some of the most impactful—and less discussed—AI developments are happening in sectors like agriculture, logistics, tourism, education, and rural healthcare. At the University of Wyoming’s School of Computing, we’re leveraging sensors, drones, and satellite imagery to gather high-quality data and develop AI tools that generate actionable insights, automate complex tasks, and support smarter decision-making in these critical industries. Alongside this research, we’re also focused on preparing the next generation of AI-literate professionals with hands-on experience that equips them to confront the societal issues AI is helping to reshape.

Tony Webster / Wikimedia

AI is not merely a tool—it is a transformative force shaping how societies function, how economies grow, and how justice is administered. The ethical, economic, and political challenges it raises cannot be addressed by technologists alone. They demand interdisciplinary collaboration and global dialogue.

As we navigate this new frontier, the fundamental question remains: not just what AI can do, but what it should do—and for whom. Building an inclusive and accountable AI future means embedding human dignity into the code that powers tomorrow. That responsibility lies not with machines, but with us.


Joe Yans

Joe Yans is a 25-year-old journalist and interviewer based in Cheyenne, Wyoming. As a local news correspondent and an opinion section interviewer for Wyoming Star, Joe has covered a wide range of critical topics, including the Israel-Palestine war, the Russia-Ukraine conflict, the 2024 U.S. presidential election, and the 2025 LA wildfires. Beyond reporting, Joe has conducted in-depth interviews with prominent scholars from top US and international universities, bringing expert perspectives to complex global and domestic issues.