AI: Ethics and Global Values

Source: techpolicy.press

Published on June 9, 2025

AI and the World We Want

Artificial intelligence should help us build the world we want, instead of allowing a powerful few to build it for us. Too often, that “few” is a group of wealthy, predominantly white, predominantly male billionaires who view technology as a tool for profit and power, not dignity.

Alternative concepts offer different ways to consider technology and how to create a more just, equitable, and sustainable future. Technologists and policymakers must consider the wisdom within ancient concepts, looking beyond narrow moral and ethical frameworks.

Having spent much of my life in Asia, I’ve seen how Western innovations can overshadow local cultures, languages, and values. I worked with Malaysia’s National AI Office on AI governance and ethics. I was asked to explore how a principle called kesejahteraan—prosperity or holistic well-being—could guide AI governance.

Values-Based Frameworks

Drawn from Malaysia’s MADANI framework, kesejahteraan is a civic value rooted in the country’s plural traditions. It defines well-being as a national goal of human flourishing, grounded in compassion, justice, equity, and human dignity, not as a byproduct of economic growth. While not narrowly religious, kesejahteraan plays a role similar to values-based social frameworks that embed ethical purpose into governance, like “the common good” in Catholic social teaching or the Swedish political doctrine of folkhemmet, meaning “the people’s home.”

The concept of ubuntu in Sub-Saharan Africa affirms deep social interconnectedness, where individual flourishing is inseparable from community well-being. In Aotearoa New Zealand, Māori principles like kaitiakitanga (guardianship), whakapapa (relationality), whanaungatanga (social obligations), kotahitanga (collective benefit), and manaakitanga (reciprocity) have shaped national approaches to data governance and technology policy. Like kesejahteraan, these frameworks offer a moral compass that insists technology, policy, and progress must serve a deeper human purpose.

I decided to confront the assumption that has dominated tech policy discourse: the need to “balance innovation and regulation.” There is no balancing act when the stakes are this high. When it comes to AI and other frontier technologies, if a system does not serve human flourishing, it is not innovation worth pursuing. The values embedded in our societies tell us this is true. Yet this moral clarity is frequently missing from corporate roadmaps and multilateral declarations; even governments that include these values in their national strategies often let them become secondary in the “innovation or regulation” balancing act. That clarity is urgently needed.

Such a moral lighthouse is powered by indigenous, religious, and humanistic values. We must ensure those values drive AI governance and policymaking. Our ancestors tell us this is possible.

The Need for Ethical AI

Artificial intelligence is rapidly reshaping economies, democracies, workplaces, and personal and political life. Billionaire tech leaders say innovation must be rapid, markets must be free, and regulation must be minimal so the world can benefit. This is a dangerous illusion.

I once believed in the democratizing power of technology, working to promote freedom of expression, transparency, and democratic governance, believing social media could give voice to the voiceless. But social media proved a cautionary test case: platforms monetized outrage, polarization, and misinformation, and the consequences for mental health, civic trust, democracy, and societal cohesion are now apparent.

Markets alone do not optimize for inclusion, equity, dignity, or human flourishing. They optimize for profit, scale, addiction, and efficiency, often harming people. AI threatens to repeat and magnify these failures. If left unchecked, unsupervised, and unregulated, the damage to people, systems of government, and cultural life will be profound. We need a moral lighthouse—an ethical compass that guides our policymaking before harm is done. This is not a call for religious doctrine to dictate public policy, nor is it equating morality with religion. Instead, it is a call for our shared humanity to ask not only can we build a technology, but should we? And if we should, how can it help us shape the world we want?

Shaping Technology for Good

Technological progress is not inevitable. Government policy can encourage a more beneficial trajectory for AI. We can bend the arc of innovation, just as labor protections did in the Industrial Age and environmental regulations did in the age of fossil fuels. In both eras, society stepped in to shape technology for the common good, not for exploitation and power accumulation.

When it comes to AI, we risk relinquishing control. The dominant paradigm espoused by tech companies is grounded in market fundamentalism: innovate quickly, scale rapidly, and worry about the consequences later. Harms are treated as bugs, not warnings, cleaned up only if there’s enough public backlash. Pressure on startups to scale, monetize, and get acquired overrides ethical reflection. The result is AI systems that are biased, opaque, and harmful, deployed across hiring, policing, criminal sentencing, education, welfare, migration, and healthcare. These tools lack oversight, accountability, transparency, and redress for those impacted.

This happens when the profit motive is prioritized above moral obligations and policymakers fail to properly lead.

The Moral Vacuum in AI Governance

The lack of moral clarity in US AI governance is unfolding for the world to see. The consolidation of sensitive personal data from multiple US government departments aims to create a surveillance tool. AI is being used to interfere in government processes and monitor federal employees for disloyalty, prioritizing surveillance and weakening institutions over rights, dignity, and due process.

An executive order removed safeguards on AI development, prioritizing rapid AI advancement over ethical considerations and public safety. Money over humanity, power over accountability, acceleration over wisdom. Congress has failed to act in meaningful ways or restrain corporations from moving fast and breaking things with AI. Proposals exist to ban states and localities from enacting AI-related regulations, greenlighting corporate entrenchment of AI systems across critical sectors. Big Tech’s influence over the regulatory narrative weakens oversight, delaying guardrails that could protect the public.

This reveals why societies need a moral lighthouse—a guiding philosophy rooted in local wisdom and community values that insists innovation must serve human flourishing, not speed, dominance, or profit.

Malaysia’s Prime Minister Anwar Ibrahim stated that the failure of the global political system is a deficit in values, where people do not honor human dignity or care about justice or fairness. Good governance defines what progress is worth pursuing. US tech leaders and policymakers have lost the high ground. Policymakers in the Global Majority should listen to voices from within their own societies. The diverse cultures and religious traditions of the Global Majority bring practical moral guidance to the policymaking table.

An ethical framework for AI policy begins by reaffirming basic principles that have formed the foundation of many indigenous traditions: that human dignity is not negotiable, that power must be accountable, and that no innovation is above public scrutiny. It requires policy not just as a safety net, but as a steering wheel.

Indigenous traditions in North America, such as the Haudenosaunee Confederacy’s Seventh Generation Principle, call on leaders to consider how every decision will affect those who come seven generations after them. These worldviews emphasize relationality, reciprocity, and stewardship, reminding us that data, knowledge, and innovation are not just resources to be mined but responsibilities to be held with care.

Some countries are trying to center indigenous wisdom in their governance systems for frontier technology. Aotearoa New Zealand has embedded Māori principles of guardianship and care into its data ethics strategy, showing that indigenous values can enrich modern governance. In Malaysia, kesejahteraan offers a moral lens through which AI can be assessed, not only in terms of efficiency or GDP, but also in its contribution to justice, dignity, and shared humanity. In Europe, France’s États généraux de l’information draws on a traditional participatory model where citizens, journalists, and civil society help shape digital governance.

These examples demonstrate that alternatives to the capitalist, market-first model underpinning today's governance discourse are possible. Most declarations on AI governance gesture toward fairness, accountability, transparency, and privacy. These offer procedural guardrails, not purpose. What’s missing is moral clarity—a sense of why we are building these technologies and for whom. We need something more enduring than a risk framework: a moral lighthouse that draws on the ethical inheritance of our cultures and communities.

These are the pathways our ancestors laid down—through teachings about responsibility, stewardship, justice, and care—not just for the present, but for the generations to come. The question is whether we have the political will to act on that moral compass when the profit motive collides with the public interest. Traditions from across the Global Majority tell us that as societies we can, and our leaders must, push back when our flourishing is compromised. This kind of moral lighthouse gives our leaders the clarity to ask: Does this technology contribute to human flourishing? If not, why not?

The stakes are enormous. AI systems and the people who deploy them will influence who gets hired, promoted, or fired, who receives what medical care, how children are taught, how and which people can migrate, who gets mortgages and bank loans, and how governments allocate resources. They will shape the narratives we hear and believe, the choices we see, and the freedoms we enjoy. If AI deepens inequality, disempowers people, or displaces civic participation, it is not the future we want—no matter how advanced the technology may be, or how much money some individuals can make from it.

A moral lighthouse doesn’t guarantee safe passage, but powered by common moral values, it helps us chart a course and navigate uncertainty. It warns us when the rocks are near. In an age of market acceleration and ethical drift, we need that beacon more than ever. Ultimately, the question is not what technology can do. The question is: what kind of world do we want to build?