Human-Centered AI: Key Questions
Source: weforum.org
Human-centered AI raises concerns about morality, ethics, belief, and spiritual expression. Generative artificial intelligence (AI) is transforming many industries, with considerable financial and ecological repercussions. Creating AI that puts people first and strengthening skills focused on the human element are two ways to pursue AI responsibly. This emphasis is visible in organizations and initiatives such as Stanford University’s Institute for Human-Centered Artificial Intelligence, Carnegie Mellon University’s Human-Computer Interaction Institute, and The Institute for Human Centred Health Innovation.
The World Economic Forum’s Future of Jobs Report 2025, Stanford’s AI Index Report 2025, and the AI Incident Database also highlight the influence of large language models (LLMs). These reports emphasize the need for people-centered skills in the workplace, as well as ethical principles for a technology prone to bias, hallucination, and other hazards. This shift toward human-focused skills and AI reflects a desire for technological advances to foster human prosperity alongside environmentally sound business methods and practices. It also acknowledges that people form complex relationships with both other people and technology (such as chatbots), creating vulnerabilities that could be exploited.
Societies are increasingly aware of the far-reaching consequences of creating and using generative AI. The EU AI Act, adopted in 2024 as the first comprehensive AI law globally, aims to ensure that AI systems are safe, transparent, and fair, as well as respectful of privacy and the environment. AI demands our collective awareness of how we use algorithms. According to technology historian Melvin Kranzberg, technology is neither good nor bad, nor is it neutral. This observation suggests that technology plays a role in power dynamics, producing unforeseen and widespread effects on people, society, and the environment. Outcomes differ depending on who uses a technology, and how, when, and where they use it. To effectively guide the development, implementation, and application of AI platforms, cooperation among many stakeholders is essential.
The World Economic Forum and the Fourth Industrial Revolution
The World Economic Forum was the first organization to bring attention to the Fourth Industrial Revolution, which is defined by the quickening pace of technological progress. Rules and regulations have not kept up with innovation, creating a growing demand to close this gap. To ensure that new and emerging technologies benefit humanity, the Forum founded the Centre for the Fourth Industrial Revolution Network in 2017. The network, headquartered in San Francisco, opened locations in China, India, and Japan in 2018 and is rapidly expanding a network of locally managed Affiliate Centres across numerous countries. The global network collaborates closely with partners in government, business, academia, and civil society to create and implement flexible frameworks for managing new and emerging technologies, including artificial intelligence (AI), autonomous vehicles, blockchain, data policy, digital trade, drones, the internet of things (IoT), precision medicine, and environmental innovations.
Critical Questions for Human-Centered AI
Developing human-centered AI and upskilling employees raise three critical questions:
- Who is the human in human-centered efforts? Is it an executive, a middle manager, an assembly line worker, or someone in a rural area with limited internet? Is it someone waiting for self-driving cars, expecting rain for crops during a drought, or an islander anticipating high tide?
- Who is envisioning the human using AI? Is it a young, male technologist, or a diverse team with a complex understanding of humanity? Women and people of color are underrepresented in AI.
- What protections will be in place for different demographics affected by AI, such as nationality, gender, religion, education, ability, and class?
How we define the human in human-centered approaches matters. The World Economic Forum Global Gender Gap Report 2025 and Gini indices confirm vast differences in human experience. Technological progress often overlooks the diverse ways humanity expresses itself, as well as the unintended consequences that progress brings.
WEF and AI Guardrails
The Forum’s Centre for the Fourth Industrial Revolution (C4IR) has created the AI Governance Alliance to address concerns about generative AI and the necessity for strong AI governance frameworks. The Alliance brings together leaders from industry, government, academia, and civil society to support transparent and inclusive AI systems. This includes AI Transformation of Industries initiatives, with the Centres for Energy and Materials, Advanced Manufacturing and Supply Chains, Cybersecurity, Nature and Climate, and the Global Industries team.
Disciplines like psychology, sociology, anthropology, and history can help AI developers understand their users. Faith traditions, representing 75.8% of the world’s population (2020), also offer unique perspectives. Many who don’t identify with a religion still practice spirituality and hold religious beliefs. Faith traditions and spirituality offer six key benefits:
- Deep understanding of human and environmental well-being.
- Values like peace, compassion, and justice for ethical living.
- Widely shared practices in contexts that are often overlooked.
- Narratives illustrating how boundaries support well-being.
- Lessons from past mistakes of religious movements.
- Historical wisdom beyond data and algorithms.
These contributions are layered and rich. Documents like the 1934 Barmen Declaration, the 2004 Accra Confession, and the 2009 Charter for Compassion can inform discussions on human-centered AI. Recent reports, such as the interfaith statement towards a New Ethical Multilateralism and the Pontifical Jubilee Report, seek the common good in a complex world. Creative collaboration among diverse stakeholders, including government, business, tech companies, academia, and faith communities, is a best practice.