Human-Centered AI: Key Questions

Source: weforum.org

Published on September 29, 2025

Updated on September 29, 2025

A conceptual image representing human-centered AI, with a person interacting with AI technology while surrounded by symbols of ethics, society, and sustainability.

Human-centered AI is emerging as a critical focus in the rapidly evolving landscape of generative artificial intelligence. As AI technologies reshape industries, they bring significant financial and ecological implications, raising essential questions about morality, ethics, and the role of technology in society. Organizations like Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) and Carnegie Mellon University’s Human-Computer Interaction Institute are leading efforts to develop AI systems that prioritize human needs and values.

Reports such as the World Economic Forum’s Future of Jobs Report 2025 and Stanford’s AI Index Report 2025 underscore the importance of human-centered skills in the workplace. These reports highlight the need for ethical frameworks to address the inherent biases, hallucinations, and risks associated with large language models (LLMs). The shift towards human-centered AI reflects a growing recognition of the need for technological advancements to support human prosperity while promoting environmentally sustainable practices.

The EU AI Act, which entered into force in 2024 as the first comprehensive AI law globally, emphasizes the importance of creating AI systems that are secure, transparent, fair, and respectful of privacy and environmental concerns. This legislation underscores the collective responsibility for managing AI algorithms. As technology historian Melvin Kranzberg observed, "Technology is neither good nor bad; nor is it neutral" — a reminder of the complex power dynamics and unintended consequences technology can have for society and the environment.

The World Economic Forum and the Fourth Industrial Revolution

The World Economic Forum has been instrumental in highlighting the Fourth Industrial Revolution, characterized by the accelerating pace of technological innovation. Regulatory frameworks, however, have struggled to keep up with this rapid progress, creating a growing need for governance that ensures new technologies benefit humanity. In response, the Forum established the Centre for the Fourth Industrial Revolution Network in 2017, headquartered in San Francisco, with additional centres in China, India, Japan, and beyond.

The network collaborates with government, business, academia, and civil society to develop flexible frameworks for managing emerging technologies, including AI, autonomous vehicles, blockchain, and environmental innovations. This collaborative approach is essential for guiding the development, implementation, and application of AI platforms in a way that aligns with human values and societal needs.

Critical Questions for Human-Centered AI

The development of human-centered AI raises three critical questions:

  1. Who is the human in human-centered efforts? Is it an executive, a middle manager, an assembly line worker, or someone in a rural area with limited internet access?
  2. Who is envisioning the human using AI? Is it a young, male technologist, or a diverse team with a nuanced understanding of humanity?
  3. What protections will be in place for different demographics affected by AI, such as nationality, gender, religion, education, ability, and class?

These questions highlight the importance of considering the diverse experiences and needs of people when developing AI systems. Reports such as the World Economic Forum’s Global Gender Gap Report 2025 and Gini indices confirm the vast differences in human experiences, underscoring the need for inclusive and equitable AI development.

WEF and AI Guardrails

The Forum’s Centre for the Fourth Industrial Revolution (C4IR) has established the AI Governance Alliance to address concerns about generative AI and the need for robust governance frameworks. The Alliance brings together leaders from industry, government, academia, and civil society to support transparent and inclusive AI systems. Initiatives such as the AI Transformation of Industries and Centres for Energy and Materials, Advanced Manufacturing, and Cybersecurity are part of this effort.

Disciplines like psychology, sociology, anthropology, and history can provide valuable insights for AI developers seeking to better understand their users. Faith traditions, whose adherents account for an estimated 75.8% of the world’s population, also offer unique perspectives on human and environmental well-being. These contributions can inform the development of AI systems that align with ethical principles and support the common good.

In conclusion, human-centered AI requires a collective effort to ensure that technological advancements benefit humanity while addressing the complex challenges of ethics, diversity, and sustainability. By fostering collaboration among diverse stakeholders, including faith communities, governments, and businesses, we can create AI systems that truly serve the needs of people and the planet.