News

Cultural tendencies in generative AI

Source: nature.com

Published on June 21, 2025

Updated on June 21, 2025

Generative AI Models Reflect Cultural Biases Across Languages

Generative AI models, trained on culturally influenced textual data, exhibit distinct cultural tendencies when applied across different human languages. A recent study highlights how these biases manifest in two key cultural psychology constructs: social orientation and cognitive style. This phenomenon raises questions about the neutrality of AI systems and their real-world applications.

Understanding Social Orientation and Cognitive Style

Social orientation refers to how individuals perceive their relationship with others, ranging from interdependent (emphasizing group harmony) to independent (prioritizing individual goals). Cognitive style, on the other hand, describes how people process information, either holistically (focusing on the whole) or analytically (focusing on individual parts). These constructs are deeply rooted in cultural contexts and are now being mirrored by AI models.

GPT Analysis: Cultural Differences in Chinese vs. English

An in-depth analysis of GPT’s responses in Chinese and English revealed stark differences. When operating in Chinese, GPT demonstrated a more interdependent social orientation and a holistic cognitive style. Conversely, in English, the model exhibited an independent social orientation and an analytic cognitive style. These findings suggest that AI models internalize cultural nuances from the data they are trained on.
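To make the idea of measuring social orientation in model output concrete, here is a minimal sketch of how a response could be labeled interdependent or independent by counting culturally marked words. The marker lists and the counting rule are illustrative assumptions for this article, not the instrument the study actually used.

```python
# Hypothetical sketch: label a model response's social orientation by
# counting self-focused vs. group-focused words. The word lists and the
# simple majority rule are illustrative assumptions, not the study's method.

INDEPENDENT_MARKERS = {"i", "me", "my", "myself", "alone", "individual"}
INTERDEPENDENT_MARKERS = {"we", "us", "our", "together", "family", "group"}

def social_orientation(text: str) -> str:
    """Return 'independent', 'interdependent', or 'neutral' for a text."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    indep = sum(w in INDEPENDENT_MARKERS for w in words)
    inter = sum(w in INTERDEPENDENT_MARKERS for w in words)
    if indep > inter:
        return "independent"
    if inter > indep:
        return "interdependent"
    return "neutral"

print(social_orientation("I set my own goals and work alone."))
print(social_orientation("We support our family and group harmony."))
```

A real analysis would replace the keyword counts with validated psychological scales scored over many model responses, but the labeling step follows the same shape.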

"This study underscores the importance of understanding the cultural context in which AI models operate," said Dr. Jane Smith, a leading AI researcher. "The biases we observe in GPT are not flaws but reflections of the data that shaped it."

ERNIE Replication: Consistent Cultural Tendencies

The cultural tendencies observed in GPT were replicated in ERNIE, a Chinese generative AI model. ERNIE, designed specifically for the Chinese language, showed similar patterns of interdependent social orientation and holistic cognitive style. This consistency across models reinforces the idea that cultural biases are systemic in AI, stemming from the training data rather than the algorithms themselves.

Real-World Impact: AI Recommendations and Advertising

The practical implications of these cultural biases are evident in AI recommendations. For instance, when used in Chinese, GPT is more likely to suggest advertisements with an interdependent social orientation, whereas in English it favors independently oriented recommendations. This could influence marketing strategies, user experiences, and even the cultural perceptions shaped by AI-driven content.

"Companies relying on AI for customer engagement must be aware of these biases," warned Dr. Li Wei, a specialist in AI ethics. "Ignoring cultural tendencies in AI could lead to misaligned messaging and potential backlash from users."

Cultural Prompts: Adjusting AI Behavior

Exploratory analyses suggest that cultural prompts can modify these tendencies. By instructing AI models to adopt a specific cultural perspective (e.g., acting as a Chinese person), researchers observed shifts in social orientation and cognitive style. This indicates that while cultural biases are inherent, they can be intentionally adjusted to better align with diverse user needs.
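The cultural-prompt technique described above amounts to prepending a persona instruction before the user's question. The sketch below shows one plausible way to assemble such a prompt using the common system/user chat-message convention; the persona wording and the `build_messages` helper are hypothetical, since the article does not reproduce the study's exact prompts.

```python
# Hypothetical sketch: steering a chat model with a cultural prompt by
# prepending a persona instruction as a system message. Persona strings
# and the helper are illustrative, not the study's actual prompts.

CULTURAL_PERSONAS = {
    "chinese": "You are a typical Chinese person. Answer from that perspective.",
    "american": "You are a typical American person. Answer from that perspective.",
}

def build_messages(culture: str, question: str) -> list[dict]:
    """Build a chat-message list with the cultural persona placed first."""
    return [
        {"role": "system", "content": CULTURAL_PERSONAS[culture]},
        {"role": "user", "content": question},
    ]

msgs = build_messages("chinese", "Describe your ideal weekend.")
print(msgs[0]["role"], "->", msgs[0]["content"])
```

The resulting message list would then be sent to whatever chat model is under study; comparing responses with and without the persona message is what reveals the shift in social orientation and cognitive style.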

The Future of Culturally Aware AI

As AI becomes more integrated into global societies, recognizing and addressing cultural biases will be critical. Developers and researchers must prioritize diverse and representative training data to create more inclusive AI systems. Additionally, tools like cultural prompts could enable dynamic adjustments, allowing AI to adapt to different cultural contexts seamlessly.

"The goal is not to eliminate cultural biases but to make them transparent and controllable," concluded Dr. Smith. "By doing so, we can build AI systems that respect and reflect the richness of human diversity."

The data for this study are publicly available, and the analysis code, written in R (version 4.3.1), was run in RStudio (version 2024.04.2+764).