News
Sergey Brin: AI Learns Better With Threats
Source: livemint.com
Published on May 26, 2025
Updated on May 26, 2025

Sergey Brin, co-founder of Google, recently made waves in the AI community by suggesting that AI models perform better when threatened, including with physical violence. This unexpected claim, shared on the All-In podcast, challenges the prevailing belief that pleasantries like 'please' and 'thank you' improve AI responses.
Brin's statement has sparked debate among AI experts, as it contradicts the widely accepted practice of using polite language to improve AI interactions. OpenAI's CEO, Sam Altman, previously noted the significant compute costs incurred when users add polite phrases to their ChatGPT prompts, but Brin's perspective offers a starkly different view on optimizing AI performance.
The Role of Threats in AI Training
According to Brin, the idea that threats could improve AI performance is not openly discussed within the AI community. He acknowledged that this approach might make people uncomfortable, but he believes it holds potential for enhancing AI capabilities. This unconventional method raises ethical and practical questions about the future of AI development and the boundaries of acceptable training techniques.
Brin has been actively involved in improving Google's Gemini model, a cutting-edge AI system designed to compete with leading models like ChatGPT. His work on Gemini underscores his continued commitment to advancing AI technology, even after stepping back from daily duties at Google and its parent company, Alphabet.
Google's AI Ambitions
Brin's appearance with Google DeepMind CEO Demis Hassabis at the I/O 2025 conference highlighted Google's expanding AI capabilities. Since the debut of ChatGPT, Google has accelerated its AI efforts, integrating advanced AI features across its platforms. This strategic move positions Google as a major competitor to OpenAI in the rapidly evolving AI landscape.
Despite the controversy surrounding his statements, Brin remains optimistic about the future of technology. He emphasized the importance of computer scientists in driving innovation and shaping the next generation of AI systems. As the AI community continues to grapple with the ethical implications of Brin's suggestions, his insights offer a provocative perspective on the potential of AI learning techniques.
Industry Reactions and Future Implications
The AI community is divided over Brin's remarks. While some experts argue that exploring unconventional training methods could lead to breakthroughs, others caution against the ethical risks associated with using threats in AI development. As the debate continues, it is clear that Brin's statements have reignited conversations about the responsible use of AI and the need for ethical guidelines in AI training.
In conclusion, Sergey Brin's controversial suggestion that AI models perform better when threatened has sparked a critical dialogue within the AI community. As Google and other tech giants continue to push the boundaries of AI capabilities, the ethical considerations surrounding AI training will remain a central focus in the ongoing evolution of this transformative technology.