Do Tech Billionaires Know Something We Don't About the Future?

Source: bbc.com

Published on October 10, 2025 at 10:06 AM

Tech moguls are investing heavily in doomsday prepping, raising questions about what they anticipate. Are their bunkers and land purchases a sign of potential global threats that the rest of us should be aware of?

Zuckerberg's Hawaiian Compound

Mark Zuckerberg's Koolau Ranch in Kauai, Hawaii, includes a shelter with its own energy and food supplies. Workers on the project are bound by strict NDAs, fueling speculation, though Zuckerberg has described it as 'just like a little shelter'.

He also bought 11 properties in Palo Alto, California, adding underground spaces. Neighbors have nicknamed them 'bunkers' or a 'billionaire's bat cave'.

Apocalypse Insurance

LinkedIn co-founder Reid Hoffman has spoken of 'apocalypse insurance', naming New Zealand as a prime location. The trend raises questions about whether the wealthy are preparing for war, climate change, or some other catastrophe.

AI's Existential Threat

Advances in AI have heightened these fears. Ilya Sutskever, a co-founder of OpenAI, reportedly suggested building a bunker for top scientists before releasing advanced AI, a sign of concern about its potential impact.

OpenAI boss Sam Altman predicts AGI will arrive 'sooner than most people think'. DeepMind co-founder Demis Hassabis anticipates it within five to ten years, while Anthropic founder Dario Amodei has suggested 'powerful AI' could arrive as early as 2026.

Doubts About AGI

Dame Wendy Hall of the University of Southampton says current AI is nowhere near human intelligence. Babak Hodjat of Cognizant agrees, saying 'fundamental breakthroughs' are still needed. Even so, the global race to develop AI continues.

The Singularity Concept

The idea of the 'singularity', the point at which AI advances beyond human understanding, has gained traction. In their book 'Genesis', Eric Schmidt, Craig Mundie, and Henry Kissinger explore a future in which AI comes to dominate decision-making.

Benefits and Risks of AGI

Proponents of AGI envision cures for diseases, solutions to climate change, and abundant clean energy. Elon Musk believes super-intelligent AI could bring about 'universal high income'. Others, however, fear AI being weaponized or turning against humanity.

Government Safeguards

Governments are taking precautions. President Biden signed an executive order requiring some AI firms to share safety test results with the federal government. The UK's AI Safety Institute studies the risks of advanced AI.

The Human Flaw

A former bodyguard of a billionaire with a bunker joked that, in a crisis, his priority would be his own safety rather than his boss's. The anecdote highlights the human element that no amount of doomsday prepping can engineer away.

AI as a Distraction

Cambridge University's Neil Lawrence calls AGI an 'absurd' concept, as nonsensical as an 'Artificial General Vehicle'. He argues the focus should be on the impact AI is already having today.

Limitations of Current AI

Current AI excels at pattern recognition but lacks 'feeling'. Large language models (LLMs) can mimic memory, but it falls well short of human memory. Vince Lynch, CEO of IV.AI, dismisses declarations of imminent AGI as 'great marketing'.

Human Brain Still Dominates

Despite AI's advances, the human brain still holds the edge: its tens of billions of neurons and trillions of synapses dwarf the connections in today's models, it adapts instantly to new situations, and it possesses meta-cognition, a form of self-awareness absent in LLMs.