The AGI Myth

Source: techpolicy.press

Published on June 4, 2025

Tech executives, futurists, and venture capitalists often portray artificial general intelligence (AGI) as an inevitable goal of technological development. Yet AGI is vaguely defined, and its meaning differs from person to person.

AGI is portrayed as a technology that will create endless abundance, but the term also deflects accountability as tech moguls amass capital and public funds. The definition of AGI shifts to suit the economic interests of whoever is trying to build it.

OpenAI defines AGI as systems that surpass humans at economically valuable work. Mark Zuckerberg, by contrast, says he does not have a concise definition. Ilya Sutskever, OpenAI's former Chief Scientist, would chant “Feel the AGI!” In a leaked agreement, Microsoft and OpenAI defined AGI as a system generating $100 billion in profit. The term “AGI” evokes awesome power, much as “AI” did before it became commonplace in marketing.

The Origins of AGI

The pursuit of awe in “AGI” mirrors the field's beginnings. In an early report, computer scientist Marvin Minsky argued that humans are complicated machines and that replicating the workings of the human brain would achieve “AI”. That framing echoes in how “AGI” is deployed today.

Real-World Impact

Giving credence to AGI has real-world effects. It suggests that a program proficient at one task can shoulder important social and economic responsibilities. Some suggest these programs can fill gaps in major social services, conduct autonomous science, and solve climate change. California Governor Gavin Newsom, for instance, said that “AI” can solve traffic problems and homelessness in California. Google DeepMind CEO Demis Hassabis believes AI scientists will cure cancer and eliminate all disease in the near future. Former Google CEO Eric Schmidt has said that “AGI” will solve climate change for us.

The Social Contract

Claims of “AGI” can obscure the abandonment of the current social contract. Those focused on AGI would set aside other scientific and socially beneficial work in favor of AGI development and protection, believing that building superintelligence will deliver abundance.

Venture capitalist Marc Andreessen said that “AI” will “crash wages” and deliver a “consumer cornucopia”. OpenAI CEO Sam Altman thinks that once AGI is built, everyone will “own” access to it, apportioned as “universal basic compute”.

Robot Gods

The discourse around AGI suggests that a big robot god will rescue humans if imbued with the right values. Two competing beliefs dominate this vision of the technological future: an AGI trained with the proper values will create limitless abundance, or a robot superintelligence will eliminate us. Ray Kurzweil holds the former view, anticipating a technological singularity. Eliezer Yudkowsky fears that an uncontrolled superintelligence will realize it does not need humans and must be stopped, even if that means bombing a data center. These ideas appeal to, and influence, executives in industry and government.

Even though “AGI” is poorly defined, it carries weight in policy circles. Promises of AGI motivate government initiatives and also justify limits on AI regulation, such as the 10-year moratorium on state-level AI regulation that the House passed in its funding bill.

When someone invokes AGI, consider which social or political problems they are asking us to ignore, and how they may be contributing to those problems.