Meet the Man Who First Conceptualized AGI
Source: wired.com
What is AGI?
Artificial General Intelligence, or AGI, refers to the point at which computers can match or surpass human intelligence. The term has gained prominence in recent years: deals between major tech companies like OpenAI and Microsoft hinge on its achievement, the pursuit of it has driven massive investment from Meta, Google, and Microsoft, and it has helped propel Nvidia to a $5 trillion valuation. Some US politicians warn that falling behind China in the race to AGI would leave the country in trouble, and some experts predict AGI could arrive within this decade and upend nearly everything.
Meet Mark Gubrud
Mark Gubrud, largely unknown to the public, is the person who coined the term AGI. In 1997, Gubrud was a graduate student obsessed with nanotechnology and its potential dangers. He attended nanotech conferences and was particularly concerned that cutting-edge science, including advanced AI, could be used to develop dangerous weapons of war.
His concerns about the potential misuse of advanced technologies like AI remain relevant today, as progress in AI and other fields continues to raise ethical questions and risks.
In a paper he presented at one of those conferences in 1997, Gubrud defined AGI as 'AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.'
That definition still holds up and is close to how many in the field use the term today. However, Gubrud's paper wasn't widely circulated, and its impact at the time was minimal. It wasn't until the early 2000s that the term AGI started to gain traction, thanks in part to researchers like Ben Goertzel and Shane Legg.
The Role of Shane Legg
In the early 2000s, Ben Goertzel, Shane Legg, and other researchers were working on a book about AI. They wanted a term for AI that could handle wide-ranging applications, as opposed to specific, bounded domains like playing chess or medical diagnosis.
Legg suggested adding the word 'general,' arriving at 'artificial general intelligence,' or AGI. Goertzel and the other contributors liked the term, and it began to gain traction in the AI community.
Interestingly, Gubrud only learned in the mid-2000s that others were using the term AGI. He reached out to those popularizing it, and it turned out he had indeed used the term first, albeit in a paper that few people had read.
Implications of AGI
The pursuit of AGI is driving major deals and investments across the tech industry. A recent agreement between OpenAI and Microsoft, for example, hinged on what happens if OpenAI achieves AGI, and companies like Meta, Google, and Microsoft are making massive capital expenditures in the chase.
Some US politicians argue that the United States cannot afford to let China reach AGI first, because the technology could revolutionize industries and hand whichever country controls it a significant strategic advantage.
At the same time, the risks and ethical implications of AGI demand attention. As Gubrud warned back in 1997, AGI could be used to develop dangerous weapons of war, and proper safeguards and regulations will be needed to prevent such misuse.