New Delhi: Yann LeCun, the chief AI scientist at Meta, disagrees with Elon Musk about the existential threat posed by artificial intelligence (AI). An experienced computer scientist specialising in AI and machine learning, LeCun is an optimist who believes AI can greatly benefit the world and does not consider it inherently dangerous.
In a recent podcast with venture capitalist Harry Stebbings, LeCun addressed Musk’s concerns about AI, particularly in light of the release of OpenAI’s ChatGPT. Musk has repeatedly expressed his worries about the dangers of AI, including in an interview with Tucker Carlson in April, where he stated that AI had the potential for “civilisation destruction.”
LeCun, however, disagrees with Musk's viewpoint, dismissing it as "completely false" and suggesting that Musk may have been influenced by Nick Bostrom's book "Superintelligence" or by the writings of Eliezer Yudkowsky. Both are prominent voices in debates about the risks and ethics of AI.
Nick Bostrom, a Swedish philosopher and professor at the University of Oxford, is known for his work on existential risks, ethical dilemmas posed by AI, and the consequences of futuristic technologies on human civilisation. He is the author of “Superintelligence: Paths, Dangers, Strategies,” which delves into the potential implications of advanced AI.
Eliezer Yudkowsky, an American AI researcher and writer, is recognised for his contributions to the field of artificial intelligence alignment. He co-founded the Machine Intelligence Research Institute (MIRI) and has written extensively about artificial general intelligence (AGI), rationality, decision theory, and the future of humanity. His online writings, notably on LessWrong, the rationality blog he founded, have gained significant attention.
LeCun explains what he sees as the flaw in Musk's reasoning, particularly the concept of a "hard take-off": the idea that once a super-intelligent AI system is switched on, it will rapidly improve itself far beyond human intelligence, potentially leading to the destruction of the world.
LeCun argues that this assumption is baseless because exponential growth cannot be sustained indefinitely in the real world. Such a system, he emphasises, would need access to vast resources along with unlimited power and agency, which is highly unlikely. He also rejects the notion that AI systems, even ones more intelligent than humans, would inherently desire to control or dominate us, drawing a parallel with intelligent humans: intelligence alone does not drive individuals to seek domination over others.
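LeCun's resource-limits argument can be illustrated with a toy calculation. The sketch below is a hypothetical illustration, not a model LeCun presented; the growth rate, the "capability" variable, and the resource ceiling are invented for the example. It contrasts unbounded exponential self-improvement with logistic growth, where progress is throttled as a finite resource budget is exhausted.

```python
# Illustrative sketch only (not LeCun's model): why exponential growth
# stalls once a finite resource budget binds. All quantities here are
# hypothetical and chosen purely for the example.

def exponential_growth(x: float, rate: float) -> float:
    """Unbounded self-improvement: each step grows x by a fixed fraction."""
    return x + rate * x

def logistic_growth(x: float, rate: float, capacity: float) -> float:
    """Resource-limited self-improvement: growth slows as x nears capacity."""
    return x + rate * x * (1 - x / capacity)

x_exp = x_log = 1.0
capacity = 100.0  # hypothetical resource ceiling (compute, energy, data)
for step in range(60):
    x_exp = exponential_growth(x_exp, rate=0.2)
    x_log = logistic_growth(x_log, rate=0.2, capacity=capacity)

print(f"unbounded:        {x_exp:12.1f}")  # explodes (~56,000 after 60 steps)
print(f"resource-limited: {x_log:12.1f}")  # plateaus near the ceiling (~100)
```

After 60 steps the unbounded curve has grown by several orders of magnitude, while the resource-limited one has plateaued near its ceiling; that saturation is the shape LeCun's objection points to.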
LeCun expresses his intention to discuss this issue with Geoffrey Hinton, often referred to as the "Godfather of AI," who recently left Google citing concerns similar to those raised by Musk. LeCun reveals that he and Hinton have not yet talked about their differing opinions and plan to exchange viewpoints. He speculates that Hinton might not be aware of his stance, as Hinton does not follow LeCun's Twitter posts.
Regarding Hinton's departure from Google, LeCun says he understands the motivations behind it: the complexity and rapid evolution of AI call for people who can express their opinions freely. He nevertheless disagrees with Hinton's assessment of the probability of human extinction due to AI and maintains his optimistic outlook on the matter.
In summary, Yann LeCun, the chief AI scientist at Meta, challenges Elon Musk’s belief that AI poses an existential threat. LeCun dismisses the notion of AI leading to civilisation destruction and disagrees with the assumption of a “hard take-off.” He highlights the works of Nick Bostrom and Eliezer Yudkowsky as potential influences behind Musk’s concerns.
LeCun also expresses his intention to discuss the matter with Geoffrey Hinton, who shares similar worries, though LeCun disagrees with Hinton's stance on the probability of human extinction caused by AI.