Ethereum Co-founder Stresses "Superintelligent" AI Risks
21 May, 2024 ● Crypto people

Ethereum co-founder Vitalik Buterin has expressed concerns about "superintelligent" artificial intelligence, describing it as "risky" amid recent leadership changes at OpenAI.
Cointelegraph reported on May 19 that OpenAI's former head of alignment, Jan Leike, resigned due to a "breaking point" reached with management over the company's focus.
Leike claimed that OpenAI had prioritized "shiny products" over "safety culture and processes," with a particular emphasis on the development of artificial general intelligence (AGI). AGI, expected to match or exceed human cognitive abilities, has already started to raise alarms among industry experts who believe the world is unprepared to handle such advanced AI systems.
These concerns seem to resonate with Buterin's viewpoint. In a recent post on X, he cautioned against rushing ahead in this field and criticized efforts to dismiss those who urge caution.
Buterin advocated for open models operating on consumer hardware as a protective measure against a potential future dominated by a few large corporations controlling and interpreting most human thoughts.
“Such models are also much lower in terms of doom risk than both corporate megalomania and militaries.”
This is his second comment in the past week on the evolution of AI and its capabilities.
On May 16, Buterin suggested that OpenAI's GPT-4 model had passed the Turing test, citing new studies showing that most people are unable to tell they are interacting with a machine.
Buterin is not alone in his apprehension. The UK government has also taken a critical stance on the growing influence of Big Tech in AI, emphasizing concerns about market dominance and competition.
Amid these developments, groups like 6079 are surfacing online, pushing for a decentralized approach to AI that keeps it under more democratic control, out of the grip of Big Tech.
This commentary follows another significant exit from OpenAI’s leadership, as co-founder and chief scientist Ilya Sutskever resigned on May 14.
While Sutskever did not voice specific worries about AGI in his resignation, he remained optimistic about OpenAI's future in a post on X, expressing confidence that it would build an AGI that is "safe and beneficial."
Sources:
https://cointelegraph.com/news/vitalik-buterin-warns-superintelligent-ai-risks-openai-turmoil