Artificial intelligence (AI) is a double-edged sword: potentially perilous, yet capable of mitigating other existential threats. Within the tech community, debate over AI safety has intensified; according to Bloomberg News, it contributed to the ouster of OpenAI co-founder Sam Altman as the company’s CEO.
Central to these deliberations is a daunting question: could AI pose an existential risk, threatening humanity’s survival? Let me allay some fears: AI carries risks broadly comparable to other threats humanity already faces, such as supervolcanoes, asteroid strikes, or nuclear war.
Perhaps this isn’t the most comforting reassurance. But it contrasts sharply with the outlook of AI researcher Eliezer Yudkowsky, who foresees humanity’s demise: a future in which AI surpasses human intellect, diverges from our objectives, and ultimately leaves us in the position of the Neanderthals, outcompeted and displaced. Others advocate a six-month pause in AI development to gain a deeper understanding of its implications.
AI is the latest chapter in humanity’s long encounter with transformative technologies. The printing press and electricity brought both benefits and misuses, yet halting or impeding progress in those domains would have been a mistake. AI deserves the same cautious but forward-moving approach.
The debate also hinges on whether to frame the issue probabilistically or marginally. Critics often ask, “What is the probability of catastrophe?” A more constructive question is the marginal one: given that AI will continue to advance, how do we improve it from here? The logical step is to enhance its safety and usefulness while mitigating its risks.
Predicting AI’s existential risk, or any existential risk, in a theoretical vacuum is arduous. Progress is likelier when we ground these concerns in concrete, real-world scenarios.
It’s crucial to note that the pessimistic arguments lack robust support from extensive peer-reviewed research, unlike, say, the case for climate change. Halting a major technology on the basis of such scant confirmed research verges on pseudo-science. Nor is the perceived risk of catastrophe reflected in market prices: risk premiums remain moderate, and other economic variables look stable. Those convinced of AI’s world-ending potential could profitably bet on market volatility now, or donate their wealth to alleviate present suffering, yet few seasoned traders opt for such strategies.
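To make the market-pricing point concrete, here is a toy expected-value sketch in Python. Every number in it is hypothetical (the $1 premium and the 20x crash payoff are illustrative assumptions, not market quotes), and it sets aside the fair objection that no one collects a payout if the world actually ends. The point is simply that anyone who genuinely assigned double-digit odds to an AI-driven crash should see cheap tail hedges as a bargain.

```python
# Toy sketch with hypothetical numbers: a tail hedge costs $1 in premium,
# pays a 20x multiple if a catastrophe-driven crash occurs within the
# horizon, and expires worthless otherwise.

def expected_return(p_crash: float, payoff_multiple: float) -> float:
    """Expected profit per $1 of premium: the hedge pays payoff_multiple
    with probability p_crash and nothing with probability 1 - p_crash."""
    return p_crash * payoff_multiple - 1.0

for p_crash in (0.01, 0.10, 0.30):
    ret = expected_return(p_crash, payoff_multiple=20.0)
    print(f"P(crash) = {p_crash:.0%} -> expected return {ret:+.0%}")
```

At a 1 percent crash probability the hedge loses money on average; at 10 percent it doubles the premium in expectation. The absence of eager buyers at such prices tells us something about what traders actually believe.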
When asked what portfolio adjustments their beliefs imply, AI pessimists usually admit to none. Sensibly, they do not let their stated probabilities of doom reshape their life decisions. The most effective among them channel their efforts into improving AI safety, an endeavor worth encouraging.