Max Tegmark, an MIT professor and AI researcher, is deeply worried about the potential impact of artificial general intelligence (AGI) on human society. In a new essay for Time, he's sounding the alarm, painting a pretty dire picture of a future shaped by an AI that can outsmart us.
“Unfortunately, I now feel like we’re living the movie ‘Don’t Look Up’ for another existential threat: an unaligned superintelligence,” Tegmark wrote, comparing what he perceives as an apathetic response to a growing AGI threat to director Adam McKay’s popular satire on climate change.
For those who haven’t seen it, “Don’t Look Up” is a fictional story about a team of astronomers who, after discovering that a species-destroying asteroid is hurtling towards Earth, set out to warn the rest of human society. But to their surprise and frustration, much of humanity doesn’t care.
The asteroid is a great metaphor for climate change. But Tegmark thinks the story can apply to AGI risk as well.
“A recent survey showed that half of AI researchers give AI at least a ten percent chance of causing human extinction,” the researcher continued. “Since we have such a long history of thinking about this threat and what to do about it, from science conferences to Hollywood blockbusters, you might expect humanity to kick into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence.”
“Think again,” he added. “Instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comedic it deserves an Oscar.”
In short, according to Tegmark, AGI is a very real threat, and human society is not doing enough to stop it – or, at the very least, not ensuring that AGI will be properly aligned with human values and safety.
And just like in McKay’s film, humanity has two choices: start taking serious action to counter the threat – or, if things go the way of the film, watch our species perish.
Tegmark’s claim is quite provocative, especially since many experts either doubt that AGI will ever materialize or assert that it's a very long way off, if it arrives at all. Tegmark addresses this disconnect in his essay, though his argument is arguably not the most compelling.
“I’m often told that AGI and superintelligence won’t happen because it’s impossible: human-level intelligence is something mysterious that can only exist in the brain,” Tegmark writes. “Such carbon chauvinism ignores a fundamental idea of the AI revolution: that intelligence is about processing information, and it doesn’t matter whether the information is processed by carbon atoms in the brain or by silicon atoms in computers.”
Tegmark goes so far as to claim that superintelligence “is not a long-term problem”, but is even “more short-term than, say, climate change and most people’s retirement planning”. To support his claim, the researcher pointed to a recent Microsoft study asserting that OpenAI’s GPT-4 large language model is already showing “sparks” of AGI, and to a recent talk given by deep learning researcher Yoshua Bengio.
While Microsoft’s study isn’t peer-reviewed and arguably reads more like marketing material, Bengio’s warning is far more compelling. His call to action is grounded in what we don’t know about machine learning systems that already exist, rather than in sweeping claims about technology that doesn’t yet exist.
To that end, the current crop of less-sophisticated AIs already poses a threat, from synthetic content spreading disinformation to the threat of AI-powered weaponry.
And the industry as a whole, as Tegmark further notes, hasn’t done a great job so far of ensuring slow and safe development; he argues, for instance, that we shouldn’t have taught AI to write code, connected it to the internet, or given it a public API.
Ultimately, it is still unclear if and when AGI might materialize.
While there is certainly a financial incentive for the field to continue evolving rapidly, many experts agree that we should slow the development of more advanced AIs – whether AGI is right around the corner or still light years away.
In the meantime, Tegmark argues, we should all acknowledge that there is a very real threat ahead of us – before it’s too late.
“Although humanity is heading towards a cliff, we’re not there yet, and there’s still time for us to slow down, change course and avoid falling – and instead reap the amazing benefits that safe and aligned AI has to offer,” Tegmark writes. “This requires agreeing that the cliff actually exists, and that falling off it benefits nobody.”
“Just look up!” he added.
Learn more about AI: Elon Musk says he’s building ‘maximum truth-seeking AI’