Long before Elon Musk and Apple co-founder Steve Wozniak signed a letter warning that artificial intelligence poses “profound risks” to humanity, British theoretical physicist Stephen Hawking had sounded the alarm about the rapid evolution of the technology.
“The development of full artificial intelligence could mean the end of the human race,” Hawking told the BBC in a 2014 interview.
Hawking, who lived with amyotrophic lateral sclerosis (ALS) for more than 55 years, died in 2018 at the age of 76. Although he was critical of AI, he also relied on a very basic form of the technology to communicate because of his illness, which weakened his muscles and required him to use a wheelchair.
Hawking lost the ability to speak in 1985 and relied on various means of communication, including a speech-generating device run by Intel that allowed him to use facial movements to select words or letters, which were then synthesized into speech.
Hawking’s comment to the BBC in 2014 that AI could “mean the end of the human race” was in response to a question about the possibility of revamping the voice technology he relied on. He told the BBC that very basic forms of AI have already proven powerful, but creating systems that rival or surpass human intelligence could be disastrous for the human race.
“It would take off on its own and redesign itself at an increasingly rapid rate,” he said.
“Humans, who are limited by slow biological evolution, would not be able to compete and would be replaced,” Hawking added.
Months after his death, Hawking’s latest book hit the market. Titled “Brief Answers to Big Questions,” his book provided readers with answers to questions he was frequently asked. The science book summarized Hawking’s argument against the existence of God, how humans are likely to live in space one day, and his fears about genetic engineering and global warming.
Artificial intelligence also ranked high on his list of “big questions,” with Hawking arguing that computers are “likely to surpass humans in intelligence” within 100 years.
“We may be facing an intelligence explosion that will eventually result in machines whose intelligence surpasses ours by more than ours surpasses that of snails,” he wrote.
He argued that computers must be trained to align with human goals, adding that not taking the risks associated with AI seriously could potentially be “our worst mistake ever”.
“It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but that would be a mistake – and potentially our worst mistake ever.”
Hawking’s remarks echo concerns raised this year by tech executive Elon Musk and Apple co-founder Steve Wozniak in a letter published in March. The two tech leaders, along with thousands of other experts, signed a letter calling for a pause of at least six months on building AI systems more powerful than OpenAI’s GPT-4 chatbot.
“AI systems with human competitive intelligence can pose serious risks to society and humanity, as shown by extensive research and recognized by top AI laboratories,” reads the letter, published by the non-profit Future of Life Institute.
OpenAI’s ChatGPT became the fastest-growing app by user base, reaching 100 million monthly active users in January, as people around the world rushed to use the chatbot, which simulates human-like conversation based on the prompts given to it. The lab released the latest iteration of the platform, GPT-4, in March.
Despite calls to halt research at AI labs working on technology that would surpass GPT-4, the system’s release served as a watershed moment that rippled through the tech industry and spurred companies to race to build their own AI systems.
Google is working to overhaul its search engine and even build a new one powered by AI; Microsoft rolled out a new Bing search engine described as users’ “AI-powered copilot for the web”; and Musk said he would launch a rival AI system that he described as “maximum truth-seeking.”
Hawking advised in the year before his death that the world must “learn to prepare for and avoid potential risks” associated with AI, arguing that the systems “could be the worst event in the history of our civilization.” He noted, however, that the future is still unknown and that AI could prove beneficial to humanity if properly trained.
“Success in creating effective AI could be the greatest event in the history of our civilization. Or the worst. We simply don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or possibly destroyed by it,” Hawking said during a speech at the Web Summit technology conference in Portugal in 2017.