AI pioneer warns humanity’s remaining timeline is only a few more years thanks to the risk that emerging AI tech could destroy the human race
By isabelle // 2024-02-22
 
Pioneering artificial intelligence researcher Eliezer Yudkowsky has warned that humanity may only have a few years left as artificial intelligence grows increasingly sophisticated. Speaking to the Guardian, he told writer Tom Lamont: “If you put me to a wall and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10."

Yudkowsky, who founded the Machine Intelligence Research Institute in California, is talking about the end of humanity as we know it. He said that the problem is that many people fail to realize just how unlikely humanity is to survive all this. “We have a shred of a chance that humanity survives,” he cautioned. Those are scary words coming from someone the CEO of ChatGPT creator OpenAI, Sam Altman, has identified as getting himself and many others interested in artificial general intelligence and being “critical in the decision to start OpenAI.”

Last year, Yudkowsky wrote in an open letter in TIME that most experts in the field believe “that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” He explained that there will come a point when AI doesn’t do what people want it to do and does not care at all for sentient life. Although he thinks that type of caring could one day be incorporated into AI, at least in principle, no one currently knows how to do it.
This means that people are fighting a hopeless battle, one that he likens to “the 11th century trying to fight the 21st century.” Yudkowsky said that an AI that is truly intelligent will not stay confined to computers, pointing out that it’s now possible to email DNA strings to labs and have them produce proteins for you, which means an AI that is solely on the internet at first could “build artificial life forms or bootstrap straight to postbiological molecular manufacturing.” He has also explained that AI can “employ superbiology against you.”

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” he added. Computer scientists have been warning since at least the 1960s that the goals of the machines we create will not necessarily align with our own.

Yudkowsky says the solution is to "shut it all down"

So how can we stop this? According to Yudkowsky, there is a lot that needs to be done. For example, an indefinite and global moratorium should be imposed on new large training runs, without any exceptions for militaries or governments, although it’s hard to imagine getting international cooperation on this matter from places like China. He also thinks that large GPU clusters, the big computer farms where the world’s most powerful AIs are trained and refined, should be shut down. Ceilings on the amount of computing power that can be used to train AI systems would also help, as long as they are revised downward in the future as training algorithms become more efficient.

Yudkowsky thinks that we should “be willing to destroy a rogue datacenter by airstrike.” He wrote that even nuclear exchange might be okay if it meant taking out AI, although he now says he would have used “more careful phrasing” on that particular point if he were to write the piece again. Although some might accuse him of scaremongering or being sensational, the biggest-ever survey of AI researchers, which was released last month, revealed that 16% of them are convinced their work in AI will lead to the extinction of humankind.

Sources for this article include:

TheGuardian.com

Time.com