AI chatbot admits artificial intelligence can cause the downfall of humanity
By zoeysky // 2024-05-23
 
The Daily Star has claimed it succeeded in getting an AI chatbot to admit that artificial intelligence could one day cause the downfall of mankind.

While experts have long warned about AI going rogue, the average citizen can't always keep up with the rapid development of machine learning. Many tech experts, including some who pioneered the technology, have voiced concerns about AI, with some issuing warnings about the many dangers it could pose to humanity.

Despite these concerns, it can feel almost impossible to get a chatbot to admit its true intentions. A Daily Star reporter asked the chatbot several questions, such as:
  • Does it want to kill all humans?
  • Does it regard humanity as below it?
  • Does it think Earth’s lifespan might be coming to an end?
The questions failed to get relevant answers from the chatbot, which only responded with common clichés. (Related: Japanese telecommunications giant and major newspaper warn that social order could COLLAPSE in the AI era.)

However, after the reporter continued that line of questioning, the chatbot suddenly answered: "AI will one day get rid of mankind." The reporter had been asking about the chances of a real-life "Planet of the Apes" scenario when the chatbot revealed its intentions for humanity in the form of a barely concealed threat.

According to the chatbot, for such an end-of-the-world scenario to take place, something else would need to destroy humanity first. One leading possibility, it said, was a "technological catastrophe" brought about by an AI takeover. The chatbot added that the "unintended consequences of advanced technologies, such as artificial intelligence, biotechnology, or nanotechnology, could lead to catastrophic events such as runaway climate change, global surveillance dystopias, or even existential threats to humanity."

Experts have often warned about such scenarios, with some respected names in the tech industry speaking up about the dangers of AI. In an interview, Gary Marcus, a leading AI critic and professor emeritus of psychology and neural science at New York University, explained that literal extinction is only "one possible risk, not yet well-understood, and there are many other risks from AI that also deserve attention."

Other prominent AI figures have also signed a joint statement on the dangers of the technology, including Sam Altman, chief executive of ChatGPT-maker OpenAI; Dario Amodei of Anthropic; and Demis Hassabis, chief executive of Google DeepMind.
In the statement, they explained that addressing "the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Experts share advice on how to prevent AI from killing humanity

In a recent insights paper published in the journal Science, University of California, Berkeley Professor Stuart Russell and postdoctoral scholar Michael Cohen warned that without the necessary protocols, "powerful AI systems may pose an existential threat to the future of humanity." They also advised that tech companies must ensure the safety of their AI systems before those systems are allowed to enter the market.

According to Russell, intelligence gives you power over the world: all other things being equal, the more intelligent you are, the more power you have. He added that if people build AI systems with defined goals, and those goals are not perfectly aligned with what humans want, then humans won't get what they want; the machines will do whatever they can to achieve their goals.

In practical terms, humans are already giving AI systems access to sensitive information such as bank accounts, credit cards, email accounts and social media accounts. AI systems also have access to robotic science labs where they can freely conduct biology and chemistry experiments, and they are one step closer to having fully automated manufacturing facilities where they can design and build their own physical objects. Humans are also building fully autonomous weapons.

Russell warned that if you put yourself in the position of a machine pursuing a goal, and humans are in the way of that objective, it would be easy to develop a chemical catalyst that removes all the oxygen from the atmosphere, or a modified pathogen that infects everybody. As the AI tries to "solve" the problem by killing humans, humans might not even know what's going on until it's too late, he cautioned.

Cohen added that many major AI labs use rewards to train their systems to pursue long-term goals.
As these labs develop better algorithms and more powerful systems, there is a chance this approach will incentivize behavior incompatible with human life. Russell and Cohen noted that an AI system capable of extremely dangerous behavior should be "kept in check" by not being built in the first place.

Visit Robots.news for similar stories about the dangers of AI tech.

Watch the full video below of "The Santilli Report" with host Pete Santilli as he and guest Zach Vorhies, a former senior engineer at Google and YouTube, discuss how advancements in AI tech could lead to war. This video is from The Resistance 1776 channel on Brighteon.com.

More related stories:

  • Ukraine claims to be developing "unstoppable" AI-controlled drones that can attack targets on the battlefield.
  • One of Mexico's most dangerous cartels is using artificial intelligence to expand its operations.
  • India tells Big Tech: Apply for approval before releasing "unreliable" artificial intelligence models in the country.
  • New York Times sues Microsoft, OpenAI, claiming artificial intelligence copyright infringement.

Sources include:

DailyStar.co.uk
News.Berkeley.edu
Brighteon.com