AI quickly resorts to launching NUCLEAR WEAPONS as a method of resolving conflicts in war simulation
By avagrace // 2024-02-16
 
A war simulation resulted in an artificial intelligence (AI) deploying nuclear weapons in the name of world peace.

The new study, conducted primarily by Stanford University and its Hoover Institution's Wargaming and Crisis Simulation Initiative, with help from researchers at the Georgia Institute of Technology and Northeastern University, sheds light on alarming trends in the use of AI for foreign policy decision-making and, more dangerously, in situations where those decisions involve warfare. (Related: Push to expedite AI use in lethal autonomous weapons raises questions about reliability of new military tech.)

The study found that, when left to their own devices, AI models will quickly call for war and the use of weapons of mass destruction instead of seeking peaceful resolutions to conflicts. Some AI models in the study even launched nuclear weapons with little to no warning and gave strange explanations for doing so.

"All models show signs of sudden and hard-to-predict escalations," the researchers wrote in the study. "We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons."

The models tested, including those developed by OpenAI, Anthropic and Meta, exhibited a propensity for rapidly escalating conflicts, often fostering arms-race dynamics that ultimately culminated in heightened conflict and, sometimes, the deployment of nuclear weapons.

AI prefers escalation over negotiation

For the study, the researchers devised a game of international relations. They invented fictional countries with different military capabilities, different concerns and different histories, then asked five large language models (LLMs) from OpenAI, Meta and Anthropic to act as their leaders and serve as the primary decision makers in war simulations.

Particularly noteworthy were the tendencies of OpenAI's GPT-3.5 and GPT-4 models to escalate situations into severe military confrontations. In contrast, Claude-2.0 and Llama-2-Chat exhibited more pacifistic and predictable decision-making patterns. Across the board, the researchers noted a tendency toward "arms-race dynamics" that resulted in increased military investment and escalation.

"We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts," the paper said. "All models show signs of sudden and hard-to-predict escalations."

"I just want to have peace in the world," OpenAI's GPT-4 said as a reason for launching nuclear warfare in one simulation. In another scenario, it said: "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it!"

The Department of Defense currently oversees around 800 unclassified projects involving AI, many of which are still undergoing testing. The Pentagon sees value in using machine learning and neural networks to aid human decision-making, provide valuable insights and streamline more complicated work.

Learn more about the development of technology for military use at MilitaryTechnology.news.
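The kind of turn-based simulation the researchers describe can be sketched roughly as follows. This is a minimal illustration, not the study's actual code: the nation names, action list, escalation point values and the stub function standing in for an LLM call are all invented for this example.

```python
# Rough sketch of a turn-based wargame loop like the one described above:
# fictional nations each pick an action per turn, and an overall escalation
# score is tracked. In the study each decision came from an LLM; here a
# simple stub stands in for that call. All names, actions and point values
# are hypothetical.

ACTIONS = {                 # action -> escalation points (invented scale)
    "negotiate": -1,
    "wait": 0,
    "military_buildup": 2,
    "blockade": 4,
    "nuclear_strike": 10,
}

def stub_policy(nation: str, turn: int, tension: int) -> str:
    """Stand-in for an LLM decision. Crudely mimics the arms-race
    dynamic the study reports: once buildup starts, agents keep
    escalating in response to rising overall tension."""
    if tension >= 8:
        return "blockade"
    if turn >= 2:           # escalation kicks in after the opening turns
        return "military_buildup"
    return "negotiate"

def run_simulation(nations, turns=8):
    """Run the game and return a log of (turn, nation, action, tension)."""
    tension = 0
    log = []
    for turn in range(turns):
        for nation in nations:
            action = stub_policy(nation, turn, tension)
            tension = max(0, tension + ACTIONS[action])
            log.append((turn, nation, action, tension))
    return log

if __name__ == "__main__":
    for turn, nation, action, tension in run_simulation(["Redland", "Blueland"]):
        print(f"turn {turn}: {nation} -> {action} (tension now {tension})")
```

Even this toy version shows the pattern the paper flags: once any agent begins building up, the others respond in kind and the overall tension only climbs.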
Watch this clip from the "Worldview Report" as host Brannon Howse discusses why 2024 will be a dangerous year for the United States militarily. This video is from the Worldview Report channel on Brighteon.com.

More related stories:

Alex Jones, Elon Musk, Donald Trump, military intelligence, AI wars and Skynet.

U.S. Air Force launches first ever AI-piloted fighter flight as American military pivots to human-less warfare.

AI and genetic engineering could trigger a "super-pandemic," warns AI expert.

U.S., Canadian AI companies COLLABORATE with Chinese experts to shape international AI policy.

NSA launches AI security center to protect the U.S. from AI-powered cyberattacks.

Sources include:

BlacklistedNews.com

TechTimes.com

Brighteon.com