OpenAI whistleblower speaks out on the rise of superintelligence and international safety concerns, suggests U.S. leaders should take control
A former OpenAI employee has been speaking out on the future ramifications of unregulated Artificial General Intelligence (AGI). There are fears that artificial intelligence will soon match or surpass human capabilities and begin to program itself, or be leveraged by foreign adversaries to inflict harm on people around the world.
The author of the report, Leopold Aschenbrenner, sent out a frightening message about AGI coming to prominence by 2027. He suggests that American leaders should take charge of this emerging superintelligence.
"We will need the government to deploy superintelligence to defend against whatever extreme threats unfold, to make it through the extraordinarily volatile and destabilized international situation that will follow," he said. "We will need the government to mobilize a democratic coalition to win the race with authoritarian powers, and forge (and enforce) a nonproliferation regime for the rest of the world."
If Biden and the Democrats take charge, then the people overseeing superintelligence will be among the most incompetent, unethical and dangerous people to possess such highly volatile weapons of war. Even more striking, this superintelligence is projected to outpace the intelligence of even the smartest humans alive today.
Superintelligence, only years away, will likely be used to exploit populations
Aschenbrenner's report describes the very real prospect of AGI arriving by 2027, predicting how the situation may unfold, particularly in the context of national security and government involvement. It outlines a scenario in which the race toward AGI intensifies to the point where government intervention becomes inevitable for managing the risks and harnessing the potential of superintelligence.
Aschenbrenner believes the U.S. government will need to take a central role in the development of AGI due to the immense national security implications. Private startups will not be able to handle such a monumental task on their own, he posits. He draws parallels between the development of AGI and the Manhattan Project, suggesting that a similar level of government intervention and coordination will be necessary.
AGI is seen as a technology that will fundamentally alter the military balance of power and require significant adaptation in national defense strategies. These challenges include espionage threats, the potential for destabilizing international competition, and the need for a sane chain of command that can navigate safety issues, superhuman hacking capabilities and international human rights concerns.
While the government will have a national security interest as superintelligence expands, Aschenbrenner suggests that officials will need to work with industry experts to solve these challenges. For example, government officials will have to rely on the expertise of AI labs and cloud computing providers through joint ventures or defense contracts.
Preventing future malevolence of AGI will be a difficult task that grows more challenging with each day
To prevent future malevolence, Aschenbrenner suggests that no single CEO or private entity should have unilateral command over superintelligence. Such a scenario could pose significant risks, including the potential for abuse of power and the undermining of democratic principles. Transparency of AI advancements is key to preventing a rogue state from using the technology to seize power.
Superintelligence is likened to the most powerful military weapon, and thus its control should be subject to democratic governance. The former OpenAI researcher suggests forming an ethical chain of command that can offer some form of checks and balances, so that rogue actors and government operatives can be held accountable when ethical concerns arise. According to Aschenbrenner, this chain of command will need to operate in cooperation with the U.S. intelligence community to ensure that adequate safeguards are in place. Even in this scenario, the intelligence community could find ways to violate civil liberties and exploit populations for "collective" benefits to society or for the benefit of special interests.
The overarching message in all this is that it will be practically impossible to stop the development of superintelligence and prevent it from being misused. Ensuring that superintelligence aligns with human values will be a difficult task that grows harder with each passing day. The potential for misuse and exploitation will grow as the technology outpaces its engineers and regulators. Aschenbrenner suggests that regulation alone may not be sufficient to address these challenges and emphasizes the need for competent governance capable of making difficult decisions in rapidly evolving situations. The U.S. does not currently have this kind of leadership, so the
dangers ahead are very real.
"The Project"
Aschenbrenner says superintelligent AI is not just another technological advancement from Silicon Valley but something far more powerful that will profoundly impact global security and stability. He describes something called "The Project," which would pool international experts together to solve these challenges. He says America should lead this collaborative effort and rapidly scale up AI capabilities so that the core infrastructure remains under American control, not China's.
In the report, Aschenbrenner lays out "The Project":
Whoever they put in charge of The Project is going to have a hell of a task:
- To build AGI, and to build it fast
- To put the American economy on wartime footing to make hundreds of millions of GPUs
- To lock it all down, weed out the spies, and fend off all-out attacks by the CCP
- To somehow manage a hundred million AGIs furiously automating AI research, making a decade’s leaps in a year, and soon producing AI systems vastly smarter than the smartest humans
- To somehow keep things together enough that this doesn’t go off the rails and produce rogue superintelligence that tries to seize control from its human overseers
- To use those superintelligences to develop whatever new technologies will be necessary to stabilize the situation and stay ahead of adversaries
- To rapidly remake U.S. forces to integrate all these AI enhancements, all while navigating what will likely be the tensest international situation ever seen.
Sources include:
SituationalAwareness.ai [PDF]
Brighteon.com