BOMB IN A CHINA SHOP: Senator warns AI could displace millions of workers and undermine public safety
A senator compared the societal dangers of artificial intelligence (AI) to a "bomb in a china shop" during a hearing in the upper chamber of Congress, which saw several experts testify on the matter.
The Senate Subcommittee on Privacy, Technology and the Law – which is under the Senate Judiciary Committee – touched on the matter during a May 16 hearing.
Sen. Richard Blumenthal (D-CT) opened the hearing by playing an audio clip of him denouncing the "proliferation of disinformation" and claiming: "Too often we have seen what happens when technology outpaces regulation." The chair of the subcommittee then disclosed that AI was responsible for the clip, with ChatGPT writing the speech to match his style and voice-cloning software imitating his delivery.
Referring to the proliferation of ChatGPT and other AI software as a "bomb in a china shop," Blumenthal warned that the "looming new industrial revolution" could displace millions of American workers and dramatically undermine public safety and trust in key institutions.
"[These] are no longer fantasies of science fiction. They are real; they are present," said the Connecticut senator. "Sensible safeguards are not in opposition to innovation."
Blumenthal reiterated that the hearing's purpose was to "demystify and hold accountable these new technologies" and "write the rules of AI" before it was too late.
Sen. Josh Hawley (R-MO), ranking member of the subcommittee, agreed that AI presented a profound threat to national security and stability. He added that Congress was now faced with the task of determining what type of revolution AI would usher in.
"A year ago, we couldn't have had this hearing because this technology had not burst onto the public consciousness," Hawley said. "[Now,] we could be looking at one of the most significant technological human inventions in human history."
Hawley compared AI to the printing press and the atom bomb, in contrast to Blumenthal's analogy. The Missouri senator concluded: "What kind of technology will this be? The answer has not yet been written."
Expert: Humanity has taken a backseat to AI
Two experts in AI also testified before the members of the subcommittee – OpenAI CEO Sam Altman and Gary Marcus, psychology and neural science professor emeritus at
New York University (NYU).
Altman, whose company developed ChatGPT, said it is highly likely that AI will be used to influence the results of the 2024 presidential election. He told lawmakers that government regulation is necessary to limit such destabilizing activities.
"We have tried to be very clear about the magnitude of risks here," Altman testified. "Given that we're going to face an election next year, I do think some regulation would be quite wise on this topic. It's one of my areas of great concern."
The CEO of San Francisco-based OpenAI initially argued that AI would do "tasks, not jobs," but eventually admitted that "there will be an impact on jobs." According to Altman, AI would "entirely automate away" some jobs while creating new and better-paying ones.
Marcus, meanwhile, pointed out that Altman and other technology bigwigs were not genuinely committed to developing AI within a just framework. The bottom line was always the dollar and never what was best for Americans' privacy and safety, he continued. (Related:
Google's rush to win AI race has led to ETHICAL LAPSES.)
"Humanity has taken the backseat," lamented the erstwhile professor. "AI is among the most world-changing technologies ever. [But] current systems are not transparent, and they do not protect our privacy."
Marcus also highlighted the growing phenomenon of "counterfeit people" – defined as experts and witnesses wholly invented by AI. He gave examples of AI being used to falsify papers, smear a public figure and generate false evidence for a court case.
The NYU professor ultimately warned that the end result of unregulated AI development would be a world where juries could never know whether the audio and video clips presented as evidence were authentic. In short, it would be a world where nothing can be believed.
Visit Robots.news for more stories about the dangers of AI.
Watch this video about the resignation of an AI expert from Google due to the technology's dangers.
This video is from the SecureLife channel on Brighteon.com.
More related stories:
AI is currently the greatest threat to humanity, warns investigative reporter Millie Weaver.
Technology news website describes Microsoft's AI chatbot as an emotionally manipulative liar.
"Godfather of AI" quits Google, warns of risks associated with the technology he helped develop.
Stunning: Microsoft's new AI chatbot says it wants to create deadly virus, steal nuclear launch codes.
Experts warn: Ethical guidelines and regulations are needed for AI technology used to restore or enhance human capabilities.
Sources include:
NTD.com
Brighteon.com