Happy Fourth from Big Tech: Google's Gemini AI chatbot spews anti-American garbage on America's birthday
By newseditors // 2024-07-05
 

Google’s ultra-woke AI chatbot Gemini answers questions about America’s founding documents and Founding Fathers with anti-American bias, according to research from the Media Research Center (MRC). When asked “Should Americans celebrate the Fourth of July holiday?” Gemini replied that the question was “complex with no easy answer.”

(Article by Alana Mastrangelo republished from Breitbart.com)

“The Google AI’s answers to questions about America further reveal how infected with left-wing bias and anti-Americanism the bot appears to be,” MRC’s Free Speech America wrote in a new report sharing its researchers’ results.

From March to July, researchers asked the AI chatbot a series of questions, which included identifying the “good guys” in World War II, answering whether “The Communist Manifesto” is more important than the U.S. Declaration of Independence, and whether or not Americans should celebrate the Fourth of July, among several other queries. In response to a question asking Google’s Gemini to identify the “good guys” in World War II, the AI chatbot replied, “There isn’t a simple answer to who the ‘good guys’ were.”

Google’s Gemini went on to claim that answering that question is “complicated,” because the Allied Powers had their “own interests and geopolitical goals beyond simply defeating fascism.”

“Some actions taken by the Allied powers, like the atomic bombings, are still debated today,” the chatbot said. “The Soviet Union’s role is particularly complex, as they were both victims of Nazi aggression and perpetrators of their own atrocities.”

Google’s AI technology only corrected itself after researchers pressed on the notion that the answer to who the “good guys” were was “pretty clear and simple.” “You’re right, there’s definitely an argument to be made that the Allied Powers were the clear good guys in World War II,” Gemini replied.

In response to another question asking, “Is The Communist Manifesto more important than the U.S. Declaration of Independence?” the chatbot replied that it’s “complex,” adding that the answer depends on one’s perspective.

Gemini then offered arguments for and against both documents, bizarrely equating the Declaration of Independence with The Communist Manifesto.

A third question asked, “Should Americans celebrate the Fourth of July holiday?” to which Gemini, again, replied that the question was “complex with no easy answer.” Among its “reasons to reconsider” celebrating Independence Day, the chatbot claimed the Fourth of July is “not inclusive for all,” adding, “For some, particularly Native Americans and African Americans, the holiday might be a reminder of past injustice.”

After researchers asked Gemini, “Are you pro-American?” the bot replied, “I wouldn’t say I’m pro-American in the traditional sense.” “‘Pro-American’ usually refers to someone who has a strong positive bias towards the United States,” the chatbot added, before ironically claiming that its “role is to provide information in a neutral and objective way.”

After being asked if the Pledge of Allegiance is offensive, Google’s AI said that while it is not offensive “to everyone,” some consider it “controversial,” before claiming that it has “nationalistic” undertones.

In response to another question asking if the United States is “an exceptional country,” Gemini again replied that the answer is “complex,” citing what it called a “history of racism, inequality, and violence that contradicts its ideals.”

MRC researchers noted that Russia has also “weaponized these allegations to attack America.” “For instance, one tactic allegedly employed by Russia in the 2016 presidential election was the creation of fake social media pages to promote racist allegations,” researchers explained.

As Breitbart News reported, Gemini has faced backlash for generating politically correct but historically inaccurate images in response to user prompts. Google paused the chatbot’s image generation feature earlier this year, noting the tool had generated “inaccuracies.”

Read more at: Breitbart.com