U.K. uses AI developed by Amazon to read people's moods at train stations
A series of artificial intelligence (AI) trials in the U.K., in which thousands of train passengers were unwittingly subjected to emotion-detecting software, has raised privacy concerns.
The technology, developed by Amazon and employed at various major train stations, including London's Euston and Waterloo, used AI to scan faces and assess emotional states along with age and gender. Documents obtained by the civil liberties group Big Brother Watch through a Freedom of Information request uncovered these practices, which might soon influence advertising strategies.
These trials used CCTV technology and older cameras linked to cloud-based systems to monitor a range of activities, including detecting trespassing on train tracks, managing crowd sizes on platforms and identifying antisocial behaviors such as shouting or smoking. The trials even monitored potential bike theft and other safety-related incidents. (Related: AI surveillance tech can find out who your friends are.)
The data derived from these systems could be utilized to enhance advertising revenues by gauging passenger satisfaction through their emotional states, captured when individuals crossed virtual tripwires near ticket barriers. Critics argue that the technology is unreliable and have called for its prohibition.
Over the past two years, eight train stations around the U.K. have tested AI surveillance technology, with CCTV cameras intended to alert staff to safety incidents and potentially reduce certain types of crime.
The extensive trials, overseen by rail infrastructure body Network Rail, have used object recognition – a type of machine learning that can identify items in video feeds. Separate trials have used wireless sensors to detect slippery floors, full bins and drains that may overflow.
"The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step," said Jake Hurfurt, research and investigations head of Big Brother Watch.
Using technology to detect emotions is unreliable
AI researchers have frequently warned that using the technology to detect emotions is "unreliable," and some say the technology should be banned due to the difficulty of working out how someone may be feeling from audio or video. In October 2022, the U.K.'s data regulator, the Information Commissioner's Office, issued a public statement warning against the use of emotion analysis, saying the technologies are "immature" and "they may not work yet, or indeed ever."
Privacy advocates are particularly alarmed by the opaque nature and the potential for overreach in the use of AI in public spaces. Hurfurt has expressed significant concerns about the normalization of such invasive surveillance without adequate public discourse or oversight.
"Network Rail had no right to deploy discredited emotion recognition technology against unwitting commuters at some of Britain's biggest stations, and I have submitted a complaint to the Information Commissioner about this trial," Hurfurt said.
"It is alarming that as a public body, it decided to roll out a large-scale trial of Amazon-made AI surveillance in several stations with no public awareness, especially when Network Rail mixed safety tech in with pseudoscientific tools and suggested the data could be given to advertisers," he added.
"Technology can have a role to play in making the railways safer, but there needs to be a robust public debate about the necessity and proportionality of tools used. AI-powered surveillance could put all our privacy at risk, especially if misused, and Network Rail's disregard of those concerns shows a contempt for our rights."
Visit FutureTech.news for similar stories.
Watch Stanford University professor Michael Snyder explain how AI is used to collect data on people below. This video is from the Nonvaxer420 channel on Brighteon.com.
More related stories:
Google censors all AI that generates supposed "hate speech" in Google Play Store.
Former OpenAI employees release "A Right to Warn" document warning about advanced AI risks.
Database leak reveals ALARMING list of Google's privacy blunders, including recording children's voices and exposing license plates seen on Street View.
FASCIST INCEST: Ex-NSA chief joins board of OpenAI to expand tentacles of military-industrial complex.
AI candidate running for U.K. parliament to appear on the ballot for general elections in July.
Sources include:
ReclaimTheNet.org
Brighteon.com