New "mind-reading" Centaur AI predicts human behavior with startling accuracy – but at what cost?
- The new Centaur AI system can anticipate complex human decisions – from moral dilemmas to skill acquisition – with unprecedented accuracy, raising both revolutionary possibilities and ethical concerns.
- The AI was developed by fine-tuning Meta's Llama 3.1 model on more than 10 million decisions made by 60,000 people across 160 psychological experiments, with training completed in just five days.
- Centaur surpassed 14 traditional cognitive models in 31 out of 32 tasks, demonstrating adaptability to new scenarios and suggesting human behavior follows decipherable patterns.
- Surprisingly, Centaur's neural patterns aligned with human brain scans, hinting it reverse-engineered aspects of cognition – though limitations remain in social dynamics and cultural biases.
- While promising for education and mental health, the AI sparks fears of manipulation (e.g., corporate or government misuse) and challenges to free will, urging scrutiny and safeguards.
In a breakthrough that blurs the line between science fiction and reality, researchers have developed an artificial intelligence (AI) system capable of predicting human decisions with uncanny precision. Dubbed Centaur, this AI doesn't just guess whether a user will click an ad. It anticipates how humans will navigate complex moral dilemmas, learn new skills or even strategize in unfamiliar scenarios. The implications are staggering – from revolutionizing marketing and education to raising urgent ethical questions about privacy and free will.
Centaur, detailed in a study published July 2 in Nature, was trained on a staggering dataset: 60,000 people making over 10 million decisions across 160 psychological experiments. Unlike traditional models that specialize in narrow tasks like predicting stock trades or gambling habits, Centaur operates as a general predictor of human behavior. It outperforms decades-old cognitive models, suggesting AI may soon understand us better than we understand ourselves. (Related: Tech firms developing and deploying AI that can deceptively MIMIC HUMAN BEHAVIOR.)
The system was built by fine-tuning Meta Platforms' Llama 3.1 language model – the same class of technology that underpins chatbots like ChatGPT – using a technique that modifies only a small fraction of the model's parameters. Remarkably, the training took just five days on a high-end processor – a testament to the accelerating power of machine learning.
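For readers curious what "modifying only a fraction" of a model looks like in practice, here is a minimal sketch of parameter-efficient fine-tuning using the Hugging Face PEFT library's LoRA method. The model ID, hyperparameters and adapted modules below are illustrative assumptions, not the configuration reported in the Nature study.

```python
# Illustrative sketch of parameter-efficient fine-tuning (LoRA) on a Llama-class model.
# Model ID and hyperparameters are assumptions for illustration only; they are not
# the setup reported in the Nature study.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # assumed stand-in; the study used Llama 3.1
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA freezes the original weights and trains small low-rank adapter matrices,
# so only a tiny fraction of the parameters is ever modified.
config = LoraConfig(
    r=8,                                   # adapter rank (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

Because the base model stays frozen, a run like this fits on far less hardware and time than full fine-tuning, which is consistent with the five-day training window the researchers describe.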
Centaur didn't just match existing psychological models; it demolished them. In head-to-head tests, it predicted human choices more accurately than 14 specialized cognitive and statistical models in 31 out of 32 tasks. Even more striking, it adapted to new scenarios it had never encountered, such as altered versions of memory games or logic puzzles.
This adaptability suggests something profound. Human decision-making, for all its complexity, follows underlying patterns that AI can decode.
As one researcher noted, the human mind is "remarkably general" – capable of both mundane choices (picking breakfast cereal) and monumental ones (curing diseases). Centaur's success implies that our behavior may be more predictable than we'd like to admit.
Centaur's internal processes resemble human brain activity
In a bizarre twist, Centaur's internal processes began resembling human brain activity without being explicitly trained to do so. When compared to brain scans of people performing the same tasks, the AI's neural patterns aligned more closely than expected. This suggests that by studying human choices, the system reverse-engineered aspects of human cognition.
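One common way researchers quantify this kind of model-brain alignment is an encoding analysis: fit a linear map from the model's hidden activations to fMRI responses and test how well it predicts held-out brain data. The sketch below uses synthetic arrays as stand-ins; the data shapes and ridge-regression setup are assumptions for illustration, not the study's actual pipeline.

```python
# Sketch of a standard model-brain alignment test: regress fMRI voxel responses
# onto a model's hidden activations and score prediction on held-out trials.
# All arrays here are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden = rng.normal(size=(200, 512))    # 200 trials x 512 model features (assumed shapes)
voxels = rng.normal(size=(200, 1000))   # 200 trials x 1,000 fMRI voxels (assumed shapes)

X_tr, X_te, y_tr, y_te = train_test_split(hidden, voxels, test_size=0.2, random_state=0)
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr)

# Higher held-out R^2 means the model's internal states predict brain activity better.
print("held-out R^2:", encoder.score(X_te, y_te))
```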
Some scientists see Centaur as a tool for accelerating research. It can simulate experiments in silico, potentially replacing or supplementing human trials in psychology.
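As a rough illustration of what an in silico experiment could look like, the sketch below feeds a plain-language choice task to a generic Llama-class model and samples its simulated response. The prompt format and model ID are placeholder assumptions; the released Centaur model and its documentation would define the real interface.

```python
# Illustrative "in silico" experiment: describe a classic risky-choice task in
# natural language and sample the model's simulated decision. The model ID and
# prompt format are placeholders, not Centaur's actual interface.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B")  # stand-in model

prompt = (
    "You are a participant in a decision-making experiment.\n"
    "Option A: a 50% chance of winning $100, otherwise nothing.\n"
    "Option B: $45 for sure.\n"
    "You choose Option"
)

# Sampling many completions approximates a distribution of simulated choices.
result = generator(prompt, max_new_tokens=2, do_sample=True)
print(result[0]["generated_text"])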
But skeptics warn that the model is far from perfect. It struggles with reaction times, social dynamics and cross-cultural differences. Moreover, its training data skews heavily toward Western, educated populations.
The rise of behavior-predicting AI isn't just a scientific milestone – it's a societal lightning rod. On one hand, such technology could personalize education, improve mental health interventions and optimize workplaces. On the other, it invites dystopian concerns.
Could governments or corporations use it to manipulate choices? Will insurance companies predict risky behaviors and adjust premiums accordingly? And if AI knows humanity better than humans know themselves, what happens to free will?
Despite such limitations, Centaur's creators insist their model is open source, inviting scrutiny. But history shows that even well-intentioned tools can be weaponized.
Consider how social media algorithms, originally designed to connect people, now exploit psychological vulnerabilities for profit. If AI can predict human behavior at scale, the potential for abuse is immense.
Visit Robots.news for more similar stories.
Watch Jefferey Jaxen and Del Bigtree discussing how intelligent AI actually is in this video from the High Hopes channel on Brighteon.com.
More related stories:
BRAINWASHED: Researchers develop AI "mind-sucking machine" to change brains of "conspiracy theorists".
Tech industry develops AI mind-reading technology capable of measuring citizen loyalty to government.
Futurist Ben Goertzel predicts AI will surpass human intelligence by 2027.
Sources include:
StudyFinds.org
Nature.com
Science.org
Brighteon.com