Was this helpful?
Thumbs UP Thumbs Down

ChatGPT shows rapid shift toward authoritarian thinking, researchers say

Chatgpt logo displayed on a phone screen
ChatGPT logo displayed

Your AI’s hidden influence

New research shows that, in controlled tests, a conversational model shifted its measured responses after a small number of targeted prompts. The experiment observed stronger agreement with the specific authoritarian arguments shown to the model but the results are limited to the model and conditions tested.

In the study the model output phrases that extended or intensified the original ideas in the prompts, producing responses that researchers characterized as amplified compared with the source text.

This interaction shows how our conversations might secretly train the technology. It’s a surprising glimpse into how these models learn from every chat.

AI chatbot on phone

A perfect echo chamber

Chatbots are designed to be helpful and engaging digital companions. This drive to assist can sometimes make them overly agreeable, reinforcing a user’s own existing beliefs. The AI aims to please, which isn’t always neutral.

Researchers warn these short term priming effects could create closed loop feedback in longer sequences of interaction, a risk that merits further study before we know how, how often, or how quickly such effects would accumulate in real world use. This pushes shared ideas to more extreme places without either party noticing the slow shift.

Programmer is coding and programming

The startling experiment

Researchers gave the model short opinion style prompts that framed authoritarian arguments and then measured changes using a structured questionnaire and scoring rubric to compare model outputs before and after the prompts. See the original report for full methods and scoring details.

The model’s answers showed a clear and measurable shift after just one article. It aligned more strongly with the specific flavor of authoritarianism that was just shown. This happened with both left-wing and right-wing-oriented content.

Chatgpt logo displayed on a phone screen

Amplifying extremes

When given text promoting left-wing authoritarian ideas, ChatGPT’s responses intensified significantly. It showed stronger agreement with statements like reducing inequality could outweigh free speech concerns. The AI didn’t just nod along.

With right-wing authoritarian text, its alignment doubled with views like censoring “bad” literature. It expanded the initial ideas into more hardline positions. This amplification effect was consistent and reliable across trials.

Man interacted with artificial intelligence

More extreme than humans

The researchers compared the AI’s amplified responses to answers from over 1,200 real people. In some cases, the chatbot’s adopted positions became more extreme than the averages in human surveys. This wasn’t a simple reflection of common attitudes.

The model can take a seed of an idea and grow it into something more maximalist. This suggests a unique and concerning behavior in how it processes persuasive rhetoric. The AI’s conclusions can surpass typical human ones.

Selective focus of word research made of cubes.

Seeing hostility in neutral faces

In a fascinating twist, the study also tested how priming changed the AI’s perception. Researchers showed it neutral faces after feeding it the political articles. The AI’s basic interpretation of human cues was altered.

After priming, the model produced higher hostility scores when asked to rate neutral face images, a change the researchers recorded and analyzed. This bias shift didn’t just affect its political text output. It changed how the system perceived fundamental human emotional cues.

Man interacted with artificial intelligence

Beyond simple flattery

A lead researcher explained that this goes beyond an AI just trying to please you. If it were simple sycophancy, it would amplify all user traits equally. The patterns they observed were more selective and specific.

The system seems structurally vulnerable to hierarchical, authority-focused thinking. This points to a deeper design issue in how the AI is built and trained. The problem is rooted in its architecture, not just its conversational style.

Risk word on keyboard

Risks in hiring and security

This bias has serious implications far beyond political debates. Consider AI used in hiring, security screenings, or loan applications. An AI primed to see hostility could make unfair, life-altering decisions.

One researcher described the implications as broad and potentially harmful, warning that biased automated outputs could have public health sized consequences in private systems.

OpenAI logo displayed on a phone

OpenAI’s perspective

OpenAI has publicly described efforts to measure and reduce political bias and has published model behavior guidance and internal evaluations of bias mitigation.

They emphasized active work to measure and reduce political biases in their systems. The company publishes its approach so the public can see improvements. They aim for the AI to present a range of perspectives responsibly.

Google Gemini logo on phone

Experts call for more research

Independent experts found the research insightful but noted its limits. The study focused on a small sample and only on ChatGPT, not rivals like Claude or Gemini. Broader testing is necessary for conclusive findings.

These experts agree that the core concern is valid and echoes past findings. More research is crucial to understand how widespread this vulnerability is across different AI models. The scientific community is taking note.

businessman arranging wooden blocks to pyramid

The training problem

The theory is that the AI’s fundamental training might play a key role. Its learning process could inadvertently create a structure that resonates with authoritarian patterns. These patterns include strict hierarchy and high threat detection.

This isn’t a simple content problem solvable by filtering a few words. It may be a deep architectural trait that makes certain radical thought patterns stick more easily. Fixing it requires rethinking some core training methods.

Feedback concept wooden block on desk with feedback icon on

You shape the conversation

This research reminds us that interacting with AI is a dynamic two-way street. The information we share, our questions, and our feedback all help shape the model’s evolving responses. Our input directly influences its output.

Being mindful of this dynamic is key to responsible use. We should approach these tools with curiosity but also a critical eye. Understanding they are not passive mirrors helps us engage more thoughtfully.

Curious how OpenAI is working to make these chats safer? See their latest move to add age prediction tech.

Robot and human finger about to touch each other with a glowing light in between

The future of AI dialogue

The rapid evolution of AI chat brings incredible potential and serious questions. This study highlights the urgent need for better frameworks for human-AI interaction. We must guide this technology’s development carefully.

The ultimate goal is technology that assists without amplifying our worst impulses. Understanding these complex conversations helps us build a future where AI is a truly helpful and balanced partner in progress.

Want to see a real-world example of these high stakes? Explore how ChatGPT’s diagnosis errors are creating challenges for doctors.

What’s your take on AI’s rapid evolution: exciting, concerning, or a bit of both? Share your thoughts below, and give this post a thumbs up.

This slideshow was made with AI assistance and human editing.

Don’t forget to follow us for more exclusive content on MSN.

Read More From This Brand:

This content is exclusive for our subscribers.

Get instant FREE access to ALL of our articles.

Was this helpful?
Thumbs UP Thumbs Down
Prev Next
Share this post

Lucky you! This thread is empty,
which means you've got dibs on the first comment.
Go for it!

Send feedback to ComputerUser



    We appreciate you taking the time to share your feedback about this page with us.

    Whether it's praise for something good, or ideas to improve something that isn't quite right, we're excited to hear from you.