Research reveals the hidden penalty of AI at work

The hidden AI penalty

AI at work promises efficiency, but new research reveals a troubling side effect. Engineers using AI to assist their coding are often judged as less capable, even when their work is identical to that of non-AI users.

This creates a hidden penalty that undermines confidence and slows adoption. For many workers, the risk to reputation outweighs the potential gains.

The experiment behind this finding is eye-opening. Engineers reviewed the same code, but when told it involved AI, they rated the coder’s competence nearly 10 percent lower.

It wasn’t about the code itself, which remained the same; it was about the perception of the human behind it. That perception has real workplace consequences.

Low adoption rates persist

At one major U.S. tech firm that introduced an AI coding assistant in 2024, adoption after a year reached only 41 percent. The numbers were even lower for women at 31 percent, and for engineers over 40 at 39 percent. Those who could benefit the most often avoid using AI.

This isn’t an isolated case. A Pew survey found that while 91 percent of U.S. workers are allowed to use AI, only 16 percent actually do. Many are hesitant, fearing they will be judged or misunderstood. What looks like reluctance is often a deliberate choice to avoid professional penalties.

Competence judged unfairly

The real sting comes from how colleagues perceive AI users. Engineers who were thought to be using AI were rated as less competent, regardless of the quality of their work. These snap judgments can shape promotions, project assignments, and overall career trajectories.

The penalty isn’t equal either. Researchers observed a 13 percent drop for female engineers and about 6 percent for men when AI use was disclosed.

Bias makes the penalty harsher for some groups, amplifying existing workplace inequalities.

Who penalizes the most

Not all reviewers treated AI users the same. The harshest judgments came from non-adopters, engineers who hadn’t tried AI themselves. Male non-adopters, in particular, rated female AI users about 26 percent lower on average, according to researchers analyzing peer-review data.

This shows the penalty is rooted not in work quality but in cultural resistance. The bias comes strongest from those unwilling to engage with AI, creating tension between adopters and skeptics inside teams.

Fear drives avoidance

Knowing these biases exist, many engineers strategically avoid using AI. They worry about reputations and performance reviews, choosing to stick with manual methods instead. For them, the risk of being labeled “less competent” is too high.

Ironically, this means those who might gain the most productivity from AI tools, like women in male-dominated tech fields, use them the least. Fear of the competence penalty ends up locking out exactly the workers AI could help level the field for.

The cost to companies

Analysts estimated the tech firm's productivity losses from low AI adoption at roughly 2 to 14 percent of potential profit, depending on model assumptions.

Organizations pour money into AI tools, training, and infrastructure. But without tackling cultural barriers, most of that investment goes to waste. Productivity stagnates while bias quietly erodes trust in the workplace.

Rise of shadow AI

Avoidance doesn’t always mean skipping AI entirely. Some employees secretly turn to unauthorized tools, known as shadow AI. These aren’t approved by the company and carry risks like data leaks or compliance failures.

Shadow AI creates a double problem. Companies think adoption is low while actual usage goes underground. That makes it harder to track, harder to regulate, and more dangerous for sensitive information.

Bias gets amplified

Instead of leveling the playing field, AI sometimes makes workplace bias worse. Women who use AI in male-dominated environments face more skepticism, not less. The same goes for older workers in teams dominated by younger employees.

This happens because AI usage triggers what researchers call “social identity threat.” For groups already facing stereotypes, using AI seems to confirm doubts about their competence. The result is even greater inequality than before.

The disclosure dilemma

Transparency is often promoted as part of responsible AI use. But inside companies, disclosing AI usage can backfire. When employees are required to tag their work as AI-assisted, it exposes them to harsher reviews and biased assumptions.

This raises tough questions for leaders. Should employees always disclose AI use, even if it harms them professionally? Or should organizations rethink disclosure rules until cultures catch up?

Spotting penalty hotspots

The first step to solving the problem is knowing where it hits hardest. Teams with power imbalances, such as those where a handful of women or older engineers report to non-adopting male managers, are the most vulnerable.

Companies can track metrics like time-to-promotion and career outcomes by demographic and AI usage. These reveal whether competence penalties are quietly holding people back and highlight where change is most urgent.

Role models make a difference

Female leaders and senior employees are powerful examples. Research found that women in senior roles were less afraid of the penalty, and when they openly used AI, it encouraged junior women to follow.

Structured programs also help. One company’s “30 Days of GPT” challenge showcased daily examples of AI use across different tasks. Public celebrations of small wins built psychological safety and normalized adoption.

Lessons from Pinterest

Pinterest took a bold approach with its annual Makeathon. The event invited all employees, not just engineers, to build AI-driven projects. Leaders joined in as mentors, lending credibility to the efforts.

The results spoke for themselves. Afterward, 96 percent of participants reported continued AI use, and nearly 80 percent of engineers credited AI with saving them time. Visible, collective experiences helped make AI feel safe and valuable.

Rethinking performance reviews

One major fix is to remove AI “tags” from evaluations. When code is labeled AI-assisted, it invites bias. But when reviewers can’t see how the work was done, they focus only on results.

Companies can use blind reviews or objective metrics like cycle time and accuracy. The goal is to shift focus from how work gets done to what outcomes are achieved. This levels the playing field.

Rewarding AI use

Some companies are going further than neutrality. Microsoft now encourages managers to consider AI use in performance reflections. Shopify’s CEO even plans to add AI proficiency to formal reviews, framing it as a valuable skill.

By rewarding AI adoption, companies flip the script. Instead of a competence penalty, employees earn recognition for leveraging tools effectively. This turns AI into a professional advantage rather than a liability.

A cultural blind spot

At its core, the competence penalty shows a misalignment. Companies focus on technical rollouts, training, infrastructure, and licenses while ignoring social dynamics. Adoption struggles because culture, not technology, is the real barrier.

That explains why some workers secretly use unauthorized AI tools while avoiding official ones. Until organizations address perception and trust, their AI investments will continue to underdeliver.

This slideshow was made with AI assistance and human editing.
