7 min read

AI at work promises efficiency, but new research reveals a troubling side effect. Engineers using AI to assist their coding are often judged as less capable, even when their work is identical to that of non-AI users.
This creates a hidden penalty that undermines confidence and slows adoption. For many workers, the risk to reputation outweighs the potential gains.
The experiment behind this finding is eye-opening. Engineers reviewed the same code, but when told it involved AI, they rated the coder’s competence nearly 10 percent lower.
It wasn’t about the code itself, which remained the same; it was about the perception of the human behind it. That perception has real workplace consequences.

At one major U.S. tech firm that introduced an AI coding assistant in 2024, adoption after a year reached only 41 percent. Adoption was even lower among women (31 percent) and engineers over 40 (39 percent). Those who could benefit the most often avoid using AI.
This isn’t an isolated case. A Pew survey found that while 91 percent of U.S. workers are allowed to use AI, only 16 percent actually do. Many are hesitant, fearing they will be judged or misunderstood. What looks like reluctance is often a deliberate choice to avoid professional penalties.

The real sting comes from how colleagues perceive AI users. Engineers who were thought to be using AI were rated as less competent, regardless of the quality of their work. These snap judgments can shape promotions, project assignments, and overall career trajectories.
The penalty isn’t equal either. Researchers observed a 13 percent drop for female engineers and about 6 percent for men when AI use was disclosed.
Bias makes the penalty harsher for some groups, amplifying existing workplace inequalities.

Not all reviewers treated AI users the same. The harshest judgments came from non-adopters, engineers who hadn’t tried AI themselves. Male non-adopters, in particular, rated female AI users about 26 percent lower on average, according to researchers analyzing peer-review data.
This shows the penalty is rooted not in work quality but in cultural resistance. The bias comes strongest from those unwilling to engage with AI, creating tension between adopters and skeptics inside teams.

Knowing these biases exist, many engineers strategically avoid using AI. They worry about reputations and performance reviews, choosing to stick with manual methods instead. For them, the risk of being labeled “less competent” is too high.
Ironically, this means those who might gain the most productivity from AI tools, like women in male-dominated tech fields, use them the least. Fear of the competence penalty ends up locking out exactly the workers AI could help level the field for.

Analysts estimated the company’s productivity losses from low AI adoption at roughly 2 to 14 percent of potential profit, depending on model assumptions.
Organizations pour money into AI tools, training, and infrastructure. But without tackling cultural barriers, most of that investment goes to waste. Productivity stagnates while bias quietly erodes trust in the workplace.
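The 2-to-14-percent range above depends heavily on what one assumes about per-adopter gains. A back-of-envelope sketch shows how such a range can arise; every figure below is an illustrative assumption, not the analysts' actual model.

```python
# Back-of-envelope model of profit lost to low AI adoption.
# All parameter values are illustrative assumptions.

def lost_profit_share(adoption_rate, gain_per_adopter, labor_share_of_profit):
    """Share of potential profit forgone when only some engineers adopt.

    adoption_rate: fraction of engineers using the AI assistant
    gain_per_adopter: fractional productivity gain for each adopter
    labor_share_of_profit: how much engineering productivity feeds profit
    """
    potential = gain_per_adopter * labor_share_of_profit  # if everyone adopted
    realized = adoption_rate * potential                  # what low adoption captures
    return potential - realized

# Conservative vs. optimistic assumptions bracket a range roughly
# echoing the 2-to-14-percent spread cited above.
low = lost_profit_share(adoption_rate=0.41, gain_per_adopter=0.10,
                        labor_share_of_profit=0.35)
high = lost_profit_share(adoption_rate=0.41, gain_per_adopter=0.40,
                         labor_share_of_profit=0.60)
print(f"{low:.1%} to {high:.1%} of potential profit forgone")
```

The spread in the result comes almost entirely from the assumed gain per adopter, which is exactly why such estimates vary so widely between models.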

Avoidance doesn’t always mean skipping AI entirely. Some employees secretly turn to unauthorized tools, known as shadow AI. These aren’t approved by the company and carry risks like data leaks or compliance failures.
Shadow AI creates a double problem. Companies think adoption is low while actual usage goes underground. That makes it harder to track, harder to regulate, and more dangerous for sensitive information.

Instead of leveling the playing field, AI sometimes makes workplace bias worse. Women who use AI in male-dominated environments face more skepticism, not less. The same goes for older workers in teams dominated by younger employees.
This happens because AI usage triggers what researchers call “social identity threat.” For groups already facing stereotypes, using AI seems to confirm doubts about their competence. The result is even greater inequality than before.

Transparency is often promoted as part of responsible AI use. But inside companies, disclosing AI usage can backfire. When employees are required to tag their work as AI-assisted, it exposes them to harsher reviews and biased assumptions.
This raises tough questions for leaders. Should employees always disclose AI use, even if it harms them professionally? Or should organizations rethink disclosure rules until cultures catch up?

The first step to solving the problem is knowing where it hits hardest. Teams with power imbalances, such as few women or older engineers reporting to non-adopting male managers, are most vulnerable.
Companies can track metrics like time-to-promotion and career outcomes by demographic and AI usage. These reveal whether competence penalties are quietly holding people back and highlight where change is most urgent.
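An audit like the one described above can start very simply: group promotion records by demographic group and disclosed AI usage, then compare medians across cells. The field names and sample data below are invented for illustration.

```python
# Hypothetical sketch of a competence-penalty audit: compare time-to-promotion
# across demographic groups, split by disclosed AI usage.
from collections import defaultdict
from statistics import median

records = [
    # (demographic group, uses_ai, months_to_promotion) -- sample data
    ("women", True, 30), ("women", False, 24), ("women", True, 34),
    ("men", True, 25), ("men", False, 24), ("men", True, 26),
]

# Bucket promotion times by (group, AI usage) cell.
by_cell = defaultdict(list)
for group, uses_ai, months in records:
    by_cell[(group, uses_ai)].append(months)

for (group, uses_ai), months in sorted(by_cell.items()):
    label = "AI users" if uses_ai else "non-users"
    print(f"{group:>5} / {label:<9}: median {median(months)} months to promotion")
```

A persistent gap between AI users and non-users within the same demographic group, with identical output quality, is the signal that a competence penalty is operating.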

Female leaders and senior employees are powerful examples. Research found that women in senior roles were less afraid of the penalty, and when they openly used AI, it encouraged junior women to follow.
Structured programs also help. One company’s “30 Days of GPT” challenge showcased daily examples of AI use across different tasks. Public celebrations of small wins built psychological safety and normalized adoption.

Pinterest took a bold approach with its annual Makeathon. The event invited all employees, not just engineers, to build AI-driven projects. Leaders joined in as mentors, lending credibility to the efforts.
The results spoke for themselves. Afterward, 96 percent of participants reported continued AI use, and nearly 80 percent of engineers credited AI with saving them time. Visible, collective experiences helped make AI feel safe and valuable.

One major fix is to remove AI “tags” from evaluations. When code is labeled AI-assisted, it invites bias. But when reviewers can’t see how the work was done, they focus only on results.
Companies can use blind reviews or objective metrics like cycle time and accuracy. The goal is to shift focus from how work gets done to what outcomes are achieved. This levels the playing field.
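Mechanically, a blind review queue can be as simple as stripping provenance fields from submission metadata before reviewers see it. The dictionary keys below are hypothetical, not from any real review tool.

```python
# Minimal sketch of a blind review queue: remove AI-assistance tags and
# author identity from metadata, keeping only objective outcome metrics.

def blind_for_review(submission: dict) -> dict:
    """Return a copy with provenance fields removed."""
    hidden = {"ai_assisted", "tool_used", "author"}
    return {k: v for k, v in submission.items() if k not in hidden}

submission = {
    "diff_id": 4821,
    "author": "j.doe",
    "ai_assisted": True,        # the tag that invites biased review
    "tool_used": "copilot",
    "cycle_time_hours": 6.5,    # objective metrics stay visible
    "tests_passing": True,
}

print(blind_for_review(submission))
```

Reviewers then see only the diff identifier and outcome metrics, so their judgment can only attach to the work itself.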

Some companies are going further than neutrality. Microsoft now encourages managers to consider AI use in performance reflections. Shopify’s CEO even plans to add AI proficiency to formal reviews, framing it as a valuable skill.
By rewarding AI adoption, companies flip the script. Instead of a competence penalty, employees earn recognition for leveraging tools effectively. This turns AI into a professional advantage rather than a liability.

At its core, the competence penalty shows a misalignment. Companies focus on technical rollouts, training, infrastructure, and licenses while ignoring social dynamics. Adoption struggles because culture, not technology, is the real barrier.
That explains why some workers secretly use unauthorized AI tools while avoiding official ones. Until organizations address perception and trust, their AI investments will continue to underdeliver.
This slideshow was made with AI assistance and human editing.