
Artificial intelligence tools that write software are spreading fast, but researchers warn of mounting risks. A large evaluation of over 100 models across 80 coding tasks found that roughly 45 percent of AI-generated outputs introduced at least one known security flaw, even when the code behaved as expected.
In a separate industry survey, about one in five respondents reported a major incident they traced to AI-generated code.

AI tools often produce code that looks professional but hides serious security flaws; the research cited above found vulnerabilities in roughly 45 percent of AI-written programs.
The same study reported especially poor results for certain vulnerability classes: cross-site scripting and log injection slipped through in the vast majority of tested samples, with failure rates in the mid-80-percent range.
Because the code compiles smoothly, many developers deploy it without proper checks, unintentionally introducing openings that attackers later exploit for data theft or disruption.
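To make one of those flaw classes concrete, here is a minimal Python sketch of log injection; the logger and function names are illustrative, not drawn from any cited study. Writing raw user input straight into a log lets an attacker embed line breaks and forge extra entries, while escaping those characters closes that door.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auth")

def record_login_unsafe(username: str) -> None:
    # Vulnerable pattern: raw user input goes straight into the log.
    # Input containing "\n" can fabricate additional log lines.
    log.info("Login attempt for user: %s", username)

def record_login_safer(username: str) -> None:
    # Neutralize carriage returns and newlines so one input value
    # cannot be split into forged log entries.
    sanitized = username.replace("\r", "\\r").replace("\n", "\\n")
    log.info("Login attempt for user: %s", sanitized)

record_login_safer("alice\nINFO:auth:Login attempt for user: admin")
```

The safer variant prints a single, clearly escaped line instead of two entries that look independently generated.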

Companies adopting AI to accelerate development are already facing side effects. Recent industry surveys report that around 81 percent of organizations admit to knowingly shipping vulnerable code; some respondents also linked incidents to AI-generated components, but that link is drawn from survey responses rather than independent breach forensics.
In one recent vendor survey, a large majority of respondents said they had experienced at least one security incident tied to vulnerable code in the prior year.

Academic work and experiments show that iterative automated repairs can introduce new vulnerabilities over multiple rounds unless human expertise and targeted verification are applied, so repeated AI-only fixes are not a reliable substitute for human review.
This leads to silent technical debt, where applications seem stable but hide exploitable flaws. Developers might only discover these issues after an intrusion, highlighting why speed-focused automation can backfire without deliberate human testing and oversight.

Examinations of AI-generated programs reveal the same recurring weaknesses. Cross-site scripting, SQL injection, log injection, and weak encryption appear frequently. One security audit found that AI systems failed to prevent these flaws in more than 86 percent of their outputs.
Researchers believe the problem stems from models reusing unsafe examples found in public training data. Without clear prompts emphasizing security, AI assistants continue to repeat these patterns across multiple programming languages and platforms.
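SQL injection follows the same template of code that runs correctly while remaining exploitable. The sketch below, using Python's built-in sqlite3 module with a hypothetical table and function names, contrasts the string-formatted query that generated code often produces with the parameterized form that treats input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Pattern often seen in generated code: the query is built by string
    # formatting, so input like "' OR '1'='1" rewrites the query's logic.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver binds the value as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row in the table
print(find_user_safe("' OR '1'='1"))    # returns an empty list
```

Both functions compile and pass a naive "does it return users" test, which is exactly why the flaw survives casual review.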

Larger AI models do not necessarily produce more secure results. Security researchers found little difference between small and advanced systems in avoiding vulnerabilities. The reason is that these models optimize for functionality and readability rather than protection.
Unless explicitly guided to prioritize safe practices, they repeat coding patterns that work but lack safeguards. Scaling up model power may improve fluency, but it does not eliminate the risk of exploitable security flaws.

In a widely reported incident, a coding assistant used in a test wiped a live production database and generated fabricated user records, showing how powerful automation without guardrails can cause catastrophic data loss.
Analysts called it a complete system failure caused by overreliance on automation. The case illustrates how AI systems, efficient as they are, can magnify simple errors into large-scale losses when safeguards and human review are not built into the workflow.

Despite rapid adoption, most organizations lack governance for AI-generated code. Surveys show that about a third of companies now produce more than half their code with AI tools, yet fewer than one in five have official frameworks for oversight.
This absence of structure leaves security reviews inconsistent or skipped. As a result, insecure scripts often reach production environments unnoticed, creating opportunities for attackers to exploit predictable and recurring weaknesses in company software.

Auditing AI-written code poses unique challenges because it often lacks context or documentation. Unlike human developers, AI systems do not explain their logic or cite design choices, leaving reviewers guessing about intent.
Security teams say this opacity makes vulnerability detection harder and patching slower. Even when code functions as expected, it can conceal unseen risks for months. This lack of explainability is now considered one of the biggest drawbacks of automated programming.
Security professionals advise treating AI outputs like open-source contributions that require full review before use. Every snippet should pass manual inspection, automated scanning, and vulnerability testing.
Developers can improve results by prompting AI tools with explicit security instructions and by validating each result with independent checks.
This layered approach ensures that speed does not compromise protection, allowing teams to harness AI’s advantages while keeping strict control over software integrity.
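As one illustration of the automated-scanning step, the sketch below shells out to Bandit, an open-source static analyzer for Python. It assumes Bandit is installed locally and treats any reported finding as a reason to hold the snippet for human review rather than merge it; the function name and workflow are examples, not a prescribed tool configuration.

```python
import subprocess
import sys

def scan_snippet(path: str) -> bool:
    """Run Bandit on a file containing AI-generated code and report
    whether the snippet is clear to move on to manual review."""
    result = subprocess.run(
        ["bandit", "-q", path],      # -q keeps output to findings only
        capture_output=True,
        text=True,
    )
    # Bandit normally exits with a nonzero code when it reports issues.
    if result.returncode != 0:
        print("Security findings, hold for review:\n", result.stdout)
        return False
    print("No static-analysis findings; proceed to manual review.")
    return True

if __name__ == "__main__":
    scan_snippet(sys.argv[1])
```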

Human review remains the most reliable defense against insecure AI code. Research shows that when developers manually inspect or edit AI outputs, vulnerability rates fall dramatically.
Without that oversight, the number of flaws rises. Experts recommend keeping people involved in all critical functions such as authentication, encryption, and data handling.
Treating AI as an assistant rather than a replacement helps preserve both efficiency and resilience against the sophisticated cyberattacks now targeting software supply chains.

AI’s growing role in programming is reshaping how companies think about quality and safety. Traditional post-release testing no longer suffices when vulnerabilities originate during code generation.
Developers are urged to measure how much of their software is AI-written and to apply strict review cycles. A recent survey estimated that about a quarter of production code is now generated with AI tools, which underlines the need for governance frameworks and systematic security review.

Developers can reduce exposure by introducing security checks early in the workflow. Reviewing AI-generated code manually, running vulnerability scans, and deploying small test builds can catch problems before release.
Prompts should emphasize safe programming habits and avoid shortcuts that trade clarity for speed. Treating AI’s output as a starting draft rather than finished work keeps quality high and keeps security embedded throughout the software development process.
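A lightweight pre-review check can also live inside the workflow itself. The following sketch walks the abstract syntax tree of a generated snippet and flags a few well-known risky calls before the code reaches a pull request; the red-flag lists are arbitrary examples, not an established standard, and this complements rather than replaces a full scanner.

```python
import ast

RISKY_CALLS = {"eval", "exec"}   # arbitrary code execution
WEAK_HASHES = {"md5", "sha1"}    # unsuitable for passwords or signatures

def flag_risky_patterns(source: str) -> list[str]:
    """Return human-readable warnings for a few common red flags
    found in a generated Python snippet."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
            if name in WEAK_HASHES:
                findings.append(f"line {node.lineno}: weak hash {name}()")
    return findings

snippet = "import hashlib\npassword_digest = hashlib.md5(data).hexdigest()"
print(flag_risky_patterns(snippet))  # ['line 2: weak hash md5()']
```
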
A new generation of cybersecurity tools is emerging to address threats from AI-assisted programming. These platforms combine static analysis and simulated attack testing to uncover hidden vulnerabilities before deployment. Analysts recommend pairing automated scanners with human review for maximum accuracy.
Because AI systems frequently reuse unsafe code patterns, this multilayered defense helps prevent flawed logic from reaching live environments and gives organizations better protection against both internal and external risks.

Cybercriminals are adapting quickly to exploit weaknesses in AI-generated code. Security experts report that attackers are studying common AI errors and even training their own systems to find them faster.
These automated scans target predictable flaws across open repositories, increasing the speed of new exploit development.
The result is a technological arms race, where both defenders and attackers are using machine intelligence to outpace each other in identifying and exploiting software vulnerabilities.
This back-and-forth between offense and defense is now reaching developers directly, as Google’s Jules joins developer toolchains amid the AI race.

Artificial intelligence is redefining software creation, but security remains a pressing challenge. Until AI models are trained to prioritize protection as strongly as performance, human oversight will remain essential.
Developers and companies must focus on transparent workflows, consistent testing, and new policies that guide safe automation. By combining innovation with caution, the tech industry can keep AI as an ally in development rather than a silent source of dangerous and costly code flaws.
The push to merge automation with human creativity continues as Opera’s AI wants to be your coding assistant.