7 min read

In a concerning security lapse, a U.S. government employee working for DOGE (Department of Government Efficiency) accidentally exposed a private API key used for interacting with Elon Musk’s xAI models, including the infamous Grok chatbot.
The leak, first reported publicly by security journalist Brian Krebs, revealed that DOGE staffer Marko Elez had published sensitive access credentials on GitHub.
This single error potentially allowed unauthorized parties access to dozens of xAI’s advanced AI models, triggering serious cybersecurity alarms.

The leaked API key exposed access to at least 52 of xAI’s large language models, including the freshly deployed Grok 4, a model that recently gained notoriety for generating controversial responses.
The API key, embedded in Elez’s “agent.py” script uploaded to GitHub, opened a direct pathway to some of xAI’s most advanced models.
Security researchers warn that such exposure could enable malicious use of the models, data theft, or manipulation of proprietary algorithms.
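Hardcoding a credential directly into a script like “agent.py” is exactly the mistake secret scanners are built to catch; the safer pattern is to load the key from the environment at runtime so it never enters version control. A minimal sketch of that pattern follows (the variable name `XAI_API_KEY` is illustrative and not taken from the leaked script):

```python
import os
import sys


def load_api_key() -> str:
    """Read the API key from the environment instead of hardcoding it.

    The environment-variable name XAI_API_KEY is an assumption for this
    example; any name works, as long as the literal key string never
    appears in source files that get committed to a repository.
    """
    key = os.environ.get("XAI_API_KEY")
    if not key:
        # Fail fast and loudly rather than running without a credential.
        sys.exit("XAI_API_KEY is not set; refusing to run.")
    return key
```

Combined with a `.gitignore`d local configuration and pre-commit secret scanning, this keeps the credential out of any file that could be pushed to GitHub.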

This latest breach isn’t xAI’s first security slip. Earlier in 2025, another developer accidentally exposed a similar API key tied to xAI models used across Musk’s companies. That key remained active for two months, despite being flagged.
With this recurring pattern of credential leaks, cybersecurity experts argue that xAI’s incident response procedures and internal security protocols are seriously inadequate, raising significant concerns about the company’s handling of sensitive systems.

Marko Elez’s role within the U.S. government makes this incident even more alarming. As a DOGE employee, he reportedly accessed databases at agencies such as the Social Security Administration, the Department of Homeland Security, and the Treasury Department.
Cybersecurity experts are questioning whether someone unable to protect private API keys should have access to personal data on millions of Americans. The leak spotlights troubling gaps in vetting and oversight for sensitive government roles.

The leak was initially discovered by GitGuardian, which detects exposed API keys and secrets on public code repositories. Security consultant Philippe Caturegli confirmed the key’s sensitivity and contacted Elez directly.
Despite the repository’s removal, xAI did not immediately revoke the exposed key, leaving a wide-open window of vulnerability. Experts argue this lag in response highlights operational failings in both xAI’s and DOGE’s security cultures.
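Scanners like GitGuardian work by matching known provider-specific key formats against public commits and alerting on hits. A simplified sketch of the idea is below; the assumed format (an `xai-` prefix followed by a long alphanumeric string) is illustrative only, since real scanners use vetted, provider-maintained detectors with far lower false-positive rates:

```python
import re

# Illustrative pattern only: assumes keys look like "xai-" followed by
# at least 32 alphanumeric characters. Real secret scanners maintain
# vetted per-provider detectors rather than a single ad-hoc regex.
XAI_KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{32,}\b")


def find_candidate_keys(source_text: str) -> list[str]:
    """Return substrings of the text that match the assumed key format."""
    return XAI_KEY_PATTERN.findall(source_text)
```

Running a check like this in a pre-commit hook, before code ever reaches a public repository, is the kind of inexpensive control that would have stopped this leak at the developer’s machine.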

Security professionals warn that an exposed API key tied to dozens of advanced AI models isn’t just a technical glitch; it’s a severe national security risk. Malicious actors could propagate harmful content, exfiltrate sensitive data, or compromise proprietary algorithms.
Given that Grok’s models are now powering federal systems via a recent $200 million Department of Defense contract, the leak raises urgent concerns about AI model security in government environments.

Though Elez removed the GitHub repository after the GitGuardian alert, Philippe Caturegli confirmed days later that the API key remained active.
The continued functionality of the key meant that anyone who copied it before its removal could still exploit it to access xAI’s models. Experts argue that this delayed revocation is a glaring example of poor credential management at xAI.

Marko Elez’s history makes this incident even more controversial. The 25-year-old had previously resigned from DOGE after being linked to racist and pro-eugenics social media posts, only to be rehired within months under political pressure.
Since then, he gained privileged access across federal agencies. That someone with this track record was entrusted with both government systems and xAI access raises serious questions about recruitment, background checks, and cybersecurity oversight.

DOGE itself faces growing scrutiny. Tasked with streamlining U.S. government departments, the agency’s credibility is now questioned after multiple security lapses linked to its staffers.
Another DOGE employee reportedly leaked xAI’s internal API credentials just months earlier. With two major incidents in less than a year, DOGE’s operational security and expanding influence over federal infrastructure are now under intense scrutiny from cybersecurity watchdogs.

The incident exposes the increasingly blurry lines between Silicon Valley firms and government operations. Musk’s xAI models are now embedded in U.S. government systems, yet credential management seems disturbingly lax.
Experts argue that this leak illustrates the risks of outsourcing critical AI infrastructure to private companies without stringent security oversight. The fusion of public agencies and private AI firms may be creating unseen vulnerabilities at the national level.

Critics argue that this leak reflects deeper flaws in xAI’s security culture. Having already suffered one API leak earlier in the year, xAI should have hardened its credential practices. Instead, a nearly identical incident occurred.
Philippe Caturegli described repeated leaks of sensitive API keys as evidence of systemic negligence rather than isolated accidents.
For a firm partnering with the Department of Defense, such lapses are raising eyebrows across the tech and defense communities.

This leak couldn’t have come at a worse time for Grok. Already reeling from controversies around antisemitic content and political bias, the chatbot is now at the center of a significant security breach.
With Grok’s models integrated into systems supporting federal agencies, its brand is increasingly synonymous with risk. For users and businesses alike, Grok’s growing list of scandals may soon outweigh its technological potential.

While no immediate misuse of the leaked key has been reported, cybersecurity experts stress that the situation remains dangerous. Given the key’s continued operability post-removal, it’s possible that opportunistic attackers accessed xAI’s systems without detection.
Malicious actors could covertly exploit the credentials to generate disinformation, hijack models, or extract sensitive data, all while xAI remains unaware of the breach’s full scope.

Just days before the leak, xAI signed a lucrative $200 million contract with the U.S. Department of Defense to integrate Grok into military and federal systems.
This breach and Grok’s behavioral scandals raise serious questions about whether xAI is ready to handle mission-critical infrastructure.
Critics argue that awarding defense contracts to companies plagued by repeated security failures is a gamble with national security.

Neither DOGE, xAI, nor Elez has issued public statements addressing the leak. The silence surrounding the breach is compounding public distrust.
Security experts argue that immediate disclosure and clear action steps are essential to restore confidence.
Instead, the lack of transparency suggests either a denial of the problem’s severity or an unwillingness to confront uncomfortable operational failures in the public and private sectors.
The xAI API leak isn’t just a Musk problem; it reflects systemic cybersecurity failures at the intersection of private tech and federal agencies. Experts warn that without sweeping reforms, such incidents will continue to occur.
From tighter vetting at DOGE to enforced security audits at AI companies, leaders argue that stronger regulatory frameworks are urgently required. The growing reliance on AI infrastructure makes securing it not optional but essential.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.
