
Elon Musk’s xAI hit by API key leak from government-linked DOGE staffer


A government staffer leaked xAI’s API keys online

In a concerning security lapse, a U.S. government employee working for DOGE (Department of Government Efficiency) accidentally exposed a private API key used for interacting with Elon Musk’s xAI models, including the infamous Grok chatbot.

The leak, reported by security journalist Brian Krebs, revealed that DOGE staffer Marko Elez had published sensitive access credentials on GitHub.

This single error potentially allowed unauthorized parties access to dozens of xAI’s advanced AI models, triggering serious cybersecurity alarms.


Sensitive xAI models became vulnerable after a leak

The leaked API key exposed access to at least 52 of xAI’s large language models, including the newly deployed Grok 4, a model that recently gained notoriety for generating controversial responses.

The API key, embedded in Elez’s “agent.py” script uploaded to GitHub, opened a direct pathway to some of xAI’s most advanced models.
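The failure mode described here is a common one: a credential pasted directly into source code travels with the repository wherever it is pushed, and it survives in git history even after the file is deleted. A minimal sketch of the contrast, assuming nothing about the actual contents of Elez’s script (the variable name `XAI_API_KEY` and the commented-out key value are purely illustrative):

```python
import os

# Unsafe pattern: a literal key committed to a public repository is
# exposed to anyone who clones it, and remains in git history even
# after the file is removed.
# XAI_API_KEY = "xai-EXAMPLE-not-a-real-key"  # hypothetical value

# Safer pattern: read the credential from the environment at runtime,
# so the secret never enters version control.
def load_api_key() -> str:
    key = os.environ.get("XAI_API_KEY")  # env var name is illustrative
    if not key:
        raise RuntimeError("XAI_API_KEY is not set")
    return key
```

Keeping secrets in environment variables (or a dedicated secrets manager) means a leaked repository exposes only code, not credentials.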

Security researchers warn that such exposure could enable malicious use of the models, data theft, or manipulation of proprietary algorithms.


An API leak happened despite earlier security warnings

This latest breach isn’t xAI’s first security slip. Earlier in 2025, another developer accidentally exposed a similar API key tied to xAI models used across Musk’s companies. That key remained active for two months, despite being flagged.

With this recurring pattern of credential leaks, cybersecurity experts argue that xAI’s incident response procedures and internal security protocols are seriously inadequate, raising significant concerns about the company’s handling of sensitive systems.


Government data access compounds the problem

Marko Elez’s role within the U.S. government makes this incident even more alarming. As a DOGE employee, he reportedly accessed databases at the Social Security Administration, the Department of Homeland Security, and the Treasury Department.

Cybersecurity experts are questioning whether someone unable to protect private API keys should have access to personal data on millions of Americans. The leak spotlights troubling gaps in vetting and oversight for sensitive government roles.


GitGuardian flagged the leak before key revocation

The leak was initially discovered by GitGuardian, which detects exposed API keys and secrets on public code repositories. Security consultant Philippe Caturegli confirmed the key’s sensitivity and contacted Elez directly.
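Scanners like GitGuardian work by matching newly pushed code against known credential formats and entropy heuristics. A rough sketch of the idea, with the caveat that the pattern below is invented for illustration and reflects neither GitGuardian’s actual detection rules nor xAI’s real key format:

```python
import re

# Hypothetical key shape: many providers prefix keys with a vendor tag
# followed by a long random token; scanners flag strings of that shape.
CANDIDATE_KEY = re.compile(r"\bxai-[A-Za-z0-9]{32,}\b")

def scan_for_secrets(text: str) -> list[str]:
    """Return substrings that look like hardcoded API keys."""
    return CANDIDATE_KEY.findall(text)
```

Real scanners layer hundreds of such provider-specific patterns with entropy checks and validity probes, which is how a single stray key in a public `agent.py` can be flagged within hours of being pushed.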

Despite the repository’s removal, xAI did not immediately revoke the exposed key, leaving a wide-open window of vulnerability. Experts argue this lag in response highlights operational failings in both xAI’s and DOGE’s security cultures.


Experts warn the leak could enable serious misuse

Security professionals warn that an exposed API key tied to dozens of advanced AI models isn’t just a technical glitch; it’s a severe national security risk. Malicious actors could propagate harmful content, exfiltrate sensitive data, or compromise proprietary algorithms.

Given that Grok’s models are now powering federal systems via a recent $200 million Department of Defense contract, the leak raises urgent concerns about AI model security in government environments.


The API key remained active even after removal

Though Elez removed the GitHub repository after the GitGuardian alert, Philippe Caturegli confirmed days later that the API key remained active.

The continued functionality of the key meant that anyone who copied it before its removal could still exploit it to access xAI’s models. Experts argue that this delayed revocation is a glaring example of poor credential management at xAI.


Previous misconduct by a staffer worsens concerns

Marko Elez’s history makes this incident even more controversial. The 25-year-old had previously resigned from DOGE after being linked to racist and pro-eugenics social media posts, only to be rehired within months under political pressure.

Since then, he gained privileged access across federal agencies. That someone with this track record was entrusted with both government systems and xAI access raises serious questions about recruitment, background checks, and cybersecurity oversight.


DOGE’s reputation takes a serious hit

DOGE itself faces growing scrutiny. Tasked with streamlining U.S. government departments, the agency’s credibility is now questioned after multiple security lapses linked to its staffers.

Another DOGE employee reportedly leaked xAI’s internal API credentials just months earlier. With two major incidents in less than a year, DOGE’s operational security and expanding influence over federal infrastructure are now under intense scrutiny from cybersecurity watchdogs.


Link between government and Musk’s AI raises alarms

The incident exposes the increasingly blurry lines between Silicon Valley firms and government operations. Musk’s xAI models are now embedded in U.S. government systems, yet credential management seems disturbingly lax.

Experts argue that this leak illustrates the risks of outsourcing critical AI infrastructure to private companies without stringent security oversight. The fusion of public agencies and private AI firms may be creating unseen vulnerabilities at the national level.


xAI’s security record is called into question

Critics argue that this leak reflects deeper flaws in xAI’s security culture. Having already suffered one API leak earlier in the year, xAI should have hardened its credential practices. Instead, a nearly identical incident occurred.

Philippe Caturegli described repeated leaks of sensitive API keys as evidence of systemic negligence rather than isolated accidents.

For a firm partnering with the Department of Defense, such lapses are raising eyebrows across the tech and defense communities.


Grok’s reputation suffers yet another blow

This leak couldn’t have come at a worse time for Grok. Already reeling from controversies around antisemitic content and political bias, the chatbot is now at the center of a significant security breach.

With Grok’s models integrated into systems supporting federal agencies, its brand is increasingly synonymous with risk. For users and businesses alike, Grok’s growing list of scandals may soon outweigh its technological potential.


Security experts say the breach could escalate

While no immediate misuse of the leaked key has been reported, cybersecurity experts stress that the situation remains dangerous. Given the key’s continued operability post-removal, it’s possible that opportunistic attackers accessed xAI’s systems without detection.

Malicious actors could covertly exploit the credentials to generate disinformation, hijack models, or extract sensitive data, all while xAI remains unaware of the breach’s full scope.


Musk’s $200 million defense deal is now questioned

Just days before the leak, xAI signed a lucrative $200 million contract with the U.S. Department of Defense to integrate Grok into military and federal systems.

This breach and Grok’s behavioral scandals raise serious questions about whether xAI is ready to handle mission-critical infrastructure.

Critics argue that awarding defense contracts to companies plagued by repeated security failures is a gamble with national security.


Transparency issues exacerbate public distrust

Neither DOGE, xAI, nor Elez has issued public statements addressing the leak. The silence surrounding the breach is compounding public distrust.

Security experts argue that immediate disclosure and clear action steps are essential to restore confidence.

Instead, the lack of transparency suggests either a denial of the problem’s severity or an unwillingness to confront uncomfortable operational failures in the public and private sectors.



Experts say broader reforms are urgently needed

The xAI API leak isn’t just a Musk problem; it reflects systemic cybersecurity failures at the intersection of private tech and federal agencies. Experts warn that without sweeping reforms, such incidents will continue to occur.

From tighter vetting at DOGE to enforced security audits at AI companies, leaders argue that stronger regulatory frameworks are urgently required. The growing reliance on AI infrastructure makes securing it not optional but essential.

This slideshow was made with AI assistance and human editing.