ChatGPT’s diet advice left a man with psychiatric problems

ChatGPT’s risky advice

A 60-year-old man thought he’d found a simple way to improve his diet. Just a few prompts to ChatGPT, and he had an alternative to table salt.

It all seemed trustworthy until, a few weeks later, he landed in the hospital. The culprit was AI-generated advice that sounded smart but turned out to be dangerously wrong.

AI diet advice goes wrong

According to a report published in the Annals of Internal Medicine, the man wanted to cut chloride from his diet for health reasons. ChatGPT suggested sodium bromide as an alternative to table salt, and he took the suggestion literally.

He bought sodium bromide online and used it at home for three months. But following the AI’s advice started a chain of dangerous health events.

AI advice turned deadly

Sodium bromide is easy to buy online, but ingesting it in large amounts is toxic. It can cause hallucinations, lethargy, fatigue, and a serious electrolyte imbalance. The man ingested bromide, thinking it was safe because AI said so.

What may have seemed harmless online had real and dangerous consequences. Easy access to AI-generated advice led directly to life-threatening outcomes.

AI can be correct but misleading

The AI suggested bromide instead of chloride. It didn’t ask why the man wanted a substitute or warn him about possible health risks. That’s the problem: AI knows so much that people start trusting it more than real humans, but its advice can still be wrong.

A real medical expert probably wouldn’t recommend sodium bromide. As AI becomes more common, it’s important to know its uses and shortcomings.

AI can’t assess real-world risks

AI systems draw from extensive training data across many domains, but they lack lived experience and often can’t assess real-world risks or context. An AI doesn’t check why you’re asking or weigh the potential dangers; it simply takes questions and answers them.

AI can’t act like a doctor who considers context and safety. Its knowledge is huge, but its advice can be risky because it can’t judge real-world situations. That’s why we still need to apply our own judgment.

AI cannot replace experts

The man went ahead with the AI’s advice, and things went downhill fast. According to the report, he became paranoid and started seeing things that weren’t there, and he ended up in the hospital. What started as a simple AI suggestion turned into a full-blown medical crisis.

This case shows that AI can’t replace expert judgment, especially when health or high-risk decisions are involved. When it comes to health or risky choices, people need to understand the advice and make their own decisions. Humans should always stay in control.

When AI advice backfires

This began as a medical case about bromide toxicity, but it’s also a tech warning: AI advice plus easy online access to chemicals created a genuinely dangerous situation.

The story shows that AI mistakes aren’t just technical glitches; they can have real-world consequences when people follow advice blindly.

The cost of bad prompts

AI advice is only as good as the questions you ask. A vague or unclear prompt can lead to dangerous suggestions. The man asked about chloride, but the AI never clarified his intent or flagged the risks.

Poorly worded questions can send you down the wrong path fast. With AI, the input matters just as much as the output.

Tech made mistakes easy

The sodium bromide was easy to get online, and AI made it look harmless. Together, that’s a recipe for trouble.

People might try things without realizing the risks. Tech and the internet make it easier to make mistakes, especially when AI advice is taken as fact.

Think critically over AI advice

AI is a tool, not a magic solution. People still need to think critically to avoid mistakes. Problems might not show up right away, but they can escalate fast.

Always double-check its suggestions with reliable sources, and don’t rely on it for high-risk decisions. Make sure to consult experts for health, safety, or financial decisions. Learning how to use its suggestions safely can stop mistakes.

Regulations are catching up slowly

AI tools are racing ahead, but the rules aren’t keeping pace. Governments and regulators are still debating the basics, while people already use AI for health, money, and safety decisions.

That gap leaves users exposed. Tech moves fast, laws move slow, and until they meet in the middle, people are essentially experimenting with tools that don’t have clear boundaries.

Question unusual AI advice

The AI didn’t warn about bromide toxicity or ask about his health or diet. AI can sound confident even when it’s wrong. Unlike a human, it almost never says ‘I don’t know.’ Instead, it always comes up with an answer, even if it’s not accurate. That’s why it’s important to question unusual advice.

People keep saying AI is the future, but even advanced systems can make huge mistakes. It doesn’t check sources or consider safety. Misinformation spreads quickly when people assume AI is always right.

AI ethics beyond health

AI ethics isn’t just about medicine. The same risks exist in finance, law, and education. An AI tool giving wrong legal or financial advice can cause huge damage.

Ethical guardrails need to stretch across all industries, not just healthcare, because bad advice anywhere can spiral fast.

Companies need to be more responsible

Companies building AI need stronger guardrails. That means flagging dangerous prompts, blocking unsafe recommendations, and being transparent when the system isn’t sure.

A smarter design can stop AI from suggesting harmful advice in the first place. Safer AI isn’t just about better answers; it’s about knowing when not to answer at all.
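The idea of “knowing when not to answer” can be sketched as a simple output filter. This is a toy illustration only, under big assumptions: real guardrails rely on trained safety classifiers rather than keyword lists, and the substance list and refusal message here are hypothetical, not any actual chatbot’s safety system.

```python
# Toy sketch of an output guardrail: refuse replies that recommend
# known-toxic substances instead of passing them to the user.
# The list and refusal text are illustrative assumptions.
UNSAFE_SUBSTANCES = {"sodium bromide", "methanol", "ethylene glycol"}

def guard(reply: str) -> str:
    """Return the model's reply, or a refusal if it names a flagged substance."""
    lowered = reply.lower()
    if any(substance in lowered for substance in UNSAFE_SUBSTANCES):
        return "I can't recommend that. Please consult a medical professional."
    return reply

print(guard("You could try sodium bromide as an alternative to table salt."))
# Refuses the suggestion instead of passing it through
```

Crude as it is, the sketch captures the design point: a safe default answer for risky territory beats a confident wrong one.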

AI training is important

AI can be useful for learning new skills, but unchecked advice from AI can lead to serious problems. The tech industry needs to provide clear guidelines so AI helps people instead of causing harm.

As AI reshapes the job market, knowing how it works is just as important as building it. Otherwise, it can do more damage than good. Used wisely, even simple ChatGPT tricks can help you learn and grow safely.

Curious how AI is reshaping the tech world? See how AI is cutting tech jobs but raising salaries by $18K in other fields.

Trust your own judgment

AI can be a powerful tool, but it’s not a replacement for human judgment. Always combine AI advice with expert guidance, double-check facts, and question anything that seems risky or unusual.

Think of AI as a helper, not a decision-maker. Use it to generate ideas, explore possibilities, or speed up research, but never rely on it alone. The best decisions come from combining AI tools with your own careful thinking and judgment.

Want to understand the limits of AI? See how smart AI acting human could backfire on all of us.

Stay curious, and question advice before you act on it.

This slideshow was made with AI assistance and human editing.
