07 Mar, 2026

AIIMS New Delhi doctors have issued a serious warning against using AI tools like ChatGPT for medical diagnosis or treatment after a patient suffered severe internal bleeding by following chatbot-generated advice for back pain. The case highlights the dangers of self-medication and AI hallucinations, and underlines why real medical consultation, tests, and clinical judgement cannot be replaced by algorithms.

AIIMS Doctors Warn Against Using AI for Medical Treatment After Shocking Incident

Doctors at AIIMS New Delhi have issued a strong and timely warning against using artificial intelligence tools like ChatGPT for medical diagnosis or treatment, following a serious incident in which a patient developed severe internal bleeding after acting on advice generated by a chatbot. The warning was issued by Dr Uma Kumar, Head of the Rheumatology Department at AIIMS, who spoke to the media about the case. The incident has raised serious concerns about the growing tendency of people to treat AI chatbots as substitutes for real doctors.

How a Simple Search Turned Into a Medical Emergency

According to doctors, the patient had been suffering from persistent back pain and, instead of visiting a clinic, decided to seek advice online. The patient turned to ChatGPT and asked what could be done for the pain. The chatbot suggested commonly used painkillers, advice that sounded routine and harmless. Trusting this response, the patient purchased non-steroidal anti-inflammatory drugs (NSAIDs) from a pharmacy and began taking them without any medical consultation, blood tests, or clinical examination.

When Self Medication Led to Severe Internal Bleeding

Soon after starting the medication, the patient developed severe internal bleeding, a dangerous complication that could have been avoided with proper medical supervision. While NSAIDs are widely used and often considered safe by the public, they are well known to cause stomach ulcers, gastrointestinal bleeding, and other serious complications in certain individuals. What the AI tool could not judge was whether this particular patient had a higher risk of bleeding, underlying stomach problems, or other medical conditions that made these drugs unsafe.

Why AI Advice Can Be Dangerous in Real Patients

In this case, the advice given by the chatbot may have sounded reasonable and familiar, because many people do take similar medicines for back pain. However, medicine is never just about the disease; it is about the patient. Without blood tests, scans, or even a basic clinical assessment, any treatment advice becomes guesswork. AI tools cannot examine a patient, cannot look for warning signs, and cannot evaluate hidden risks that may turn a routine medicine into a life-threatening one.

The Growing Illusion of Chatbots as Virtual Doctors

This episode has set off alarm bells among doctors, especially at a time when artificial intelligence feels like an instant solution to almost every problem. With answers delivered in seconds and written in confident, reassuring language, chatbots are increasingly being treated as virtual doctors. Medical professionals warn that this confidence can be dangerously misleading, because it creates a false sense of safety and certainty in situations where careful evaluation is actually needed.

Why Real Medicine Cannot Be Replaced by Algorithms

Dr Uma Kumar explained that medicine is not about giving quick fixes. Doctors follow a careful and structured process where they consider the patient’s history, examine the body, and order investigations before deciding on a treatment plan. A chatbot does not know the person sitting on the other side of the screen. It cannot assess past illnesses, current medications, hidden risks, or how a particular drug might behave in that specific body.

The Problem of AI Hallucinations in Healthcare

Doctors are also concerned about what are known as “AI hallucinations”: situations where chatbots provide answers that sound authoritative and convincing but may be incomplete, inaccurate, or unsuitable for a real patient. While companies such as OpenAI clearly state that their tools are not meant to replace doctors, many users tend to ignore these warnings, especially when they are in pain, anxious, or looking for quick relief.

The Need for Public Awareness and Better Regulation

The incident has now triggered a larger conversation about public awareness and the need for better regulation. AIIMS doctors are urging people to use the internet and AI tools only for general information and education, not as substitutes for medical consultation. They are also reminding the public that even medicines available without a prescription can cause serious harm if taken without proper guidance.

As artificial intelligence becomes more deeply woven into everyday life, doctors say the responsibility lies with both technology companies and users. While technology can support healthcare in many ways, it cannot replace clinical judgement, physical examination, and individualised decision-making. When it comes to health, convenience should never come at the cost of safety, and a human doctor, with questions, tests, and careful judgement, remains truly irreplaceable.

Dr. Dheeraj Maheshwari

MBBS, PGDCMF (MNLU), MD (Forensic Medicine)