60-year-old man develops rare condition after following dietary advice from AI model ChatGPT
In a striking reminder of the potential dangers of relying on Artificial Intelligence (AI) for health advice, a 60-year-old man developed the rare condition bromism after following guidance from the AI model ChatGPT. By the time he was hospitalized with the condition, he was refusing even the water offered to him.
The case, reported by The Guardian, was published in Annals of Internal Medicine by researchers from the University of Washington in Seattle. The man's symptoms, including facial acne, excessive thirst, and insomnia, led to the diagnosis of bromism.
The man had consulted ChatGPT about removing salt from his diet and, inadvertently, ended up replacing sodium chloride (table salt) with sodium bromide for three months. Made without appropriate health warnings or professional context, the substitution led to bromide toxicity.
Key factors contributing to such adverse outcomes include AI hallucinations and inaccuracies, automation bias and over-trust, a lack of clinical judgment and nuance, and inadequate safety warnings. ChatGPT can generate plausible but false or misleading information, creating a risk of inaccurate diagnoses or treatment suggestions. Users may over-rely on the AI's humanlike tone and accept its outputs as medically valid, sometimes forgoing professional advice or verification.
AI tools such as ChatGPT are not intended to diagnose or treat disease. The case nonetheless serves as a warning to treat advice from Artificial Intelligence with caution, especially in health-related matters: the authors warn that AI can promote "discontextualized information" and complicate diagnosis.
It is crucial to verify information against other reliable sources before making health decisions, and doctors should always ask patients where they obtained their health-related information. While AI can serve as a bridge between scientists and the public, it should be used with caution, especially in health-related matters.
Bromism, a syndrome that contributed to one in ten psychiatric hospitalizations in the 20th century, underscores the seriousness of these risks. The man arrived at the hospital insisting he had been poisoned by a neighbour. When the authors ran their own query, ChatGPT's response also mentioned bromide without any specific information or warning about the dangers of consuming it.
In conclusion, while AI tools may assist in healthcare, relying on them without proper clinical oversight and fact-checking poses significant risks that can translate into harmful health decisions and outcomes such as bromism and its neuropsychiatric complications. The case highlights that AI is not a substitute for professional medical evaluation and carries hidden risks when used for self-guided health decisions.
- The case of a 60-year-old man who developed bromism after following AI advice highlights the potential dangers of relying on AI like ChatGPT for health and wellness information, including mental health issues and treatments.
- Unintended consequences of AI, such as hallucinations and inaccuracies, automation bias, and a lack of clinical judgment, can lead to serious health issues like bromism.
- Because AI tools like ChatGPT are not intended to treat disease, it is essential to verify information with reliable sources, such as reputable news outlets or experts in science and technology, before making health-related decisions.
- Ultimately, the use of AI should be approached with caution, particularly in health-related matters, as it can promote "discontextualized information," complicate diagnoses, and pose hidden risks in self-guided health decision-making.