Original Article

Philosophy of medicine meets AI hallucination and AI drift: moving toward a more gentle medicine

Abstract

The contemporary world is profoundly shaped by technological progress. Among the advancements of our era is the proliferation of artificial intelligence (AI). AI has permeated every facet of human knowledge, including medicine. One domain of AI development is the application of large language models (LLMs) in health-care settings. While these applications hold immense promise, they are not without challenges. Two notable phenomena, AI hallucination and AI drift, pose significant setbacks. AI hallucination refers to the generation of erroneous information by AI systems, while AI drift is the production of multiple responses to a single query. The emergence of these challenges underscores the crucial role of the philosophy of medicine. By reminding practitioners of the inherent uncertainty that underpins medical interventions, the philosophy of medicine fosters a more receptive stance toward these technological advancements. Furthermore, by acknowledging the inherent fallibility of these technologies, it reinforces the importance of gentle medicine and humility in clinical practice. Physicians must not shy away from embracing AI tools because of their imperfections. Acknowledging uncertainty fosters a more accepting attitude toward AI tools among physicians, and by consistently highlighting these imperfections, the philosophy of medicine cultivates a deeper sense of humility among practitioners. It is imperative that experts in the philosophy of medicine engage in thoughtful deliberation to ensure that these powerful technologies are harnessed responsibly and ethically, preventing the reins of medical decision-making from falling into the hands of those without the requisite expertise and ethical grounding.

Issue: Vol 18 (2025)
Section: Original Article(s)
Keywords
AI hallucination; AI drift; Data schizophrenia; Medical philosophy; Large language models; Gentle medicine.

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
How to Cite
Radfar MM, Namazi H. Philosophy of medicine meets AI hallucination and AI drift: moving toward a more gentle medicine. J Med Ethics Hist Med. 2025;18.