- Path:
Periodical
- Title:
- Clinical chemistry and laboratory medicine
- Publication:
- Berlin [etc.]: De Gruyter
- Note:
- Viewed on 22 June 2022
- Archiving/long-term preservation guaranteed
- Scope:
- Online resource
- ISSN:
- 1437-4331
- ZDB-ID:
- 1492732-9
- Previous Title:
- European journal of clinical chemistry and clinical biochemistry
- Keywords:
- Journal
- Classification:
- Natural sciences
- Medicine
- DDC Group:
- 610 Medicine
- 540 Chemistry
- Collection:
- Natural sciences
- Medicine
- Copyright:
- Rights reserved
- Accessibility:
- Restricted access with usage limitations
Article
- Title:
- Comparison of ChatGPT, Gemini, and Le Chat with physician interpretations of medical laboratory questions from an online health forum
- Publication:
- Berlin [etc.]: De Gruyter, 2024
- Language:
- English
- Information:
- Objectives: Laboratory medical reports are often not intuitively comprehensible to non-medical professionals. Given their recent advancements, easier accessibility and remarkable performance on medical licensing exams, patients are therefore likely to turn to artificial intelligence-based chatbots to understand their laboratory results. However, empirical studies assessing the efficacy of these chatbots in responding to real-life patient queries regarding laboratory medicine are scarce. Methods: This investigation therefore included 100 patient inquiries from an online health forum, specifically addressing Complete Blood Count interpretation. The aim was to evaluate the proficiency of three artificial intelligence-based chatbots (ChatGPT, Gemini and Le Chat) against the online responses from certified physicians. Results: The findings revealed that the chatbots’ interpretations of laboratory results were inferior to those from online medical professionals. While the chatbots exhibited a higher degree of empathetic communication, they frequently produced erroneous or overly generalized responses to complex patient questions. The appropriateness of chatbot responses ranged from 51 to 64 %, with 22 to 33 % of responses overestimating patient conditions. A notable positive aspect was the chatbots’ consistent inclusion of disclaimers regarding their non-medical nature and recommendations to seek professional medical advice. Conclusions: The chatbots’ interpretations of laboratory results from real patient queries highlight a dangerous dichotomy – a perceived trustworthiness potentially obscuring factual inaccuracies. Given the growing inclination towards self-diagnosis using AI platforms, further research on and improvement of these chatbots are imperative to increase patients’ awareness and avoid future burdens on the healthcare system.
- Scope:
- Online resource
- Note:
- Open Access
- Archiving/long-term preservation guaranteed
- Keywords:
- ChatGPT ; chatbot ; AI ; laboratory results ; health forum
- Classification:
- Natural sciences
- Medicine
- Miscellaneous
- Collection:
- Natural sciences
- Medicine
- Miscellaneous
- Copyright:
- CC BY
- Accessibility:
- Free Access