ChatGPT may give wrong or risky answers to drug questions, study warns

A recent study published in the European Journal of Hospital Pharmacy has found that ChatGPT, a popular artificial intelligence chatbot, may provide incorrect or incomplete answers to drug-related questions, posing a potential risk to patients and health-care workers who rely on it for information.

ChatGPT is a chatbot developed by OpenAI, a research organization that aims to create and promote beneficial artificial intelligence. ChatGPT uses a deep learning model called Generative Pre-trained Transformer (GPT) to generate natural language responses based on user input. ChatGPT can converse on various topics, such as sports, movies, music, and even software programming.

How did the study test ChatGPT?

The researchers from Heidelberg University Hospital in Germany collected 50 drug-related questions from real-world scenarios and entered them into ChatGPT. They then documented and rated the answers provided by the chatbot in terms of content, patient management, and risk. They also compared the answers with those obtained from reliable sources, such as drug databases, guidelines, and scientific literature.
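The study queried ChatGPT directly rather than through a programmatic interface, but for readers curious how such a test could be run at scale, the sketch below shows one way to submit a list of questions to an OpenAI chat model and log the answers for later manual rating. The question file, output file, and model name are illustrative assumptions made here, not details taken from the study.

```python
# Sketch: send a list of drug-related questions to an OpenAI chat model and
# log the answers for later manual rating. File names and the model name are
# assumptions for illustration; the study used the ChatGPT web interface.
import csv
from datetime import datetime, timezone

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

with open("drug_questions.txt", encoding="utf-8") as f:
    questions = [line.strip() for line in f if line.strip()]

with open("chatgpt_answers.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["timestamp", "question", "answer"])
    for question in questions:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model, not specified here
            messages=[{"role": "user", "content": question}],
        )
        answer = response.choices[0].message.content
        writer.writerow([datetime.now(timezone.utc).isoformat(), question, answer])
```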

What did the study find?

The study found that ChatGPT only gave correct and comprehensive answers to 13 out of 50 questions, or 26% of the time. The majority of the answers were either false (38%), incomplete or partially correct (36%), or irrelevant (9%). Moreover, the chatbot did not provide any references or sources to support its answers, making it difficult to verify the accuracy and reliability of the information.

The study also assessed the potential risk of patient harm associated with the chatbot’s answers. The researchers found that 26% of the answers posed a high risk of patient harm, 28% posed a low risk, and 46% posed no risk. The high-risk answers were those that could lead to serious adverse events, such as drug interactions, overdoses, allergic reactions, or contraindications. The low-risk answers were those that could cause minor or moderate adverse events, such as side effects, dosage errors, or inappropriate use of drugs. The no-risk answers were those that did not affect patient management or outcome.

In addition, the study tested the reproducibility of ChatGPT’s answers by entering the same questions at different time points. The researchers found that the chatbot gave different answers to the same questions over time, showing no or low reproducibility. This means that ChatGPT’s answers are not consistent or reliable: the same question can produce different responses depending on how it is phrased, the surrounding conversation, and the randomness built into the model’s text generation.
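As a rough illustration of such a reproducibility check (not the researchers’ actual protocol, which relied on expert review of the content), the sketch below asks the same question twice and computes a crude surface-level similarity between the two answers. The example question, model name, and similarity measure are all assumptions made here for illustration.

```python
# Sketch: ask the same question twice and compare the answers' wording as a
# crude stand-in for a reproducibility rating. Question and model are
# illustrative assumptions, not items from the study.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()
QUESTION = "Can ibuprofen be taken together with warfarin?"  # example only

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

first = ask(QUESTION)
second = ask(QUESTION)  # in the study, repeat queries were spaced over time

similarity = SequenceMatcher(None, first, second).ratio()
print(f"Surface similarity between the two answers: {similarity:.2f}")
```

A low similarity score only signals that the wording changed between runs; judging whether the clinical content stayed consistent still requires a human reviewer, as in the study.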

What are the implications of the study?

The study suggests that ChatGPT is not a suitable tool for answering drug-related questions, as it may provide wrong or risky information that could harm patients or mislead health-care workers. The study also warns that ChatGPT’s answers may appear convincing or authoritative due to its polite language, detailed explanations, and textbook style, but these features do not guarantee the correctness or completeness of the information.

The study recommends that users of ChatGPT should be cautious and critical of the chatbot’s answers, and always verify them with reliable sources before making any clinical decisions. The study also calls for more research and development to improve the performance and safety of artificial intelligence applications in drug information.
