Free ChatGPT May Wrongly Answer Drug Queries, Study Finds


A recent study conducted by pharmacists at Long Island University raises concerns about the accuracy and reliability of the free version of ChatGPT, OpenAI’s language model, when it comes to answering drug-related questions. Of 39 questions posed to ChatGPT, only 10 responses were deemed satisfactory; the remaining 29 failed to directly address the question, were inaccurate, were incomplete, or some combination of these.

The research suggests that patients and healthcare professionals should exercise caution and verify any drug-related information obtained from ChatGPT against trusted sources. The study’s lead author, Sara Grossman, emphasized the importance of cross-referencing information with reliable sources such as doctors or government-run medication information websites. The study also notes that ChatGPT’s free version was trained on data only up to September 2021, so it may lack information on more recent developments in the medical landscape.

Although paid versions of ChatGPT offer real-time internet browsing, it remains unclear how accurately they can answer medication-related queries. The study focused on the free version to reflect what the general population typically uses: while a paid version might yield better results, the researchers aimed to assess the version accessible to the broadest public. It is also worth noting that the study represents a snapshot of ChatGPT’s performance in early 2023, and the model may have improved since then.

Overall, the findings underscore the need for users to exercise caution and independently verify critical health-related information provided by AI models like ChatGPT.
