Study Shows Parents Trust ChatGPT's Medical Advice Over Doctors'
A recent study from the University of Kansas (KU) Life Span Institute (KS, U.S.) has found that parents trust health recommendations provided by ChatGPT more than those from human doctors. The research, published in the Journal of Pediatric Psychology, found that parents perceived children’s healthcare advice generated by OpenAI’s (CA, U.S.) chatbot as moral, trustworthy and accurate.
Study Methodology and Results
The objective of the KU study was to determine whether health-oriented text created by ChatGPT under expert guidance could match the persuasiveness and credibility of human clinicians from a parent’s point of view.
To achieve this, researchers asked more than 100 parents between 18 and 65 years old to complete an initial assessment of behavioral intentions for decisions regarding their children's health. They were then asked to rate written material generated by ChatGPT and by human clinicians.
The study began shortly after the launch of ChatGPT, when researchers were doubtful about how this new AI tool would be received by users. As lead research author Calissa Leslie-Miller noted:
“We had concerns about how parents would use this new, easy method to gather health information for their children. Parents often turn to the internet for advice, so we wanted to understand what using ChatGPT would look like and what we should be worried about.”
Despite initial uncertainty, the results of the research suggested that ChatGPT can influence behavioral intentions around medication, sleep, and diet decisions. Participants noticed only minor differences between AI- and human-generated text in terms of morality, trustworthiness, expertise, accuracy, and reliance.
Interestingly, when distinctions were detected, ChatGPT's text was rated as superior in accuracy and trustworthiness, with parents affirming they would be more inclined to trust AI-generated information than advice from human experts.
Potential Drawbacks
The study’s results suggest AI’s promising potential to assist clinicians in drafting health-related texts and delivering reliable and accurate medical advice.
These findings are not surprising, as AI has been playing a significant role in recent advances in different healthcare fields, including early breast cancer detection, fetal health prediction and personalized antibiotic treatment.
However, while the prospects may be encouraging, healthcare professionals should remain mindful of the challenges associated with using AI for medical purposes, especially those linked to patient data security, biases and questionable device reliability.
In the case of the KU study, Leslie-Miller stressed the need for caution when relying on AI-generated medical information, as the chatbot may "work well in many cases" but "isn't an expert." She added that AI users should only trust recommendations that are "consistent with expertise that comes from a non-generative AI source." Leslie-Miller continued:
"We're concerned that people may increasingly rely on AI for health advice without proper expert oversight. In children's health, where the consequences can be significant, it's crucial that we address this issue."