Need advice on cancer treatment? Forget ChatGPT

Credit: D koi/Unsplash.

The Internet can serve as a powerful tool for self-education on medical topics.

With ChatGPT now available to patients, researchers at Brigham and Women’s Hospital wanted to assess how consistently the AI chatbot provided cancer treatment recommendations that align with National Comprehensive Cancer Network (NCCN) guidelines.

The team’s findings, published in JAMA Oncology, show that ChatGPT gave an inappropriate, or “discordant,” recommendation a third of the time, underscoring the need to be aware of the technology’s limitations.

“Patients should feel able to educate themselves about their medical condition, but they should always speak to a doctor about it, and resources on the Internet should not be consulted in isolation,” said corresponding author Danielle Bitterman, radiation oncologist and associate professor at Harvard Medical School.

“ChatGPT responses can sound very human and be very persuasive. But when it comes to clinical decision making, there are so many nuances for each patient’s unique situation.

“A correct answer can be very nuanced, and not necessarily something that ChatGPT or any other large language model can provide.”

Although medical decision-making can be influenced by many factors, Bitterman and colleagues decided to evaluate the extent to which ChatGPT’s recommendations align with NCCN guidelines used by physicians across the country.

They focused on the three most common cancers (breast, prostate, and lung) and challenged ChatGPT to provide a treatment approach for each cancer based on disease severity.

In total, the researchers included 26 unique diagnostic descriptions and used four slightly different prompts to ask ChatGPT to provide a treatment approach, generating a total of 104 prompts.

Almost all responses (98 percent) included at least one treatment approach that met NCCN guidelines.

However, the researchers found that 34 percent of those responses also included one or more discordant recommendations, which were sometimes difficult to spot amid otherwise sound guidance.

A recommendation that was only partially correct was classified as non-concordant; for example, for locally advanced breast cancer, a recommendation of surgery alone, without mention of any other therapy modality.

Notably, full agreement was achieved in only 62 percent of cases, underscoring both the complexity of NCCN guidelines and the extent to which ChatGPT’s results can be vague or difficult to interpret.

In 12.5 percent of cases, ChatGPT produced “hallucinations,” or treatment recommendations that were entirely absent from the NCCN guidelines. These included recommendations for advanced therapies, or for curative therapies for incurable cancers.

The authors emphasized that this form of misinformation can wrongly influence patients’ expectations of treatment and potentially impact the doctor-patient relationship.

In the future, the researchers will examine how well both patients and doctors can distinguish between medical advice written by a doctor and a large language model. They also plan to provide more detailed clinical cases to ChatGPT to further assess its clinical knowledge.

The authors used GPT-3.5-turbo-0301, one of the largest models available at the time the study was conducted and the model class currently used in the open-access version of ChatGPT (a newer version, GPT-4, is only available with a paid subscription).

They also used the 2021 NCCN guidelines, as GPT-3.5-turbo-0301 was developed based on data through September 2021.

“It is an open research question to what extent LLMs provide consistent logical answers since ‘hallucinations’ are frequently observed,” said lead author Shan Chen of Brigham’s AI in Medicine Program.

“Users are likely to seek answers from the LLMs to learn about health-related topics – similar to Google searches. At the same time, we need to raise awareness that LLMs are not the equivalent of trained medical professionals.”


Written by BWH Communications.

Source: Harvard University.


Laura Coffey

Laura Coffey is a Worldtimetodays U.S. News Reporter based in Canada. Her focus is on U.S. politics and the environment. She has covered climate change extensively, as well as healthcare and crime. Laura Coffey joined Worldtimetodays in 2023 from the Daily Express and previously worked for Chemist and Druggist and the Jewish Chronicle. She is a graduate of Cambridge University. Languages: English. You can get in touch with her by emailing: LauraCoffey@worldtimetodays.com.
