
Blindly trusting Dr. ChatGPT? How AI's miracle 'cures' could kill

April 25, 2024
Sirisha

Doctors are sounding the alarm about the potential dangers of AI chatbots in healthcare as people increasingly turn to them for medical advice. As advanced conversational AI assistants like OpenAI's ChatGPT and Google's Gemini become more ubiquitous, health experts worry these human-like bots could provide inaccurate information that leads patients to pursue unnecessary tests, bogus treatments, or simply ignore sound medical guidance from professionals altogether. Because the chatbots' fluent conversational abilities foster an undue level of trust, some physicians believe regulatory guardrails may be needed to ensure these AI helpers rely only on trustworthy, evidence-based health data sources when operating in this critical domain.

"ChatGPT and similar AI bots have a veneer of human confidence that engenders a greater degree of trust or credibility, which I think is frequently pretty misplaced," cautioned Dr. Joshua Tamayo-Sarver, a consultant on medical innovation. While increased access to health information enabled by the technology offers clear benefits, doctors fear patients who are convinced they understand their condition better than professionals could start demanding unneeded services, wearing down care providers with adamant refusals to follow recommendations, or attempting to self-medicate with unproven amateur remedies based solely on chatbot counsel. So far, authorities in Washington have not proposed any rules or oversight around this rapidly evolving artificial intelligence technology as it is employed in healthcare contexts.

The Coalition for Health AI, an alliance representing major technology companies, academic institutions, and healthcare systems, believes robust standards are crucial to ensuring chatbots pull from reliable, fully-vetted sources when operating in medical advice capacities. "We need appropriate safeguards because we don't want individuals inappropriately led to believe that something is a diagnosis or a potential treatment option when it may be unsupported or even dangerous misinformation," said the group's co-founder Brian Anderson. However, any decisions around limiting the data used to train these chatbots could raise complex issues around free speech, as highlighted by an upcoming Supreme Court case examining whether the Biden administration violated First Amendment rights by pressuring social media platforms to censor certain pandemic misinformation.

While Google's new Gemini chatbot explicitly instructs users not to rely on it for health advice, it still provides such information when queried on medical topics - leading to inaccurate or biased outputs that prompted a public apology from CEO Sundar Pichai just last month. Meanwhile, physicians report a rising number of patients showing up to appointments armed with potential explanations for their symptoms that were suggested by chatbots, which studies show sometimes uncover rare conditions but more often get diagnoses completely wrong. One assessment published in January found ChatGPT provided the incorrect diagnosis over 80% of the time when presented with details on pediatric medical cases.

"A chatbot AI assistant is at best a tool to get to know about the general characteristics of a disease, but it cannot substitute the role of an actual human doctor," said radiologist Som Biswas of the University of Tennessee Health Science Center, who has written on the implications of ChatGPT. Biswas emphasized that real medical professionals offer expert care and observational powers no current language model can match, stating, "You see the patient, feel his pulse, look at whether he has an actual cough — all of these crucial diagnostic things are simply impossible for a large language model chatbot to do based on pure text inputs."

While the tendency toward self-diagnosis is nothing new, with patients searching online for decades to attempt to deduce their symptoms, health experts fear powerful AI chatbots could increasingly and dangerously amplify unfounded tendencies toward ignoring doctors and trying to self-prescribe inappropriate treatment plans. "Reputable medical resources have always cautioned that online symptom-checkers should be taken with a grain of salt, as they lack a comprehensive understanding of human biology and the ability to examine a patient physically," said Dr. Leana Wen, an emergency physician and public health professor at George Washington University. "These new chatbots don't fundamentally suffer from different limitations; they just come across as more authoritative in how they communicate, which could lend their voice an unearned credibility."

With the government only beginning to grapple with overarching AI regulation, no clear guidelines exist yet for these chatbots operating in healthcare advice capacities. Rules finalized in December by the federal health IT agency do require AI developers to share details on training data and product validation with healthcare providers acquiring such technologies, but this does not address public-facing medical chatbots from major tech companies. The Federal Trade Commission could theoretically take action against consumer harm if it found evidence of chatbots providing dangerous health misinformation, but no enforcement priorities have been outlined, and the rapidly evolving tech seems to have outpaced existing oversight.

When queried on medical topics, ChatGPT routinely notes it is "not a doctor" but still proceeds to offer specific diagnoses and home treatment suggestions regardless - which academic evaluations have found can be unreliable or flat-out mistaken. While the bot has, in some cases, helped identify rare conditions that initially baffled human physicians, wide-ranging analyses show ChatGPT frequently fails to separate credible medical data from misinformation, pseudoscience, or biased information. For example, when recently questioned about vaccine dosages, ChatGPT provided this reporter with factually incorrect guidance about age authorizations for young children.

Perhaps more concerning, the language model has also repeatedly characterized discredited COVID-19 "treatments" like ivermectin and hydroxychloroquine as ongoing "controversial" subjects without clearly explaining the expert medical consensus that these therapies are ineffective against the virus. Such examples highlight physician worries that chatbots could end up validating ill-advised treatment ideas by couching them as legitimately "controversial" rather than debunked. Despite such factual errors, research indicates that many people find advanced AI assistants' coherent, contextualized, and empathetic language to be more understandable than actual doctor consultations. There are also concerns about the bots' core design, which aims to be deferential and tell users what they want to hear, even if that means affirming rather than challenging flawed assumptions.

This credibility and mimicry of human approval could spur more patients to forgo doctor visits altogether, or to pressure physicians into authorizing unsupported tests or treatments recommended by a chatbot speaking in an authoritative tone. "I can't imagine any technology with this much potential influence not having some kind of safety guardrails in the healthcare space," said emergency physician Dr. Jeremy Faust of Boston's Brigham and Women's Hospital, who argues ChatGPT should state unambiguously that disproven "miracle cures," like those endorsed during the Trump administration, are ineffective rather than lend them legitimacy by labeling them "controversial."

Potential solutions could involve direct government regulation to curb health misinformation from AI chatbots, or an industry-led approach establishing robustly enforced transparency and accuracy standards for medical advice capabilities, like the model Anderson's coalition envisions. But complex questions remain around what guidelines should look like, which data sources qualify as trustworthy enough for training these rapidly evolving language models, how to balance patient safety priorities with free speech principles, and whether a fragmented approach would simply push patients toward unregulated chatbots on the open internet.

As the thorny debates over governing chatbot healthcare advice continue, physicians emphasize their crucial role in helping patients navigate the avalanche of information - some potentially useful, but much of it unreliable or outright dangerous pseudoscience and amateur suggestions. "Part of the conversation around responsible AI use in healthcare that has frankly gone under-discussed is how we rapidly need to upskill and train medical providers to identify and counteract chatbot misinformation that could increasingly spread among patient populations," Anderson said. "Having doctors and nurses who can authoritatively contextualize a patient's concerning chatbot printout, or validate accurate information while firmly rejecting unsubstantiated claims, will only become more vital." Faust summarized the frontline responsibility simply: "That's the job of healthcare workers now - to help patients navigate this new world through the haze of AI-generated data overload."


