July 2023 | Body & Mind

Is generative AI a reliable tool for medical self-diagnosis?

Artificial intelligence (AI) is the acronym on everyone's lips right now, as its advances reach everyday life through tools such as ChatGPT. One use often talked about is in the world of health and self-diagnosis. We take a look at how it can be used, and the pros and cons involved...

Artificial intelligence (AI) is rapidly taking the world by storm. The technology has its origins in the 1950s, but with advances in computing power and the huge volumes of data now available, its development has accelerated dramatically in recent years. AI has already driven radical advances across many industries, and there remains great potential for more in the near future. One industry in which AI's potential is being examined particularly closely is healthcare – generative AI tools, such as ChatGPT, are capable of supporting clinical decisions, translating medical jargon and summarising drug information.

The most intriguing potential ability of generative AI in healthcare is its capacity to diagnose disease, specifically as a tool for self-diagnosis. Self-diagnosis has been a rising trend over the past few years, as people's primary access point for healthcare information has shifted from professionals to the internet1. The internet provides fast, easily accessible and free information at a time when 40% of people struggle to get an appointment quickly when sick (according to respondents of the Cigna Healthcare 360 Well-being Survey 2023), so it's no surprise this shift is occurring. It's important to note, however, that a medical diagnosis is the conclusion of an assessment by a medical professional, and people should not rely on what is often inaccurate or incomplete information from the internet2.

With ChatGPT having remarkably passed the US medical licensing exam3, generative AI tools have been seen as an antidote to this issue, providing reliable diagnoses at the push of a button. But while the performance of generative AI may be impressive, it is by no means infallible – in real-world medical situations, its efficacy today is still in serious doubt4. The technology has the potential to deliver huge benefits for users if best practices for leveraging AI responsibly are followed. Still, it is important to recognise that there are serious limitations to its use for self-diagnosis.


Potential benefits of using generative AI for medical self-diagnosis

Five key potential benefits:

● Accessibility

● Triaging

● Enhancing patient engagement and health literacy

● Anonymity

● Cost-effectiveness

Accessibility is arguably the strongest benefit of generative AI, providing individuals with quick and convenient access to medical information. This could be particularly useful in areas where access to healthcare is limited. A study by the World Health Organization (WHO) revealed that over 40% of the global population has limited access to essential healthcare services5 – providing access to generative AI could be a more efficient and cost-effective solution. "Access to care continues to be one of the social and environmental determinants that affects individuals globally, which can increase the risk of negative outcomes," says Pamela Berger, Clinical Program Director at Cigna Healthcare. "The integration of provider and ChatGPT, which allows providers and patients to automate access and treatment by scheduling, decision support and remote patient monitoring, could improve patient access and support with their physician and with managing their own care."

With hospitals in many countries struggling with low doctor-to-patient ratios, another key advantage of generative AI could be its use in triaging, i.e. determining when medical attention is necessary. This function can help to free up scarce clinical resources, allowing healthcare providers to focus on more critical cases and helping patients avoid unnecessary medical costs. Generative AI has been shown to be highly effective in this area, with some studies suggesting it could be even more accurate than human doctors6.

By providing medical information in an easily understandable format, generative AI could help empower patients to take charge of their health and make informed decisions. "Health literacy has been connected to having better health and lower health risks," says Berger. "Personal responsibility for health education may also allow for individuals to effectively apply the health information in addition to just reading and understanding it."

Anonymity addresses a related issue: when it comes to health, patients often hesitate to discuss sensitive topics due to privacy concerns, whereas with generative AI, information can be accessed anonymously. Combined with improved health literacy, this not only benefits individuals but can also help to reduce the load on medical professionals.

A final and significant benefit of generative AI tools is their potential to substantially reduce medical costs for patients and healthcare providers – AI applications in healthcare could save the US healthcare economy up to $150 billion annually by 2026, according to estimates from Accenture7.

Limitations of generative AI for medical self-diagnosis

Generative AI has great potential, but there are limitations:

● Inaccurate information

● Misinterpretation of information

● Risk of ignoring medical advice

● Ethical concerns

The key practical issue with generative AI in self-diagnosis is that it may provide false information. AI draws on the entire internet for answers to questions; however, some sources are inaccurate, and information is often misinterpreted – indeed, a study published in JAMA Internal Medicine found that only 34% of health-related online searches resulted in an accurate self-diagnosis. For example, an American emergency medicine physician recently gave an account of asking ChatGPT for the possible diagnoses of a young woman with lower abdominal pain. The machine gave numerous credible diagnoses, such as appendicitis and ovarian cyst problems, but it missed ectopic pregnancy8.

Furthermore, in an ever-changing research environment, it takes time for AI responses to reflect new knowledge and information. Information can also be inaccurate if it is not personalised: every patient has a unique medical history and needs, yet AI tools have no access to family history, general medical history, medication usage, diet, weight, height and other lifestyle factors unless they are entered9.

As such, relying solely on generative AI tools is unwise and unsafe. They should be used as complementary – an additional source of information, not a replacement for medical advice from a professional. While AI thrives as a means of accessing general information and simplifying it into a consumable form, it should not be treated as an infallible resource.

Beyond these practical concerns, the issues that lie at the heart of the AI debate are ethical. Accountability, data privacy and bias are significant ethical concerns in AI-driven healthcare, explains Dr Michael Aratow, Co-Founder and Chief Medical Officer of Ellipsis Health, a company developing AI-powered vocal biomarker technology for mental health. These ethical issues often stem from the quality of the training database. "A machine learning model only produces an AI product as good as the database the model was trained upon. Time and again, inaccuracies, 'hallucinations', and biased data have been discovered in different AI products due to the poor curation of the database. Simply put, the database must reflect the target population if the model is to provide a valid prediction for that population," he states.

Ensuring that the databases used to train algorithms represent all ethnicities and races in a population is essential to avoiding bias. A study in Science revealed that some AI algorithms currently used in healthcare have demonstrated racial bias, which risks further exacerbating existing health disparities10.

"This leads to the question of accountability and transparency in the composition of training data. It's paramount to identifying gaps that can lead to bias," continues Aratow. "Without revealing IP (Intellectual Property), companies can report on the demographics of the training data to demonstrate the adequacy of their products to a target population. This practice also forces the companies to look inward at themselves to guide the creation of their training database in a responsible way to avoid disparity and create awareness of the challenge."

Aratow reflects that history has shown companies cannot always be trusted when the incentive for profit is not aligned with incentives for consumer protection.

"To that end, federal regulation will be necessary on the road to AI accountability," he argues. "While NGOs like the EFF (Electronic Frontier Foundation) can provide guidance, exposure and suggested policy, it is the government that has the resources and influence for effective enforcement."

Aratow further advises that the issue of privacy must be a fundamental consideration for both providers and end users of generative AI.

"The last mile for privacy in this digital age will always be with the end user. Being an aware digital citizen means educating oneself on the type of generative AI product one uses: Is it publicly facing or private? Does the data a user enters have the possibility of being sold to third parties? Are the prompts they enter revealing the IP of their employer, colleague, or client? The amount of data collected on end users now will pale in comparison to what is possible to be collected with generative AI products, so the more checks and balances that can be erected to preserve privacy, the better."


Best practices for using generative AI in medical self-diagnosis 

1. Verify information with trusted medical sources. Users should cross-check advice from generative AI tools with reputable sources like the CDC, WHO, or peer-reviewed medical journals. 

2. Recognise the limitations of AI in healthcare. Understand that AI is not infallible and should be used as a supplementary tool rather than a replacement for clinical advice. 

3. Consult a healthcare professional for definitive diagnoses and treatment plans. Always speak to a professional before making any health decisions based on AI advice.

4. Integrate AI into existing telemedicine platforms. By incorporating AI into telemedicine platforms, healthcare providers can offer enhanced support and guidance to patients, ensuring they receive accurate and personalised care. 

5. Consider the ethical implications of AI-driven healthcare. Developers, healthcare providers, and users should be aware of the potential ethical concerns surrounding AI-driven healthcare and work to address issues like accountability, data privacy, and bias.

The future of AI-driven medical self-diagnosis

As AI continues to evolve, it will inevitably play a central role in healthcare. There are already large language models (LLMs), such as Med-PaLM, developed specifically for clinical and medical purposes, and the global 'AI in healthcare' market is projected to reach $45.2 billion by 2026, according to a report by MarketsandMarkets11.

"The impacts on healthcare typically start with low-hanging fruit, such as summarisation of clinical encounters and chatbots to handle administrative operations," says Aratow. "But we’re already seeing it progress to more advanced applications like guidance to clinicians on diagnosis and treatment, and eventually we’ll see things like smart avatars that bridge encounters with human providers."

Aratow and Berger both recognise there are many different areas in which AI could help improve public health outcomes, from enhancing patient engagement and health literacy to increasing access to care. AI-powered apps like Ada Health and Babylon Health have already helped millions of users worldwide manage their health more effectively.

"AI will give significant empowerment to the patient by providing health education

that is not hidden behind esoteric search functions or the dizzying array of health information sources on the internet," he says. "A more informed public, as the result of disseminating best practices for healthy behaviours personalised to the individual, is possible, which will lead to decreased morbidity and mortality."

Ultimately, at the centre of this discussion is how healthcare providers and AI developers can work together to develop systems that produce optimal outcomes. "Healthcare providers must ensure that the training data for any AI product is legitimate, clinically validated, and creates models which output information that leads to the desired outcome for the targeted users: this is not the knowledge base, competence, or purview of the typical AI developer," says Aratow.

"Likewise, the AI developer must train the medical community on the mechanics, capabilities, and potential of AI so that with this understanding, this community will facilitate its development through such things as access to valuable training data, technology pilot opportunities within the clinical environment, and partnerships for validation studies."

There is no doubt that generative AI is revolutionising the medical industry – new data from Accenture suggests that 40% of all working hours in healthcare could soon be supported or augmented by language-based AI12. For medical self-diagnosis specifically, AI offers numerous benefits, but it is vital to recognise the current limitations and ethical concerns associated with AI-driven healthcare. The aim should be to harness the power of AI rather than to rely on it entirely – by layering AI into our healthcare systems and harmonising the technology with humans, providers can gain access to deep insights that help inform care and ensure responsible use. As AI continues to advance, the future of medical self-diagnosis will likely involve even greater collaboration between AI developers and healthcare providers, but for now generative AI is still in the early stages of realising its potential.

This information is for educational purposes only. It is not medical advice. Your use of this information is at your sole risk. 

 


1 Davies, J. E. (2022). 'The Appeal, and the Peril, of Self-Diagnosis'. Psychology Today. Retrieved from: https://www.psychologytoday.com/gb/blog/our-new-discontents/202209/the-appeal-and-the-peril-self-diagnosis

2 Suarez-Lledo V, Alvarez-Galvez J. (2021). 'Prevalence of Health Misinformation on Social Media: Systematic Review'. Journal of Medical Internet Research, Vol. 23(1).

3 Kennedy, S. (2023). 'ChatGPT Passes US Medical Licensing Exam Without Clinician Input'. Health IT Analytics. Retrieved from: https://healthitanalytics.com/news/chatgpt-passes-us-medical-licensing-exam-without-clinician-input

4 Tamayo-Sarver, J. (2023). 'I'm an ER doctor: Here's what I found when I asked ChatGPT to diagnose my patients'. Fast Company. Retrieved from: https://www.fastcompany.com/90863983/chatgpt-medical-diagnosis-emergency-room

5 World Health Organization. (2016). 'Health workforce requirements for universal health coverage and the Sustainable Development Goals'. Human Resources for Health Observer, Vol. 17. Retrieved from: https://apps.who.int/iris/handle/10665/250330

6 Baker, A. et al. (2020). 'A Comparison of Artificial Intelligence and Human Doctors for the Purpose of Triage and Diagnosis'. Frontiers in Artificial Intelligence, Vol. 3.

7 Accenture. (2018). 'Artificial Intelligence: Healthcare's New Nervous System'. Retrieved from: https://www.accenture.com/_acnmedia/PDF-49/Accenture-Health-Artificial-Intelligence.pdf

8 See entry 4    

9 Hui, A. (2023). 'How You Should—and Shouldn't—Use ChatGPT for Medical Advice'. Yahoo News. Retrieved from: https://news.yahoo.com/shouldnt-chatgpt-medical-advice-185441804.html

10 Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). 'Dissecting racial bias in an algorithm used to manage the health of populations'. Science, Vol. 366(6464), pp. 447-453.

11 MarketsandMarkets. (2021). 'Artificial Intelligence in Healthcare Market by Offering (Hardware, Software, Services), Technology (Machine Learning, NLP, Context-Aware Computing, Computer Vision), End-Use Application, End User, and Geography - Global Forecast to 2026'. Retrieved from: https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-healthcare-market-54679303.html

12 Daugherty, P., Ghosh, B., Narain, K., Guan, L., & Wilson, J. (2023). 'A new era of generative AI for everyone'. Accenture. Retrieved from: https://www.accenture.com/us-en/insights/technology/generative-ai?c=acn_glb_largelanguagemomediarelations_13427684&n=mrl_0323
