Ask the experts: the medical potential of ChatGPT
In this ‘Ask the Experts’ feature, we spoke to a panel of experts working across the field of AI in healthcare to gain insight into their current perspectives on the medical potential of ChatGPT. We discuss the main barriers preventing the widespread adoption of ChatGPT into healthcare, the ethical concerns of using ChatGPT in medicine and whether our experts think any areas of medicine will come to routinely use ChatGPT in the future. Discover more about this topic from our experts: Daniela Haluza (Medical University of Vienna; Austria), David Jungwirth (HCL Technologies; Vienna, Austria), Brandon Allgood (Allgood Consulting; WA, USA) and James Somauroo (SomX; London, UK).
- What is the difference between a chatbot and AI?
 - In your opinion, why do you think ChatGPT was created?
 - Do you think ChatGPT can or will impact medical school admissions?
 - What are the main barriers preventing the widespread adoption of ChatGPT into medicine/healthcare?
 - What do you think the biggest dangers/ethical concerns are to using ChatGPT in medicine?
 - What types of machine learning/AI assistant applications are currently used within healthcare/medicine?
 - Are there any areas within medicine that you think will come to routinely use ChatGPT in the future?
 - What do you think about ChatGPT being used as a chatbot in a doctor’s office or GP surgery?
 - Is there anything that ChatGPT would enable you to do that you couldn’t do previously?
 - How do you think ChatGPT will expand to affect medicine in the next 5 years?
 - What do you think the full medical potential of ChatGPT is?
 
What is the difference between a chatbot and AI?
Daniela Haluza (DH): Well, while chatbots can be a type of AI, they are not the same thing. Chatbots typically use simple AI techniques like rule-based systems or pattern recognition, whereas more complex AI systems can be used for more sophisticated applications, such as self-driving cars or medical diagnosis.
David Jungwirth (DJ): A chatbot is the interface to something – and we all know the frustration when legacy chatbots from earlier days did not understand what we were typing and responded meaninglessly. The new generation of chatbots is different; they really “understand” us. They leverage complex AI technologies instead of simple rule-based systems. Today’s chatbots – with all their underlying AI capabilities – can be considered AIs themselves.
Brandon Allgood (BA): AI is the study of intelligent agents. An intelligent agent is anything that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance by learning or acquiring knowledge. A chatbot is an example of an intelligent agent. ML is a subfield of AI in which the agent contains a model that was not directly programmed by a human but was constructed by an ML program through the process of learning patterns from data presented to it during training. A chatbot may or may not have ML components. Early chatbots did not; ChatGPT does, namely the GPT-3 large language model (LLM).
James Somauroo (JS): A traditional chatbot is a computer program designed to simulate conversation with human users, typically using pre-programmed responses to answer questions or provide information. Most of us will encounter chatbots on a daily basis - our banks, electricity and mobile phone providers will all have customer service chatbots on their website to help customers navigate common problems.
AI chatbots such as ChatGPT, on the other hand, are built around a language model that is trained to generate human-like responses to text-based inputs. Rather than using pre-programmed responses, ChatGPT generates responses based on its understanding of language and the context of the input it receives. This means that ChatGPT and other AI conversation bots can provide more natural and flexible responses that can vary based on the specific input they receive.
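To make the contrast concrete, here is a minimal sketch of the traditional, pre-programmed kind of chatbot described above. The keyword rules and canned replies are purely illustrative assumptions, not any real product's logic; the point is that every possible response is written in advance, unlike a language model that generates text from context.

```python
# A toy rule-based chatbot: each keyword maps to one pre-programmed reply.
# Anything that matches no rule gets a fixed fallback - the classic source
# of the "did not understand what we were typing" frustration.

RULES = {
    "balance": "You can check your balance in the app under 'Accounts'.",
    "appointment": "To book an appointment, please visit our website.",
    "opening hours": "We are open Monday to Friday, 9am to 5pm.",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"


def rule_based_reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return FALLBACK
```

Because the response set is fixed, such a bot can never vary its answer with context; a model like ChatGPT instead produces a new response for each input.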
In your opinion, why do you think ChatGPT was created?
DJ: The launch of ChatGPT was a revolution – it democratized access to AI technology for everybody. While AIs were previously “hidden” and mainly available to technicians on a command line – ChatGPT can simply be used by anyone. 
DH: I think the time was just right for an easy-to-use chatbot that can help people communicate more effectively with machines and provide them with intelligent assistance in the form of human-like responses.
BA: Honestly, there is currently an arms race happening in the field of large language and large image models. GPT-3, and consequently ChatGPT, were created to push the boundaries of LLMs and expose them to the broader public. OpenAI’s CEO openly admitted that they did not have a monetization plan when developing and launching ChatGPT and GPT-3, but that given the compute costs (~$100,000 per day) they would quickly need to figure one out.
Do you think ChatGPT can or will impact medical school admissions?
BA: This is a bit out of my field, but my guess is no. People have shown that when presented with medical questions, ChatGPT returns factual answers, but ones that do not yet show a deep enough understanding of the concepts and use language geared toward a lay audience.
What are the main barriers preventing the widespread adoption of ChatGPT into medicine/healthcare?
JS: ChatGPT and AI language models are still very new phenomena; many of us in the health and medical world are busy trying to get our heads around the strengths, weaknesses, opportunities and limitations of the technology.
Before any new technology or tool is adopted into healthcare, it must first undergo extensive testing and validation. Of course, the same rules apply to ChatGPT - we need to understand how accurate it is and how it can be optimized and used most effectively. This will take a good deal of time and effort. Another barrier is the need for healthcare professionals to be trained on how to integrate ChatGPT into their workflows, and how to use it in conjunction with other clinical tools and resources to improve efficiency.
Currently, ChatGPT does not have access to the most up-to-date health research and data. This must be resolved before it can be of real use in a clinical capacity and restricts it to applications in process optimization. Anyone using it anywhere near healthcare must consider the question: is this a medical device? And must seek regulatory approval accordingly.
Although there are obstacles to navigate before ChatGPT is used by every clinician, having witnessed just a handful of the ways that AI language models can support productivity and save time, their adoption into healthcare systems may only be a matter of time.
DH: Data privacy and security concerns are among the most obvious barriers for adopting ChatGPT in healthcare. Also, developing and implementing AI systems in healthcare is expensive, requiring significant investment in hardware, software and expertise. The potential benefits of AI in healthcare will need to be weighed against the costs of implementation and ongoing maintenance, let alone the climate change-related considerations regarding the energy demand of servers.
DJ: In the mid-term at the latest, there is simply no way around it. Microsoft, one of the owners of ChatGPT, has already announced a new chat interface to all its MS Office products. It will not only write poems or answer intelligently, but will have access to your specific training data, perform tasks in your presentations, schedule patients’ appointments and send reminders on compliant medicine intake. Nevertheless, even today, AI is an integral part of many curative care applications, such as specific cancer detection.
BA: There are two barriers. The first, as I stated above, is that the model’s handling of scientific and medical language, especially written language, is less precise, more colloquial and does not yet exhibit a deep enough understanding of the subject matter. The second is the fact that scientific and medical language uses a different vocabulary and word corpus than the training set used to train GPT-3. This means that while training, the model was taught to give less weight to less common words. The training set was pulled from the web at large and other datasets, which do include scientific and medical texts, but these were a small minority, so the model did not pay as much attention to them. Previous efforts, like Microsoft’s BioGPT, which is based on the GPT-2 architecture, show that LLMs trained specifically on medical and scientific text outperform more generic LLMs when it comes to understanding and generating scientifically correct texts.
What do you think the biggest dangers/ethical concerns are to using ChatGPT in medicine?
DH: As we all know, medical data is highly sensitive and subject to strict regulations in regard to privacy and security. I believe that it is important to note that any AI system used in healthcare must comply with these regulations and ensure the security of sensitive patient data.
DJ: Ethically, AI systems typically reproduce inequalities and increase discrimination. Ethics committees will become more important than ever.
BA: One danger is that you do not know when your question is out-of-scope and when ChatGPT ‘hallucinates.’ ChatGPT will always return an answer. If you ask it a question it cannot answer, it will still attempt to answer it, even if the response is completely fantastical, and it gives no indication of confidence. To the layperson, and often even to experts, these fantastical responses can be hard to identify, and false information will be generated. The second danger has to do with my previous response about why companies keep building LLMs like GPT-3 and the awaited GPT-4. It is an arms race, and the larger the training set, the bigger the splash. Because of this push to ingest data for data’s sake, without concern for the accuracy and moral quality of the data, these LLMs have been shown to exhibit morally questionable behaviors, such as racism and sexism.
JS: Concerns have been expressed regarding ChatGPT’s risk to patient confidentiality and data security if used in medical settings. We would need to put in place really good encryption, stringent data breach prevention measures, and strict rules about who could access the chat logs. Other concerns that would need to be addressed include the regulatory and liability grey area: who would ultimately be responsible for the outcomes of the use of ChatGPT for medical purposes? Would insurers cover this? Until CE-marked or UKCA-marked, it cannot be used as a medical device, i.e. in any diagnostic or treatment capacity. Of course there is also the issue of accuracy. We know that ChatGPT relies on vast amounts of data to generate its responses, but if this data is biased or incorrect then the responses will be too. Anyone who thinks that ChatGPT is set to ‘replace’ doctors is very much jumping the gun!
What types of machine learning/AI assistant applications are currently used within healthcare/medicine?
BA: Chatbots are being used more and more for patient/customer question answering and interactions; these are, however, more restricted and their responses are monitored. Medical imaging assistants are the most widely used, helping specialists such as radiologists and oncologists to understand and interpret radiological images. There is also early use of recommender solutions that help doctors and nurses choose the next action to take with a patient; for example, sepsis prediction and treatment systems are in early use.
DH: Machine learning and AI assistants are currently used for data intensive tasks such as for medical image analysis in pathology and radiology, as virtual nursing assistants and for personalized treatment recommendations.
DJ: AI and machine learning have been researched and used in many fields of curative care in recent years. The next big improvements will come in the field of preventive care – predictive back-pain prevention, specialized fitness apps and individualized disease prevention are on the rise.
Are there any areas within medicine that you think will come to routinely use ChatGPT in the future?
DH: Generally speaking, I see a high potential for ChatGPT use in assisting patients with medication management and helping patients navigate their healthcare journey. Also, ChatGPT could be used to help researchers analyze large datasets and identify patterns in medical data, which could lead to new discoveries and improved patient care, hopefully. 
JS: I could see a world where ChatGPT (or a similar language model) was integrated into the system for responding to NHS 111 calls in the UK. Currently, when patients ring 111 – a non-urgent line – they have to go through a script with human call handlers before being directed to the right treatment option. This whole process could be accelerated if they simply typed their problem into a platform like ChatGPT and then received instant information or follow-up questions using the same, or similar, scripts. That said, the ability to pick up the nuance of a human-to-human conversation, such as identifying an unspoken emergency (and other similar use-cases), would have to be considered. I could see a ChatGPT-style model being used to triage a specific group of conditions really well, with all known red and amber flags built in to escalate safely.
The technology could also be used to help patients complete routine questionnaires like the Generalized Anxiety Disorder Assessment (GAD-7) and Patient Health Questionnaire (PHQ-9), though, taking those as examples, the value is still in the interpretation by a physician; particularly here, as those questionnaires relate to mental health, where someone who knew and understood the patient might read beyond the data inputs. If asked to read the thousands of medical research papers produced every year, ChatGPT-type technology could (and I believe is starting to) play a valuable role in helping researchers ‘join the dots’ of new information in a way that no single human brain ever could. The insights could pave the way for faster breakthroughs and more connected thinking.
DJ: I think triaging patients, booking appointments, encouraging and ensuring patient compliance, documenting patient history and complaints before seeing a doctor, and even creating a first draft of diagnosis could all be handled by conversational AIs routinely.
BA: There will likely be the use of ChatGPT in medicine when communicating with lay people, like patients, or other researchers outside of your specific field. I think that vetted medical-specific versions of ChatGPT will be required for real penetration. Once this happens these tools will be used to perform mundane tasks, such as answering background and simple research questions, thus further freeing the doctor to focus on patient care and the researcher to focus on deep research.
What do you think about ChatGPT being used as a chatbot in a doctor’s office or GP surgery?
JS: ChatGPT could be integrated into appointment booking systems or online chatbots to help provide patients with bespoke self-care information and direct them to the most appropriate source of care. This could help efforts to reduce demand on GP appointments and increase the utilization of community healthcare services. As we have talked about a lot here, to expect it to help clinically requires a lot to be done on the regulatory side before we can get too excited. 
DJ: Today, before anybody receives GP treatment, a patient history and complaints form has to be filled in. While this was done with pen and paper in the past, it is already handled via digital forms in many practices. Conversational AIs can help to simplify and intelligently reduce the number of required questions. The majority of administrative tasks could be handled via chatbots in the future.
DH: For me, it is important to note that chatbots should not replace human healthcare professionals. They should rather be used as a complementary tool to support their everyday work. Chatbots could be particularly useful in situations where patients need quick and convenient access to medical advice, or where healthcare professionals are stretched thin and need additional support.
Is there anything that ChatGPT would enable you to do that you could not do previously?
BA: In certain areas it can help me be more efficient (like answering background and definition-type questions instead of searching on the web). But it does not help me do anything I could not do before.
DH: We have recently researched the use of ChatGPT in science. This is a new and timely exercise given the current research momentum.
DJ: AI can control and organize all my documents, automatically analyze my Excel lists and documents and create new product launch suggestions for digital initiatives. Still, I am not convinced that AI-based supervision of all my data, tasks and work is what I was looking for…
How do you think ChatGPT will expand to affect medicine in the next 5 years?
DH: I anticipate that ChatGPT will become more integrated into the healthcare system and used more frequently to assist with routine and data-based medical tasks. With advancements in natural language processing and machine learning, ChatGPT will become even better at understanding human language and providing accurate medical advice.
DJ: Healthcare has been a slow adopter of technical advances in recent years. Today, preventive aspects such as fitness and well-being are fast-growing markets, and we will see many advances and new apps over the next 5 years.
What do you think the full medical potential of ChatGPT is?
BA: In response to the last two questions, I think that ChatGPT and its future derivatives will allow doctors and researchers to focus on things like patient care and deep research. In many ways, it is like much of the technology we have today. It will help us be efficient in mundane tasks, allowing us to spend more time on creative and more forward-looking tasks.
DJ: Natural language processing and conversational interfaces in strong AIs will be present in all areas of healthcare, both curative and preventive care. May the future be with us – it has started already.
DH: This is a good question. I would say that, to date, the full medical potential of ChatGPT is still being explored and will continue to expand. We are doing our best to keep up with the enormous speed of advances in AI with our basic research in this field.
Meet the experts:

James Somauroo: I am a co-founder and CEO of SomX (London, UK), host of The Healthtech Podcast, editor-in-chief of Healthtech Pigeon (London, UK), and a healthtech contributor for Forbes (NJ, USA). I trained as an anesthetics and ICU doctor before taking on roles in policy and innovation at NHS England and Health Education England and later directing the DigitalHealth.London Accelerator. I co-founded SomX, a full-service communications group dedicated to elevating healthtech and biotech, with Jessica Smith in 2020.

Brandon Allgood: I am a serial entrepreneur focused on applying machine learning (ML) and large-scale computational methods to improve human health. My former companies, Valo Health (MA, USA), Numerate (CA, USA), and Pharmix (CA, USA) are and were groundbreaking companies at the forefront of applying modern ML and computational methods to drug discovery and development in diverse subfields, including chemistry, biology, clinical trials and real-world health data. I received a B.S. in Physics from the University of Washington (WA, USA), and a Ph.D. in Theoretical Cosmology from the University of California (CA, USA). I have authored scientific publications in astrophysics, solid-state physics and computational biology and chemistry, and have 18 years of experience in large-scale cloud and distributed computing, AI, and mathematical modeling.

Daniela Haluza: I studied Medicine and Applied Medical Science and currently work as a habilitated associate professor of Public Health at the Medical University of Vienna (Austria). My research focuses on various aspects of Public Health, including telehealth and science communication. Recently, my colleague David Jungwirth, who has a technical background, and I have begun to study the effectiveness of chatbots in research.

David Jungwirth: I have a medical informatics background, have obtained degrees from Vienna University of Technology (Austria), the University of Vienna (Austria) and the University of Salzburg (Austria), and am an alumnus of the MIT Executive and Leadership program (MA, USA). With 15 years of experience in the B2B software industry and as Senior Director Digital Advisory, I help large international organizations transform toward agile ways of working. Alongside business, I conduct scientific research with Daniela Haluza on AI’s impact on aspects of human life.
Disclaimers:
The opinions expressed in this feature are those of the interviewee and do not necessarily reflect the views of Future Medicine AI Hub or Future Science Group.



