Steps to successful and ethical AI adoption in healthcare

November 7, 2023

In this opinion piece, Katie King (AI in Business; London, UK) explores the challenges and rewards of AI adoption journeys in healthcare. Katie argues that although AI has enormous potential within medicine, it is imperative to strike a balance between reliance on AI and human expertise in order to reap the benefits of AI without causing harm.

Often the hardest step in an organization’s AI adoption journey is the first one. It can be challenging to know where to begin, who to turn to, and how to get it right. In healthcare and medicine, the stakes are particularly high. Inaccurate AI decisions can have harmful outcomes, and the vast amounts of sensitive data needed to train healthcare-focused algorithms must be handled safely and securely to ensure patient privacy is maintained. However, the benefits of using AI in this industry far outweigh the risks, with the technology already contributing to massive breakthroughs in drug discovery, disease diagnosis, patient care and healthcare administration. So how do you strike the right balance to ensure you are reaping the benefits of AI, but doing so without causing harm? What ethical considerations must be made?

Getting started with AI

Every AI adoption needs to begin with a problem to solve. No organization can afford to indulge in vanity AI projects, so it is essential to be clear on what exactly you are looking to achieve by adopting the technology. The goal can be hyper-specific to the needs of the organization, or it can be something broader. In healthcare, this might include streamlining a specific process, helping alleviate some of the pressure on busy staff, providing a higher quality of care, improving diagnostic accuracy, and so on. Numerous AI projects fail due to an absence of concrete goals, which is why strategy needs to be the first step in the adoption journey. Knowing what problems you are looking to solve will set the tone for your entire adoption journey and help to create alignment around your specific goals, and all subsequent decisions can be made with those objectives in mind.

However, AI is not a magic button you can push to generate instant results. These projects require dedicated time, realistic expectations of results, and a commitment to short-, medium- and long-term planning. It can be difficult to estimate the return on investment and likely duration of an AI project, which can pose barriers to generating support and securing investment and buy-in. A firm dose of realism is necessary: AI is capable of incredible things, but beware of inflated expectations. Widespread AI adoption is an iterative process, with productivity and efficiency benefits appearing at different stages in the journey. Time and patience are essential, with incremental gains leading to more major benefits.

Because of the benefits it can bring, a common misconception about AI is that it is always expensive; in reality, the cost depends largely on the scale and nature of the solution you need. The vendors and experts you turn to can also make or break your efforts. With so many players in the AI space, it can be difficult to know who to turn to. When considering tools and vendors, you will need to keep the strategy you set earlier in mind. The best implementation plans and services for your organization will be based on your needs, and what works for another business may not work for yours. Some cases may simply require a subscription to an existing service, others may be solved with a one-off expense, and more complex projects might require an in-depth solution such as the creation of bespoke tools.
These factors, combined with the vendors you entrust to deliver them, will determine what the project will ultimately cost.

Who you trust externally is important, but even more crucial is the support you can generate internally. Adopting AI requires an effective, collaborative effort between numerous departments, and introducing new technology will alter ways of working that, in many cases, have long been established as best practice. This will likely cause discomfort and potential resistance as your team is asked to stray from their day-to-day routines. Successful AI adoption will require reshaping organizational cultures and mindsets. You may find that reskilling or upskilling your staff is necessary to ensure that the human-technology workforce partnership is successful. Clear communication of goals, objectives and expectations will make your people feel like active participants in this new chapter of your business.

It is also important to move beyond the hype and understand the role data plays in creating effective AI tools that can generate the desired benefits, especially in a healthcare setting. AI is wholly reliant on data to function: it learns from the information it is fed so that it knows what to look for and what behaviors to replicate. In healthcare, this will likely require the use of patient data so that the AI can learn about certain conditions, symptoms, treatments, and so on. This information is mission-critical for ensuring the most accurate AI outcomes possible, but consideration must be paid to how the data is used and stored, and compliance must be upheld.

Key ethical considerations

Data privacy and security are of course major concerns for AI adoption in healthcare, but attention must also be paid to other potential ethical threats. The first of these is bias. While AI is not inherently biased as a technology, it can produce biased outcomes if trained or used improperly. This often happens as a result of the data the AI is trained on: if that data is skewed in any way, the system will produce results that reflect and reinforce the skew.

Take the use of AI for diagnosing skin cancer, for example. AI models are typically trained on large datasets of skin images to help detect abnormalities that may indicate the presence of this condition. If these datasets are not adequately diverse and do not include images of skin conditions across a range of skin tones, certain racial and ethnic groups will be underrepresented and the model may perform less accurately when diagnosing skin conditions in people with darker skin tones. And since AI systems learn from historical data, if historical medical records contain disparities in the diagnosis and documentation of skin conditions across different demographic groups, these biases risk being perpetuated by AI.

That said, bias is not the only potential source of misdiagnosis. AI is an incredibly intelligent technology, but not a perfect one: it can at times get it wrong. In healthcare, an incorrect diagnosis can truly be a matter of life or death. A false positive or negative will affect the treatment plan created, the drugs administered, and the patient’s prognosis. Any of these being incorrect can worsen a patient’s condition, result in treatment for a condition they do not have, waste valuable time needed to treat them properly, and cause undue stress or negative mental health outcomes.
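One practical way to catch the kind of dataset bias described above before a tool reaches patients is to audit a model’s performance separately for each demographic group. The sketch below is a minimal illustration only, not a prescribed method: it assumes hypothetical arrays of predictions, ground-truth labels and skin-tone annotations (for example, grouped Fitzpatrick types) taken from a held-out test set.

```python
# Illustrative sketch: compare a skin-lesion classifier's accuracy across
# skin-tone groups. All data here is hypothetical; in practice the values
# would come from a held-out test set with demographic metadata.
from collections import defaultdict

def accuracy_by_group(predictions, labels, skin_types):
    """Return accuracy per skin-tone group (e.g. grouped Fitzpatrick types)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, skin_types):
        total[group] += 1
        correct[group] += int(pred == label)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical test-set results: 1 = malignant, 0 = benign
predictions = [1, 0, 1, 0, 0, 0, 1, 0]
labels      = [1, 0, 1, 0, 1, 0, 0, 0]
skin_types  = ["I-II", "I-II", "I-II", "I-II", "V-VI", "V-VI", "V-VI", "V-VI"]

per_group = accuracy_by_group(predictions, labels, skin_types)
gap = max(per_group.values()) - min(per_group.values())

print(per_group)                    # e.g. {'I-II': 1.0, 'V-VI': 0.5}
print(f"accuracy gap: {gap:.2f}")   # a large gap flags a potential bias problem
```

A persistent accuracy gap between groups in an audit like this would be a signal to diversify the training data or rework the model before deployment, and to keep clinicians reviewing its outputs in the meantime.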
Entrusting AI with these critical healthcare practices also creates the potential for placing too much faith in the technology. An overreliance on AI may lead to a reduction in critical thinking and clinical skills among healthcare providers, who may lean too heavily on AI recommendations rather than applying their own professional judgment. Should this happen, biased outcomes and misdiagnoses could slip through the cracks and produce the negative effects described above.

The key to avoiding ethical failures in AI is to keep human intelligence in the loop to provide oversight. This technology is incredible, but not infallible. It can and will make mistakes. It is not all-knowing or all-powerful, and it does not have the contextual knowledge that we have as people. Partnering human intelligence with technology is the best way to ensure that the desired outcomes are reached.

In conclusion, the successful introduction of AI into the healthcare domain requires a pragmatic approach, careful selection of tools and vendors, and internal collaboration to navigate a changing landscape. In the realm of healthcare, where the consequences of misdiagnosis are profound, it is imperative to strike a balance between reliance on AI and human expertise. The path forward lies in leveraging AI’s strengths while ensuring that human intelligence continues to provide oversight, aligning innovation with ethical healthcare practices.

Disclaimer: The opinions expressed in this feature are those of the author and do not necessarily reflect the views of Future Medicine AI Hub or Future Science Group.