What to consider when building novel technology for the lay person
In this opinion piece, Ellington West and Brandon Dottin-Haley (both Sonavi Labs, MD, USA) explore the ways in which we can ensure the successful integration of AI in healthcare. Ellington and Brandon discuss the need for robust data in the clinical validation of AI tools, why we must address algorithmic bias to protect end users, and the importance of defining safety and efficacy standards to increase the transparency of technology development.

There are more than 70,000 startups in the US alone, and many more around the world, developing novel solutions and technologies to address some of the world's most pressing challenges. Since 2017, I have been among those working to bring revolutionary, breakthrough technology to market. The lessons I have learned along this journey, while unearthed in the AI and technology space, can be applied across industries.

Three things need to be in place to ensure the successful adoption of AI-based solutions in healthcare. The first is a development strategy rooted in rigorous evidence generation. The second is a deliberate effort to include as much diversity as possible, both in the development team and in the populations being served, which are ultimately drawn upon to generate that evidence. The third is an effort to create collaborative implementation models. These three keys to a successful rollout are inextricably linked.

When I think about my own journey building Sonavi Labs (MD, USA), I am constantly reminded of the ways in which our team has had to be diligent in order to bring our technology to market. Sonavi Labs is on a mission to improve patient outcomes and remote patient monitoring programs by harnessing the most advanced acoustic technology available, coupled with clinically proven diagnostic AI software. After years of research at Johns Hopkins University (MD, USA), we created the company to deploy Feelix, a remote monitoring platform embedded with clinically validated diagnostic software capable of detecting respiratory diseases and tracking longitudinal trends. The Feelix platform also features proprietary hardware, supportive apps and an integrated cloud platform to provide remote monitoring solutions for patients with chronic diseases such as asthma, COPD, cystic fibrosis and congestive heart failure, among others.

As we have developed the Feelix technology, we have sought clinical partners to ensure that we have robust data sets and data that clearly validate our claims. Our research partners include Johns Hopkins Hospital and the University of Antwerp (Belgium). We have built a global network of research partners because of the need for a tremendous amount of evidence, and to ensure that diversity was a major component of our research.

Many startups make incredible assertions about the capabilities and potential benefits of their products to gain traction and investment. While it is important for companies to be optimistic and forward-thinking, managing the expectations of their partners is an important step in mitigating risk. The industry has been tremendously impacted by companies that have made wild assertions without appropriate evidence to support their claims; the downstream effect is a reduced willingness among investors to take on risk. Companies must invest in developing clear evidence to validate the efficacy of AI, as providers are held accountable for their treatment decisions, which places risk solely on their shoulders.
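To make the call for clear evidence concrete, the short sketch below shows one common way a diagnostic claim can be reported: sensitivity and specificity with 95% Wilson confidence intervals rather than bare percentages. This is a minimal illustration using hypothetical counts, not data from any Feelix study or a description of Sonavi Labs' validation pipeline.

```python
# Minimal sketch: quantifying a diagnostic claim with confidence intervals.
# All counts below are hypothetical, not results from any real study.
import math

def wilson_interval(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if total == 0:
        return (0.0, 0.0)
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (centre - half, centre + half)

# Hypothetical confusion-matrix counts from a validation study.
tp, fn = 180, 20   # disease-positive cases: detected / missed
tn, fp = 450, 50   # disease-negative cases: correctly cleared / false alarms

sens_lo, sens_hi = wilson_interval(tp, tp + fn)
spec_lo, spec_hi = wilson_interval(tn, tn + fp)

print(f"Sensitivity: {tp / (tp + fn):.1%} (95% CI {sens_lo:.1%}-{sens_hi:.1%})")
print(f"Specificity: {tn / (tn + fp):.1%} (95% CI {spec_lo:.1%}-{spec_hi:.1%})")
```

The wider the interval, the weaker the claim; intervals narrow as the validation cohort grows, which is one statistical reason a tremendous amount of evidence matters.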
“When neural networks are used, it is often difficult to understand how a specific prediction was generated, meaning without substantial effort, some AI algorithms are so-called ‘black boxes’,” as noted by Avi Goldfarb and Florenta Teodoridis. As a result, if no one is proactively looking to identify problems with a neural-network-generated algorithm, there is a substantial risk that the AI will produce solutions with flaws that are only discoverable after deployment; for examples, see the work on algorithmic bias. It cannot be overstated how necessary it is for any company delving into the world of innovation to invest in rigorous evidence development.

Further, as innovators begin to develop their products, they must include a wide range of variables and inputs. One such variable is the diversity of the sample population in which the product is tested. Diversity is an effort to make products, processes and people better. One needs a diverse cohort to develop and evaluate products for mass consumption; otherwise, those products have only been developed for the populations in which they were originally tested. Too often, we find solutions are only exposed to homogeneous populations, usually akin to those of the developing team. While that is largely a result of a globally segregated society, future development requires more nuance and more latitudinal and longitudinal thinking. Striving to create a consensus among diverse stakeholders helps to ensure safe and successful implementation. As a company achieves greater diversity among its team members, the likelihood of creating more robust collaborative partnerships also grows.

Healthcare is a team sport. I cannot emphasize this enough. We all must find ways to build collaborations that advance better health outcomes for patients around the world. Each new partnership is a new opportunity to learn, and we must not be afraid of failure. It is expensive to fail, but the lessons are worth it, particularly when regulators stop technologies from entering the market before they fail and cause harm. Startups that build relationships with implementation partners, such as health systems, patient advocacy groups and regulators, along their development journey not only avoid expensive regulatory failures; because regulators are tasked with defining safety and efficacy standards, these collaborative relationships also protect end users from the harms of bias and insufficient evidence.

“By providing guidance to industry on what bias looks like and how to avoid it, regulators can have a transformative effect on the future of algorithms in many fields,” explained Emily Bembeneck, Rebecca Nissan and Ziad Obermeyer. The need for more transparency in technology development is not an effort to cripple competitive advantages, but rather an effort to protect end users from the historic harms perpetuated by the bias that permeates every facet of our society. This is particularly necessary in healthcare, as clinicians increasingly rely on AI models to make diagnostic and treatment decisions that can impact a patient’s life. I am not alone in this thinking: researchers and regulators are becoming more aware of the challenges data bias is causing in the application of certain technologies.
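As one concrete illustration of what checking for algorithmic bias can look like in practice, the sketch below audits a classifier's false-negative and false-positive rates separately for each demographic subgroup. The records, subgroup labels and predictions are hypothetical, and this is one simple fairness check among many, not a complete bias audit.

```python
# Minimal sketch: auditing a model's error rates across demographic subgroups.
# The records, group labels and predictions below are hypothetical.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns {group: (false_negative_rate, false_positive_rate, n)}."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            c["fn"] += (y_pred == 0)   # missed disease-positive case
        else:
            c["neg"] += 1
            c["fp"] += (y_pred == 1)   # false alarm on a healthy case
    return {
        g: (c["fn"] / max(c["pos"], 1), c["fp"] / max(c["neg"], 1), c["pos"] + c["neg"])
        for g, c in counts.items()
    }

# Hypothetical test set: (subgroup, ground truth, model prediction).
records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
           ("B", 1, 0), ("B", 1, 0), ("B", 0, 0)]

for group, (fnr, fpr, n) in sorted(subgroup_error_rates(records).items()):
    print(f"group {group}: FNR={fnr:.0%} FPR={fpr:.0%} (n={n})")
```

A flaw like group B's missed cases here would be invisible in an aggregate accuracy number, and invisible entirely if group B were absent from the test cohort, which is exactly the argument for diverse evaluation populations.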
“In some cases, this will require more algorithmic transparency than certain ML techniques afford, as researchers push to understand the factors that drive ML algorithms to make specific diagnoses or treatment recommendations,” explained Ariel Dora Stern and W Nicholson Price.

In many ways, the consequences of bias can be relatively benign, for example, when darker-skinned people have trouble using automated hand-washing stations; but they can also be detrimental, as when algorithms inaccurately monitor respiratory diseases or incorrectly suggest that patients have cancer.

It is not an easy task to build a novel technology; trust me, I am speaking from first-hand experience. And while I am still on this journey, I have learned a lot from my team, from my partners and from our collective experiences with failure and success. Healthcare is a team sport.