Responsible AI: an interview with Toju Duke
In this interview, Toju Duke, Responsible AI Adviser and Founder of Diverse AI (London, UK), discusses the importance of responsibly developing and deploying AI technologies, and their potential to transform many lives and businesses. Toju explains the most common AI biases and how these could impact standards of healthcare, and offers several recommendations to consider for the responsible development and deployment of AI in healthcare.
Please could you provide a brief overview of your career to date?
I started my career as a secretary for a non-profit organization after completing a bachelor's degree in Sociology and Political Science in Nigeria. It took about a year to realize administrative work was not for me, and I transitioned to a different department in the same organization as a digital media editor, a role I thoroughly enjoyed and stayed in for a few years. A few years later, I moved to the UK to study for a Master's degree, which kicked off my career in marketing. I worked in marketing for a couple of years before joining Google as a specialist on its advertising products. During my 10-year tenure at Google, I navigated several roles, ranging from specialist across a couple of Google's advertising products to EMEA product lead for Google Travel. After my introduction to machine learning and AI a few years ago, I decided to explore a career in the field with a focus on Responsible AI, which led to my transition from Google's sales organization to Google Research, where I worked as a program manager for 2 years. Given the lack of diversity in technology, especially in AI, and my passion to help bridge this gap, I joined Women in AI, a non-profit organization, where I led the UK chapter for a year after being a manager on the Irish chapter from 2020 to 2021. After leading the UK team for a year, I decided to start a non-profit membership organization called "Diverse AI", which is geared towards driving further diversity in the field of AI by supporting, championing and building diverse communities through collaborations, education and research. Due to my passion for the responsible development and deployment of AI technologies, considering their magnitude and potential to transform many lives and businesses, I became a speaker on the topic 3 years ago, and I provide regular keynotes, panel discussions, podcasts and articles on the importance of Responsible AI and Responsible AI frameworks. This led to my book, "Building Responsible AI Algorithms", which will be released in August and will be available in bookstores worldwide. I have recently left Google to pursue my thought-leadership, speaking and advisory career on Responsible AI, as I strongly believe AI is no longer the future but the most profound technology of today, and it is important that we lay a proper and responsible foundation for future technologies.
Could you explain why it is important to develop and implement Responsible AI frameworks?
While AI is hugely beneficial and can help address numerous global challenges in areas such as climate change, healthcare and wildlife conservation, as with any nascent technology it comes with a number of risks and harms that cannot be ignored. For example, most AI applications are prone to perpetuating and reinforcing social biases present in the datasets they were trained on. Some also violate people's privacy and human rights, owing to the lack of accurate, representative and inclusive datasets and to the training methods used to develop the AI models. They can also affect people's psychological safety, and could sometimes lead to loss of life. Let us take a look at the most prevalent and widely used AI applications today: chatbots powered by large language models, such as OpenAI's ChatGPT or Google's Bard, amongst many other applications. These applications have a tendency to produce incorrect information with a high degree of confidence. Here is a recent example: a lawyer used ChatGPT to prepare a filing for one of his cases, but the chatbot cited cases that never existed, and the lawyer, unaware of these fallacies in large language models, submitted the fake cases to the court. In another recent and sad case, a man in Belgium took his own life after a chatbot encouraged him to do so in order to solve climate change. I can cite many other examples, from false arrests to incorrect recidivism predictions (forecasts of whether a convicted person will reoffend) to false diagnoses, which unfortunately could lead to a further divide in social equity, socio-economic status, political divisions, concentration of power, and violations of privacy and human rights. It is therefore of paramount importance to consider Responsible AI frameworks when developing AI technologies or applying them to any sector, industry or business. In my book, I discuss the need for Responsible AI frameworks, breaking them down across the following areas:
- Responsibility: Having a sense of responsibility and enabling a blameless post-mortem culture in an organization is the first step towards building a Responsible AI framework. If you work in the field, it is vital to understand the part you have to play during the development and deployment of AI technologies, while being cognizant of the potential risks and harms present in AI systems.
- AI Principles: AI principles address AI governance and help an organization define which AI applications its business will work on and which it should avoid. Defining an organization's AI principles anchors the rest of its Responsible AI framework.
- Data: As data is the bedrock of AI technologies, it is important to ensure data ethics are in place when working with AI systems. This includes data curation practices covering data accuracy and data quality, as well as adherence to relevant laws, including consideration of copyright and intellectual property.
- Fairness: Algorithmic fairness, a field of research designed to understand and correct the algorithmic biases prevalent in AI technologies, is another fundamental part of a Responsible AI framework. A thorough understanding of the different fairness metrics and protected attributes, and of how to test a model against these metrics, forms part of a responsible AI workflow (see the fairness sketch after this list).
- Safety: Physical and psychological safety should be considered when developing AI models, as there are inherent risks in most AI applications. For example, running safety benchmarks or conducting adversarial tests helps to detect unsafe outputs, which can in turn be used to develop guardrails and reduce potential harm.
- Human-in-the-Loop: Working with a diverse group of safety annotators who label datasets in adherence to the required ethical and responsible-AI policies is a key part of a Responsible AI framework. Reinforcement Learning from Human Feedback (RLHF), where reinforcement learning techniques (part of the machine learning training process) are combined with human guidance to improve a model's overall learning, is a newer technique worth considering (see the reward-modeling sketch after this list). Ensuring humans are involved throughout the development of AI technologies is central to the framework.
- Explainability: One well-known issue facing AI systems is their lack of transparency: these models are often regarded as "black boxes" because it is extremely difficult to decipher how they arrive at their decisions. Documenting and registering datasets and models is one way to help address this issue.
- Privacy: As AI applications and products, especially large-scale models and generative AI, are built on large amounts of data pulled from different parts of the internet, it is important that people's data are protected through privacy-preserving methods. Differential privacy is a recommended method that protects an individual's data within a dataset (see the differential-privacy sketch after this list).
- Robustness: AI systems are quite vulnerable to malicious acts and bad actors, so it is crucial to take the necessary steps to protect models from perpetrators, for example by using methods like transfer learning, which improves a model's robustness, and adversarial training, which makes models more resilient against attacks designed to corrupt their data (see the adversarial-training sketch after this list).
- AI Ethics: AI development raises many broader ethical considerations, such as the enormous energy consumption of large-scale models, which contributes negatively to climate change, or the extremely low rates paid to data workers known as labelers or annotators, whose mental health must also be considered, as many suffer from PTSD due to excessive exposure to toxic content, to mention just a few negative impacts.
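
To make the fairness testing above concrete, here is a minimal sketch, in Python, of one widely used metric: the demographic parity difference, i.e. the gap in positive-prediction rates between groups defined by a protected attribute. The function name and toy data are illustrative choices, not taken from any particular library.

```python
# Minimal sketch of a demographic parity check (illustrative data).
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()  # e.g. one protected group
    rate_b = y_pred[group == 1].mean()  # e.g. the other group
    return abs(rate_a - rate_b)

# Hypothetical model decisions (1 = approved) and protected attribute.
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))  # 0.2; 0.0 would be parity
```

A gap like this does not prove unfairness on its own, but flagging it prompts exactly the kind of investigation the fairness step describes.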
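To illustrate the RLHF point, here is a minimal sketch of its reward-modeling step: a reward model is trained on human preference pairs so that the response annotators preferred scores higher than the rejected one (a Bradley-Terry-style loss). The tiny network and random tensors stand in for real response representations and are purely illustrative.

```python
# Minimal sketch of RLHF reward modeling (illustrative model and data).
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Hypothetical embeddings of (chosen, rejected) response pairs.
chosen = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(100):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Preferred responses should receive higher reward than rejected ones.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a full RLHF pipeline this reward model would then guide reinforcement learning of the language model itself, which is why the diversity of the annotators producing the preference pairs matters so much.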
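For the privacy step, here is a minimal sketch of the Laplace mechanism, one standard building block of differential privacy: noise calibrated to a query's sensitivity is added to a released statistic, so any single individual's record has only a bounded effect on the output. The dataset and epsilon values are hypothetical.

```python
# Minimal sketch of the Laplace mechanism (illustrative data).
import numpy as np

rng = np.random.default_rng(seed=42)

def private_count(data: np.ndarray, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1, so the
    sensitivity is 1 and the Laplace noise scale is 1 / epsilon.
    """
    return float(np.sum(data)) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical records: 1 if a patient has a given condition, else 0.
records = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
print(private_count(records, epsilon=0.5))  # noisier, stronger privacy
print(private_count(records, epsilon=5.0))  # closer to the true count of 6
```

Smaller epsilon means stronger privacy at the cost of accuracy; production systems use carefully audited libraries rather than hand-rolled noise, but the trade-off is the same.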
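And for robustness, here is a minimal sketch of adversarial training using the Fast Gradient Sign Method (FGSM): each batch is perturbed in the direction that most increases the loss, and the model is then trained on the perturbed inputs. The model, data and perturbation budget are illustrative placeholders.

```python
# Minimal sketch of FGSM adversarial training (illustrative model/data).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1  # perturbation budget; tuned per task in practice

# Hypothetical batch: 64 examples, 20 features, binary labels.
x = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))

for step in range(100):
    # 1. Gradient of the loss with respect to the inputs.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()

    # 2. FGSM: step epsilon in the sign of the input gradient.
    with torch.no_grad():
        x_perturbed = x + epsilon * x_adv.grad.sign()

    # 3. Train on the adversarially perturbed batch.
    optimizer.zero_grad()
    loss = loss_fn(model(x_perturbed), y)
    loss.backward()
    optimizer.step()
```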
 
What are the most common AI biases, and how do you think these could impact standards of healthcare?
AI biases can be categorized into two different areas: algorithmic bias and societal bias. Algorithmic bias, also known as "data bias", refers to biases existing in the datasets that algorithms are trained on, while societal bias reflects the conscious and unconscious biases, norms and assumptions held by various members of society. Let us focus on the former. According to the Centre for Critical Race and Digital Studies, affiliated with New York University (NY, USA), biases can be categorized as follows: historical bias, representation bias, measurement bias, aggregation bias, evaluation bias and deployment bias.

Historical bias arises from a mismatch between the world as it is and the values embedded in an AI model; consider the ongoing gender pay gap, which historically reflects the financial inequality faced by women and is still prevalent across many societies. Historical bias was quite apparent in the 2019 Apple Card case, where the credit rating was biased against women: women received less credit than their partners even when both parties had the same income and credit score. Sometimes the population a model should be trained on is underrepresented, meaning the model can be biased or skewed towards a certain population; take the popular image search for "CEO", which long returned mostly white male results as opposed to other genders and demographics. This falls under representation bias. Incorrect labelling of datasets, inappropriate combination of populations, disproportionate benchmarks in datasets and misinterpretation of a system's use post-deployment are reflected in measurement, aggregation, evaluation and deployment bias respectively. A newer form of bias being discussed among research communities, and one that is prevalent in large language models, is latent persuasion: Jakesch et al. conducted a study showing that chatbots built on large language models can influence how users write and think. This is quite crucial to society, as malicious actors could exploit these models to spread further misinformation, which could impact politics and elections, posing a risk to democracy. Producing biased and incorrect outputs is a risk on all fronts for users interacting with chatbots built on large-scale models.

The biases mentioned above are present in all AI models, including those built for healthcare purposes. For example, following the COVID-19 pandemic, an article in the Journal of the American Medical Informatics Association stated that AI offers solutions for clinical decision-making on diseases but could also exacerbate existing health disparities. The approach to avoiding biases in machine learning and AI systems is a holistic one that should be applied to all industries, including healthcare, as the issues often arise from the types of datasets used and how they are selected, grouped and so on.
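
To make representation bias concrete, here is a minimal sketch of a simple check: comparing the share of each demographic group in a training set against a reference population. The group names and figures are purely illustrative.

```python
# Minimal sketch of a representation-bias check (illustrative data).
from collections import Counter

def representation_gaps(samples: list[str], reference: dict[str, float]) -> dict[str, float]:
    """Dataset share minus reference share, per group."""
    counts = Counter(samples)
    total = len(samples)
    return {g: counts.get(g, 0) / total - share for g, share in reference.items()}

# Hypothetical group labels in a training set vs. reference population shares.
training_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
reference_shares = {"A": 0.60, "B": 0.25, "C": 0.15}

for group, gap in representation_gaps(training_groups, reference_shares).items():
    print(f"group {group}: {gap:+.2f}")  # A over-represented, B and C under-represented
```

A check like this only catches the representation part of the story, but it is cheap to run before training and often surfaces exactly the skew behind results like the "CEO" image search.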
What are some of the best use cases of Responsible AI in healthcare?
I think Responsible AI is still quite nascent, and I am not aware of any established use cases of Responsible AI in healthcare. For example, DeepMind's recent breakthroughs with AlphaFold, or the radiology study, did not need to include Responsible AI at these stages. Also, the recent AI clinical trials are still very much in trial mode, so there is nothing to report there. I will instead reframe the question as "best recommendations for Responsible AI in healthcare". A few recommendations to consider for the development and deployment of AI in healthcare are the following:
- Medical data is affected by different types of variability, such as biological variability, which is sometimes unaccounted for during the training phase of AI models because a patient's values for a given health condition may change over time. It is therefore important to understand the various sources of uncertainty in biological and clinical data that could affect a patient's diagnosis; symptoms may, for instance, be classified as mild when they are severe if the interpretation of an exam lacks a comprehensive view of changes that were unaccounted for. This problem is currently being researched by a group at the University of Milano-Bicocca in Milan, Italy.
- To address data collection, processing and transparency issues relating to misguided and inscrutable evidence, it is important to use suitably labeled datasets, such as those available through the CLEF eHealth Evaluation Lab.
- Human-centric AI is critical to the success of any AI technology, and this includes healthcare. It is important to build mechanisms for user involvement in the design and application of healthcare AI solutions, and to involve healthcare professionals in the "human-in-the-loop" process. A good understanding of patient experiences and of the psychological impact of AI-powered technologies should also inform the development and deployment of AI systems in the industry, guiding future designs and improvements.
- Inclusive AI should also form part of Responsible AI in the industry, with further investigation into, and enablement of, inclusive healthcare services using AI. Barriers to AI adoption amongst diverse populations and underrepresented groups should be assessed, and strategies developed to address those barriers.
- Diverse and inclusive datasets, with samples representative of the world's populations, should be used during model training. This will help to address potential sources of bias in AI healthcare algorithms. Evaluations of health equity and of how AI solutions could address health disparities should also be carried out. Lastly, ongoing analysis and extensive testing of AI applications should be conducted to minimize potential biases and safety issues (see the subgroup-evaluation sketch below).
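
As a sketch of the subgroup evaluation mentioned in the last recommendation, one can compute the sensitivity (recall) of a hypothetical diagnostic classifier separately for each demographic group, since an aggregate metric can hide large gaps between groups. All data here is synthetic.

```python
# Minimal sketch of a per-group sensitivity check (synthetic data).
import numpy as np

def recall_by_group(y_true, y_pred, groups):
    """Sensitivity per group: true positives / actual positives."""
    out = {}
    for g in np.unique(groups):
        mask = groups == g
        positives = y_true[mask] == 1
        if positives.sum() == 0:
            out[g] = float("nan")  # no positive cases in this group
        else:
            out[g] = float((y_pred[mask][positives] == 1).mean())
    return out

# Synthetic labels: 1 = disease present / flagged by the model.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(recall_by_group(y_true, y_pred, groups))  # e.g. sensitivity 0.67 for A, 0.33 for B
```

A gap like the one above would mean the model misses disease far more often in one group, which is precisely the kind of health disparity such testing is meant to catch.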
 
Interviewee profile:

With over 18 years' experience spanning advertising, retail, not-for-profit and tech, Toju is a popular speaker, author, thought leader and adviser on Responsible AI. Toju worked at Google for 10 years, where she spent the last couple of years as a Programme Manager on Responsible AI, leading various Responsible AI programmes across Google's product and research teams with a primary focus on large-scale models and Responsible AI processes. Prior to her time in Google's research organisation, Toju was the EMEA product lead for Google Travel and worked as a specialist across Google Travel and Shopping. She is also the founder of Diverse AI, a community interest organisation with a mission to support and champion underrepresented groups to build a diverse and inclusive AI future. She provides consultation and advice on Responsible AI practices worldwide. Toju's book, "Building Responsible AI Algorithms", will be released in August.

Disclaimers:
The opinions expressed in this feature are those of the interviewee/author and do not necessarily reflect the views of Future Medicine AI Hub or Future Science Group.



