AI Summit 2023: an interview with Nancy Morgan
In this interview we speak with Nancy Morgan (Ellis Morgan Enterprises LLC; DC, USA) about AI bias and data policy. A retired US government Intelligence Officer with the CIA and the former US Intelligence Community Chief Data Officer, Nancy offers her insights into AI strategy, legislative guidance and how potential biases can be overcome. This interview took place at The AI Summit London 2023 (14–15 June, London, UK).
Firstly, please could you introduce yourself and provide a brief overview of your career to date?
My name is Nancy Morgan and I am a long-time US government employee. I just retired after 30-plus years in the US government as a long-time Intelligence Officer working for the CIA. And in my final tour, I was the Intelligence Community's Chief Data Officer (IC CDO), working with all 18 departments and agencies that make up the US Intelligence Community.
I had a long career in data and technology, but I did not start out that way. I started out more as an end user, and then grew into project and program management roles, and data and policy roles, because 30 years ago there was no such thing as a chief data officer, so you did not aspire to be a CDO. I have done a lot of different things in a lot of different domain areas in intelligence, and I have also done a lot of things in data. I am also a big proponent of women in STEM and STEAM, and of the need to get more women and diversity into the field.
You have just spoken at a talk discussing biases in AI. What are some key takeaways from the talk?
The main takeaway is that it is really important to have everyone at the table. We need more diverse and inclusive teams: conceptualizing capabilities, building them, designing them, testing them and validating them, and we also need diversity of thought. This is not just diversity of race and gender and background, but we need diversity of ideas and diversity of life experience. If I think about healthcare in the medical realm, and I think about devices or equipment for children versus adults, or men versus women, there are things that are different or unique about each physiology that need to be considered and incorporated right from the beginning, not as an afterthought.
Why is it that the US women astronauts are finally getting uniforms that are custom designed for female astronauts after this many years?
In your opinion, what are some of the biggest barriers that companies face when adopting AI?
Let us dig into that a little bit. I think the first barrier is having enough people, having enough people with the skills we need, and getting more people into the field. I would say this to everyone in your audience: everyone needs to keep updating their skills. That does not mean everyone needs to be a person who can spin up cloud instances and write algorithms, but we need everyone to understand the power, the potential, and some of the peril that comes with these new technologies like AI, so that we can design really robust capabilities.
The second thing, and I just spent time talking about this on a panel, is bias. We really need to work on preventing bias, and it is not a once-and-done job. It is an ongoing responsibility to address the potential for bias. We can have model drift, we can have data that is incomplete, and in my world we can have even more nefarious things with adversarial AI or poisoned data, so it is really important to keep up with preventing and testing for bias.
In addition, we might learn that our data sets are incomplete or that our models need some additional work. We might have breakthroughs in training. We need to be able to update those models and data analytics. I think of it as sort of a data, model and analytics supply chain problem that we need to deal with. I think about whether the models are fully explainable, whether we have repeatable performance, whether they are trusted, and whether they are legally and ethically compliant. That is a really big area.
I think organizations are facing some liability if they are not really digging in on this. Part of the way to deal with that is having some of the key partners at the table that we will talk about later on. The explainability part means the trusted nature of not just your data or your analytics or your models, but all of it, and how it all works together is really important.
The White House has delivered an AI Bill of Rights in the US to help shape and drive the conversation. They are working on an AI strategy, and they are working on legislation to try and give guidance, but I do not know that we can come up with a single document that handles everything because there are differences across certain sectors. I think that in the medical and healthcare field, you are going to find that there are issues related to patient data, there are issues related to clinical care, there are issues related to medical research, and all of those are important. They might need some different treatment or different considerations that need to be dealt with, but they are all important. I also think that privacy and handling of personally identifiable information is a huge challenge, not just in the medical field, but in every field, and that is really important for us.
So how do we address some of these challenges? I do not like to just talk about the challenges, but also how we can overcome them. For one, I think there is an aspect of shared responsibility for all of us involved in these programs. Whether we are producers of capabilities and data, whether we are consumers, or whether we are partners on it, we are still all responsible in how we think about it and how we use it. So that is the personal accountability aspect.
There is also leadership accountability. Someone who has been operating at a leadership level should clearly state what the expectations and requirements are to their workforce, to their teams and to their partners. I think we have really all got to address these dimensions. We have already talked about diverse teams and how essential they are to our success. And really, this diversity is an imperative for us designing the best capabilities, the right capabilities and the most inclusive capabilities.
As a CDO, it would not be proper if I did not discuss some foundational things. While it is great to talk about all of the new technology like AI and generative AI, there is still foundational work that needs to be done in terms of our data and data life cycle management, data governance, metadata, data tagging, and all of those foundational things. What I hope is that the new advanced technologies will make some of this foundational work a little bit easier to tackle, but it is still really important to address these issues.
My other strategy for success is partnerships. You cannot do it alone. Do not try to do it alone. Seek out and find other people. Do peer reviews of your work. Software developers do it. People do it for writing papers. We need to get in the habit of thinking about who else is involved, who is in your ecosystem, who else brings something to the table and who else can give you good insights. It is also important to check for bias, check the results and check the outcomes.
Finally, I have one more thought for you. One of the reasons why I am so passionate about what data and technology can do in this space is because I have a niece whose daughter has a rare disease. Because it is a rare disease there is not a lot of research surrounding it. I see the power and possibility through a foundation they work with to combine data in new ways, to apply AI technology, to look at other families of diseases, and to say 'what does the data tell us? Are there treatments and protocols that might make sense in this space and hopefully lead to a breakthrough and work towards better treatment?' That is a very personal passion of mine.
Interviewee profile:

Nancy Morgan is the CEO of Ellis Morgan Enterprises LLC (DC, USA) and the former US Government Intelligence Community Chief Data Officer (IC CDO; DC, USA). She was dual-hatted as the Assistant Director of National Intelligence for Domestic Engagement, Information Sharing and Data in her final tour. She has 30+ years of experience in National Security, leading strategy and innovation and driving transformation in the data and technology arenas at the Central Intelligence Agency and in the US Intelligence Community.
She has extensive experience leading major corporate transformation efforts and proven experience standing up new organizations across portfolios for data management, data literacy/data acumen, digital transformation, software development and cloud technology adoption/migration in the national security and intelligence arenas. She is a champion for women in STEM/STEAM.
She now serves as a Strategic Advisor on SambaNova Systems’ Artificial Intelligence Innovation Advisory Council (CA, USA) and is an active member of Women Leaders in Data & AI (WLDA). She is a public speaker on topics related to data, responsible use of AI, technology, innovation and women in STEM.
She has a Masters of Science in Information Systems from The American University (DC, USA) and a Bachelor of Arts in International Relations and French from Colgate University (NY, USA). She is based in the Washington, DC area.