Rae Woods (00:02): From Advisory Board, we are bringing you a Radio Advisory. My name is Rachel Woods. You can call me Rae. We talk about equity a lot on this podcast, and the systems that perpetuate inequity, but one thing we haven't talked about is artificial intelligence and how it can inadvertently make bias in healthcare worse. Microsoft's national director for artificial intelligence, Tom Lawry, recently released a book called Hacking Healthcare, which details lessons learned from AI's role during the pandemic and how to apply that knowledge to healthcare's other big challenges, including health equity. Today, I invited Tom to Radio Advisory to discuss how bias can creep into AI and why it's on all of us to ensure that AI is used as a force for good. Hey, Tom. Welcome to Radio Advisory.

Tom Lawry (00:54): Hey, Rae. It's great to be here. Thanks for the invite.

Rae Woods (00:57): Of course, and I have your latest book right here. I will admit, I listened to the first one because obviously I'm an audio person, but I read this one. And I wanted to ask you, what's the hardest thing about writing a book?

Tom Lawry (01:10): Oh my goodness, the hardest thing is taking everything you think you know and committing it to something that attempts to tell a cogent, logical story. So the first time I was asked to write a book, I was actually approached by [inaudible 00:01:25] where they said, "We've got a gazillion books on AI on the technical side. We have nothing when it comes to what clinical and business leaders who run hospitals and health systems should know." So I thought, "Well, I had written a bunch of articles. I'll just string those together." And when I finally said yes and sat down to do it, I realized that it's a forced discipline, which is good, because all of those things you kind of think you know, having to rationalize them and put them on a page ... It helps crystallize your thinking.

Rae Woods (02:03): Well, I learned from reading both of your books that artificial intelligence has been around in healthcare for a lot longer than I thought, and I think that our listeners would be surprised to learn that too. I'm not sure that the average health leader really understands even how AI works or how often it is used today. So without getting too technical, what kind of basics do you want to make sure our listeners know about artificial intelligence?

Tom Lawry (02:29): When you think about AI, it's gotten a lot of attention. I mean, frankly, it's been very much the shiny object in healthcare for the last few years. But I could tell you stories where it actually has its roots going back to the 1840s, when, by the way, a woman, Ada Lovelace, the daughter of the poet Lord Byron, wrote the first algorithm. So as much as we think about this being a male-dominated industry, women drove much of the early progress.

Rae Woods (02:55): Incredible.

Tom Lawry (02:56): Anyway, so then the second part is simply the way I like everyone to think about AI. First of all, we have to have a definition, so I'm going to give you one now. AI, simply put, is intelligence demonstrated by software with the ability to depict or mimic human brain functions, and the operative word is "mimic," number one. But really look at it this way: the brain is such an awesome organ. It's what brought humans to the top of the food chain, and we've become so smart that we've actually started to look at how to outsource certain parts of our brain function to let machines do it. Not all, but certain parts.
So it's kind of about outsourcing some of what we as humans have been unique in, up until now, in our capabilities.

Rae Woods (03:40): And when it comes to healthcare, how do you explain the difference between what AI can do really well today and what its limitations still are?

Tom Lawry (03:49): Yeah, I think that's one of the most important points that everyone should know, because we often hear about ... and it's actually in some of the clinical journals ... how AI is going to take over radiology and everything else. And it's not. So simply put, AI is really good at certain things, better than humans. It's great at things like data analysis, pattern recognition, image analysis, and information processing. On the other hand, humans are good at, and will always be better than AI at, things like reasoning, judgment, imagination, empathy, creativity, and problem solving.

(04:21): So if you bring that together and you look at, for example, if you're a physician or a nurse and you really evaluate the work you do, so much of it does involve things like information processing. But so much of diagnosis, the treatment recommendation, the whole care process involves all those other things unique to what humans do, things machines can't do now and, in my view, won't be able to do in our lifetime. So it's really about how you bring those two things together when it comes to creating value: to have AI getting in behind those humans, those caregivers, anyone who's a knowledge worker, to say, "How do we help them be better at what they care about by using the things AI's really good at?"

Rae Woods (05:05): Now, if I'm honest, in healthcare, the truth is that we are battling against some pretty serious inertia, inertia that presumes white and male as the default. The world is biased, healthcare included, and we know that algorithms are biased too. But how exactly is bias introduced into AI?

Tom Lawry (05:31): Okay, I'm chuckling now, Rae. I'm going to push back on you just a little bit already.

Rae Woods (05:36): Welcome.

Tom Lawry (05:37): First of all, AI has the ability to become biased, or to demonstrate patterns that we as humans would recognize as bias. So two things: number one, not all AI is biased. Number two, there's an important distinction here, so this is a good time to bring it up. When we talk about or read about bias in things like algorithms used for clinical decision support, bias means there's a variance, when something's being predicted, between one group of patients or people and another. So when we think of bias and AI, we're really referring to statistical variance: an algorithm whose predictive capabilities are different for one population versus another.

Rae Woods (06:26): Maybe give us an example of how that plays out in healthcare in an unintended way, because I do want to be clear that it is not that there are necessarily nefarious people building algorithms with the express purpose of making sure that one group is left out. But we know that the things that we build reflect the world around us, and natural biases can come through as we're designing, developing, and deploying these kinds of systems. Maybe give us an example of how that can play out in healthcare.

Tom Lawry (06:54): Absolutely. Well, first of all, thank you for pointing out that, to the best of my knowledge, there is no Dr. Evil of AI out there doing nefarious things. So let me do a quick story.
Everyone tuned in to Radio Advisory right now is now part of this new clinical AI team working in a hospital, and our goal is to create an algorithm that basically allows us, as we're admitting patients to the hospital as inpatients, to follow them and risk-rate their propensity to develop or experience an adverse event. So the idea is, today, we admit 50 people as inpatients. We're following their care. Some of them are doing fine. It looks like we're going to discharge them, and all of a sudden, there's almost this random thing where they have an adverse event. The code team comes in, stabilizes them. They're now in the ICU instead of going home.

(07:43): So we're going to create an algorithm that follows those patients, predicts the ones that are highly likely to have an adverse event, and then gets our intensivists and others in there to do something. So we've created the algorithm. We've done that. We've run a pilot, and I'm proud to report we've been able to predict and reduce adverse events by 40%. So think about that for a second. With a 40% reduction, the quality people are very happy. If those patients are fixed-pay patients, so they're Medicare patients on a DRG, and I can keep them out of the ICU, lower the ICU usage, and lower the length of stay, then the CFO's going to be very happy. So with that 40% reduction, we're improving quality. We're improving financial performance.

(08:22): Here's what's important. 40% is a statistical average. If I were to go to the board and give them that story, they'd be very happy. But what if I went to the next level and said, "Here's the deal. In getting that 40% as a statistical average, what we're seeing is that the algorithm is three times better at predicting and preventing adverse events in white males versus Hispanic females."

Rae Woods (08:46): Ah, yes.

Tom Lawry (08:47): And it meets all legal requirements. It meets all regulatory requirements such as HIPAA, and GDPR in Europe. It's totally legal, totally compliant with everything. And yet, is that okay? So that would be an example of bias, but what it really is, is an algorithm that produces good overall but produces it at a higher rate of value for one population versus another. So when we talk about bias, that is an example of statistical variation, and the question is, again: it's legal, it's compliant, but is it right?

Rae Woods (09:22): I think we would all agree that it is not right, and the biggest fear that at least I have is that as the world moves toward more technological advancement, whether that's AI or otherwise, we actually make inequities worse, not better. And my understanding is that a lot of this can come from ... What's the data source that we're using? Again, is it unintended if we're using data from, say, wearable devices, and there are certain populations that aren't using those wearable devices or aren't getting genomic tests and things like this? Or what is the makeup of the team that's developing the algorithm? Do you have a diverse team, and things like that?

Tom Lawry (10:01): Yeah. Well, you've kind of answered the question already.

Rae Woods (10:03): I did read the book.

Tom Lawry (10:05): It's a great question, and when you boil it down or net it out, you're on the right track. So how does bias creep into AI is the question. It creeps in in one of several ways. Most often, it comes from things like the conscious or unconscious bias of the people developing those algorithms. It comes from the data that's being used to create the algorithm.
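Tom's story boils down to one analytical habit: never trust an aggregate metric until you have disaggregated it by group. Here is a minimal sketch of that check in Python; the records, group labels, and numbers are entirely hypothetical, invented to mirror the threefold gap in the story, and come from no real system:

```python
# Disaggregating a model's performance by demographic group.
# All data below is invented for illustration.
from collections import defaultdict

# Each record: (group, had_adverse_event, model_flagged_in_time)
records = [
    ("white_male", True, True), ("white_male", True, True),
    ("white_male", True, True), ("white_male", True, False),
    ("hispanic_female", True, True), ("hispanic_female", True, False),
    ("hispanic_female", True, False), ("hispanic_female", True, False),
]

caught = defaultdict(int)
total = defaultdict(int)
for group, event, flagged in records:
    if event:                      # only patients who truly had an event
        total[group] += 1
        caught[group] += flagged   # did the model catch it in time?

overall = sum(caught.values()) / sum(total.values())
print(f"overall: caught {overall:.0%} of adverse events")
for group in total:
    print(f"{group}: caught {caught[group] / total[group]:.0%}")
# Prints overall: 50%, white_male: 75%, hispanic_female: 25%.
# The aggregate looks fine; only the per-group view exposes the 3x disparity.
```

The same breakdown can be run over any attribute the data supports; the board-level average and the equity picture are different questions.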
And so let's pull back. I love America. We have amazing caregivers, but in America, unlike other parts of the world, if you think healthcare is a right legally, it's not.

(10:39): So if you just look at how things happen in America, where people are going to the hospital, it's people who are insured ... and as a byproduct of those people going through the system, all of the data is coming from people who are well insured, people on Medicare. But if there are others at the margins who don't have appropriate access to care, then the data alone is not going to reflect those populations.

Rae Woods (11:05): That's right.

Tom Lawry (11:05): So we have many ways of basically balancing dirty data, but if that's not done, then basically, the data that the algorithms are developed around reflects the patterns, the historical patterns, from the past.

Rae Woods (11:21): Exactly.

Tom Lawry (11:22): So in America alone, it's about being aware of that and saying, "How do we fine-tune things to make sure that we're not only not perpetuating the bias that exists in the real world, but we're actually using these algorithms to try and start mitigating some of the biases that are happening in the real world today?" So the challenge for everyone to think about is simply this. Let's own it, and you said it: there are many biases and inequities happening in the real, physical world of healthcare today. What we want to do is be very aware of them and make sure they don't cross over to the digital world in the form of algorithms.

Rae Woods (12:01): That's exactly right.

Tom Lawry (12:03): I mean, to me, it's kind of like squeezing a balloon. If you're working on diversity, equity, and inclusion in the physical realm and you're making progress, good for you. But if you're not paying attention to the other side, you're squeezing that balloon, and it's popping up somewhere else.

Rae Woods (12:16): Yeah, exactly. Now I want to touch on what might be a fear of some of our listeners, and it's this idea that artificial intelligence can learn. Now, I saw some recent reporting that came out of MIT that showed that the AI system they were looking at could take images, even incredibly grainy images with no racial identifiers, nothing even like zip code, and determine the patient's race from the image alone. Do we actually know how systems learn to be biased, or learn to incorporate race, when we don't necessarily intend them to?

Tom Lawry (13:01): Yeah, we could take an entire show to answer that one. So in a nutshell, another way in which bias creeps into artificial intelligence is through what are known as continuous learning algorithms, which are the most powerful when done right. So essentially, humans program this, and for every piece of data that's running through it, every consumer or every patient that's going through it, it's learning more about spotting patterns. If it's programmed to not only learn patterns but also try to get to a certain point as far as improving predictive capabilities, many times these algorithms almost take on a life of their own once you set them in the wild and they're going through all this data.

(13:49): But in pushing to answer, "How do I get the prediction to be the best ever at picking which treatment option is going to be best for that woman with a malignant breast lump?"
sometimes those continuous learning algorithms produce unintended consequences, which is in part why the FDA ... If you want to see an FDA official's head explode, talk to them about a continuous learning algorithm in a medical device.

Rae Woods (14:18): I mean, I work with physicians. I already see their heads exploding when I talk about these kinds of things. We'll be right back with more Radio Advisory after this short break.

(15:34): You mentioned something that is really, really important, and I don't want to lose it. And it is that even though bias creeps into these systems, that is not a reason not to use artificial intelligence. It is not a reason to slow down. Instead, we need to think about how we embed ... maybe it's more anti-racist principles ... into these systems so that they can produce good for all and not just for some. And I know that you've written about five ethical principles that leaders should follow, because there aren't guiding rules for everyone, and I think these are based off of Microsoft's responsible AI principles. What are those five ethical principles?

Tom Lawry (16:13): Well, first of all, thank you for pointing that out, because for all of the talk and all of the things being written about algorithms and AI and the potential to cause harm because of bias ... Number one, for some of the people who say, "Well, maybe we should slow down or stop using AI until we figure it out," I would turn that rationale or logic around and say, "If you apply that to AI, then let's start applying it to the real world, where the Institute of Medicine many years ago came out with its Unequal Treatment report, which clearly shows the current system has lots of biases that compromise or marginalize certain populations: people of color, women. So let's first acknowledge that we have that problem in the real world."

Rae Woods (17:02): Yeah, absolutely.

Tom Lawry (17:03): The beauty of algorithms, of AI, is simply that they are programmed by people. They're basically mathematical principles applied to the real world. So the idea is, I would much rather figure out how to mitigate or eliminate bias in an algorithm. That's much easier than trying to eliminate the conscious or unconscious bias in humans.

Rae Woods (17:23): Yeah, that's actually a really good point that I think people miss.

Tom Lawry (17:28): But I think that's the jumping-off point, Rae, to say, "Look, I believe, properly applied, algorithms will actually help reduce the bias, reduce the inequities, in the real world." Because if we're running these systems and they are clinically shown, mathematically shown, to be of low variance ... Again, bias means variance between that white male and that Hispanic female, whatever it is ... we have the ability to start applying those things at scale to things like public health, to all those things we do now. That would actually, I believe, help reduce some of the biases and inequities that are occurring today in the real world that, frankly, we don't like to talk about.

Rae Woods (18:13): Yeah.

Tom Lawry (18:13): I mean, I'll give you one. I'm going to talk about one just because it's a big deal. So a perfect example is maternal mortality rates in America. Our maternal mortality rates in America are the worst of any developed country in the world; we're number one. And an African American woman is three times more likely to die of a pregnancy-related cause in America than a white woman.
So the last time I checked, maternal mortality rates for Black women are worse today than they were in 1914, when we started keeping track of the data.

Rae Woods (18:45): Unacceptable.

Tom Lawry (18:46): I think everyone would agree that 1914 probably wasn't the zenith point for championing the rights of women or people of color. So I look at that one thing alone. And there are some great programs where AI's being used, such as a program called PowerMom that Dr. Eric Topol has been involved with. So we actually have the ability to use AI to start solving for some of these equity or bias problems. Okay, I'll get off my soapbox. I think that was longer than the answer was supposed to go.

Rae Woods (19:14): I like the soapbox. We like the soapbox here. It's okay. So if we are going to use AI responsibly in an effort to reduce bias and to reduce racism in healthcare ... let's just be honest about what we're talking about ... what are the principles that you want leaders to follow?

Tom Lawry (19:32): There are many, and I just want to do a shout-out to Microsoft. We've had an office of responsible AI for many years, long before this was a popular topic. We've developed a set of principles that has guided Microsoft's development and launch of software for many years. In the last few years, we've taken those same principles, and we're now using them to help our customers and to help industries like healthcare understand and apply the same thing. So anyone who's interested, in any search engine, just plug in "Microsoft responsible AI." You'll see not only what we do, but also a whole set of tools that can be used by others.

Rae Woods (20:08): So the five core principles that, like you said, are developed and adapted from Microsoft's responsible AI principles are that intelligent systems must be fair. They must be reliable and safe. They must be private and secure. They must be inclusive, and they have to be transparent and accountable. Now, that's a lot, so let's break down what some of those mean. What does it mean for AI to treat everyone fairly?

Tom Lawry (20:37): If AI is being used, fair means there is no variance between the predictive capabilities and outcomes for any patient being run through that system. In the real world, eliminating all variance is a statistical improbability, but having systems that warrant or guarantee a very small, negligible variance in the outcomes of AI across all patients is what everyone should be focused on.

Rae Woods (21:10): Reliability is an interesting one, because reliability is talked about in healthcare quite often, and we typically think about reliable care as care that is consistent and care that is safe. We talk about this in surgery. We talk about this in emergency departments. We talk about this all of the time when it comes to care delivery. But my question is, how do we create consistent results if, again, these algorithms aren't necessarily designed, tested, and deployed against a diverse set of people?

Tom Lawry (21:43): One, they should be. Let's just put that out there.

Rae Woods (21:46): Yeah, of course.

Tom Lawry (21:48): Much of what we're seeing, when it comes to bias creeping into the digital world in the form of algorithms, comes from the enthusiasm of people to do good, and they can, in fact, do good by putting algorithms in the wild. We talked about that earlier with the one example. But if it's not been stress-tested to say, "Okay, it's producing good.
Now let's stress-test that against all these variables to confirm that in the field it's going to produce good across every population, every patient served" ... Again, for the number of times I've seen data science teams gleeful because they've created something and they're putting it out there, when you probe on how they stress-tested it, there are times where, in their zeal to get it out there, they haven't. And this is where a simple question comes in. If you're a clinical leader, if you're in the C-suite, you don't have to be an expert at AI, but as you're seeing AI become more pervasive in your organization, every time you see it, ask that simple question: tell me how this has been stress-tested for bias.

Rae Woods (22:50): Ooh, I love that. That's such a simple question that anyone can ask.

Tom Lawry (22:54): Well, again, so much of this is simple, and it gets back to what we talked about earlier: the responsibility, what I call the leadership imperative, in healthcare.

Rae Woods (23:01): Privacy and security is one that honestly gives me a little bit of anxiety. And it's because we know that some groups ... I'm thinking particularly of Black and Hispanic groups ... have extremely valid reasons not to trust medical systems. And we know that we need a diverse and inclusive set of data to be input to these algorithms so that we can reduce bias in the output. I guess my question is, how can we ensure that these groups feel that their data is protected, given how important it is to have a lot of data and to have that data be diverse?

Tom Lawry (23:36): Well, Rae, first of all, that's not an AI issue. That's a general healthcare data issue. I think you would agree. Again, AI is simply leveraging and making the best use of all of this data we have, all of the data that we're cranking out every day as we're sitting here talking, to create higher value, to create greater equity, to create greater health status. So the privacy and security piece, and people not trusting what happens to their data, has nothing to do with AI. It has everything to do with systems, number one.

(24:07): Number two, I think we do need to be careful, because someone could be nefarious with AI. I mean, I've seen studies where, for all the protections we have here in America with HIPAA and the privacy rules for how you de-identify data, there are ways in which AI can reverse-engineer that stuff today to basically get around HIPAA security standards. Again, I'm not suggesting people are doing that. I'm just saying that as we become more sophisticated in our technology, we have to make sure privacy and security standards, regulations, and laws keep up or catch up.

Rae Woods (24:43): That's right, or even that data governance is something that is prioritized by organizations themselves. I had a colleague describe this to me as the cauliflower of technology, because everybody knows that it's good for you, but nobody's excited to see it.

Tom Lawry (24:58): Okay, Rae, I'm going to jump in. You got me all excited now. I think you've hit on such an important point. So the topic of today is AI and bias, but it's AI. In my role and as an author, I get asked to do a ton of keynote speeches on AI at conferences. I've never been asked to do a keynote on creating and managing a modern data estate, which includes data governance. And yet that is at the heart of whether you're going to drive value in AI, whether it's going to be fair and responsible and live up to the mission statement that's probably hanging on people's walls today.
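Circling back to Tom's "tell me how this has been stress-tested for bias" question: one way a team might operationalize it is as an automated pre-deployment gate that fails when per-group performance diverges too far. This is a hedged sketch; the metric, group names, scores, and tolerance are all assumptions chosen for illustration, not anything Tom or Microsoft prescribes:

```python
# A hypothetical pre-deployment "stress test for bias" gate.
# The tolerance and all metric values are invented placeholders.

GAP_TOLERANCE = 0.05  # maximum acceptable spread in per-group sensitivity


def stress_test_for_bias(per_group_sensitivity):
    """Return True only if performance is near-uniform across groups."""
    scores = per_group_sensitivity.values()
    gap = max(scores) - min(scores)
    for group, score in sorted(per_group_sensitivity.items()):
        print(f"  {group}: sensitivity {score:.2f}")
    print(f"  gap = {gap:.2f} (tolerance = {GAP_TOLERANCE})")
    return gap <= GAP_TOLERANCE


# Hypothetical audit results for the adverse-event model from Tom's story:
audit = {"white_male": 0.75, "hispanic_female": 0.25}
if not stress_test_for_bias(audit):
    print("FAIL: hold the release; investigate training data and rebalance.")
```

The tolerance itself is a governance decision, which is exactly where the leadership imperative, and the data estate Tom turns to next, come in.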
(25:35): So it's all about that data estate. Most of the data people who work in healthcare get that, but there are times where people glaze over when you start saying, "Let's talk about data governance. Let's talk about data standards to allow us to make the best use of data as an asset." Instead, we love talking about AI. But at its heart, AI needs, feeds, and thrives on data. So the health systems that get their data estate right, including governance, are going to be the winners going forward.

Rae Woods (26:05): We started off this conversation talking about what AI can do well and where AI has natural limitations. One of your principles is about inclusive AI. So how can algorithms, how can technology, understand the context, the needs, the expectations of people across dimensions of their identity?

Tom Lawry (26:27): Inclusiveness means you're designing from the beginning to accommodate all users and all situations imaginable as you set out to design and deploy AI. So some of that starts with your design principles. It starts with what you're trying to accomplish. But then, if we drill back into how bias creeps in, one of the things that has been seen is that oftentimes bias creeps in when you're serving a diverse population but the team creating and deploying the algorithm is not diverse, which then takes us down into a deeper hole. For example, if we're going to solve for that ... The latest data I saw on PhD graduates in data science in America, only 4% were Black or Hispanic. So to solve this, we could have a whole other conversation about what's happening in the education system today, but that's probably another episode.

Rae Woods (27:24): Another episode. The last two pieces that I want to touch on are transparency and accountability. And I actually want to talk about these things separately. One of my aha moments when reading your book is that you wrote that transparency is not just about how AI systems display results. It's about teaching healthcare users how to interrogate those results. What does that look like in practice?

Tom Lawry (27:50): Well, what that looks like in practice is the ability to take an algorithm and have it be explainable from the beginning. As it is put into practice and being used, you have the ability to hit pause, pop the hood, and say, "Let's look at, one, what's happening and how it's doing things." Second is the ability ... and I'm not saying we're totally there yet ... for anyone without programming skills to interrogate how it's doing it and what it's doing. And this is where a key principle is simply that AI, no matter what form it takes, is meant to come in behind the doctors, the nurses, the knowledge workers to support what they do, not supplant or replace it, because at the end of the day, it really is that caregiver who has to use all those skills we talked about earlier to make the best decision. And there's actually a fear, going forward, when you look at, say, medical students coming out today or in the future, that we'll get to the point where they haven't learned certain things and they are overreliant on the machine intelligence that we're capable of today and in the future.

Rae Woods (29:03): Yeah, that's definitely a pushback that I've heard quite a bit from physicians. The other kind of fear, since you brought up fear, is frankly the fear of getting it wrong.
And this is true when it comes to talking about anti-racist principles, period, let alone in healthcare, let alone in artificial intelligence. But there's this concern that leaders will inevitably get it wrong, and they probably will. And the fear is that they're just going to be raked over the coals when those mistakes happen. So how do you balance the need for accountability, which is one of those principles, without making people so afraid of that accountability that maybe they're not willing to try, or that they don't want to be transparent about the results? It's a very complex problem. We're not going to solve racism in healthcare in one conversation.

Tom Lawry (29:53): If you look at any clinical literature on the valid use of AI, it's designed to take the best of what AI can do, like pattern recognition, and provide human clinicians with more information, around which they are accountable for making good decisions. Think about the batting average of any neurosurgeon. If they're honest ... We're human, and there are times where mistakes are made. The goal of AI is to actually improve the batting average of any clinician while still keeping them in total control.

(30:31): One quick thing: this whole issue comes down to correlation versus causation. I can take a lot of data ... and I have ... and use tools, and I've come up with the fact that there's almost a perfect correlation between per capita cheese consumption and the number of people who die each year in America by becoming tangled in their bedsheets. AI is really good at correlating, but if you rely only on AI ... I think everyone knows that stepping away from the cheese plate tonight is not going to save lives. What matters-

Rae Woods (31:02): At least not from bedsheets.

Tom Lawry (31:03): What matters is clinicians understanding the capabilities of the tools, and having systems in which they operate, including clinical and operational work processes that are, again, the responsibility of leadership in an organization, so that they're embracing these tools and feeling like they're in full control. Value comes at scale in using AI when each individual physician and nurse gets to the point where they're embracing it. They have certain skills in how to use AI, but all it should be doing is enhancing and leveraging the skills, the wisdom, and the experience they already have.

Rae Woods (31:46): Tom, I want to end by giving you a moment to speak to executives about technology and about equity more broadly, especially those executives who, over the last two and a half years, have made very strong commitments to advancing health equity. My question is, what is the role of AI in these larger industry efforts?

Tom Lawry (32:09): Well, first of all, no matter what you read, strip away the hype. AI is already coming into healthcare, and we're all early in the journey. Let's put that out there first. A couple years from now, we'll almost chuckle at what we're doing today. So as that happens, you've got good data science teams, you've got good clinical informaticists, and a lot of studies from groups like Boston Consulting Group show that driving value at scale across an organization like a health system is all about what I call the leadership imperative. We've touched on some of that.
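Tom's cheese-and-bedsheets aside, from a moment earlier, is the classic spurious-correlation trap: two quantities that merely drift in the same direction over time will correlate almost perfectly. A tiny sketch with invented numbers (not the real cheese or bedsheet statistics he cites) makes the point:

```python
# Two unrelated series that both rise over time correlate almost perfectly.
# All numbers are invented for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2010)

cheese_lbs_per_capita = 29.0 + 0.4 * (years - 2000) + rng.normal(0, 0.1, 10)
bedsheet_deaths = 320 + 12 * (years - 2000) + rng.normal(0, 5, 10)

r = np.corrcoef(cheese_lbs_per_capita, bedsheet_deaths)[0, 1]
print(f"correlation r = {r:.3f}")  # close to 1.0, with no causal link at all
```

A model trained only to maximize predictive correlation would happily lean on either series; it takes a clinician's causal judgment to know which signals are actionable, which is Tom's point about keeping humans in control.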
As part of that, it's also the leadership imperative, as any technical system, particularly AI, is coming in, to ask the right questions and set the right standards to, at a minimum, make sure that those things are not moving in the wrong direction and diminishing your diversity, equity, and inclusion initiatives.

(33:00): And let me leave you with a story. Not too long ago ... I'm not going to mention names, because everyone would know the name, but there was a national health payer that made the front page of The Wall Street Journal, where the headline was something like, "New York Attorney General investigates this health payer for racism through an algorithm." I can only imagine that when someone picked up the paper that morning, they weren't happy. I'm going to guess that the data science team was a great team. They had good intent. They really were trying to do good. And again, I'm sure it was legal. I'm sure it was regulatorily compliant.

(33:38): But somehow, leaders failed to do what leaders should do, which is make sure those things are not happening, or minimize the chance of them happening. So I want you to imagine, for all the algorithms running in your health system today, if you woke up tomorrow and you were on the front page of The Wall Street Journal being accused of racism: how would that affect the reputation of your organization? How would that affect your reputation as a clinical or business leader?

(34:05): And I don't want to end on a negative, so let me say this: there are so many ways in which leaders can be involved to actually have AI be a positive force in what they're doing to improve equity. If you're in a health system listening today and you have a diversity, equity, and inclusion plan, just go back and ask that question. Are we folding AI into what we're planning, what we're thinking, what we're doing? That would be an interesting takeaway.

Rae Woods (34:31): Yeah. Well, Tom, thank you so much for coming on Radio Advisory.

Tom Lawry (34:36): My pleasure. Thanks so much for having me.

Rae Woods (34:42): To close out our show today, I actually want to read you a small excerpt from Tom's book. It's in the first couple of pages. He writes that "AI is not the answer to fixing healthcare. Humans are. We were smart enough to create artificial intelligence. Now we must harness it for the good it can do in health and in medicine." And remember, as always, we're here to help.

(35:13): You can find Tom's latest book, Hacking Healthcare, at your local bookstore, and if you want to learn more about how leaders can make their organizations more equitable, I recommend going back and listening to episode 103, which is titled "What an Equitable Organization Looks Like and How Yours Can Get There." Plus, on our website, we have entire playlists dedicated to things like technology and equity.

(35:35): If you like Radio Advisory, please share it with your networks. Subscribe wherever you get your podcasts, and leave us a rating and a review. Radio Advisory is a production of Advisory Board. This episode was produced by me, Rae Woods, as well as Katy Anderson and Kristin Myers. The episode was edited by Dan Tayag, with technical support provided by Chris Phelps and Joe Shrum. Additional support was provided by Carson Sisk, Leanne Elston, Alice Lee, Nicole Addy, John League, and Solomon Banjo. Thanks for listening.