Rae Woods (00:02): From Advisory Board, we are bringing you Radio Advisory, your weekly download on how to untangle healthcare's most pressing challenges. My name is Rachel Woods. You can call me Rae. Last week we released an episode about why technology is a non-negotiable for alleviating the workforce crisis. And we talked about a lot of different technologies, but we didn't talk very much about AI, and that's for good reason. There is a ton of buzz around AI, a lot of fear, and frankly a lot of opportunity, opportunity beyond just supporting the workforce. And I felt that it just deserved its own conversation. So today I'm bringing back fan-favorite guest Tom Lawry. Tom is the former head of AI for Microsoft. He's now the managing director of Second Century Tech. (00:49): This is an AI transformation consultancy that specifically intends to help health leaders understand and prepare for AI in healthcare. He's also a bestselling author. I am a huge fan of both of his books, including his latest, Hacking Healthcare, and he was recently named one of the top voices in AI to watch. Perhaps his biggest accolade is that last year Tom was part of an award-winning Radio Advisory episode all about bias and AI, and that was all before ChatGPT and generative AI entered the public arena. So today I'm thrilled to have Tom back on the pod. Tom, welcome to Radio Advisory. Tom Lawry (01:31): Rachel, I'm so excited to be with you again. This is awesome. Rae Woods (01:35): You are quite literally the only external guest, someone who doesn't work at Advisory Board, that we have ever brought onto Radio Advisory a second time. So that speaks volumes to just how important I think you are in helping the industry understand and make sense of artificial intelligence today. Tom Lawry (01:54): Well boy, I feel really special now. I'm going to take that. I'm going to have a glow all afternoon.
Rae Woods (02:01): Well, before we start our conversation, I want to admit to you, my friend, that I have a very real frustration in the market right now. Everyone in the world is talking about AI, but there's also a ton of misinformation going around. In fact, you and I actually met in person at HIMSS this year. That was what, six or seven months after ChatGPT really went public, and every single booth had something about AI written on it. Every single session title had something about AI, even when they weren't actually talking about artificial intelligence. So before we have this conversation, please tell our audience: what do we mean when we talk about generative AI specifically? What is it? What is it not? Tom Lawry (02:48): Generative AI is something that came flying at us literally just about a year ago. So it's brand new. Driving generative AI is what's known as large language models, of which ChatGPT is a flavor. So essentially, generative AI is something that defines and evaluates patterns found in text and images to generate human-like interactions. Hence, generative AI. Rae Woods (03:10): Yeah, generative AI is not something that is magic, right? It's not sentient, at least not yet. In fact, one of my colleagues refers to it as the world's greatest plagiarizer. It's just using these huge, like you said, these large language models to predict the next thing. And when I speak to health leaders about AI, I'll tell you, I get one of two responses, one of which I already said. They think this is magic. This is the solution to all of my problems, the silver bullet for everything. Or I am met with fear. I don't think either is correct. What is the realist view? Where should leaders place themselves on this spectrum between AI is magic and I should be terrified of this thing? Tom Lawry (03:52): First of all, let's start with generative AI came flying at us last year.
What I would say is some of the challenge which is producing fear, I think, is less the capabilities and more the velocity of change it's driving for leaders to lead, and we can come back to that in a second because economists have a whole narrative on that. But essentially, think about this. In the history of humankind, nothing has ever gone as fast as ChatGPT, which is a flavor of large language models. It reached a hundred million users in two months last fall. And right after that happened, they went from version three to version four, which makes version three look rather pedestrian. So that's what we're facing. And then beyond that, we need to put it in context: generative AI is one of many aspects of AI in general coming into health and medicine. Rae Woods (04:46): What other kinds of AI do we need to be aware of? Tom Lawry (04:48): So think about it: all the things in the lay publications and the clinical journals about AI, almost all of it in the past year has been focused on generative. But what's interesting, as a simplified taxonomy, is we've been using predictive AI in healthcare for at least a decade. So predictive is where we basically define and evaluate patterns to predict something. It uses math; generative AI uses language. The time in use for predictive AI has been over 10 years. If we push it, with generative AI we've had some limited experiments in health and medicine for maybe 10 months. So it's always important to say, let's put it into perspective. Everyone's early in the journey in AI in general. Specifically with generative AI, we're super early in the journey, but we'll figure it out. Rae Woods (05:38): And even though with generative AI we're very, very new and the velocity of change is dramatic, as you said, the concept of AI in healthcare is not something that is necessarily new. And that gives folks a little bit more of that realist approach to how they should be thinking about AI.
You've said in the past that this ultra-negative perception is wrong and that AI might actually have a PR problem. And I agree with that. In fact, I think one of the things that I've noticed change the most is that a lot of doctors have gone from being very, very afraid of this thing, even thinking, I don't want robots to take my job, to starting to believe, hey, I want this in my practice. I'm going to be using ChatGPT on my phone even if my hospital has blocked it from the internet. And I think that's a positive shift. Have you felt that among physicians in particular as well? Tom Lawry (06:35): I think more are leaning in. I think back to what you said, AI has a PR problem, especially when it comes to health and medicine. We're having the wrong narrative. Everyone's talking about AI as a technology. The narrative to me is, done right, AI is about empowerment, about empowering clinicians and consumers to be better at what they want to do. It's restoring clinicians to do things like what Dr. Eric Topol talks about, which is keyboard liberation. Plenty of studies, including my favorite from Stanford, show many physicians spend more time doing administrative work than they do seeing patients, which is just flat-out wrong. They didn't go to school to become data entry clerks. AI gives us the ability to do so much more when it comes to restoring power and allowing everyone to play to a much higher value. Rae Woods (07:31): I totally agree with this perception, and even though I'm starting to feel the kind of waves change when it comes to some of the physicians I'm talking to, I also want to give voice to the broader narrative about AI right now, not just in healthcare, but in the world. And the best way that I can contextualize this is the five-plus months, I think at this point, of the strike that the actors and writers have been facing.
There are a lot of parts to this strike, but one of them that has reached the public zeitgeist is the fact that some of the studios, some of the streaming services, are saying, hey, AI can replace background actors. AI can replace writers' work. Of course the actors and the writers push back on that idea. But my question is, if that's some of the external narrative around AI, why shouldn't doctors and clinicians and health leaders be kind of equally afraid of AI when it comes to their jobs? Tom Lawry (08:30): Well, I'm chuckling because I think a lot of them are, and that's part of the challenge. And there are so many things to unpack in what you just said. So let's start out with the labor movement that's happening in Southern California, well, across the country, but right now all the protests are in Los Angeles. Here's the thing for anyone listening that's part of any health and medical organization anywhere, simple question: is AI part of your HR plan? Because if not, you're going to be greatly behind the curve. Our ability to use AI to transform healthcare is coming. (09:06): There's a lot of fear, there's a lot of misperception, but it's going to be the single greatest issue defining the future of work in the next decade. Too, when I say that, there's a tremendous digital upskilling that's going to be needed across all workers, starting with the C-suite. So I look at that and say, you just cited an example, but it's not a matter of whether AI will impact how you work, it's when and how it will change the nature of what you do. It won't replace you if you're a knowledge worker. It will definitely change the way you do work going forward. Rae Woods (09:45): I love this because it is such an opportunistic mindset. You mentioned briefly the idea that the leadership imperative here is about thinking about the opportunity of AI.
And what I often say to leaders is when any market force, when any disruption happens, AI or otherwise, you can't just let this thing happen to you and kind of say, we're going to absorb the consequences. You have to think about how this shapes and changes your strategy. So rather than saying, oh, AI is going to take my jobs, right? It's going to take my work away, instead say, what is the opportunity here if AI is embedded into the work, whether it's clinical work or otherwise? And that is, first of all, very, very optimistic, but it's also actionable. It gives leaders something to do about it. Tom Lawry (10:37): Absolutely. So let's just stay with that for a second and let's focus on physicians and nurses. So we all know we always use polite terms about workforce burnout. The way the system works, we are harming the very people we rely on to care for us. So when I look at AI, there are two very definite things we can be doing right now. Number one is what I talked about earlier, what Dr. Topol refers to as keyboard liberation. There's one study that shows up to a third of all activities done by physicians and nurses could be automated with the use of AI. Rae Woods (11:15): And that's important because I have my own data that shows that two-thirds of clinicians' time is spent not interacting with patients, not interacting with patients at all. Tom Lawry (11:24): So let's take what you just said. Imagine going to a physician or a nurse and saying, what if I could give you a third of your time back? What would you do? Now, first of all, it's not taking them away from the things they went to school for, the things that intrinsically made them decide to be a doctor or nurse. That's what they wanted to do.
Instead, it's automating those highly repetitive, low-value activities. But imagine having a third of their time back to see more patients or spend more time with patients, to do research, or to do something radical like get home for dinner with their family more often. So that's the value. (11:59): So there's a second thing that I don't think is getting a lot of coverage that I think is super important, and that's what I call reducing cognitive burden. So when you think about the data explosion everyone's talking about as a problem, which I see as a huge opportunity, a newly trained physician in 1950 would go their entire practice, 50 years, before medical knowledge doubled. Today, medical knowledge is doubling every 72 days. So think about that. The best-trained physician coming out of the best medical school, the best residency, going into practice knows that a couple of months from now, there'll be twice as much information to deal with. Rae Woods (12:37): And it's impossible to rely on humans alone to train and educate and keep up with medical knowledge doubling every 72 days. That's just not an option. Tom Lawry (12:46): Exactly. And that's where the value prop of AI comes in. Which brings us back to: is it going to replace knowledge workers? No. Is it going to change the way you work? Yes. Is it going to allow you to do things like tame the data explosion, bringing it in behind you with the proper tools to make you better at something you care about? That's the value proposition. That's what I talk about when I say AI done right is about empowerment. Rae Woods (13:13): AI, when done right, is about empowerment. Let's talk about some of the ways that that empowerment can show up in healthcare.
And I think that a lot of health leaders are rightfully focused on the pairing of artificial intelligence and the workforce crisis, whether it's burnout, whether it's clinical decision-making like you're talking about, but that barely scratches the surface on what AI can do for healthcare today, let alone what it's going to be able to do for healthcare in the next 5 to 10 years. I'm not going to ask you to list all of the possibilities here. I don't actually think that's possible, but how can we help our listeners start to think about the opportunities that AI presents in healthcare? Tom Lawry (13:56): Well, let's go back to your earlier question about large language models, and specifically the one version or flavor that everyone knows, ChatGPT. There's been a lot of hype. There's been a lot of fear. If anyone says they've got large language models all figured out, they're either naive, they're trying to sell you something, or they're making great use of the relaxed cannabis laws in their home state. Rae Woods (14:23): And that goes for you too, my friend, right? Tom Lawry (14:27): No, I've been at this at least 10 years working with health and medical leaders around the world, and I'm still very much new to the journey. But anyway, to your question, if we look at ChatGPT, despite all the hype, there's a growing body of good clinical evidence starting to emerge that shows generative AI, ChatGPT, can do things very well: write clinical notes in standard formats such as SOAP, which is subjective, objective, assessment, and plan; assign medical codes such as CPT and ICD-10; generate plausible evidence-based hypotheses; and other things like interpret complex lab results.
(15:06): Now, if we tease that out, whether you're a practicing physician and that's helping save you time, or it's being applied in a way that helps simplify a lot of the documents we often foist on health consumers, to make those things easier to read, digest, and understand, these are things we've already seen just within 10 months that it can do. And some are starting to apply them in ways that do things like reduce cognitive burden and reduce those repetitive administrative activities. So I mean, there are many other things that it can do, and those are just a few. Rae Woods (16:57): One of my colleagues refers to where we are today as the flip phone era of generative AI. And if you had told me in 2004, when I had a flip phone, that my iPhone could do all of the things that it does, I wouldn't have believed you. And so that's why it's important to, first of all, reject anyone who says, I actually know everything about AI, but to take these early examples and keep pushing and keep changing and keep adopting. And that's one of the reasons why... Tom, I actually want to reveal to you how these conversations work. When a leader asks me about AI, and I am going to do something risky and on air right now as we're recording, I want you to tell me if how I'm reacting to them is right or wrong. (17:47): Leaders come to me all the time, and to lots of folks at Advisory Board, essentially begging and saying, Rae, just tell me what my AI strategy should be. It's the fall, I'm thinking about 2024, what should my AI strategy be in 2024? And my response, Tom, is, you shouldn't have an AI strategy. You should be thinking about how AI can solve existing problems, many of which you named, or how AI can support the opportunities that you've already defined as a business. What do you think of that response? Tom Lawry (18:24): I think you're right. When you try and think, well, if I hire the right people, I can wedge this into my three-year strategic plan.
I think broadly the answer is yes, you need to be doing that, but the way in which it's moving, the shape it is taking, I mean, I am full-time on this, and I have been for over a decade specific to health and medicine, and I was being interviewed by another major media outlet who kept saying, where will it be a year from now? And my honest answer is no one knows. But knowing certain things, like how AI drives value, knowing that that value at scale across a large healthcare organization is going to be highly dependent on mobilizing your workforce, I mean, these are all things within the purview of what leaders do today. And I teach whole courses. I do a lot of advising at the C-suite level on these very things. (19:28): And finally, I'd say it's okay to say, I don't know, at this point. It's coming at us fast. It's a journey. Again, we're all early on, and whether you're the CEO of a prestigious medical organization or you're the top neurosurgeon in your field, everyone, when they're hearing this, the number one question they're asking themselves is, well, what does this mean to me and the work I do? And so we start there. There are so many other things that good leaders already know how to do. It's just a matter of how they adapt those skills to the new environment. And then let me stop, but I'd also like to come back to: I think part of the challenge is not the AI capabilities, but the velocity of change leaders have to lead in. Rae Woods (20:12): And so when it comes to the velocity of change that leaders have to lead in, my thought is, that's why I really want folks, instead of saying, what should my AI strategy be for 2024, to say, you need to get your governance structure for AI set up now, so that when things change, when you are facing how to use artificial intelligence to address a problem or to advance toward an opportunity, you have the decision-making power in place. Is that something that will help and support this leadership imperative that you're describing?
Tom Lawry (20:48): Absolutely. I think you said it well: leaders are in a position to lead. So economists tell us the whole world is almost always driven by what they call linear growth and change. This is where there's a new technology that produces some incremental value. There's a gap between when it's created and when adoption occurs. As adoption occurs, there's another gap between that and when regulators run to catch up and put guardrails on it. Rae Woods (21:12): Which is a big pushback that we hear, back to the fear that we started off this conversation with. Tom Lawry (21:18): But the thing is, leaders are taught to lead in systems geared toward what's predictable and safe. Economists also talk about something called the exponential growth curve, which is what I believe we've been in at least since last fall. And so economists talk about that. Mathematicians actually have formulas to calculate it. But exponential growth is something that starts out looking like linear change, and all of a sudden, if we were to plot it, it looks like that hockey stick. So when that happens, the way leaders lead with linear change just doesn't work real well. So we're in it now. (21:51): The good news about exponential growth is economists tell us it doesn't last for long, but when it does, it feels like we're all inside a tornado, which is where I think we've been for the last year. So leaders are recognizing this and looking at, do I need to lead differently? There's a lot of writing and some research being done to say maybe we're entering a period of what's going to become known as chaotic innovation, where innovators will still innovate, but it's going to be at a pace that's much more chaotic than where we've been when it comes to linear growth. Rae Woods (22:24): And again, that's where the importance of leadership comes in. I just don't know that the case for it has ever been stronger than with technology, in healthcare at least.
And I really want to push health leaders to think about the decisions that they make and the trade-offs that they make. And I don't want to sound like a broken record, but I keep coming back to: root your next steps in actual business problems or actual business goals. Because then, when you're inevitably faced with a trade-off, you can hold that goal sacred, right? Tom, you were talking about the workforce crisis as an example. This is a big one. So if you are using AI to take work off of, say, a physician or nurse's plate, because that's the essential problem you need to solve, then you can't just add on a bunch more appointments or a bunch of other administrative burden onto the clinician, because that goes against the very goal that you were setting out to achieve, right? And so that's where I think the leadership imperative is so essential to navigating us through, to your point, the chaotic innovation that we're in the middle of right now. Tom Lawry (23:34): Yeah. Well, let's take what you just said, and I'll do what I call the tactical and practical. So anyone listening, if you're part of healthcare, you're a leader: AI drives value. The technology itself doesn't drive value. The technology allows us to do things like process innovation, to reimagine and rework things like clinical and operational workflow processes. When you get that, then all of a sudden you're on the right path. But then the first question is, well, does your organization have a team of Lean or Six Sigma folks? If yes, are they involved in the AI process? Because many times, when people treat this narrative as a technology play, they hire great people like chief technology officers and chief informatics officers, but process change has a path. We have experts that know how to help. Second, as I said earlier, is AI part of your HR plan?
Is whoever heads HR, your chief people officer, actively engaged? Because the best technologists, coming with the best solutions, will only drive value at scale if you've got the whole organization mobilized, going in the right direction. (24:44): And then finally, because I always like to talk in threes, is AI part of your diversity, equity, and inclusion plan? Because you can be doing a lot of great things in the real world, but let's recognize that many of the biases and inequities that exist in the real world of health are starting to cross over into the digital world through things like algorithms. I've described this before: it's like squeezing on a balloon. You've got a great DEI plan. You've got great people. They're doing wonderful work in the real world, but they're not addressing the digital world. It's like squeezing a balloon. It's like, I've got it here, and it's popping up somewhere else. But it's things like that that leaders ought to be able to get their arms around. And again, think about everything I just said. There's a little technology driving a lot of change, but leaders are doing those things today. Rae Woods (25:34): And I like that what you're describing is: maybe don't have an AI strategy. You still need the governance structure, which we described, but also think about how you embed AI into the existing processes that you have, into your diversity, equity, and inclusion work, into your HR strategy, embedding this new, changing, chaotic technology into the way that we practice business. Which, by the way, should not be new for health leaders. They've done that before, perhaps not with AI, but they have absolutely done that before. (26:03): You also said something that's just itching the back of my brain, Tom. You said there is a gap between when innovation happens and when it's implemented, and there's a gap between when it's implemented and when you have regulation.
One of the things that health leaders are grappling with right now is how fast they should move. Everyone likes to talk about the first-mover advantage when it comes to basically anything, but in particular when it comes to innovation. Does it have to be a first-mover advantage with AI, or is there room for the incremental advantage? Let's not ignore this. We're going to dive in, but we're going to continuously improve and continuously push. Is there room for that when it comes to artificial intelligence? Tom Lawry (26:51): What I would say is organizations and leaders will go at their own speed, many of which right now are asking the question of, well, can I just wait this out? And I think the answer there is you can, but if I'm correct in where the market's going, that makes you more likely to become less successful, potentially even, dare I say, roadkill. I'm not pushing a mantra of jumping in and going fast for the sake of going fast. There's a lot of dialogue, there's a lot of things happening, including others that are great proponents of, well, let's go slow, or there have been several moratoriums proposed to say, well, let's stop development of generative AI for six months while we figure it out. And when I hear things like that, let's slow down or stop the progress, to me that's like saying, I'm going to do a moratorium tomorrow to say starting next month we're going to suspend gravity. (27:47): It's happening. It will continue to happen. It's more about where you start getting engaged to do things that are in the best interest of your organization, your people, and your mission. And so for everyone who says, let's go slow, I would say, no, let's modify that to say it's not about going slow, it's about getting certain things right, like appropriate guardrails, so we are not going to cause any harm, or no greater harm than what we're already causing by not using it.
It's bringing our workforce, our clinical leaders, along. Actually, not only bringing them along, but putting them at the front of the parade to start figuring out where are the top places we can use AI to drive positive change. Rae Woods (28:37): You're talking right now about all of the things that health leaders can do today to ensure that their adoption of AI can be successful even if they're not out being a complete first mover. But here's the thing: healthcare is not exactly known for being tech-forward, right? I joke that healthcare is one of two businesses that are keeping the fax machine industry afloat. My question is, why should we think about AI any differently? Why are you still optimistic about the opportunity and the change that can happen in healthcare if our industry tends to be so behind with technology? Tom Lawry (29:22): Well, I'm going to answer that by telling a story I've told before, where a while back, Forbes came to me and said, do an article on what healthcare learned from fighting a global pandemic. And I had three things. One, I wrote about how the health industry, including its leaders, showed that an industry that's known for moving at glacial speed could in fact move at warp speed. Two, many, and again, I'll say many, not all, leaders demonstrated they were willing to challenge the status quo, and they were willing to drive agile transformation. And then, if you look closely, the third point is just to recognize that humans, the clinicians, the leaders, fought and won this battle, at least I think we did, but the tools they used to go faster were all AI-driven. And then the article goes on to tease out how AI was used to help fast-forward on the vaccine, on telehealth, and a number of other things. We've shown that when properly motivated, we can do it. (30:22): And the close of the article is what I'm still pitching today.
It's like, well, let's take everything we learned from fighting that global pandemic and apply it to start solving all of those other big problems we have. So I have faith. It's not necessarily easy. I think there's a huge amount of inertia to get over, which troubles me, because healthcare is a very noble cause. It has significant challenges that cannot and will not be solved by our old ways of thinking and working. To me, AI gives us a new set of tools, and while we have to figure some things out, like the guardrails, we don't have to wait for Congress. We don't have to wait for anyone to start looking at how we use these tools to do inherently good work against the missions that everyone's trying to pursue. Rae Woods (31:12): Well, that was the mic drop moment right there. Tom, thank you so much for coming back on Radio Advisory. I cannot believe how much has changed in the years since you've been on the podcast. And as you said, things are just going to continue to change, and you are just going to continue to do great work. So thank you for everything that you've done. Tom Lawry (31:32): Well, thank you. And I always love coming on your show. Advisory Board is doing great work when it comes to just championing all those things. You're a great connector with leaders. So for that, we also thank you. Rae Woods (31:42): Oh, thank you. That's so kind of you. That's so kind of you. Well, thank you again for coming on Radio Advisory. (31:51): My biggest takeaway from this episode is that AI has a PR problem. And look, I'm not trying to say that AI is perfect. It is certainly not. In fact, the very first conversation I had with Tom was all about how bias exists in AI just like it does in the rest of healthcare. But accepting that alone would be completely passive. It is up to leaders. It is up to you to think about the opportunity, both in the near term and the long term, that artificial intelligence presents for your business, for your clinicians, and for your patients.
And remember, as always, we are here to help. (32:39): If you like Radio Advisory, please share it with your networks, subscribe wherever you get your podcasts, and leave a rating and a review. Radio Advisory is a production of Advisory Board. This episode was produced by me, Rae Woods, as well as Katy Anderson, Kristin Myers, and Atticus Raasch. The episode was edited by Katy Anderson with technical support by Dan Tayag, Chris Phelps and Joe Shrum. Additional support was provided by John League, Ty Aderhold, Carson Sisk, Leanne Elston, and Erin Collins. Thanks for listening.