Solomon Banjo (00:02): From Advisory Board, we're bringing you Radio Advisory, your weekly download on how to untangle healthcare's most pressing challenges. My name is not Rachel Woods. It's Solomon Banjo. I lead Advisory Board's life sciences research and the firm's research on the evolving sources and use of medical evidence. Rae asked me to host this week to talk about our predictions for the future of clinical decision-making.

(00:24): Now, that's actually based on research we wrapped in November of last year, but I'm glad it took us this long to have this conversation because so much has changed since then. Large language models and AI have exploded into public consciousness and adoption, and the exam room is no different. The future of clinical decision-making isn't a decade away, it's now, and our collective actions today and the next day will determine if that's ultimately better for patients. I brought on my colleague Amanda, as well as Dr. Kevin Larsen, Optum's Senior VP of Clinical Innovation, to talk about the opportunities of these technologies and what organizations should be thinking about as they integrate them into clinician workflow. Hi, Amanda, Kevin. Welcome to Radio Advisory.

Amanda Okaka (01:12): Hey, Solomon.

Kevin Larsen (01:13): Happy to be here.

Solomon Banjo (01:15): Amanda, I know you've been on Radio Advisory before. Kevin, we've had you on webinars as a panelist and we've had you in person for executive round tables, but this is your first time on Radio Advisory, so glad to have you here and just grateful to have your expertise on this topic.

Kevin Larsen (01:31): Looking forward to it. I love podcasts.

Solomon Banjo (01:37): Well, obviously, there's a lot we could touch on here, and I think when people hear "clinical decision-making," they often think about clinical decision support, the alerts in an EHR, but that barely scratches the surface of what you think about, Amanda, in your research, so I want to talk about the technologies that will advance clinical decision-making in the next decade. So when you think about that prompt, what are you thinking about, Kevin?

Kevin Larsen (02:05): So there's a whole professional society called the Society for Medical Decision Making, and I got to speak at one of their meetings. It was really cool. Part of what I think about are the known issues with how we make decisions as people, like proximity bias: "If something just happened to me, I think it's going to happen again." So how do we have technologies that help us see where we have those cognitive biases and those blind spots? I think that's going to be a big opportunity for technology.

Solomon Banjo (02:36): Awesome. Now, Amanda, we work pretty closely, but that's not to presume I know your answer, so when you think about the technologies changing the next decade of clinical decision-making, what you got?

Amanda Okaka (02:47): I tend to think about AI, which has been on everyone's mind for quite a while, but especially in the last few months this conversation has really come to the forefront, because I think we're getting to a point where folks from all across the industry really want to separate the hype from the reality. There's real optimism for this technology to revolutionize clinical decision-making and clinical decision support, but we're at a point where we really need to figure out what's real, what's actually possible, and what is just hype.

Kevin Larsen (03:25): I couldn't agree more.
I also think that we jump ahead too fast to the big solution, the big home run, and really I think we're going to increment our way into this rather than going right for the home run.

Solomon Banjo (03:39): And I love that call for attention to detail, because it is so easy, especially now with AI and ChatGPT, everyone's thinking about what it could mean, and I do want to focus on what it could mean, but I want to get more specific, because these technologies have the potential for significant implications whether you're a patient, a clinician, or a provider. I'm thinking about the research we've done, Amanda, and how one of our predictions is that for the first time in all of human history, clinical decision-making will not be memory-based, it'll be technology-assisted. So when you think about that, what does that mean to you, and why is that good for the healthcare industry? I'm going to start with you, Amanda.

Amanda Okaka (04:23): That phrasing, the shift from memory-based to technology-assisted, is really the most succinct and apt way to describe what we're witnessing. I don't remember the last time I was in an exam room or a patient-provider interaction where there wasn't a computer in the room or I didn't check in on an iPad. All of these things are becoming standard in our day-to-day practice, and I think that's what it means: having to be aware of and in tune with these technologies, both as patients and providers. I think there has to be learning on both sides, and that's why I think it's good for the healthcare industry. Technology really has the ability to empower both patients and providers, and as we see more sources of data and evidence, and a better infrastructure for integrating that data and evidence to actually drive decision-making, I think it will really benefit the healthcare industry by helping us be more proactive in our approach to treatment and diagnosis, as well as in how we stratify patients and their risk levels.

Kevin Larsen (05:39): I think it's going to be safer. I also think we can look to other parts of our experience in the modern world where this has happened already. So when people say, "Oh, I'm so scared of ChatGPT. Kids aren't going to learn anything," hey, kids right now use calculators in math class all the time. Do I think it's a bad thing that they're not memorizing long division? No, I hated long division, and I always have a calculator wherever I go. Why should I ever have to do long division again? I think there are many parts of clinical medicine that are going to be the same way, where we're going to have this indispensable tool that's already always in our pocket. We're going to use it at the time we need it, but again, it's not necessarily going to be the big thing. It's going to be the little calculator in my pocket: I don't have to rely on my memory to do this thing, I can use this tool.

Solomon Banjo (06:35): You brought up earlier, Kevin, when you were talking about your talk, the different ways in which we have bias in decision-making. I'm curious how you tie that, or don't, to artificial intelligence being used in clinical decision-making. Do you think there's reason to be optimistic there?

Kevin Larsen (06:56): Yes, there's both reason to be optimistic and reason to be cautious, because artificial intelligence can be a black box to a lot of us.
We'd say, "Oh, the computer's going to do this and the computer's going to figure it out. The computer will search the web and find the answer." Well, the web is made by us. We're imperfect. We're imperfect as society, we're imperfect as individuals. It's going to learn from everything that's already out there and what's out there is not perfect, so we have to always be looking inside that black box, figuring out, "Okay, what did it learn from? Was that reasonable? How do I test that this is actually fair and unbiased?" But other times it's going to be that calculator for me that doesn't see race, doesn't see anything else, it's just a calculator and it says, "Here are the numbers that I see. Here are the important inputs. This is what the answer is going to be," so I think it's going to be a double-edged sword. It's both an opportunity and a risk. Solomon Banjo (07:58): I love that. I want to switch a little bit because we talking about the technologies, but ultimately if it's being used in healthcare, it's going to be impacting patients. Now, something the headlines often overlook 'cause they're very optimistic, is the fact that there's a sizeable, and so far, the data I see the majority of Americans are skeptical or cautious about AI being used in their actual care. How do you see this technology impacting patients and maybe even their relationships with their healthcare professionals that they're interacting with? Kevin Larsen (08:32): So AI is a big, broad thing that encompasses lots of different tools and lots of different techniques. I think it's already being used in some ways, so for example, if any of us have used things like translator apps, we know that that translation can be indispensable. Right now, I can take my cell phone out and go to a foreign country and I can actually interact through the translation in my cell phone in a way I never could do 10 years ago. That kind of AI is going to enter into clinical care all over the place to help us in ways that we've wanted that help. It may or may not be better than human beings who are translators. Sometimes people have their children do the interpretation for them, and we know that the kids often get it wrong, so is a phone interpreter better than a child? Probably. However, as a patient, do I want a computer telling me what disease I have? Not yet. I don't think that I trust that computer yet to have that kind of decision-making expertise. Amanda Okaka (09:39): I think that's a really good point about having to instill trust in these technologies for patients, and what that looks like for me, I think that there's a few different ways that we can think about how this technology will impact patients. I think there's some more impacts that are a little bit more obvious, and then there are some that are more operational that might not impact patients every time they are interacting with the healthcare provider. But I think at the heart of it, the future state of clinical decision support needs to create technology that empowers both providers to provide more proactive and higher quality care and enable intervention, but also empower patients to be able to communicate with their healthcare providers and have better access to the healthcare system and also be able to learn about their conditions and potentially self-manage them. Solomon Banjo (10:42): I wanted to pull on a thread you teased, Amanda, which is the empowered patient. So Kevin, you've worn a lot of hats over your career, but one constant is as physician. 
So when you think about that empowered patient Amanda's talking about, with the ability to ChatGPT their questions, forget Dr. Google or Dr. WebMD, what comes to mind for you wearing your clinician hat?

Kevin Larsen (11:08): Well, in the same way you described decision-making as no longer memory-based, that memory is really what doctors have held for a long time, and so this levels the playing field in many ways. Now we're interpreters, we're consultants as doctors, as opposed to being the holders of all that knowledge and information, and hopefully these systems will eventually be able to see patterns that even we as doctors, trained to see patterns, aren't seeing, so additional patterns will emerge.

(11:42): I think people that experience what we call a diagnostic odyssey, people that have to go to lots of doctors and get lots of tests to finally, five years later, figure out what was wrong, I hope those people are going to have a much faster, much shorter journey, because the computer can see so many more options and so many more patterns. This levels that playing field and takes away some of that information divide between doctors and patients, so patients actually have information and, hopefully, more autonomy in making their decisions. But that doesn't take away my responsibility to give you good advice and good consultation.

Solomon Banjo (12:24): When I think about your math example, "Okay, no longer long division by hand, now we're moving to calculators," or those of us who moved from MapQuest to Google Maps, my sense of direction personally has really deteriorated. I feel like when we hear from providers, there's a lot of fear that comes up, and one of them is: what do we lose by being technology-assisted as opposed to memory-based, that repetition of the art and the science? What do you say to your fellow clinicians who may have a fear like that?

Kevin Larsen (13:01): We've always been technology-assisted. We used to use slide rules. I don't know how to use a slide rule, but my physics teacher in high school could do it way faster than I could use a calculator. Did I miss out on a really important skill because I can't use a slide rule? I don't think so. My grandfather was terrific at repairing old-timey tractors. That was a great skill he had in his memory. Do I have that skill? No. Do I need the skill to fix old-timey tractors? I do not. So some of this is giving up skills that have been important to us and realizing that they're no longer the valuable skills in the world. We have to be in a constant learning mode, lifelong learners, and build new skills on top of the new technologies we have.

Solomon Banjo (14:57): So I've talked about the fears of providers, and I'll talk about one of my own fears from when we were doing this research, which is that these technologies have a lot of potential, granted, and we've talked a lot about that, but they're also really expensive, and sometimes, to get the most out of them, you need to bring together a lot of different expertise to fine-tune the algorithm to your patient population, for example. So here's where my fear starts to come in: is this going to create a situation where only systems that can afford these tools have access, and if so, what are the equity implications for the patients who are or are not served by the systems able to access these tools? Amanda, I'd love to hear from you first.
Amanda Okaka (15:45): Thinking about what it would look like if only the well-resourced health systems and providers are able to get to that future state in the near term and smaller organizations are left behind, this could look a lot of different ways, but at its core, it means that health systems have to work closely with internal and external stakeholders to implement these types of technologies with ethics in mind. It's imperative that they take those first steps to make sure these technologies are implemented in a way that is fair, responsible, and transparent, so that patients can be aware of the safeguards they have with respect to this technology in terms of privacy, security, and accountability if something goes wrong. I think those are really some of the first steps to implementing this technology with equity in mind.

Kevin Larsen (16:46): I think we're going to see a lot of market segmentation and differentiation here. If you think about what's available in AI right now, so much of it is commercially available, based on open-source web tools. Well, what a perfect space. If you're a low-income or poorly resourced organization, suddenly there's this whole set of tools you can run with just your iPhone or your computer: that translator tool, calculator tools. All sorts of optimization tools are going to become available in an open-source kind of marketplace.

(17:22): Then yes, health systems will continue to use these tools in a way that differentiates them as well-resourced organizations, but remember that can sometimes come with risk. We know that often the really expensive stuff isn't well-proven, and it can expose you to harm in ways the low-cost stuff doesn't, because the low-cost stuff has been around a lot longer and we understand it a lot better. I think we have some of those same risks with this kind of technology adoption.

Solomon Banjo (17:56): That makes a lot of sense.

Amanda Okaka (17:58): Another consideration, building on what you said earlier about not just running toward the biggest solutions, is making sure this technology is implemented and designed around solving a specific problem or a systemic problem, rather than acquiring the technology just to be counted among those well-resourced organizations, to be held in that same reputation or regard. I think it's really important to coalesce around a reason and a purpose for these technologies rather than just running toward them.

Kevin Larsen (18:38): Totally agree.

Solomon Banjo (18:41): As I was thinking about this, we talk about the technology almost as if it's the only element of this equation with agency, and it's like, well, how do we choose to apply this? To what problems? Who do we consider the target audience for it? So I love that, and along those lines, Kevin, beyond just having the technology, let's assume the best happens. What other advice would you give on how organizations should think about this at a more fundamental level in order to actually see beneficial patient outcomes?

Kevin Larsen (19:15): I think we are going to have to really redesign how we do patient care and how we manage disease, chronic disease, and other things.
I think in this early space, we are all going to have a mental model that says, "Oh, let's just take the work we always believed we have to do and have the computer do it instead. Let's have the computer write all of our letters. Let's have the computer write all of our notes." Well, why do we need all those notes and all those letters? That's a bigger question. What is it we really need? And once we have all of this embedded in the system, how do we not just automate the paper world and call everything a file and everything a chart and everything a ...? What is it that's really important here, and how do we layer these tools into new care models and new ways of work that are really different?

(20:04): I put myself back in 2000, when I got my first flip phone. I would never have imagined what my iPhone can do when I had that first flip phone. We are in the flip phone era of ChatGPT. What is it going to look like in 20 years? God only knows. I cannot predict it, but I know we're in the flip phone era of ChatGPT.

Solomon Banjo (20:26): I think that is amazing advice for us all to take, and I think it's important, because ultimately a human has to be in this equation, and if we just use these tools to do everything we've been doing, only faster, we already have enough burnout and other challenges in healthcare; we can't afford not to revisit those systems.

Amanda Okaka (20:47): It's really important that you mention that, at the end of the day, there's going to be a human on one end of this technology. So a really important question for the ecosystem to grapple with is how we think about this in a way that gets us to improving best practices and actually reinforcing behavioral changes for providers, instead of thinking about it just in terms of a bottom line, or acquiring a technology to bolster a portfolio, or seeing it as a nice-to-have. Rather, it should be fully integrated into the care delivery process in a way that helps providers do their jobs better.

Solomon Banjo (21:31): And I'd even push, based on our research: we talk about providers, but let's not assume that the same human who does this in the flip phone era is going to be the same human doing it when we have our iPhone moment, and so how does this enable us to think about the care team differently? So we've talked a lot about provider organizations, clinicians, even patients, but, probably unsurprisingly given where you and I sit in the Advisory Board world with life sciences, Amanda, let's talk about other players in the industry. What are some other stakeholders to keep an eye on as we think about how clinical decision-making will continue to evolve?

Kevin Larsen (22:15): Insurance companies. I work at Optum, not at UHC, but we are all part of one big company that is the country's largest private insurance company, and there's a lot of activity on the insurance side asking, "How do we use this tool to get rid of administrative burden where we can? How do we use it to help patients, help them navigate, help them get to the right place? How do we use it to detect fraud? How do we use it to identify and target the right people for outreach?" There are lots of different ways this is being explored by the insurance industry.
(22:57): Certainly, the device organizations are looking at this as well. I personally have done a lot of work in Type 1 diabetes, thinking about things like what's called a closed loop, a blood sugar sensor connected to an insulin pump, and that field has been looking for a technology like this to integrate inputs like, "What's my activity? What am I eating? What are the other factors in my day?" to really help that system work. So I think we'll see innovation in that life sciences device space. We'll see innovation in the government space. We'll see innovation in lots of places.

Solomon Banjo (23:37): Excellent.

Kevin Larsen (23:38): One of the places I'm most excited to see is in learning a foreign language. I don't know if you've seen it, and I think this could be the case for doctors, too, but with ChatGPT, you can set it up to have a conversation with you in a foreign language, and some of these companies are now actually able to make it use the right kind of vocabulary for where you are in your learning. Imagine how this could then work as coaching and in other kinds of knowledge acquisition. Think about learning to be a nurse, or learning to give a patient bad news. You can imagine building a system that talks back to you and responds to your questions in a fairly naturalistic way and lets you practice what you need to do to give a patient bad news.

Solomon Banjo (24:25): And I love that. Just thinking, too, imagine the weight lifted off patients for whom, in our context, English is not their primary language. If they can interact with the healthcare system on their terms, that could be truly transformative.

(24:40): Okay, so if the rise of new technologies and AI means that the future of clinical decision-making is now, and I think all three of us believe that's the case, what is one thing you want our listeners to do to prepare for that new reality?

Kevin Larsen (24:56): I would want them to pause and think about how they make decisions now. A lot of us make decisions fairly automatically, and if you go and observe doctors in exam rooms, they've usually made a diagnosis within the first 30 seconds of seeing you, even if your visit is 10 minutes long, and they've usually decided what to do within the first minute. So the question is, "How do I reflect on when and how I'm making decisions? When and how would technology help me?" It's actually hard to get at the actual moment a doctor is deciding, because we're trained to do it so quickly, and in order for these tools to really help us, we have to find the places and times when we're most open to new ideas and new inputs, and it's not going to be when you're already out of the exam room, ready to write your note, with all of your prescriptions done. That's not the time it will help you. So it's pausing and reflecting: "When and how am I making a decision? When and how would I want help?"

Amanda Okaka (26:06): It's imperative for our listeners to consider the conversations they need to have with both internal and external stakeholders, and also to think creatively about the ways this technology can and will be applied in the coming years to drive those conversations.
Thinking about the proliferation of this technology and ensuring some of the things we talked about, like making sure it's applied as a systemic solution and not point solutions, and making sure it's applied in ways that are equitable and will benefit patients, stakeholder conversations are essential to making sure that can be achieved, and having them sooner rather than later will result in better outcomes and make these changes easier to carry out, with more agility, and for the better.

Kevin Larsen (27:02): Can I get a second one, Solomon? So my second one is don't turn off your critical analysis brain. Again, speaking to doctors and other clinicians: we've been taught how to analyze literature. We've been taught how to analyze new tools. Don't think this is some kind of magic new tool that's impossible for you to understand or evaluate. Think about it like a black box intervention. How do I know what the risks are? How do I know what the outcomes are likely to be? I don't have to understand how it all works in the middle, but when someone says, "Hey, we're going to give you this thing," ask them, "What have you done to evaluate the risk? How do I know this thing is better than standard of care?" Ask all the same kinds of questions you'd ask of any new device or any new drug that comes to you, so that we collectively continue to wear our critical thinking hats and don't just assume the Wizard of Oz is there, only to realize there was nothing behind the curtain.

Solomon Banjo (28:08): Kevin, I love what you said there, because we know how much patients trust their clinicians, and so having those questions answered, those risks and benefits articulated by clinicians and health systems, I think is going to go a long way toward helping patients feel more comfortable with this direction of travel.

(28:28): And I love what you said, Amanda, about partnership, because one of the constant themes in our research, an insight into why it is so hard to advance clinical decision-making, is that no one stakeholder owns it all. Clinicians own a piece, providers own a piece, patients own a piece, life sciences, the list goes on and on, and so we can't afford to be disjointed in our approach here.

(28:54): Well, this is not the first conversation we've had on this topic, and it won't be the last, but I want to thank you both for joining me on Radio Advisory today.

Amanda Okaka (29:02): Thanks for having me.

Kevin Larsen (29:03): Thank you, and I promise I wasn't a chatbot. I'm really a person.

Solomon Banjo (29:11): An increasingly important disclaimer. Last November, one of our attendees at the executive round table asked a very prescient question: "What happens when Google gets good?" Now, I think a lot of what Kevin and Amanda got at in discussing AI shows the perils and the upside of when that happens. But my big takeaway is that regardless of what happens with ChatGPT, as we're on our path to that smartphone moment, we all need to be deliberate: deliberate in questioning the systems we're putting into place, and deliberate in thinking about the decisions we make and what a better version of them might look like for our organizations and for patients. So while you think about that, don't forget, we're always here to help.

Rae Woods (30:09): If you like Radio Advisory, please share it with your networks, subscribe wherever you get your podcasts, and leave a rating and a review.
Radio Advisory is a production of Advisory Board. This episode was produced by me, Rae Woods, as well as Solomon Banjo, Katy Anderson, Kristin Myers, and Atticus Raasch. The episode was edited by Joe Schrum with technical support by Chris Phelps and Dan Tayag. Additional support was provided by Carson Sisk and Leanne Elston. Thanks for listening.