Rae Woods (01:02): From Advisory Board, we are bringing you Radio Advisory, your weekly download on how to untangle healthcare's most pressing challenges. My name is Rachel Woods. You can call me Rae.

(01:14): There is no question that we are in an era of unprecedented technological change: large language models, the rise of generative AI. These are the things that have dominated industry conversations for the last year, and as a result, healthcare organizations are questioning how, where, and if they use artificial intelligence. We know that some are eager to jump in headfirst. Others are being more tentative and looking to wait and see, and I'm not sure that either is actually the right option. That's why this week I've invited Advisory Board digital health experts Ty Aderhold and Elysia Culver. Together, I want to discuss: what even is the right way to think about AI implementation? Hey, Ty. Hey, Elysia. Welcome to Radio Advisory.

Ty Aderhold (02:02): Glad to be here.

Elysia Culver (02:04): Hello.

Rae Woods (02:06): How busy are the two of you right now? It's been almost exactly a year since ChatGPT became a thing in healthcare and in the real world, and it seems to me like things are not exactly slowing down for the digital health and artificial intelligence research team.

Ty Aderhold (02:27): Things are pretty busy. I think every week someone else wants to talk to us, and every week there's another development or another policy announcement or something that we need to be paying attention to.

Elysia Culver (02:41): Yeah. Well, what's nice is people are reaching out to me now. I don't even have to reach out to people for phone calls sometimes, because they're so interested in our research, which is super exciting.

Rae Woods (02:51): A double-edged sword, though, definitely a double-edged sword. Yeah, I mean, the truth is that AI is still moving incredibly quickly, and we are seeing a lot of experimentation and a lot of change, and none of that, I think, is going to slow down anytime soon. But I want to level-set. When it comes to AI, when it comes to all of this change and maybe even all of this confusion, is there a central question that you are hearing from health leaders, or maybe is there a question that you think they should be asking when it comes to AI?

Elysia Culver (03:26): Yeah. I think the biggest question that we're hearing health systems or providers asking is, "What should my AI strategy be?" I think the question should really be, "What problems should I be solving with AI?" not, "What should my AI strategy be?" You really should be thinking about what your existing strategy is, and seeing how AI can fit into what you are already doing.

Rae Woods (03:59): Yeah, we don't like that question, and frankly, we didn't like that question when folks were asking it about any technology, not just AI. I think it is reasonable for health leaders to be thinking about whether there are parallels between artificial intelligence and past healthcare technologies. When it comes to past technologies, I think a lot of folks actually bided their time. They said, "Let me stay patient. Let me figure out what works, what doesn't work. Let me let someone else be the first mover here and do all of that experimentation. I'm going to wait for the right evidence and not have to reinvent the wheel." Should we be thinking about AI the same way?

Ty Aderhold (04:42): Rae, I think you're right.
We've seen a split in recent years, with a group of organizations that really want to be thought of as innovators and therefore move quickly. But outside of wanting to be thought of as an innovator, it didn't make a ton of sense to move quickly. If you think about something like EMR integration, it was actually a much smoother, much better process for your organization if you waited a bit.

Rae Woods (05:06): In fact, people are still rolling out Epic right now. They have waited quite some time to implement this technology.

Ty Aderhold (05:16): The ideal became this idea of a fast follower: "I'm going to let those innovators go ahead. They'll figure out what works, what doesn't work. I'll respond after I understand what our approach should be." But I don't think AI is necessarily the place to do that.

Rae Woods (05:36): Why?

Ty Aderhold (05:37): I think there are a few different reasons. One, as you already mentioned, the pace of change that we've seen across this past year has been incredible, and I don't think it's going to slow down. If you're waiting, there's always going to be something new, so you're not necessarily going to be able to learn from the adoption that an organization had a year or two ago.

Rae Woods (06:01): There are already perhaps some fundamental differences between how we need to think about AI and how we need to think about other technologies. And let's say that folks have done what Elysia told them to do, which is to ask not just, "What is my AI strategy?" but, "Let's make sure AI is actually solving real problems." It sounds like the scales are tipping towards making those decisions soon, right? Be an early adopter. Let's say organizations go down that path. Why might that be a good thing?

Elysia Culver (06:28): I think one of the biggest areas where early adopters will have an advantage is efficiency and cost reduction. With these AI technologies, organizations are able to better automate administrative tasks, analyze large amounts of healthcare data, and create faster access to information for both patients and clinicians, and all of these things, I think, have the ripple effect of efficiency and cost reduction.

(07:01): Then I think there's also a really big opportunity for healthcare organizations that move quickly to have that lead time to develop a culture and literacy around AI as well. They may be able to initiate those steps to communicate to their workforce early on what it is they're doing and why AI is important, and to develop a culture of, "Yes, AI is a good thing and we should be adopting it to solve our problems," things like that.

Rae Woods (07:30): The cost reduction is something I'm not sure enough people are thinking about. I think most people are thinking about the cost of investing, but you're right that long-term, there's a lot of efficiency to be gained. Early adopters have this lead time to learn quickly. I have to imagine that this is also something that at least can be good for employees, the actual people who are going to be using the technology, and maybe even the people on the other end, the patients who are going to be experiencing the benefits. Is that right?

Elysia Culver (07:56): Yeah, that's correct. I think that's a secondary benefit that a lot of people don't always jump right towards. I think cost reduction and efficiency is what people want to focus on, but if you're going to be a provider who's going to make a clinician's life easier, why wouldn't you want to work there?
Or if you are going to be able to provide a better patient experience by personalizing their care or improving their accessibility, why wouldn't a patient want to go and see you? I think those are definitely secondary benefits that we'll see with the implementation of these new AI technologies.

Ty Aderhold (08:30): Rae, I just want to double-click on something Elysia talked about, the culture piece, and I think this spans across all of these benefits of early adoption: the idea that your organization will gain cultural and institutional knowledge of how to use AI and of what worked in your early processes as an early adopter, and those lessons will then apply to other AI technologies that you adopt in the future. I don't think that institutional knowledge is something that a fast follower can gain from just reading a case study or looking at how other organizations have approached adoption.

Rae Woods (09:12): What kinds of organizations are doing this? What kinds of organizations are maybe diving in headfirst?

Ty Aderhold (09:18): I think we've seen that same group I mentioned earlier, organizations that pride themselves on being innovators. Oftentimes those are the academic medical centers of the world jumping in, and these are organizations that for years have had some uses of clinical predictive AI that they've been building or piloting themselves. We've seen these organizations move relatively quickly with generative AI as well. For example, we've heard of organizations doing pilots with Epic to allow physicians to respond to inbound messages with some generative AI prompts to speed up that process.

(10:02): We know this is something a lot of clinicians are feeling overwhelmed with these days. It's those organizations that were already moving in this direction, that likely already employed data scientists and other technical expertise, and that are research institutions. Those are the organizations we're seeing move most quickly here.

Rae Woods (10:24): I don't mean to be too blunt with this question, but is it even still possible for folks to move into this early adopter category? I mean, like I said, it's been a year since ChatGPT, large language models, and generative AI hit our cell phones, let alone our strategic plans. Is it still possible for someone to be an early mover if they're making changes in, say, early 2024?

Ty Aderhold (10:48): I think you're still on the early side of things if you're moving in 2024. I think the big unknown here is how quickly this pace of change is going to continue, because there could be a new, more generalist AI model that comes out sometime in 2024, and then we're set up for a whole other cycle of early movers again. When it relates specifically to generative AI and some of the ChatGPT-style uses, I think the window's closing on being an early mover. I think Elysia would probably tell you that's not the end of the world.

Elysia Culver (11:30): Yeah. I would say that, if you're speaking in generalized terms, this early mover period has probably ended, but it is a little bit more nuanced than that in terms of the types of use cases you're exploring. Again, the pace of change is making things move so fast that maybe you were an early mover on one thing, and then this new thing comes out and you're like, "Oh, maybe I want to be an early mover on that."
I think the real competitive advantage is maybe not whether you're going to be an early mover, but whether you're going to be able to keep up with the pace of change and iterate over time.

Rae Woods (12:06): Especially since we should be thinking about the problems this technology solves rather than the technology itself, going back to the very first thing that you said. I'm overwhelmingly hearing that there is a competitive advantage if you're going to move fast, but let's go to the other side of the coin. AI isn't a guaranteed slam dunk. I have to believe that there are some risks here, and certainly leaders know those risks, otherwise they would've made a move eight, ten months ago as opposed to thinking about it in the next six months, next year, next two years.

Elysia Culver (12:39): Yeah. I think one of the biggest risks you might experience as an early adopter here is monetary loss. I think a lot of chief financial officers are still concerned about spending money on these new solutions when you might not even see an ROI on that investment until years in the future. There's also this risk, especially on the monetary side, for providers who already have low profit margins.

Rae Woods (13:10): Oh, yeah.

Elysia Culver (13:10): Like, "Why would I want to invest in this?"

Rae Woods (13:13): [Inaudible 00:13:12]. Let's name that.

Elysia Culver (13:14): Yeah. If you already have low profit margins, you might not even have the resources to invest in this. I think that's the biggest thing there.

Rae Woods (13:26): It's not as easy as saying, academically, "Everybody should be moving quickly," because there are nuances. There is risk. In fact, I am betting that not everyone can be a first mover, to your point about margin. I'm also betting that not everyone should try to move as quickly as possible at this point. Is that right?

Ty Aderhold (13:46): 100%. I think a great example of this is, if you don't have a data scientist at your organization, it's going to be really hard to move quickly on a lot of these tools while still doing your due diligence. Sure, you could move quickly without fully understanding what you're adopting, but if you don't have that technical expertise as you evaluate these tools, it's going to be hard to move quickly here. You're going to have to wait for a lot more validation. You're going to have to talk to more third parties to make sure that what you're investing in is going to work for your patient population, is going to be safe, is not going to cause problems in the future, and is going to provide an ROI.

Rae Woods (15:19): Are there really only two options here? Are the options on the table for our listeners, for real health leaders, really just move quickly ... perhaps they have moved already ... or wait and see, like we've done with every other technology?

Ty Aderhold (15:35): I think there is a third path that you can take here. I think that third path involves waiting on some big investments, but not twiddling your thumbs in the meantime. There are steps you can take along the way to prepare to invest, steps you can be taking right now, that will set your organization up for success. Elysia and I have been working through what we call this third path. I think we've landed on this idea of incremental movement being the ideal here.

Rae Woods (16:11): Practically speaking, what would it mean to be an incremental mover?
Elysia Culver (16:17): I think with an incremental mover, the biggest thing is that you don't see adoption itself as the end goal. You see the end goal as the steps you are taking to move forward, and that could be a variety of things. Maybe that's, "I want to figure out who is going to be involved in all of these decisions, and the skills that are going to be required to adapt in the future." It might be thinking, like we mentioned already, "What are the strategic goals I want to be addressing with AI, so I'm not jumping too far ahead? Are there some low-risk, high-impact use cases I can start with?"

(16:56): I think even more importantly, it's, "Oh, maybe I should actually set up the governance structure that I need to have better decision-making around all of this before I just dive headfirst into some of these things," because you might need to address new challenges that weren't present with previous healthcare technologies. I think it's those steps, those end goals, I guess, that you should be striving for as an incremental mover, not going immediately towards, "Let me invest in this solution right now."

Ty Aderhold (17:30): It's almost the "eat your vegetables" process. You've got to do the governance, you've got to do your data cleaning and data architecture, the things that aren't going to be that fun, that are hard to do, but that will set you up for success.

Rae Woods (17:48): I guess what you're telling us is that some might be able to capture this first-mover advantage. No one should simply wait and see what happens. All right, everybody's got to eat their vegetables. Everybody's going to start taking steps forward.

Ty Aderhold (18:04): Right, because I think if you don't start doing that now, when you wait and see and decide two years from now, "Okay, we're ready," there are going to be so many things you didn't do along the way that slow you down at that point or lead to incorrect decision-making. You need to be preparing now to be ready to make the right decisions when it makes sense for your organization to move, based on your strategic goals and the decision-making processes that you've set up in the meantime.

Rae Woods (18:34): If everybody needs to move into this third group of incremental movers, let's give them some help with what that means. I think we've already said two things: recognize and internalize that you shouldn't be thinking about AI as a strategy; it should solve problems. You've already mentioned governance, which I think, Ty, you may have at one point told me is the cauliflower ... not just the vegetable, the cauliflower ... of what we need to actually do here. What else do organizations need to do now as they move into this third group, the incremental mover?

Ty Aderhold (19:11): Yeah, Rae, we have a report coming out that covers this. We quickly talk through all of these things. I'm going to summarize one piece that we haven't touched on yet, which is not being afraid to make tough decisions and move forward here. I think one big pitfall that can happen is you see headlines around issues of data bias, or you recognize that there are some pretty risky pieces to adoption here, and you fully stop and decide, "You know what, no, actually we should just wait and see." I don't think that's the right approach either.
(19:52): I think instead it's to actively do the hard work now, so that when you are adopting a year from now, you've taken those incremental steps that not only set you up to have the adoption align with your strategy and to have governance in place, but also let you evaluate a tool to understand whether, when you use it on your population, there is going to be bias or not, and effectively monitor that tool for bias once you've actually implemented it. There are going to be challenges, and there are going to be scary aspects of this. It's not about stopping when you hit those, but instead having your organization learn how to work through them and mitigate against them.

Rae Woods (20:41): That is probably the hardest thing that folks are going to have to do, right? We've given them this door number three, but door number three is not the easy path, right? I do fear that a lot of folks are going to have to overcome this expectation that the problems will be solved for them, particularly that the problems will be solved for them by legislation, by the government. I should mention that as we are recording this podcast, 26 hours ago, President Biden released an executive order on AI, and I worry that folks are going to see that and go, "Ah, a reason to hold back, a reason to move into this wait-and-see approach, because see, finally we're going to see some action by the federal government." But you are saying, "No, take on the challenges yourself." Why is that something we still believe, even as, like you said, new policies and new executive orders are coming out?

Elysia Culver (21:40): Well, I think for me, it all comes down to remembering that healthcare is already complicated to regulate and to develop legislation for. Just because AI is new doesn't mean that we're going to have magical new rules that tell us what to do. I feel like you're going to have to take it on yourself to develop those internal governance structures and work together, because, one, like I said, it's already hard to regulate, and two, it's probably going to take a long time for these new governmental regulations to take effect. That gives you an advantage too: if you're starting early and making decisions, you can be involved in those discussions early on and have an opinion in them, so that when top-down legislation actually does come to the table, you've had input into it.

Rae Woods (22:31): Yes, especially because we all know that the path between executive order and policy is a long one.

Elysia Culver (22:37): Yes.

Rae Woods (22:40): If I'm honest, and if I'm reflecting on the conversation, I want to come back to what I'm seeing in the general public, in the media. Most of the headlines that I see are about the transformative potential of AI, the huge opportunity this has in the world and in particular in healthcare. On this very podcast, we've talked about moving away from the idea of AI as an existential threat and toward one where we do need to think about the transformative potential, the existential opportunities. When it comes to those opportunities, what is going to be the difference between an organization that succeeds and an organization that fails?

Ty Aderhold (23:23): Rae, I think the biggest difference is that organizations that succeed will realize that the transformational potential of AI is about the impact it will have on their business in the future, not about how it will impact their business when they adopt.
What I mean by that is, yes, there is transformational potential here, but it's not going to come in and just transform your organization all at once when you choose to adopt it. There are so many small steps you're going to have to take along the way to reach that transformational point. If you are sitting around and waiting for that transformation, you're never going to get there. You have to take the small steps now in order to achieve that transformational potential in the future.

Elysia Culver (24:15): Yeah, I think my answer is similar. I think that if we're going to look at AI as this revolutionary thing, healthcare organizations are going to have to really think about what their characteristics are and what AI will actually mean for them. Then also, with this pace of change, I alluded to this before, but it's going to be more about how you're iterating over time, how you're tracking the changes that AI will have on your workforce, and how you're monitoring these risks over time. I think that's what is going to distinguish successful organizations in the future: not just, "How am I implementing this successfully?" but, "How am I going to continuously evolve this over time?"

Rae Woods (25:02): Well, Ty, Elysia, thank you for making time in your busy schedules, as you continue to get thousands of questions about AI, to come on Radio Advisory.

Ty Aderhold (25:13): Always happy to make time.

Elysia Culver (25:15): Thanks so much for having us.

Rae Woods (25:21): It is all too easy to think that there are only two options when it comes to adopting technology or adopting artificial intelligence: "I either need to move now, or I need to wait for someone else to figure it out." But I hope you heard in this episode that there is a third option, and in fact, this is the one that we want everyone to take. That's because every organization can take meaningful steps and make progress towards advancing with artificial intelligence, even if those steps look different from organization to organization. Remember, as always, we are here to help.

(26:03): If you like Radio Advisory, please share it with your networks, subscribe wherever you get your podcasts, and leave a rating and a review. Radio Advisory is a production of Advisory Board. This episode was produced by me, Rae Woods, as well as Abby Burns, Kristin Meyers, and Atticus Raasch. The episode was edited by Katy Anderson, with technical support by Dan Tayag, Chris Phelps, and Joe Shrum. Additional support was provided by Carson Sisk, Leanne Elston, and Erin Collins. Thanks for listening.