Rae Woods (00:02): From Advisory Board, we are bringing you Radio Advisory, your weekly download on how to untangle healthcare's most pressing challenges. My name is Rachel Woods. You can call me Rae. Today, generative AI is creating a ton of enthusiasm, but I also know that it's raising a lot of questions. After all, our industry has been adapting and attempting to keep up with a rate of change that, frankly, is almost unbelievable.

(00:30): If I think about what I'm hearing from provider organizations specifically, they often feel like they're just left to develop AI processes and deployment strategies on their own. They're trying, but a lot of organizations still feel like they're struggling with what appropriate and ethical implementation of AI actually looks like. It might feel like all of us are starting this process now, but what if I were to tell you that there's an organization that's been doing this for more than five years?

(01:02): Today, I want to tell you that story. I want to tell you the story of Duke University Health System, and to do that, I've brought on their Chief Information Officer, Dr. Eric Poon, to discuss how Duke has approached ethical AI implementation and the importance of the governance structure in making this work for everyone. Dr. Poon, welcome to Radio Advisory.

Dr. Eric Poon (01:26): Thank you for having me.

Rae Woods (01:34): I always find it helpful, when you tell a story, to start that story at the beginning. My first question for you is: when did Duke actually start this journey of developing the right governance structure to make decisions about AI? I'd also love to know your role in that, but take me back to the very beginning.

Dr. Eric Poon (01:56): I think the journey started about five years ago, when there was this recognition that yes, this AI technology that uses data that is being recorded every single minute in our electronic health record could really be used to help us be smarter clinicians and run our health system in a more optimal state. We can use it to predict which patients would be at risk of doing poorly so that we can intervene on them, or where resources could be reshuffled so that we can increase our overall efficiency. In that context, there were more and more algorithms that our internal innovators and some of the external vendors were beginning to offer to us.

Rae Woods (02:48): Even though we're talking about 2019?

Dr. Eric Poon (02:52): Oh, yes. Actually way before the pandemic, in '17 and '18, we were seeing it take off, and we recognized that, as powerful as that technology could be, we'd had a lot of experience using technology to influence decision-making. Back then, it was called clinical decision support. It is still called clinical decision support, even though some of the data scientists don't really draw the connection.

Rae Woods (03:19): Some physicians don't either.

Dr. Eric Poon (03:24): Yeah. Well, I think the phenomenon we were encountering was that everybody who saw the promise of this technology to improve healthcare delivery in some way was a strong advocate, and they could close their eyes and imagine that technology making a positive impact in some way. That's great. We want that creative environment. But what we were encountering on the ground was that a lot of these proponents of the technology really weren't thinking about all the different ways that technology like that could fail.

Rae Woods (04:05): Which is, I think, where you come in.
Dr. Eric Poon (04:05): I think we really brought in our experience in implementing run-of-the-mill clinical decision support. We recognized that, A, you really need clinicians who receive advice from this new piece of technology to be mentally ready to receive that advice. Not only do they have to be educated, they need to trust that the advice is correct, and the advice needs to be surfaced at the right moment, when they need it, when they're most likely to be receptive to that piece of advice.

(04:41): In the old world of clinical decision support, we call it the five rights of decision support. We really thought that a lot of these advocates for machine learning and predictive modeling weren't thinking about some of these bread-and-butter issues, so we thought it was going to be important for us to get our arms around the process and create a discussion forum so that folks had a way to learn about these issues. That's how we got started.

Rae Woods (05:14): That kind of process is exactly the conversation I want to have with you today. I just want to name how, and I'm actually going to use the word, unique this was for this period in time. If I take myself back to 2017, 2018, even though, to your point, this is when AI was already happening, we had machine learning, you had certain examples like in clinical decision support, I'll tell you, every conversation I had was, "Yeah, it's not a matter of if we're going to get to a point where we're using AI, it's when."

(05:45): When I say when, I mean everybody wanted to punt into the future, and that is not what Duke did. I'm hearing that that's in part because there were strong advocates, there were clearly strong use cases, and there was at least one voice, and I'm talking about your voice, to say, "All right, we've got to get our arms around the process in order to do that right." That was rare then. I've got to say, that's also rare now. In my conversations today, everyone's focused on the thing, they're focused on the product.

(06:14): What thing am I going to purchase from some vendor? But you started, and still are, relentlessly focused on the process. Why is that? Why is governance the right first step?

Dr. Eric Poon (06:27): To answer that question, I'll take us back to what we did in response to the coming tsunami that we were seeing. [inaudible 00:06:36] We wondered whether anybody else had figured it out. We actually did a project with a bunch of master's students to understand what everybody else in the country was doing. It was not in the form of a podcast, but we did interviews with many leaders across academic medical centers to understand what they were doing.

(07:00): In a nutshell, what we found was that everybody had similar challenges, but nobody had figured out a good process.

Rae Woods (07:07): So you're thinking, "I can't steal someone else's process."

Dr. Eric Poon (07:10): Yes, yes, yes. Could we stand on the shoulders of giants? There were some pockets of excellence that we certainly did learn from, but we recognized that we really needed to have a way of putting a governance process together. That was the beginning of our first attempt at putting together a governance process, which in retrospect ended up being more advisory than authoritative. We learned a lot from that round of creating a governance process, because we built a lot of relationships, and we learned a lot in terms of what it would take to provide advice for folks who were keen to put that technology in place.
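The "five rights" framework Dr. Poon references above is a standard framing in clinical decision support: the right information, to the right person, in the right format, through the right channel, at the right time. As a purely illustrative aside, here is a minimal Python sketch of how a review team might render those rights as a pre-deployment checklist; the class and field names are hypothetical, not Duke's actual tooling.

```python
from dataclasses import dataclass

# Illustrative sketch only: the "five rights" of clinical decision support
# rendered as a simple pre-deployment checklist. All names are hypothetical.

@dataclass
class FiveRightsReview:
    right_information: bool  # Is the advice evidence-based and correct?
    right_person: bool       # Does it reach the clinician who can act on it?
    right_format: bool       # Alert, order set, report -- does the form fit?
    right_channel: bool      # EHR, portal, pager -- is the medium appropriate?
    right_time: bool         # Is it surfaced at the moment of decision-making?

    def ready_for_rollout(self) -> bool:
        """A proposed tool passes only if every 'right' is satisfied."""
        return all((self.right_information, self.right_person,
                    self.right_format, self.right_channel, self.right_time))

# Example: an alert that fires after the clinician has left the chart
# fails the 'right time' check and goes back for workflow redesign.
review = FiveRightsReview(True, True, True, True, right_time=False)
assert not review.ready_for_rollout()
```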
Dr. Eric Poon (07:51): Fast-forward: as this was going on, the tsunami was coming closer to the coast, and we recognized that we really needed to formalize it even further. At that point, we were fortunate enough to be able to partner with the Vice Dean for Data Science, Dr. Michael Pencina, over at the School of Medicine, and we got sponsorship from the Dean.

Rae Woods (08:15): More advocates, more people involved.

Dr. Eric Poon (08:19): The Dean, along with the Chief Policy Officer and our Chief Digital Officer, because we all recognized this tsunami was coming and we needed to, A, make sure that if we deploy this technology into the health system, it is trusted and it won't get ignored, because the last thing we wanted was one bad apple spoiling the lot. Meaning that if our clinicians start distrusting one form of data science or AI, they might distrust all of it.

Rae Woods (08:54): Which is a story that, by the way, our listeners know all too well, perhaps not necessarily in the case of AI. They're certainly fearful of that now, but let's say they've been burned by technology before. I appreciate that you're saying that even though you did this big effort in trying to learn from the glimmers of hope from other players, you were starting from scratch, and that the version of this governance process that you put together five-plus years ago isn't necessarily the same as the one you have today.

(09:23): You've tweaked it, you've changed it, you've improved it. I am acutely aware of the fact that I'm going to ask you to explain how a governance structure and process works on a podcast. We don't want to lose our listeners in this, but I wonder if you can walk me through an example that shows the process of going from deciding, "All right, we want to do something," to actually implementing a new technology with these clinicians so that they embrace it and aren't going to get burned by it. What does that process look like?

Dr. Eric Poon (09:55): It's undergone a couple rounds of evolution from when we started. To cut to the chase, what it looks like today is that we really think of the implementation of machine learning and data science tools as a journey, and we divide up that journey into distinct parts. What we found when we started was that we were just providing advice at the beginning, and we were tackling way too many questions and issues.

(10:30): By breaking up the journey into different parts, we were able to focus the discussions on things that really needed to be resolved at that point in time in the project, rather than thinking about the entire process from beginning to end.

Rae Woods (10:47): What are the different parts of the process? What are the different breakdown points?

Dr. Eric Poon (10:52): Dr. Pencina had a lot to do with chunking the journey out, and I think if you think about these tools as medical devices, there are paradigms we could borrow from. We actually did borrow quite a bit from how the FDA thinks about medical devices.

Rae Woods (11:07): Interesting.

Dr. Eric Poon (11:07): What we ended up thinking about is that folks develop these predictive models using data. That's the development part of the journey. Now, once they have developed the tool and they see promise, what we tell them is, "Okay, this looks promising, but you need to hook up the tool with real-life data."

Rae Woods (11:29): To see if it's working, to evaluate it.

Dr. Eric Poon (11:31): To see if it's working the same way it did while we were developing it. We call it the silent evaluation phase.
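To make the silent evaluation idea concrete, here is a minimal sketch of what such a harness might look like: the model scores live data, its output is logged rather than shown to clinicians, and live performance is compared against the retrospective development baseline. All names and thresholds here are hypothetical illustrations, not Duke's actual system.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

DEV_AUC = 0.82    # hypothetical AUC reported on retrospective development data
TOLERANCE = 0.05  # hypothetical acceptable degradation on live data

def silent_evaluation(model, live_features: np.ndarray,
                      observed_outcomes: np.ndarray) -> bool:
    """Score live data, log silently, and compare against the dev baseline."""
    scores = model.predict_proba(live_features)[:, 1]
    live_auc = roc_auc_score(observed_outcomes, scores)
    # In a real harness the scores would be written to a log store here,
    # never surfaced in the EHR during this phase.
    print(f"live AUC {live_auc:.3f} vs development AUC {DEV_AUC:.3f}")
    return live_auc >= DEV_AUC - TOLERANCE  # gate to the next phase
```

The design point the sketch captures is the one Dr. Poon makes next: a model fit on retrospective data has not yet proven it can survive the timing and quality of data arriving in real time.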
The silent evaluation was important because a lot of the tools were developed using retrospective data, data that was collected a long time ago, versus data that was actively coming in as care was taking place, the way the tool was meant to be used. We really felt it was important to do the silent evaluation to see if the tool would work in a reasonable manner when it was subject to the realities of real-time, real-life data.

(12:13): Once the project team says, "You know what? We've hooked up the tool to real-life streams of data and it still seems to be performing well," then we tell them, "Okay, it's time to try it, but let's try it out in a way where we can measure whether the tool is going to be, A, used, and B, going to improve care in some way." We ask them to specify what it will improve, and we ask the project team to think about how they're going to measure the improvement.

Rae Woods (12:46): There needs to be a positive outcome. That's the goal of that.

Dr. Eric Poon (12:51): There needs to be a positive outcome, yeah. Show me the data. The next phase after the silent evaluation phase is what we call the clinical effectiveness evaluation phase. Of course, in order for that to happen, they need to have really thought about the integration, the five rights of decision support, how to make the technology work. That's where a lot of the technical integration discussions need to have happened before they go on to the clinical effectiveness phase.

(13:19): When all that is ready and they go do a trial and they collect some data and show that yes, it is actually working, then they can come back to us and say, "Okay, let's think about scaling it up."

Rae Woods (13:29): This is ready to deploy.

Dr. Eric Poon (13:31): Ready to... general deployment. In the general deployment phase, we also ask them, "Let's make sure that you have a way of periodically monitoring the tool." Yes, it was working during the clinical effectiveness phase, but how do we know that folks continue to use it? How do we know that data streams aren't getting changed because of changes in workflow or changes in the ways that data are collected? So let's have a monitoring plan.

(13:59): We really broke up the journey into these four chunks: development, silent evaluation, clinical effectiveness evaluation, and general deployment. We had conversations right before each phase, we call it just-in-time, and we borrowed a term from our pediatric colleagues called anticipatory guidance. We talk to them about what they absolutely need to be ready for the next phase, but we also talk to them about what they might need in the future.
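As a compact, purely illustrative recap, the four-stage journey Dr. Poon describes can be thought of as a gated pipeline, sketched below; the gate questions are paraphrases of the discussion, not Duke's actual review criteria.

```python
from enum import Enum

# Sketch of the four-stage journey as a gated pipeline: a tool advances only
# when the just-in-time review for its current stage signs off. Illustrative.

class Stage(Enum):
    DEVELOPMENT = 1
    SILENT_EVALUATION = 2
    CLINICAL_EFFECTIVENESS = 3
    GENERAL_DEPLOYMENT = 4

GATE_QUESTIONS = {
    Stage.DEVELOPMENT: "Does the model perform well on retrospective data?",
    Stage.SILENT_EVALUATION: "Does it hold up on live, real-time data?",
    Stage.CLINICAL_EFFECTIVENESS: "Is it used, and does it measurably improve care?",
    Stage.GENERAL_DEPLOYMENT: "Is there a plan to monitor usage and data drift?",
}

def advance(stage: Stage, gate_passed: bool) -> Stage:
    """Move to the next stage only when the review signs off."""
    if not gate_passed or stage is Stage.GENERAL_DEPLOYMENT:
        return stage  # stay put (or keep monitoring) until issues are fixed
    return Stage(stage.value + 1)

stage = advance(Stage.DEVELOPMENT, gate_passed=True)  # -> SILENT_EVALUATION
print(GATE_QUESTIONS[stage])
```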
Rae Woods (15:26): I really appreciate this stepwise approach to breaking down the journey. I do want to channel what I think might be some pushback from our listeners, and the fear that I think everyone has is: how do you balance planning with taking action? Let's be honest, Eric, our industry is not well known for moving quickly. Probably outside of COVID, we often move extremely slowly.

(15:53): I'm still hearing folks measure tech adoption, especially in the case of something like the EMR, literally in decades. I'd love for you to name for our listeners why it is important to move through this measured four-step process. What challenges do you avoid? And I'd love for you to name for me how you are actually still moving quickly, so that we're not measuring progress in decades.

Dr. Eric Poon (16:22): These are conversations we have all the time. I think by providing guidance at distinct points in time, we become much more collaborative as a governance process with the project teams. We've done a lot of work to make sure that we are responsive in terms of quick turnaround times. When we started, each checkpoint between the different phases of the journey required an in-person meeting, which is of course difficult to schedule when you have different disciplines that need to come together with the project team.

(16:54): But now we've made ourselves much more efficient. We have borrowed from how the Institutional Review Board, the IRB, which is the entity that reviews research protocols, operates. We have moved that review process mostly offline, so that folks submit some materials, and oftentimes it's basically a PowerPoint deck. Then we give them a cheat sheet of the things that we would like to look at.

(17:23): We provide a lot of transparency so that folks know what materials to put together in a PowerPoint deck. We don't need War and Peace, we just want just enough material. Then the material gets submitted, and we have a small team of about three to four members looking at each of those projects fairly quickly, over a couple of weeks, and providing advice: a couple of things you really have to address and fix, and then a couple of things you might want to think about.

Rae Woods (17:58): I'm almost hearing you describe checks and balances, which again, if you come back to where you started, one of the things that keeps ringing in my head that you said is we have to make sure that we're developing processes that work not just for patients, but that work for the clinicians. We can't let them get burned. And so I appreciate that you're almost saying you have to have these things in place. What are the sayings? You've got to move slow to move fast; measure twice, cut once. Those are the things going through my head as you're describing this.

Dr. Eric Poon (18:28): Yeah. I'll be honest with you. Some people say, "Wow, that is a lot of things that we need to pull together." I think that's the push and pull, but we go back and tell them, "We want your project to be successful." Then we tell them, "We've been at this for decades. Yeah, maybe AI and machine learning is the shiny object on the block, but we've been working on clinical decision support for a long time."

(18:55): Like it or not, many things that we have looked at in the domain of clinical decision support have not panned out in the past, because of a host of reasons. Let's learn from the past and make sure that we are actually improving care in some measurable way.

Rae Woods (19:14): By the way, this is Advisory Board's guidance too. We're saying you need to focus on the process, the governance, follow the Duke model is what we would say, rather than focusing on the product, the shiny object. I totally agree; we are in that phase where that's where everyone's focused. And I'll say they're not just focused on the shiny object. I hear a lot of folks wanting, wishing for the low-hanging-fruit example.

(19:41): What strikes me as interesting in the case of Duke is that not only did you start with governance, but when it came to use cases, you actually had this very interesting mix of examples: an operational use case, something that is much more clinical, and something that matches your research arm as an academic institution.
Why were those the right places for you to start?

Dr. Eric Poon (20:05): It's actually an interesting question, because my flippant answer is: those are the problems that our clinical leaders wanted to solve.

Rae Woods (20:12): Wait, wait, wait, wait. I actually don't think that's a flippant answer at all. I think that is the perfect answer, because one of the things that we keep saying is solve actual problems for your actual institution, and don't get distracted by shiny objects. Also, don't get distracted by what other people are saying they're doing. Everybody's saying, "We need to reduce administrative burden," but you're saying, "Actually, our three areas that we started with were what our doctors said they wanted to start with."

Dr. Eric Poon (20:40): I think having the right champions is so critical, and having champions who have the right insight as to what the problems were and how these tools could help. That is worth its weight in gold, because these are the folks who are on the front lines seeing the problems and identifying the opportunities. Similar to other academic medical centers, we've had for a long time a lot of folks who are in healthcare not just to take care of each individual patient, but to think about how to make our approaches, our systems, better.

(21:16): So yes, we had no shortage of ideas coming, and that was part of the tsunami that I was seeing: folks really wanted to come to the fore. Today, to give you a sense of scale, the portfolio of tools that we are putting through this four-stage process right now is a number in the 70s.

Rae Woods (21:37): Oh, wow. Okay. I want to do something a little bit different in this next part of our conversation, and I almost want to ask you a bit of rapid-fire questions-

Dr. Eric Poon (21:45): Sure.

Rae Woods (21:46): ... if you'll go with me on this journey here, because I'm thinking about some of the biggest questions I hear in the market and I want to get your take on them. First rapid-fire question: is there even such a thing as low-hanging fruit when it comes to generative AI?

Dr. Eric Poon (22:03): Perhaps. The one thing that is readily available, that is close to free or low cost, is some form of a chatbot. You and I can go to OpenAI. There are versions that are suitable for use in healthcare, which I won't go into, but I think that technology can be used to shave off a few minutes here and there on different administrative tasks. For example, I tell my team in technology, "Don't you dare write a new job description from scratch. Go to ChatGPT and use that as the starting point."

(22:42): Oh, [inaudible 00:22:42], I have a better example. We were putting together a new governance committee to talk about generative AI, and guess what I used to write the charter? ChatGPT. I tell you, this was more than a year ago, and that draft charter was so good I barely had to change anything beyond the name of the committee.

Rae Woods (23:10): Next question: what is truly different about generative AI versus what we can take as general lessons from previous tech implementations?

Dr. Eric Poon (23:21): Great question. I think what's different about generative AI is that the output of these amazing tools is human language, and that opens up so many things beyond writing project charters and job descriptions.

Rae Woods (23:37): I was going to say, to your previous example.
Dr. Eric Poon (23:40): Yeah. It is now being used to generate clinical notes by using recordings of conversations between the provider and the patient. It is capable of taking a recording of a meeting and generating minutes. It is pretty darn amazing what it can do. Those are two low-hanging fruits on the table as well. What's also different, because it is generating language, is that we can never be sure that the output of these tools is 100% correct.

Rae Woods (24:18): Which actually brings me to my last rapid-fire question. I know that you've been quite focused on making sure that there is a human element embedded in the entirety of the process and the output of Duke's investment in AI. What is human-centered AI? What does that mean?

Dr. Eric Poon (24:42): Well, I think it means quite a few things. It means that you think about the end user and the person who would benefit from this technology right upfront. You really need to step into the mind of the user and think about the five rights of decision support upfront. That's one piece. We also think about what the person needs, and what are some of the ways that person could be distracted or misguided by the technology.

(25:12): We really need to think about the fact that you have a human being at the receiving end of this advice or these outputs, and make sure that there are the right support systems, not just for the technology, but for the human being on that end, so that we have a high degree of confidence that the human being on the other end of the technology will use it in a way that's responsible and effective.

Rae Woods (25:39): You and Duke, the organization, have been on this journey for some time, and that is very different from the place our listeners are coming from. What advice do you have for those listening to this podcast who are really at the beginning of their journey and are feeling a lot of pressure to act quickly?

Dr. Eric Poon (25:57): Yes. First of all, I'll say that we feel that pressure too, because of the last-

Rae Woods (26:01): Oh, interesting.

Dr. Eric Poon (26:02): [inaudible 00:26:03]. Generative AI has also accelerated and heightened everybody's interest in AI, so we feel it too; we see you. That aside, I would say that I think it will be important to come to some decisions quickly, at a high level, as to which use cases you want to pursue, and then think about what the key principles are. For us, it was making sure that the technology is safe, effective, and equitable, meaning that it won't exacerbate inequities that we know exist in healthcare.

Rae Woods (26:37): Yes.

Dr. Eric Poon (26:38): Then for those use cases, ask the right questions at the beginning of the project, but also have teams come back to you to demonstrate that it is actually making things better. Don't try to fix everything upfront, which tends to be the pattern that I've seen with traditional IT governance.

Rae Woods (26:59): Yes.

Dr. Eric Poon (26:59): Where you put together a sophisticated business case and then let them go ahead and do the project. I think because this technology is so new, it is really important to fail forward and fail fast, to borrow some buzzwords here. Try things out, but measure along the way, and if it doesn't work, say, "Yes, it's good that we know that now; let's move on and try something else." I think that would be my advice. Don't let perfection be the enemy of the good. I know that's trite.

Rae Woods (27:31): But it's real.

Dr. Eric Poon (27:32): Yeah, it's real.
Measure some things, but you don't have to make it a research project. It doesn't have to be publication quality, but you need to at least gather some data to convince yourselves that this is improving healthcare and that it's worth investing in on an ongoing basis. That discipline is important to bake into decision-making and discussions.

Rae Woods (27:55): My last question for you is: what's next? What's next for your role, and what's next for Duke when it comes to ethical, human-centered AI adoption?

Dr. Eric Poon (28:05): Great question. One thing that we are doing is adapting our governance process to take into account these tools that are coming courtesy of generative AI, because these tools do behave differently than traditional AI, which is mostly used to predict things in the future. These tools interact with human beings and generate human-like language. We need to think about how to guide the project teams in a different way.

(28:31): We're adapting our four-stage development process to make sure that these tools are used safely and effectively. We are also looking at AI tools that are used in the imaging diagnostic process. And we are sharing our learnings, so this is a great opportunity to do that. We are also partnering with other organizations, such as the Coalition for Health AI, CHAI, to share our governance processes so that folks don't have to start from scratch.

Rae Woods (29:01): I love it. Constantly tweaking, new use cases, and trying to get the word out that this is really the place to start. I just want to thank you for telling this story and sharing this case study with Radio Advisory.

Dr. Eric Poon (29:12): Oh, thanks for having me.

Rae Woods (29:16): This is a really important case study for you to understand in all its details, and I am positive that you're not going to be able to get everything from just this podcast episode, so I want to make sure you go to the show notes, because we've actually put in a pretty detailed case study of Duke's three-pronged approach to ethical and equitable AI integration. You'll hear more about all of the steps that Dr. Poon shared with us today, and it'll help you not have to start from scratch. Because remember, as always, we are here to help.

(30:08): If you like Radio Advisory, please share it with your networks, subscribe wherever you get your podcasts, and leave a rating and a review. Radio Advisory is a production of Advisory Board. This episode was produced by me, Rae Woods, as well as Abby Burns, Kristin Myers, and Atticus Raasch.

(30:25): The episode was edited by Katy Anderson, with technical support provided by Dan Tayag, Chris Phelps, and Joe Shrum. Additional support was provided by Carson Sisk, Leanne Elston, and Erin Collins. We'll see you next week.